Search (742 results, page 1 of 38)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
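    The approach sketched in this abstract (bag-of-words features extended with concept features, classified by boosted weak learners) can be illustrated roughly as follows. This is a minimal sketch, assuming scikit-learn; the tiny concept_lookup table is a purely hypothetical stand-in for the background-knowledge resource used in the paper.

```python
# Rough sketch: bag-of-words plus concept features, classified with
# boosted decision stumps. concept_lookup is a hypothetical stand-in
# for the background knowledge (e.g. an ontology) used in the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import AdaBoostClassifier

concept_lookup = {"bank": "FinancialInstitution", "loan": "FinancialInstitution",
                  "striker": "SoccerPlayer", "goal": "SoccerEvent"}

def add_concepts(doc):
    # Append one pseudo-token per matched concept so concepts become
    # ordinary features alongside the surface terms.
    hits = [concept_lookup[t] for t in doc.lower().split() if t in concept_lookup]
    return doc + " " + " ".join("CONCEPT_" + c for c in hits)

docs = ["the bank approved the loan", "the striker scored a late goal"]
labels = [0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform([add_concepts(d) for d in docs])

# AdaBoost over decision stumps: each weak learner tests a single
# term or concept feature.
clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)
print(clf.predict(vectorizer.transform([add_concepts("the bank issued a loan")])))
```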
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Type
    a
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.11
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges - summary and question answering - prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
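    The detection step mentioned above can be approximated with the publicly released GPT-2 output detector. The following is only a hedged sketch, assuming the Hugging Face transformers library and the openai-community/roberta-base-openai-detector checkpoint; the model id, label names and threshold are assumptions, not the authors' exact setup.

```python
# Sketch: score a passage with the 2019 RoBERTa-based GPT-2 output
# detector. Model id, label names and threshold are assumptions.
from transformers import pipeline

detector = pipeline("text-classification",
                    model="openai-community/roberta-base-openai-detector")

passage = "I propose to consider the question, 'Can machines think?'"
result = detector(passage)[0]   # e.g. {'label': 'Real', 'score': 0.98}

# Read a confident 'Fake' label as "detected as machine-generated";
# anything else counts as undetectable in this rough reading.
detected = result["label"] == "Fake" and result["score"] > 0.9
print(result, "detected:", detected)
```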
    Source
    https://arxiv.org/abs/2212.06721
    Type
    a
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
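    One building block of such an extraction pipeline, the word association measure, can be sketched as below. This is a toy illustration using the symmetric conditional probability (SCP) measure often paired with LocalMaxs; the corpus and the tokenisation are invented, and the LocalMaxs selection over longer n-grams is not shown.

```python
# Toy sketch: score candidate two-word terms with symmetric conditional
# probability, SCP(x, y) = P(xy)^2 / (P(x) * P(y)). LocalMaxs itself and
# longer n-grams are omitted; the corpus is illustrative only.
from collections import Counter

corpus = ("information retrieval systems support information retrieval "
          "research and web page summarization research").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def scp(x, y):
    p_xy = bigrams[(x, y)] / n_bi
    return p_xy ** 2 / ((unigrams[x] / n_uni) * (unigrams[y] / n_uni))

for x, y in sorted(bigrams, key=lambda b: scp(*b), reverse=True)[:3]:
    print(f"{x} {y}\t{scp(x, y):.3f}")
```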
    Content
    A Thesis presented to The University of Guelph In partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Linguistik und neue Medien (1998) 0.02
    
    Editor
    Heyer, G. u. C. Wolff
    RSWK
    Lexikographie / Neue Medien / Kongress / Leipzig <1997> (2134)
    Syntaktische Analyse / Neue Medien / Kongress / Leipzig <1997> (2134)
    Subject
    Lexikographie / Neue Medien / Kongress / Leipzig <1997> (2134)
    Syntaktische Analyse / Neue Medien / Kongress / Leipzig <1997> (2134)
  5. Endres-Niggemeyer, B.: Sprachverarbeitung im Informationsbereich (1989) 0.01
    
    Source
    Linguistische Datenverarbeitung und Neue Medien. Hrsg.: Winfried Lenders
    Type
    a
  6. Schmitz, K.-D.: Projektforschung und Infrastrukturen im Bereich der Terminologie : Wie kann die Wirtschaft davon profitieren? (2000) 0.01
    
    Abstract
    In today's information society, industry is offered new prospects for communication and commerce on the European and international markets; both markets are characterized by great linguistic, cultural and social diversity. To take advantage of these new opportunities and to remain competitive, industry must find specific and adequate solutions for overcoming language barriers. A prerequisite for this is the precise definition, systematic ordering and exact naming of concepts within the respective subject fields, in one's own language as well as in foreign languages. These are exactly the topics addressed by terminology science and practical terminology work. The results of terminology work in a company influence design, production, purchasing, marketing and sales, contract management, technical documentation and translation.
    Type
    a
  7. Sprachtechnologie für die multilinguale Kommunikation : Textproduktion, Recherche, Übersetzung, Lokalisierung. Beiträge der GLDV-Frühjahrstagung 2003 (2003) 0.01
    
    Editor
    Seewald-Heeg, U.
    Series
    Sprachwissenschaft, Computerlinguistik und Neue Medien; 5
  8. Hahn, U.; Reimer, U.: Informationslinguistische Konzepte der Volltextverarbeitung in TOPIC (1983) 0.01
    
    Source
    Deutscher Dokumentartag 1982, Lübeck-Travemünde, 29.-30.9.1982: Fachinformation im Zeitalter der Informationsindustrie. Bearb.: H. Strohl-Goebel
    Type
    a
  9. Linguistische Datenverarbeitung und neue Medien (1989) 0.01
    
  10. Egger, W.: Helferlein für jedermann : Elektronische Wörterbücher (2004) 0.01
    
    Abstract
    Countless online dictionaries and individual, in some cases excellent, electronic dictionaries will not be discussed here, since their advantages are partly offset by the following drawbacks: an internet connection or a CD-ROM is required, and calling up the dictionaries or switching the language direction is time-consuming.
    Type
    a
  11. Erbach, G.: Sprachdialogsysteme für Telefondienste : Stand der Technik und zukünftige Entwicklungen (2000) 0.01
    
    Abstract
    Despite the unchecked growth of the internet, the telephone will remain one of the most important media for communication between companies and their customers. The importance of spoken language is further reinforced by the rapid spread of mobile phones. Almost all large companies operate or commission call centers in order to serve their customers by telephone. Call centers are often equipped with so-called IVR (Interactive Voice Response) systems, which offer the user a limited menu selection via the telephone keys or rudimentary speech input. With more than five options, however, this kind of input is perceived as tedious. This is where automatic speech recognition and spoken dialogue systems offer great potential. This article presents the technical foundations as well as the current possibilities and limits of automatic speech recognition technology. We report on experience with a system for telephone enquiries about postal rates that was implemented and tested at the Forschungszentrum Telekommunikation Wien (FTW) in cooperation with Philips Speech Processing and Österreichische Post AG. The state of the art in speech output and speaker recognition is briefly outlined. Finally, an outlook is given on the role of spoken dialogue in future mobile multimedia applications.
    Type
    a
  12. Airio, E.: Who benefits from CLIR in web retrieval? (2008) 0.01
    
    Abstract
    Purpose - The aim of the current paper is to test whether query translation is beneficial in web retrieval. Design/methodology/approach - The language pairs were Finnish-Swedish, English-German and Finnish-French. A total of 12-18 participants were recruited for each language pair. Each participant performed four retrieval tasks. The author's aim was to compare the performance of the translated queries with that of the target language queries. Thus, the author asked participants to formulate a source language query and a target language query for each task. The source language queries were translated into the target language utilizing a dictionary-based system. In English-German, machine translation was also utilized. The author used Google as the search engine. Findings - The results differed depending on the language pair. The author concluded that the dictionary coverage had an effect on the results. On average, the results of query translation were better than in the traditional laboratory tests. Originality/value - This research shows that query translation on the web is beneficial especially for users with moderate and non-active language skills. This is valuable information for developers of cross-language information retrieval systems.
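    The dictionary-based query translation evaluated here can be sketched as follows. This is a minimal illustration with a hypothetical Finnish-to-English toy word list (not one of the study's actual language pairs or resources); out-of-vocabulary terms are simply passed through, which is one reason dictionary coverage matters.

```python
# Sketch: dictionary-based query translation for cross-language retrieval.
# The bilingual word list is a hypothetical toy; real systems also handle
# morphology, compounds and the weighting of multiple translation variants.
bilingual_dict = {
    "tietokone": ["computer"],
    "kieli": ["language", "tongue"],
    "haku": ["search", "retrieval"],
}

def translate_query(query: str) -> str:
    terms = []
    for word in query.lower().split():
        # Unknown words (e.g. proper names) pass through untranslated.
        terms.extend(bilingual_dict.get(word, [word]))
    return " ".join(terms)

print(translate_query("tietokone kieli haku"))
# -> "computer language tongue search retrieval"
```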
    Source
    Journal of documentation. 64(2008) no.5, S.760-778
    Type
    a
  13. Rau, L.F.: Conceptual information extraction and retrieval from natural language input (198) 0.01
    
    Date
    16. 8.1998 13:29:20
    Footnote
    Wiederabgedruckt in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.527-533
    Type
    a
  14. Nhongkai, S.N.; Bentz, H.-J.: Bilinguale Suche mittels Konzeptnetzen (2006) 0.00
    
    Abstract
    A new method for full-text search in bilingual text collections is presented and tested on a parallel text corpus (English-German). The bridge is provided by matching word clusters derived from a co-occurrence analysis, supplied by the novel search engine SENTRAX (Essente Extractor Engine). These clusters represent concepts that occur in both text collections. The hypothesis is that retrieval by means of such structural comparisons is feasible.
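    The co-occurrence step behind such concept clusters can be illustrated roughly as below. This is only a generic sketch with an invented toy corpus, since the record does not describe SENTRAX's actual extraction, and the bilingual bridging itself is not reproduced.

```python
# Generic sketch: sentence-level co-occurrence counts as the basis for
# word clusters ("concepts"). Corpus is a toy; SENTRAX's extraction and
# the English-German bridging are not reproduced here.
from collections import defaultdict
from itertools import combinations

sentences = [
    "the bank approved the loan",
    "the bank rejected the loan application",
    "the central bank raised interest rates",
]

cooc = defaultdict(int)
for s in sentences:
    for w1, w2 in combinations(sorted(set(s.split())), 2):
        cooc[(w1, w2)] += 1

# Words that frequently co-occur with a seed term form its cluster.
seed = "bank"
pairs = sorted((p for p in cooc if seed in p), key=cooc.get, reverse=True)
print([w for pair in pairs[:5] for w in pair if w != seed])
```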
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
    Type
    a
  15. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.00
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET . Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language Systems (UMLS) . The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
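    The spelling-suggestion behaviour described above can be approximated in a few lines using Python's standard-library difflib over a toy vocabulary; AZdict and ChemSpell rely on much larger vocabularies, word attributes and chemistry-aware similarity, which this sketch does not attempt.

```python
# Minimal sketch: dictionary-based spelling suggestions for search terms.
# Vocabulary and cutoff are toy assumptions; AZdict/ChemSpell use far
# richer resources and specialised similarity for chemical names.
import difflib

vocabulary = ["toxicology", "benzene", "acetaminophen", "formaldehyde",
              "environment", "arsenic"]

def suggest(term, n=3, cutoff=0.7):
    # Up to n vocabulary words whose similarity to the query term
    # exceeds the cutoff, best matches first.
    return difflib.get_close_matches(term.lower(), vocabulary, n=n, cutoff=cutoff)

print(suggest("benzine"))        # -> ['benzene']
print(suggest("acetominophen"))  # -> ['acetaminophen']
```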
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
    Type
    a
  16. Bernhard, U.; Mistrik, I.: Rechnergestützte Übersetzung : Einführung und Technik (1998) 0.00
    
    Abstract
    Software systems for machine and machine-aided translation of natural languages have developed remarkably over the last two to three years. Advances in database technology, new and more powerful computational-linguistic approaches, and a fundamental improvement in the price/performance ratio of single- and multi-user hardware and software now make previously unheard-of solutions possible, which can be purchased and operated at a fraction of the earlier cost. As a consequence of this development, a large number of new products have entered the translation-software market, which - although generally to be welcomed - makes it harder for potential new users to select the product suited to their application environment. Against this background, the present article describes the technology of machine and machine-aided translation. Guidelines are presented that are intended to help potential new users of MT technology select a suitable tool. The appendix briefly introduces a number of translation-software products.
    Type
    a
  17. Kuhlen, R.: Morphologische Relationen durch Reduktionsalgorithmen (1974) 0.00
    
    Date
    29. 1.2011 14:56:29
    Type
    a
  18. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; Roeck, A De: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.00
    
    Abstract
    This paper presents an overview of automatic methods for building domain knowledge structures (domain models) from text collections. Applications of domain models have a long history within knowledge engineering and artificial intelligence. In the last couple of decades they have surfaced noticeably as a useful tool within natural language processing, information retrieval and semantic web technology. Inspired by the ubiquitous propagation of domain model structures that are emerging in several research disciplines, we give an overview of the current research landscape and some techniques and approaches. We will also discuss trade-offs between different approaches and point to some recent trends.
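    As a rough illustration of what such methods start from (a minimal sketch under assumed inputs, not one of the approaches reviewed in the paper), a naive "domain model" can be approximated as a term co-occurrence graph over a toy document collection; the corpus and stopword list below are invented for the example.

      from collections import Counter
      from itertools import combinations

      # Toy corpus and stopword list are illustrative assumptions.
      docs = [
          "domain models support information retrieval and semantic web technology",
          "knowledge engineering builds domain models from text collections",
          "natural language processing extracts structure from text",
      ]
      stopwords = {"and", "from", "the"}

      edges = Counter()
      for doc in docs:
          terms = sorted({t for t in doc.split() if t not in stopwords})
          edges.update(combinations(terms, 2))  # link terms that share a document

      # The heaviest edges act as candidate relations of the naive domain model.
      for (a, b), weight in edges.most_common(5):
          print(f"{a} -- {b} (weight {weight})")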
    Date
    29. 1.2016 18:29:51
    Type
    a
  19. Manhart, K.: Digitales Kauderwelsch : Online-Übersetzungsdienste (2004) 0.00
    0.003864808 = product of:
      0.05797212 = sum of:
        0.057212114 = sum of:
          0.022338195 = weight(_text_:online in 2077) [ClassicSimilarity], result of:
            0.022338195 = score(doc=2077,freq=12.0), product of:
              0.05439423 = queryWeight, product of:
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.017922899 = queryNorm
              0.41067213 = fieldWeight in 2077, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2077)
          0.034873918 = weight(_text_:dienste in 2077) [ClassicSimilarity], result of:
            0.034873918 = score(doc=2077,freq=2.0), product of:
              0.106369466 = queryWeight, product of:
                5.934836 = idf(docFreq=317, maxDocs=44218)
                0.017922899 = queryNorm
              0.32785648 = fieldWeight in 2077, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.934836 = idf(docFreq=317, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2077)
        7.6000544E-4 = product of:
          0.0022800162 = sum of:
            0.0022800162 = weight(_text_:a in 2077) [ClassicSimilarity], result of:
              0.0022800162 = score(doc=2077,freq=6.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.11032722 = fieldWeight in 2077, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2077)
          0.33333334 = coord(1/3)
      0.06666667 = coord(2/30)
    
    Abstract
    Quickly translating an English or French website into German - nothing could be easier. Online translation services promise language transfer at the click of a mouse and at no cost. But how good are they really? Online translation services aim to remove the language barrier on the web. The automatic translators promise to make e-mail correspondence intelligible and to let German speakers browse foreign-language web offerings in their own language. English, Spanish or even Chinese e-mails and websites can thus be transferred quickly into one's own language with a mouse click. Even complicated English user manuals or Russian news items are supposedly no problem for these services. And many a homepage owner dreams of using the digital translation helpers to publish his German website in perfect English, hoping for international contacts and higher visitor numbers. That sounds nice, but reality looks different. Anyone who has ever consulted such a service usually rubs their eyes in astonishment at the results offered. Even simple sentences cause problems for many online translators and unintentionally provide comic relief. The CNN headline "Iraq blast injures 31 U.S. troops" becomes, in German, "Der Irak Knall verletzt 31 Vereinigte Staaten Truppen." Sites with difficult sentence structure can often only be rendered unintelligibly. The sentence "The Slider is equipped with a brilliant color screen and sports an innovative design that slides open with a push of your thumb" is translated by Babelfish, the best-known online interpreter, into the following gibberish: "Der Schweber wird mit einem leuchtenden Farbe Schirm ausgerüstet und ein erfinderisches Design sports, das geöffnetes mit einem Stoß Ihres Daumens schiebt." All the translators inflict such Dadaistic texts on their users.
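    For context, such a service is driven by an HTTP request rather than magic; the minimal sketch below sends a sentence to a translation endpoint. The URL, parameter names and response field are hypothetical placeholders, not the actual API of Babelfish, Promt or any other service named above.

      import requests

      # Hypothetical endpoint and parameters - placeholders for illustration only.
      def translate(text: str, source: str = "en", target: str = "de") -> str:
          response = requests.post(
              "https://example.org/api/translate",           # placeholder URL
              data={"q": text, "source": source, "target": target},
              timeout=10,
          )
          response.raise_for_status()
          return response.json()["translation"]              # assumed response field

      if __name__ == "__main__":
          # The CNN headline quoted in the abstract, sent through the (hypothetical) service.
          print(translate("Iraq blast injures 31 U.S. troops"))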
    Object
    Promt Online Translator
    Type
    a
  20. Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia : Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000 Fachhochschule Köln (2000) 0.00
    0.0038427007 = product of:
      0.038427006 = sum of:
        0.01643512 = weight(_text_:neue in 5527) [ClassicSimilarity], result of:
          0.01643512 = score(doc=5527,freq=2.0), product of:
            0.07302189 = queryWeight, product of:
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.017922899 = queryNorm
            0.22507115 = fieldWeight in 5527, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5527)
        0.021231882 = weight(_text_:u in 5527) [ClassicSimilarity], result of:
          0.021231882 = score(doc=5527,freq=8.0), product of:
            0.058687534 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.017922899 = queryNorm
            0.3617784 = fieldWeight in 5527, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5527)
        7.6000544E-4 = product of:
          0.0022800162 = sum of:
            0.0022800162 = weight(_text_:a in 5527) [ClassicSimilarity], result of:
              0.0022800162 = score(doc=5527,freq=6.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.11032722 = fieldWeight in 5527, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5527)
          0.33333334 = coord(1/3)
      0.1 = coord(3/30)
    
    Content
    Contains the contributions: WRIGHT, S.E.: Leveraging terminology resources across application boundaries: accessing resources in future integrated environments; PALME, K.: E-Commerce: Verhindert Sprache Business-to-business?; RÜEGGER, R.: Die Qualität der virtuellen Information als Wettbewerbsvorteil: Information im Internet ist Sprache - noch; SCHIRMER, K. u. J. HALLER: Zugang zu mehrsprachigen Nachrichten im Internet; WEISS, A. u. W. WIEDEN: Die Herstellung mehrsprachiger Informations- und Wissensressourcen in Unternehmen; FULFORD, H.: Monolingual or multilingual web sites? An exploratory study of UK SMEs; SCHMIDTKE-NIKELLA, M.: Effiziente Hypermediaentwicklung: Die Autorenentlastung durch eine Engine; SCHMIDT, R.: Maschinelle Text-Ton-Synchronisation in Wissenschaft und Wirtschaft; HELBIG, H. u.a.: Natürlichsprachlicher Zugang zu Informationsanbietern im Internet und zu lokalen Datenbanken; SIENEL, J. u.a.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts; ERBACH, G.: Sprachdialogsysteme für Telefondienste: Stand der Technik und zukünftige Entwicklungen; SUSEN, A.: Spracherkennung: Aktuelle Einsatzmöglichkeiten im Bereich der Telekommunikation; BENZMÜLLER, R.: Logox WebSpeech: die neue Technologie für sprechende Internetseiten; JAARANEN, K. u.a.: Webtran tools for in-company language support; SCHMITZ, K.-D.: Projektforschung und Infrastrukturen im Bereich der Terminologie: Wie kann die Wirtschaft davon profitieren?; SCHRÖTER, F. u. U. MEYER: Entwicklung sprachlicher Handlungskompetenz in Englisch mit Hilfe eines Multimedia-Sprachlernsystems; KLEIN, A.: Der Einsatz von Sprachverarbeitungstools beim Sprachenlernen im Intranet; HAUER, M.: Knowledge Management braucht Terminologie Management; HEYER, G. u.a.: Texttechnologische Anwendungen am Beispiel Text Mining

Types

  • a 629
  • el 83
  • m 61
  • s 30
  • x 11
  • p 7
  • d 2
  • b 1
  • pat 1
  • r 1
