Search (273 results, page 1 of 14)

  • Filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09882007 = sum of:
      0.054013528 = product of:
        0.21605411 = sum of:
          0.21605411 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21605411 = score(doc=562,freq=2.0), product of:
              0.38442558 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04534384 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.04480654 = product of:
        0.06720981 = sum of:
          0.030349022 = weight(_text_:j in 562) [ClassicSimilarity], result of:
            0.030349022 = score(doc=562,freq=2.0), product of:
              0.14407988 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.04534384 = queryNorm
              0.21064025 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
          0.036860786 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.036860786 = score(doc=562,freq=2.0), product of:
              0.1587864 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04534384 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.6666667 = coord(2/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
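The per-term breakdowns shown with each hit are Lucene ClassicSimilarity "explain" output, i.e. classic tf-idf scoring. As a sketch, the `_text_:3a` clause of this first entry can be re-derived from the factors listed above (the formulas are Lucene's; the numbers are copied from the explain tree):

```python
import math

# ClassicSimilarity per-clause score:
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm      with tf = sqrt(freq)
#   score       = queryWeight * fieldWeight
# idf(docFreq=24, maxDocs=44218) = 1 + ln(maxDocs / (docFreq + 1)) ≈ 8.478011

idf = 8.478011
query_norm = 0.04534384
tf = math.sqrt(2.0)       # tf(freq=2.0)
field_norm = 0.046875     # fieldNorm(doc=562)

query_weight = idf * query_norm        # ≈ 0.38442558
field_weight = tf * idf * field_norm   # ≈ 0.56201804
score = query_weight * field_weight    # ≈ 0.21605411

# The enclosing 0.25 = coord(1/4) then scales the clause's contribution:
contribution = score * 0.25            # ≈ 0.054013528
```

The same arithmetic reproduces every leaf of the explain trees below; only idf, tf, and fieldNorm change per term and document.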
  2. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.06
    0.057361886 = product of:
      0.11472377 = sum of:
        0.11472377 = sum of:
          0.021646196 = weight(_text_:h in 1361) [ClassicSimilarity], result of:
            0.021646196 = score(doc=1361,freq=2.0), product of:
              0.11265446 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.04534384 = queryNorm
              0.19214681 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
          0.05007333 = weight(_text_:j in 1361) [ClassicSimilarity], result of:
            0.05007333 = score(doc=1361,freq=4.0), product of:
              0.14407988 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.04534384 = queryNorm
              0.34753868 = fieldWeight in 1361, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
          0.04300425 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
            0.04300425 = score(doc=1361,freq=2.0), product of:
              0.1587864 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04534384 = queryNorm
              0.2708308 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
      0.5 = coord(1/2)
    
    Date
    6. 1.1999 10:22:07
    Source
    Wissensorganisation im Wandel: Dezimalklassifikation - Thesaurusfragen - Warenklassifikation. Proc. 11. Jahrestagung der Gesellschaft für Klassifikation, Aachen, 29.6.-1.7.1987. Ed.: H.-J. Hermes and J. Hölzl
  3. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.06
    0.057175793 = product of:
      0.114351586 = sum of:
        0.114351586 = sum of:
          0.02473851 = weight(_text_:h in 835) [ClassicSimilarity], result of:
            0.02473851 = score(doc=835,freq=2.0), product of:
              0.11265446 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.04534384 = queryNorm
              0.21959636 = fieldWeight in 835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0625 = fieldNorm(doc=835)
          0.040465362 = weight(_text_:j in 835) [ClassicSimilarity], result of:
            0.040465362 = score(doc=835,freq=2.0), product of:
              0.14407988 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.04534384 = queryNorm
              0.28085366 = fieldWeight in 835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.0625 = fieldNorm(doc=835)
          0.049147714 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
            0.049147714 = score(doc=835,freq=2.0), product of:
              0.1587864 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04534384 = queryNorm
              0.30952093 = fieldWeight in 835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=835)
      0.5 = coord(1/2)
    
    Date
    29.12.2022 18:22:55
    Source
    c't. 2023, H.1, S.46- [https://www.heise.de/select/ct/2023/1/2233908274346530870]
  4. Somers, H.: Example-based machine translation : Review article (1999) 0.04
    0.043100297 = product of:
      0.086200595 = sum of:
        0.086200595 = product of:
          0.12930089 = sum of:
            0.043292392 = weight(_text_:h in 6672) [ClassicSimilarity], result of:
              0.043292392 = score(doc=6672,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.38429362 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
            0.0860085 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.0860085 = score(doc=6672,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  5. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.04
    0.043100297 = product of:
      0.086200595 = sum of:
        0.086200595 = product of:
          0.12930089 = sum of:
            0.043292392 = weight(_text_:h in 3117) [ClassicSimilarity], result of:
              0.043292392 = score(doc=3117,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.38429362 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
            0.0860085 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.0860085 = score(doc=3117,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  6. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.04
    0.037338786 = product of:
      0.07467757 = sum of:
        0.07467757 = product of:
          0.11201635 = sum of:
            0.0505817 = weight(_text_:j in 1463) [ClassicSimilarity], result of:
              0.0505817 = score(doc=1463,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.35106707 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
            0.061434645 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.061434645 = score(doc=1463,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  7. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.04
    0.036943115 = product of:
      0.07388623 = sum of:
        0.07388623 = product of:
          0.11082934 = sum of:
            0.037107762 = weight(_text_:h in 5429) [ClassicSimilarity], result of:
              0.037107762 = score(doc=5429,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.32939452 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
            0.07372157 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.07372157 = score(doc=5429,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  8. Snajder, J.: Distributional semantics of multi-word expressions (2013) 0.03
    0.034152158 = product of:
      0.068304315 = sum of:
        0.068304315 = product of:
          0.102456465 = sum of:
            0.030923137 = weight(_text_:h in 2868) [ClassicSimilarity], result of:
              0.030923137 = score(doc=2868,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.27449545 = fieldWeight in 2868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2868)
            0.07153333 = weight(_text_:j in 2868) [ClassicSimilarity], result of:
              0.07153333 = score(doc=2868,freq=4.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.4964838 = fieldWeight in 2868, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2868)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    Slides from a presentation at the COST Action IC1207 PARSEME Meeting, Warsaw, September 16, 2013. Cf. the paper: Snajder, J., P. Almic: Modeling semantic compositionality of Croatian multiword expressions. In: Informatica. 39(2015) H.3, S.301-309.
  9. Krause, J.: Was leisten informationslinguistische Komponenten von Referenz-Retrievalsystemen für Massendaten? : Von der 'Pragmatik im Computer' zur Pragmatikanalyse als Designgrundlage (1986) 0.03
    0.032601938 = product of:
      0.065203875 = sum of:
        0.065203875 = product of:
          0.097805806 = sum of:
            0.037107762 = weight(_text_:h in 7395) [ClassicSimilarity], result of:
              0.037107762 = score(doc=7395,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.32939452 = fieldWeight in 7395, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7395)
            0.060698044 = weight(_text_:j in 7395) [ClassicSimilarity], result of:
              0.060698044 = score(doc=7395,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.4212805 = fieldWeight in 7395, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7395)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Deutscher Dokumentartag 1986, Freiburg, 8.-10.10.1986: Bedarfsorientierte Fachinformation: Methoden und Techniken am Arbeitsplatz. Ed.: H. Strohl-Goebel
  10. Rolland, M.T.: Sprachverarbeitung durch Logotechnik : Sprachtheorie, Methodik, Anwendungen (1994) 0.03
    0.032601938 = product of:
      0.065203875 = sum of:
        0.065203875 = product of:
          0.097805806 = sum of:
            0.037107762 = weight(_text_:h in 5365) [ClassicSimilarity], result of:
              0.037107762 = score(doc=5365,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.32939452 = fieldWeight in 5365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5365)
            0.060698044 = weight(_text_:j in 5365) [ClassicSimilarity], result of:
              0.060698044 = score(doc=5365,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.4212805 = fieldWeight in 5365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5365)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Footnote
    Reviewed in: Nachrichten für Dokumentation 46(1995) H.2, S.130-132 (E. Lutterbeck); Knowledge organization 23(1996) no.3, S.147-156 (detailed review as a separate contribution by J. Heinrichs)
  11. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.03
    0.03092248 = product of:
      0.06184496 = sum of:
        0.06184496 = sum of:
          0.015306172 = weight(_text_:h in 1616) [ClassicSimilarity], result of:
            0.015306172 = score(doc=1616,freq=4.0), product of:
              0.11265446 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.04534384 = queryNorm
              0.13586831 = fieldWeight in 1616, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
          0.025036665 = weight(_text_:j in 1616) [ClassicSimilarity], result of:
            0.025036665 = score(doc=1616,freq=4.0), product of:
              0.14407988 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.04534384 = queryNorm
              0.17376934 = fieldWeight in 1616, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
          0.021502124 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
            0.021502124 = score(doc=1616,freq=2.0), product of:
              0.1587864 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04534384 = queryNorm
              0.1354154 = fieldWeight in 1616, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
      0.5 = coord(1/2)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a 1999 report from Computer Economics, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers, versus 150% growth among non-English speakers, over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second-largest at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatias.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). 
However, research that crosses language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to produce additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by means of the Hopfield network, based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in the language that differs from that of the input term. The direct translation of the input term can also be retrieved in most cases.
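The co-occurrence-analysis step the abstract describes can be sketched as follows. This is a minimal illustration only, not the authors' implementation (which layers a Hopfield network on top of the co-occurrence weights): the tiny aligned corpus, the term sets, and the simple co(a,b)/occ(a) association weight are invented for the example.

```python
from collections import Counter
from itertools import product

# Toy parallel corpus (invented): each pair holds the English and Chinese
# terms extracted from one aligned sentence.
pairs = [
    ({"court", "justice"}, {"法院", "司法"}),
    ({"court", "judge"}, {"法院", "法官"}),
    ({"justice", "department"}, {"司法", "部门"}),
]

occ = Counter()   # per-term occurrence counts
co = Counter()    # cross-lingual co-occurrence counts
for en_terms, zh_terms in pairs:
    for t in en_terms | zh_terms:
        occ[t] += 1
    # English and Chinese terms from the same aligned pair co-occur.
    for e, z in product(en_terms, zh_terms):
        co[(e, z)] += 1
        co[(z, e)] += 1

def associations(term, k=3):
    """Rank cross-lingual associations of `term` by co(term, b) / occ(term)."""
    scored = {b: c / occ[term] for (a, b), c in co.items() if a == term}
    return sorted(scored.items(), key=lambda kv: -kv[1])[:k]

# "court" appears in two aligned pairs, both containing 法院,
# so 法院 gets the maximal association weight 1.0.
```

Unknown words (names of persons, organizations, events) fall out of this scheme naturally, since no dictionary lookup is involved, only alignment statistics.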
  12. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.03
    0.03078593 = product of:
      0.06157186 = sum of:
        0.06157186 = product of:
          0.092357785 = sum of:
            0.030923137 = weight(_text_:h in 5428) [ClassicSimilarity], result of:
              0.030923137 = score(doc=5428,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.27449545 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
            0.061434645 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.061434645 = score(doc=5428,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  13. Ludwig, B.; Reischer, J.: Informationslinguistik in Regensburg (2012) 0.03
    0.027321724 = product of:
      0.05464345 = sum of:
        0.05464345 = product of:
          0.08196517 = sum of:
            0.02473851 = weight(_text_:h in 555) [ClassicSimilarity], result of:
              0.02473851 = score(doc=555,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21959636 = fieldWeight in 555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=555)
            0.05722666 = weight(_text_:j in 555) [ClassicSimilarity], result of:
              0.05722666 = score(doc=555,freq=4.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.39718705 = fieldWeight in 555, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0625 = fieldNorm(doc=555)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://www.degruyter.com/view/j/iwp.2012.63.issue-5/iwp-2012-0065/iwp-2012-0065.xml?format=INT.
    Source
    Information - Wissenschaft und Praxis. 63(2012) H.5, S.292-296
  14. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    0.027006764 = product of:
      0.054013528 = sum of:
        0.054013528 = product of:
          0.21605411 = sum of:
            0.21605411 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21605411 = score(doc=862,freq=2.0), product of:
                0.38442558 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04534384 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  15. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.03
    0.026417308 = product of:
      0.052834615 = sum of:
        0.052834615 = product of:
          0.07925192 = sum of:
            0.018553881 = weight(_text_:h in 6386) [ClassicSimilarity], result of:
              0.018553881 = score(doc=6386,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.16469726 = fieldWeight in 6386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6386)
            0.060698044 = weight(_text_:j in 6386) [ClassicSimilarity], result of:
              0.060698044 = score(doc=6386,freq=8.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.4212805 = fieldWeight in 6386, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6386)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new subject-indexing methods against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). Natural-language retrieval was compared with Boolean retrieval. The two systems are Autonomy, by Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval. DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of text documentation at G+J. The tests are assessed by both statistical and qualitative criteria. One result of the tests is that DocCat exhibits some shortcomings compared with intellectual subject indexing that still need to be remedied, whereas Autonomy's natural-language retrieval, in this setting and for the specific requirements of G+J's text documentation, is not usable as is.
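The statistical side of such retrieval tests rests on the standard precision and recall measures; a minimal sketch, with invented document sets for illustration:

```python
def precision_recall(retrieved, relevant):
    """Classic retrieval-test measures over sets of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Invented example: a query retrieves 4 documents, 3 of which are
# among the 6 documents judged relevant.
p, r = precision_recall({"d1", "d2", "d3", "d9"},
                        {"d1", "d2", "d3", "d4", "d5", "d6"})
# p = 3/4, r = 3/6
```

Qualitative assessment, as used in the study alongside these figures, has no such closed formula; it depends on inspecting the indexing output itself.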
    Source
    nfd Information - Wissenschaft und Praxis. 52(2001) H.5, S.251-262
  16. Godby, J.: WordSmith research project bridges gap between tokens and indexes (1998) 0.03
    0.026137149 = product of:
      0.052274298 = sum of:
        0.052274298 = product of:
          0.078411445 = sum of:
            0.035407193 = weight(_text_:j in 4729) [ClassicSimilarity], result of:
              0.035407193 = score(doc=4729,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.24574696 = fieldWeight in 4729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4729)
            0.04300425 = weight(_text_:22 in 4729) [ClassicSimilarity], result of:
              0.04300425 = score(doc=4729,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.2708308 = fieldWeight in 4729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4729)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    OCLC newsletter. 1998, no.234, Jul/Aug, S.22-24
  17. Dietze, J.; Völkel, H.: Verifikation einer Methode der lexikalischen Semantik : zur computergestützten Bestimmung der semantischen Konsistenz und des semantischen Abstands (1992) 0.03
    
    Source
    Nachrichten für Dokumentation. 43(1992) H.3, S.159-164
  18. Experimentelles und praktisches Information Retrieval : Festschrift für Gerhard Lustig (1992) 0.02
    
    Content
    Contains the contributions:
    • SALTON, G.: Effective text understanding in information retrieval
    • KRAUSE, J.: Intelligentes Information Retrieval
    • FUHR, N.: Konzepte zur Gestaltung zukünftiger Information-Retrieval-Systeme
    • HÜTHER, H.: Überlegungen zu einem mathematischen Modell für die Type-Token-, die Grundform-Token- und die Grundform-Type-Relation
    • KNORZ, G.: Automatische Generierung inferentieller Links in und zwischen Hyperdokumenten
    • KONRAD, E.: Zur Effektivitätsbewertung von Information-Retrieval-Systemen
    • HENRICHS, N.: Retrievalunterstützung durch automatisch generierte Wortfelder
    • LÜCK, W., W. RITTBERGER u. M. SCHWANTNER: Der Einsatz des Automatischen Indexierungs- und Retrieval-Systems (AIR) im Fachinformationszentrum Karlsruhe
    • REIMER, U.: Verfahren der Automatischen Indexierung. Benötigtes Vorwissen und Ansätze zu seiner automatischen Akquisition: Ein Überblick
    • ENDRES-NIGGEMEYER, B.: Dokumentrepräsentation: Ein individuelles prozedurales Modell des Abstracting, des Indexierens und Klassifizierens
    • SEELBACH, D.: Zur Entwicklung von zwei- und mehrsprachigen lexikalischen Datenbanken und Terminologiedatenbanken
    • ZIMMERMANN, H.: Der Einfluß der Sprachbarrieren in Europa und Möglichkeiten zu ihrer Minderung
    • LENDERS, W.: Wörter zwischen Welt und Wissen
    • PANYR, J.: Frames, Thesauri und automatische Klassifikation (Clusteranalyse)
    • HAHN, U.: Forschungsstrategien und Erkenntnisinteressen in der anwendungsorientierten automatischen Sprachverarbeitung. Überlegungen zu einer ingenieurorientierten Computerlinguistik
    • KUHLEN, R.: Hypertext und Information Retrieval - mehr als Browsing und Suche
  19. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.02
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
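    The linking task the abstract describes can be sketched minimally as a lookup of a normalized person-name mention against an inventory of known entities, with a missing key standing in for the "no correct resolution" (NIL) case. This is a toy illustration only, not the authors' system; the inventory contents and normalization rule below are hypothetical.

    ```python
    def link_person(mention, inventory):
        """Toy entity linking: normalize a person-name mention and look it
        up in an inventory of known entities; None plays the role of a
        NIL reference (no correct resolution exists)."""
        key = mention.casefold().replace("-", " ").strip()
        return inventory.get(key)

    # Hypothetical inventory keyed by normalized English names
    inventory = {"nikola tesla": "https://en.wikipedia.org/wiki/Nikola_Tesla"}

    print(link_person("Nikola-Tesla", inventory))    # resolves to the Wikipedia URL
    print(link_person("Unknown Person", inventory))  # None: a NIL reference
    ```

    A real system of the kind evaluated in the article must additionally handle transliteration across scripts and ambiguity between candidates, which is where the reported accuracy differences between languages arise.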
  20. Bager, J.: Teurer Dolmetscher : Forschungsprojekt Verbmobil - Rückblick und Ausblick (2001) 0.02
    
    Source
    c't. 2001, H.26, S.xxx
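The relevance figure attached to each entry (0.03, 0.02, …) comes from Lucene's classic TF-IDF similarity. As a minimal sketch, assuming standard Lucene ClassicSimilarity, the per-term fieldWeight factor can be reproduced; the sample values are taken from this result list's scoring output for the query term `j` (freq=2.0, docFreq=5010, maxDocs=44218, fieldNorm=0.046875):

```python
import math

def field_weight(freq, doc_freq, max_docs, field_norm):
    # Lucene ClassicSimilarity per-term weight inside a field:
    #   tf(freq) = sqrt(freq)
    #   idf      = 1 + ln(maxDocs / (docFreq + 1))
    #   weight   = tf * idf * fieldNorm
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return tf * idf * field_norm

print(round(field_weight(2.0, 5010, 44218, 0.046875), 5))  # → 0.21064
```

The headline score per entry then combines such per-term weights with the queryNorm and coord factors, which is what the nested product/sum/coord lines of the explain output express.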
