Search (65 results, page 1 of 4)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
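The three facet chips above are Solr/Lucene filter queries; in year_i:[2000 TO 2010} the square bracket includes 2000 while the curly brace excludes 2010 (half-open range syntax). As a minimal sketch, assuming a local Solr instance, a core named "literature", and a page size of 20 (all hypothetical, not taken from this page), such a filtered result list could be requested like this:

```python
# Hypothetical reconstruction of the request behind this result list.
# Endpoint, core name, and page size are assumptions; the fq values and
# the half-open year range are copied from the facet chips above.
import requests

params = {
    "q": "*:*",
    "fq": [                               # one filter query per facet chip
        'theme_ss:"Computerlinguistik"',
        'type_ss:"a"',
        "year_i:[2000 TO 2010}",          # 2000 inclusive, 2010 exclusive
    ],
    "rows": 20,                           # 65 results / 20 per page -> 4 pages
    "start": 0,                           # offset 0 = page 1
    "debugQuery": "true",                 # emits score explanations like those below
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/literature/select", params=params)
print(resp.json()["response"]["numFound"])  # expected: 65
```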
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.055082146 = product of:
      0.09180357 = sum of:
        0.04823906 = product of:
          0.19295624 = sum of:
            0.19295624 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.19295624 = score(doc=562,freq=2.0), product of:
                0.3433275 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04049623 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.027104476 = weight(_text_:j in 562) [ClassicSimilarity], result of:
          0.027104476 = score(doc=562,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.21064025 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.016460039 = product of:
          0.032920077 = sum of:
            0.032920077 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.032920077 = score(doc=562,freq=2.0), product of:
                0.1418109 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04049623 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
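    The indented tree above is Lucene's explain() output for a ClassicSimilarity (TF-IDF) score: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord factors that down-weight documents matching only part of the query. A small sketch recomputing this hit's 0.055082146 from the values shown (queryNorm is read off the tree rather than derived, since that would require the weights of all five query clauses):

```python
# A sketch recomputing hit 1's score from the explain tree above,
# using Lucene ClassicSimilarity's formulas.
import math

MAX_DOCS = 44218
QUERY_NORM = 0.04049623  # copied from the tree, shared by all query terms

def idf(doc_freq: int) -> float:
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
    query_weight = idf(doc_freq) * QUERY_NORM            # e.g. 0.3433275 for "3a"
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
    return query_weight * field_weight

s_3a = term_score(2.0, 24,   0.046875) * 0.25   # coord(1/4) -> 0.04823906
s_j  = term_score(2.0, 5010, 0.046875)          # 0.027104476
s_22 = term_score(2.0, 3622, 0.046875) * 0.5    # coord(1/2) -> 0.016460039

total = (s_3a + s_j + s_22) * 0.6               # coord(3/5): 3 of 5 clauses matched
print(f"{total:.9f}")                           # ~0.055082146, up to float rounding
```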
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.03
    0.030971339 = product of:
      0.051618896 = sum of:
        0.022360051 = weight(_text_:j in 1616) [ClassicSimilarity], result of:
          0.022360051 = score(doc=1616,freq=4.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.17376934 = fieldWeight in 1616, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.019657154 = weight(_text_:b in 1616) [ClassicSimilarity], result of:
          0.019657154 = score(doc=1616,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.13700598 = fieldWeight in 1616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.00960169 = product of:
          0.01920338 = sum of:
            0.01920338 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.01920338 = score(doc=1616,freq=2.0), product of:
                0.1418109 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04049623 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult a thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to suggest additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
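    The co-occurrence-plus-Hopfield approach described above can be pictured as spreading activation over a weighted term graph: the input term is clamped on, activation flows along association links, and terms whose activation survives a threshold are suggested, in either language. A minimal sketch with toy vocabulary and weights invented for illustration (not the authors' data or implementation):

```python
# Hopfield-style spreading activation over a term association matrix;
# all vocabulary and weights below are invented for illustration.
import numpy as np

terms = ["court", "judge", "法院", "法官", "contract"]
# Symmetric association weights, as they might be estimated by
# co-occurrence analysis of an aligned English/Chinese corpus.
W = np.array([
    [0.0, 0.6, 0.9, 0.3, 0.1],
    [0.6, 0.0, 0.4, 0.8, 0.0],
    [0.9, 0.4, 0.0, 0.5, 0.1],
    [0.3, 0.8, 0.5, 0.0, 0.0],
    [0.1, 0.0, 0.1, 0.0, 0.0],
])

def related_terms(seed: str, theta: float = 0.2, max_iter: int = 50):
    """Clamp the seed term's activation to 1 and spread it until stable."""
    mu = np.zeros(len(terms))
    mu[terms.index(seed)] = 1.0
    for _ in range(max_iter):
        act = W @ mu                          # net input from active neighbours
        nxt = np.where(act > theta, np.tanh(act), 0.0)
        nxt[terms.index(seed)] = 1.0          # keep the input term active
        if np.allclose(nxt, mu, atol=1e-6):   # converged
            break
        mu = nxt
    return sorted(((t, round(a, 3)) for t, a in zip(terms, mu) if a > 0),
                  key=lambda x: -x[1])

print(related_terms("court"))  # 法院 ranks highest after the seed itself
```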
  3. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.03
    0.030162472 = product of:
      0.07540618 = sum of:
        0.047972776 = weight(_text_:u in 5428) [ClassicSimilarity], result of:
          0.047972776 = score(doc=5428,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.3617784 = fieldWeight in 5428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=5428)
        0.027433401 = product of:
          0.054866802 = sum of:
            0.054866802 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.054866802 = score(doc=5428,freq=2.0), product of:
                0.1418109 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04049623 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    c't. 2000, H.22, S.220-229
  4. Nhongkai, S.N.; Bentz, H.-J.: Bilinguale Suche mittels Konzeptnetzen (2006) 0.03
    0.029807007 = product of:
      0.07451752 = sum of:
        0.036139302 = weight(_text_:j in 3914) [ClassicSimilarity], result of:
          0.036139302 = score(doc=3914,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.28085366 = fieldWeight in 3914, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0625 = fieldNorm(doc=3914)
        0.03837822 = weight(_text_:u in 3914) [ClassicSimilarity], result of:
          0.03837822 = score(doc=3914,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.28942272 = fieldWeight in 3914, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=3914)
      0.4 = coord(2/5)
    
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  5. Granitzer, M.: Statistische Verfahren der Textanalyse (2006) 0.03
    0.029158099 = product of:
      0.072895244 = sum of:
        0.03358094 = weight(_text_:u in 5809) [ClassicSimilarity], result of:
          0.03358094 = score(doc=5809,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.25324488 = fieldWeight in 5809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5809)
        0.039314307 = weight(_text_:b in 5809) [ClassicSimilarity], result of:
          0.039314307 = score(doc=5809,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.27401197 = fieldWeight in 5809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5809)
      0.4 = coord(2/5)
    
    Abstract
    This article gives an overview of statistical methods of text analysis in the context of the Semantic Web. By way of introduction, it discusses methods and common techniques for preprocessing texts, such as stemming or part-of-speech tagging. The representations introduced in this way serve as the basis for statistical feature analyses and for more advanced techniques such as information extraction and machine learning. These specialized techniques are presented in overview, with the most important aspects relating to the Semantic Web discussed in detail. The article closes with the application of the presented techniques to the creation and maintenance of ontologies and with pointers to further literature.
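    As an illustration of the preprocessing steps named above, here is a minimal sketch using NLTK (the library choice is illustrative; the article itself is tool-agnostic):

```python
# A minimal sketch of tokenization, stop-word removal, stemming, and
# part-of-speech tagging with NLTK; a real pipeline would be tuned to
# the corpus and language at hand.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

for pkg in ("punkt", "stopwords", "averaged_perceptron_tagger"):
    nltk.download(pkg, quiet=True)  # fetch tokenizer/tagger models on first run

def preprocess(text: str):
    tagged = nltk.pos_tag(nltk.word_tokenize(text))   # (token, POS tag) pairs
    stop = set(stopwords.words("english"))
    stemmer = PorterStemmer()
    return [(stemmer.stem(w.lower()), tag)            # stemmed term + POS tag
            for w, tag in tagged
            if w.isalpha() and w.lower() not in stop]

print(preprocess("Statistical methods support building ontologies for the Semantic Web."))
```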
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini u. A. Blumauer
  6. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.02
    0.022355257 = product of:
      0.05588814 = sum of:
        0.027104476 = weight(_text_:j in 5480) [ClassicSimilarity], result of:
          0.027104476 = score(doc=5480,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.21064025 = fieldWeight in 5480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=5480)
        0.028783662 = weight(_text_:u in 5480) [ClassicSimilarity], result of:
          0.028783662 = score(doc=5480,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.21706703 = fieldWeight in 5480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=5480)
      0.4 = coord(2/5)
    
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  7. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.02
    0.02111373 = product of:
      0.05278432 = sum of:
        0.03358094 = weight(_text_:u in 5483) [ClassicSimilarity], result of:
          0.03358094 = score(doc=5483,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.25324488 = fieldWeight in 5483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5483)
        0.01920338 = product of:
          0.03840676 = sum of:
            0.03840676 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
              0.03840676 = score(doc=5483,freq=2.0), product of:
                0.1418109 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04049623 = queryNorm
                0.2708308 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    10.12.2000 18:22:35
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  8. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.02
    0.02111373 = product of:
      0.05278432 = sum of:
        0.03358094 = weight(_text_:u in 156) [ClassicSimilarity], result of:
          0.03358094 = score(doc=156,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.25324488 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.01920338 = product of:
          0.03840676 = sum of:
            0.03840676 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.03840676 = score(doc=156,freq=2.0), product of:
                0.1418109 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04049623 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference on Conceptions of Library and Information Sciences, CoLIS 2005 Glasgow, UK, June 2005. Ed. by F. Crestani u. I. Ruthven
  9. Niemi, T.; Jämsen, J.: ¬A query language for discovering semantic associations, part II : sample queries and query evaluation (2007) 0.02
    0.018629381 = product of:
      0.046573453 = sum of:
        0.022587063 = weight(_text_:j in 580) [ClassicSimilarity], result of:
          0.022587063 = score(doc=580,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.17553353 = fieldWeight in 580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=580)
        0.023986388 = weight(_text_:u in 580) [ClassicSimilarity], result of:
          0.023986388 = score(doc=580,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.1808892 = fieldWeight in 580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=580)
      0.4 = coord(2/5)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  10. Niemi, T.; Jämsen, J.: ¬A query language for discovering semantic associations, part I : approach and formal definition of query primitives (2007) 0.02
    0.018629381 = product of:
      0.046573453 = sum of:
        0.022587063 = weight(_text_:j in 591) [ClassicSimilarity], result of:
          0.022587063 = score(doc=591,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.17553353 = fieldWeight in 591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=591)
        0.023986388 = weight(_text_:u in 591) [ClassicSimilarity], result of:
          0.023986388 = score(doc=591,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.1808892 = fieldWeight in 591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=591)
      0.4 = coord(2/5)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  11. Patrick, J.; Zhang, J.; Artola-Zubillaga, X.: ¬An architecture and query language for a federation of heterogeneous dictionary databases (2000) 0.02
    0.017888041 = product of:
      0.089440204 = sum of:
        0.089440204 = weight(_text_:j in 339) [ClassicSimilarity], result of:
          0.089440204 = score(doc=339,freq=4.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.69507736 = fieldWeight in 339, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.109375 = fieldNorm(doc=339)
      0.2 = coord(1/5)
    
  12. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.01
    0.014521505 = product of:
      0.036303762 = sum of:
        0.022587063 = weight(_text_:j in 5557) [ClassicSimilarity], result of:
          0.022587063 = score(doc=5557,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.17553353 = fieldWeight in 5557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5557)
        0.0137167005 = product of:
          0.027433401 = sum of:
            0.027433401 = weight(_text_:22 in 5557) [ClassicSimilarity], result of:
              0.027433401 = score(doc=5557,freq=2.0), product of:
                0.1418109 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04049623 = queryNorm
                0.19345059 = fieldWeight in 5557, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5557)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    26.12.2000 13:22:17
  13. Perez-Carballo, J.; Strzalkowski, T.: Natural language information retrieval : progress report (2000) 0.01
    0.012648756 = product of:
      0.06324378 = sum of:
        0.06324378 = weight(_text_:j in 6421) [ClassicSimilarity], result of:
          0.06324378 = score(doc=6421,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.4914939 = fieldWeight in 6421, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.109375 = fieldNorm(doc=6421)
      0.2 = coord(1/5)
    
  14. Vilar, P.; Dimec, J.: Krnjenje kot osnova nekaterih nekonvencionalnih metod poizvedovanja (2000) 0.01
    0.010841791 = product of:
      0.054208953 = sum of:
        0.054208953 = weight(_text_:j in 6331) [ClassicSimilarity], result of:
          0.054208953 = score(doc=6331,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.4212805 = fieldWeight in 6331, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.09375 = fieldNorm(doc=6331)
      0.2 = coord(1/5)
    
  15. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.01
    0.010841791 = product of:
      0.054208953 = sum of:
        0.054208953 = weight(_text_:j in 6386) [ClassicSimilarity], result of:
          0.054208953 = score(doc=6386,freq=8.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.4212805 = fieldWeight in 6386, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
      0.2 = coord(1/5)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new content-indexing methods against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). Natural-language retrieval was compared with Boolean retrieval. The two systems are Autonomy, from Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation proceeds from the real application context of text documentation at G+J. The tests are assessed from both statistical and qualitative points of view. One result of the tests is that DocCat shows some deficiencies compared with intellectual subject indexing that still have to be remedied, while Autonomy's natural-language retrieval is not usable in this setting and for the specific requirements of G+J text documentation.
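    The statistical side of a retrieval test of this kind typically rests on per-query precision and recall. A generic sketch with illustrative document IDs (the thesis' own query set and relevance judgements are not reproduced here):

```python
# Per-query precision/recall, the standard measures behind retrieval tests;
# the document IDs below are invented for illustration.
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {"d1", "d3", "d4", "d7"}   # what one system returned
relevant = {"d1", "d2", "d3"}          # what the assessors marked relevant
p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```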
  16. Diaz, I.; Morato, J.; Lloréns, J.: ¬An algorithm for term conflation based on tree structures (2002) 0.01
    0.010221737 = product of:
      0.051108688 = sum of:
        0.051108688 = weight(_text_:j in 246) [ClassicSimilarity], result of:
          0.051108688 = score(doc=246,freq=4.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.39718705 = fieldWeight in 246, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0625 = fieldNorm(doc=246)
      0.2 = coord(1/5)
    
  17. Klein, A.; Weis, U.; Stede, M.: ¬Der Einsatz von Sprachverarbeitungstools beim Sprachenlernen im Intranet (2000) 0.01
    0.009594555 = product of:
      0.047972776 = sum of:
        0.047972776 = weight(_text_:u in 5542) [ClassicSimilarity], result of:
          0.047972776 = score(doc=5542,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.3617784 = fieldWeight in 5542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
      0.2 = coord(1/5)
    
  18. Sidhom, S.; Hassoun, M.: Morpho-syntactic parsing to text mining environment : NP recognition model to knowledge visualization and information (2003) 0.01
    0.009594555 = product of:
      0.047972776 = sum of:
        0.047972776 = weight(_text_:u in 3546) [ClassicSimilarity], result of:
          0.047972776 = score(doc=3546,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.3617784 = fieldWeight in 3546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=3546)
      0.2 = coord(1/5)
    
    Source
    Tendencias de investigación en organización del conocimiento: IV Coloquio Internacional de Ciencias de la Documentación, VI Congreso del Capítulo Español de ISKO = Trends in knowledge organization research. Eds.: J.A. Frias u. C. Travieso
  19. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.01
    0.0090348255 = product of:
      0.045174126 = sum of:
        0.045174126 = weight(_text_:j in 5863) [ClassicSimilarity], result of:
          0.045174126 = score(doc=5863,freq=8.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.35106707 = fieldWeight in 5863, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5863)
      0.2 = coord(1/5)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new content-indexing methods against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). Natural-language retrieval was compared with Boolean retrieval. The two systems are Autonomy, from Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation proceeds from the real application context of text documentation at G+J. The tests are assessed from both statistical and qualitative points of view. One result of the tests is that DocCat shows some deficiencies compared with intellectual subject indexing that still have to be remedied, while Autonomy's natural-language retrieval is not usable in this setting and for the specific requirements of G+J text documentation.
  20. Xu, J.; Weischedel, R.; Licuanan, A.: Evaluation of an extraction-based approach to answering definitional questions (2004) 0.01
    0.0090348255 = product of:
      0.045174126 = sum of:
        0.045174126 = weight(_text_:j in 4107) [ClassicSimilarity], result of:
          0.045174126 = score(doc=4107,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.35106707 = fieldWeight in 4107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.078125 = fieldNorm(doc=4107)
      0.2 = coord(1/5)
    

Languages

  • e 38
  • d 26
  • slv 1