Search (124 results, page 1 of 7)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.17
    0.17240648 = product of:
      0.28734413 = sum of:
        0.06751644 = product of:
          0.20254931 = sum of:
            0.20254931 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20254931 = score(doc=562,freq=2.0), product of:
                0.36039644 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042509552 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.20254931 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.20254931 = score(doc=562,freq=2.0), product of:
            0.36039644 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042509552 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.017278373 = product of:
          0.034556746 = sum of:
            0.034556746 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.034556746 = score(doc=562,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
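
The explain tree above is ordinary Lucene ClassicSimilarity (TF-IDF) output. As a minimal sketch, assuming Lucene's published formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), per-clause score = queryWeight * fieldWeight, clauses combined via coord factors), the Python below reproduces the numbers shown for entry 1; the constants are read off the tree, and none of the names are a Lucene API:

    import math

    def tf(freq):
        # ClassicSimilarity term frequency: square-root damping
        return math.sqrt(freq)                            # 1.4142135 for freq=2.0

    def idf(doc_freq, max_docs):
        # ClassicSimilarity inverse document frequency
        return 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24

    QUERY_NORM = 0.042509552                              # queryNorm from the tree
    FIELD_NORM = 0.046875                                 # fieldNorm(doc=562)

    idf_rare = idf(24, 44218)                             # terms "3a" and "2f"
    rare_clause = (idf_rare * QUERY_NORM) * (tf(2.0) * idf_rare * FIELD_NORM)
    # queryWeight = 0.36039644, fieldWeight = 0.56201804, product = 0.20254931

    idf_22 = idf(3622, 44218)                             # term "22": 3.5018296
    clause_22 = (idf_22 * QUERY_NORM) * (tf(2.0) * idf_22 * FIELD_NORM)
    # = 0.034556746

    # Combine the clauses with their coord factors, then the outer coord(3/5):
    score = (rare_clause / 3 + rare_clause + clause_22 / 2) * 3 / 5
    print(round(score, 8))                                # 0.17240647, i.e. the
                                                          # 0.17240648 above up to
                                                          # float32 rounding
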
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.11
    0.1080263 = product of:
      0.27006575 = sum of:
        0.06751644 = product of:
          0.20254931 = sum of:
            0.20254931 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.20254931 = score(doc=862,freq=2.0), product of:
                0.36039644 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042509552 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.20254931 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.20254931 = score(doc=862,freq=2.0), product of:
            0.36039644 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042509552 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.4 = coord(2/5)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    0.087931074 = product of:
      0.21982768 = sum of:
        0.20254931 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.20254931 = score(doc=563,freq=2.0), product of:
            0.36039644 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042509552 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.017278373 = product of:
          0.034556746 = sum of:
            0.034556746 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.034556746 = score(doc=563,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
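
The link targets in entries 1-3 arrived percent-encoded, partly wrapped in Google redirect URLs; the decoded forms are given above. Recovering them takes only the standard library; the helper below is our own sketch, not part of the catalogue software:

    from urllib.parse import parse_qs, unquote, urlparse

    def recover_target(link):
        # Unwrap a Google redirect (the target sits in the ?url= parameter),
        # then percent-decode the result (%3A%2F%2F -> ://).
        if "google." in urlparse(link).netloc:
            link = parse_qs(urlparse(link).query).get("url", [link])[0]
        return unquote(link)

    print(recover_target("https%3A%2F%2Farxiv.org%2Fabs%2F2212.06721"))
    # -> https://arxiv.org/abs/2212.06721
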
  4. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.04
    0.039774638 = product of:
      0.09943659 = sum of:
        0.08609679 = weight(_text_:inc in 5863) [ClassicSimilarity], result of:
          0.08609679 = score(doc=5863,freq=2.0), product of:
            0.2573945 = queryWeight, product of:
              6.0549803 = idf(docFreq=281, maxDocs=44218)
              0.042509552 = queryNorm
            0.33449355 = fieldWeight in 5863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0549803 = idf(docFreq=281, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5863)
        0.013339795 = product of:
          0.02667959 = sum of:
            0.02667959 = weight(_text_:management in 5863) [ClassicSimilarity], result of:
              0.02667959 = score(doc=5863,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.18620178 = fieldWeight in 5863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5863)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new content-indexing procedures against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J), comparing natural-language retrieval with Boolean retrieval. The two systems are Autonomy, by Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes from an intellectually created training template. Methodologically, the evaluation proceeds from the real application context of G+J's text documentation, and the tests are assessed from both statistical and qualitative viewpoints. One result is that DocCat shows some shortcomings relative to intellectual indexing that still need to be remedied, while Autonomy's natural-language retrieval is not usable as is in this setting and for the specific requirements of G+J's text documentation.
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
  5. Frappaolo, C.: Artificial intelligence and text retrieval : a current perspective on the state of the art (1992) 0.03
    0.034438718 = product of:
      0.17219359 = sum of:
        0.17219359 = weight(_text_:inc in 7097) [ClassicSimilarity], result of:
          0.17219359 = score(doc=7097,freq=2.0), product of:
            0.2573945 = queryWeight, product of:
              6.0549803 = idf(docFreq=281, maxDocs=44218)
              0.042509552 = queryNorm
            0.6689871 = fieldWeight in 7097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0549803 = idf(docFreq=281, maxDocs=44218)
              0.078125 = fieldNorm(doc=7097)
      0.2 = coord(1/5)
    
    Imprint
    Medford, NJ : Learned Information Inc.
  6. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.02
    0.020663233 = product of:
      0.103316166 = sum of:
        0.103316166 = weight(_text_:inc in 6386) [ClassicSimilarity], result of:
          0.103316166 = score(doc=6386,freq=2.0), product of:
            0.2573945 = queryWeight, product of:
              6.0549803 = idf(docFreq=281, maxDocs=44218)
              0.042509552 = queryNorm
            0.40139228 = fieldWeight in 6386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0549803 = idf(docFreq=281, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
      0.2 = coord(1/5)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new content-indexing procedures against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J), comparing natural-language retrieval with Boolean retrieval. The two systems are Autonomy, by Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes from an intellectually created training template. Methodologically, the evaluation proceeds from the real application context of G+J's text documentation, and the tests are assessed from both statistical and qualitative viewpoints. One result is that DocCat shows some shortcomings relative to intellectual indexing that still need to be remedied, while Autonomy's natural-language retrieval is not usable as is in this setting and for the specific requirements of G+J's text documentation.
  7. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.01
    0.010501078 = product of:
      0.05250539 = sum of:
        0.05250539 = sum of:
          0.032347288 = weight(_text_:management in 3807) [ClassicSimilarity], result of:
            0.032347288 = score(doc=3807,freq=6.0), product of:
              0.14328322 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.042509552 = queryNorm
              0.22575769 = fieldWeight in 3807, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
          0.0201581 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.0201581 = score(doc=3807,freq=2.0), product of:
              0.14886121 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042509552 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
      0.2 = coord(1/5)
    
    Abstract
    Purpose Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria which included that definitions should be unique instances. From 2006 onwards, these authors could not identify new unique instances of definitions with repetitive usage of such definition instances. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues. Design/methodology/approach The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded to be KM so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods as extended through Knowledge Discovery and Text Analysis methods. Findings By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 67(2015) no.2, S.203-229
  8. Warner, A.J.: Natural language processing (1987) 0.01
    0.009215132 = product of:
      0.04607566 = sum of:
        0.04607566 = product of:
          0.09215132 = sum of:
            0.09215132 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.09215132 = score(doc=337,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  9. Bernth, A.; McCord, M.; Warburton, K.: Terminology extraction for global content management (2003) 0.01
    0.008537469 = product of:
      0.042687345 = sum of:
        0.042687345 = product of:
          0.08537469 = sum of:
            0.08537469 = weight(_text_:management in 4122) [ClassicSimilarity], result of:
              0.08537469 = score(doc=4122,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.5958457 = fieldWeight in 4122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.125 = fieldNorm(doc=4122)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  10. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.01
    0.008063241 = product of:
      0.0403162 = sum of:
        0.0403162 = product of:
          0.0806324 = sum of:
            0.0806324 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.0806324 = score(doc=3164,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  11. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.008063241 = product of:
      0.0403162 = sum of:
        0.0403162 = product of:
          0.0806324 = sum of:
            0.0806324 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.0806324 = score(doc=4506,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    8.10.2000 11:52:22
  12. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.008063241 = product of:
      0.0403162 = sum of:
        0.0403162 = product of:
          0.0806324 = sum of:
            0.0806324 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.0806324 = score(doc=6672,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    31. 7.1996 9:22:19
  13. New tools for human translators (1997) 0.01
    0.008063241 = product of:
      0.0403162 = sum of:
        0.0403162 = product of:
          0.0806324 = sum of:
            0.0806324 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.0806324 = score(doc=1179,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    31. 7.1996 9:22:19
  14. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.01
    0.008063241 = product of:
      0.0403162 = sum of:
        0.0403162 = product of:
          0.0806324 = sum of:
            0.0806324 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.0806324 = score(doc=3117,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    28. 2.1999 10:48:22
  15. ¬Der Student aus dem Computer (2023) 0.01
    0.008063241 = product of:
      0.0403162 = sum of:
        0.0403162 = product of:
          0.0806324 = sum of:
            0.0806324 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.0806324 = score(doc=1079,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    27. 1.2023 16:22:55
  16. Kettunen, K.: Reductive and generative approaches to management of morphological variation of keywords in monolingual information retrieval : an overview (2009) 0.01
    0.007842166 = product of:
      0.03921083 = sum of:
        0.03921083 = product of:
          0.07842166 = sum of:
            0.07842166 = weight(_text_:management in 2835) [ClassicSimilarity], result of:
              0.07842166 = score(doc=2835,freq=12.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.54731923 = fieldWeight in 2835, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2835)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The purpose of this article is to discuss advantages and disadvantages of various means to manage morphological variation of keywords in monolingual information retrieval. Design/methodology/approach - The authors present a compilation of query results from 11 mostly European languages and a new general classification of the language dependent techniques for management of morphological variation. Variants of the different techniques are compared in some detail in terms of retrieval effectiveness and other criteria. The paper consists mainly of an overview of different management methods for keyword variation in information retrieval. Typical IR retrieval results of 11 languages and a new classification for keyword management methods are also presented. Findings - The main results of the paper are an overall comparison of reductive and generative keyword management methods in terms of retrieval effectiveness and other broader criteria. Originality/value - The paper is of value to anyone who wants to get an overall picture of keyword management techniques used in IR.
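
A cross-check on the tf values: entries 1, 7, and 16 show freq 2, 6, and 12 with tf 1.4142135, 2.4494898, and 3.4641016, exactly ClassicSimilarity's square-root damping. A quick sketch to confirm:

    import math

    # tf values printed in the explain trees above, keyed by raw term frequency
    for freq, shown in [(2.0, 1.4142135), (6.0, 2.4494898), (12.0, 3.4641016)]:
        assert math.isclose(math.sqrt(freq), shown, rel_tol=1e-6)
        print(f"tf(freq={freq}) = {math.sqrt(freq):.7f}")
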
  17. Sheridan, P.; Smeaton, A.F.: ¬The application of morpho-syntactic language processing to effective phrase matching (1992) 0.01
    0.007470285 = product of:
      0.037351426 = sum of:
        0.037351426 = product of:
          0.07470285 = sum of:
            0.07470285 = weight(_text_:management in 6575) [ClassicSimilarity], result of:
              0.07470285 = score(doc=6575,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.521365 = fieldWeight in 6575, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6575)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 28(1992) no.3, S.349-369
  18. Salton, G.; Buckley, C.; Smith, M.: On the application of syntactic methodologies in automatic text analysis (1990) 0.01
    0.007470285 = product of:
      0.037351426 = sum of:
        0.037351426 = product of:
          0.07470285 = sum of:
            0.07470285 = weight(_text_:management in 7864) [ClassicSimilarity], result of:
              0.07470285 = score(doc=7864,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.521365 = fieldWeight in 7864, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7864)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 26(1990) no.1, S.73-92
  19. Haas, S.W.: ¬A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.01
    0.007470285 = product of:
      0.037351426 = sum of:
        0.037351426 = product of:
          0.07470285 = sum of:
            0.07470285 = weight(_text_:management in 8071) [ClassicSimilarity], result of:
              0.07470285 = score(doc=8071,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.521365 = fieldWeight in 8071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8071)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 26(1990), S.615-628
  20. Atlam, E.S.: Similarity measurement using term negative weight and its application to word similarity (2000) 0.01
    0.007470285 = product of:
      0.037351426 = sum of:
        0.037351426 = product of:
          0.07470285 = sum of:
            0.07470285 = weight(_text_:management in 4844) [ClassicSimilarity], result of:
              0.07470285 = score(doc=4844,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.521365 = fieldWeight in 4844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4844)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 36(2000) no.5, S.717-736

Languages

  • e 101
  • d 22
  • m 1

Types

  • a 105
  • m 10
  • s 7
  • el 5
  • x 3
  • p 2
  • d 1