Search (141 results, page 2 of 8)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  1. Figuerola, C.G.; Gómez, R.; López de San Román, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.02
    0.018393612 = product of:
      0.036787223 = sum of:
        0.036787223 = product of:
          0.110361665 = sum of:
            0.110361665 = weight(_text_:n in 6501) [ClassicSimilarity], result of:
              0.110361665 = score(doc=6501,freq=2.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.57165027 = fieldWeight in 6501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6501)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
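    For orientation: the tree above is Lucene's ClassicSimilarity (TF-IDF) explain output, read bottom-up. Reassembled into a single formula, and assuming the standard ClassicSimilarity definitions of tf and idf, the first entry's score works out as

    \[
    \begin{aligned}
    \mathrm{score} &= \underbrace{\tfrac{1}{2}\cdot\tfrac{1}{3}}_{\text{coord}} \cdot \underbrace{\mathrm{idf}\cdot\mathrm{queryNorm}}_{\text{queryWeight}} \cdot \underbrace{\sqrt{\text{freq}}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm}}_{\text{fieldWeight}} \\
    &= 0.5 \cdot 0.33333334 \cdot (4.3116565 \cdot 0.044775832) \cdot (\sqrt{2} \cdot 4.3116565 \cdot 0.09375) \approx 0.0184,
    \end{aligned}
    \]

    with \(\mathrm{idf} = 1 + \ln\bigl(\mathrm{maxDocs}/(\mathrm{docFreq}+1)\bigr) = 1 + \ln(44218/1612) \approx 4.3117\). The remaining score trees decompose the same way.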
    
  2. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.02
    0.017137237 = product of:
      0.034274474 = sum of:
        0.034274474 = product of:
          0.10282342 = sum of:
            0.10282342 = weight(_text_:n in 2688) [ClassicSimilarity], result of:
              0.10282342 = score(doc=2688,freq=10.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.53260374 = fieldWeight in 2688, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2688)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The widespread availability of the Internet and the variety of Internet-based applications have led to a significant increase in the number of web pages. Determining the behavior of search engine users has become a critical step in enhancing search engine performance. Search engine user behavior can be determined by content-based or content-ignorant algorithms. Although many content-ignorant studies have been performed to automatically identify new topics, previous results have demonstrated that spelling errors can cause significant errors in topic shift estimates. In this study, we focused on minimizing the number of wrong estimates caused by spelling errors. We developed a new hybrid algorithm combining character n-gram and neural network methodologies, and compared the experimental results with results from previous studies. For the FAST and Excite datasets, the proposed algorithm improved topic shift estimates by 6.987% and 2.639%, respectively. Moreover, we analyzed the performance of the character n-gram method in several respects, including a comparison with the Levenshtein edit distance method. The experimental results demonstrated that the character n-gram method outperformed the Levenshtein edit distance method in terms of topic identification.
    Object
    n-grams
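    As a reading aid, here is a minimal sketch of the character n-gram similarity idea this abstract sets against Levenshtein edit distance (illustrative names, not the authors' code): spelling variants of a query term share most of their n-grams, so consecutive queries that differ only by a typo score high and need not be flagged as a topic shift.

      def char_ngrams(text, n=3):
          """Set of character n-grams; padding lets word edges form n-grams too."""
          text = f" {text.strip().lower()} "
          return {text[i:i + n] for i in range(len(text) - n + 1)}

      def ngram_similarity(a, b, n=3):
          """Dice coefficient over character n-gram sets; robust to small spelling errors."""
          ga, gb = char_ngrams(a, n), char_ngrams(b, n)
          if not ga or not gb:
              return 0.0
          return 2 * len(ga & gb) / (len(ga) + len(gb))

      # A spelling variant scores far higher than an unrelated follow-up query would:
      print(ngram_similarity("britney spears", "brittany spears"))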
  3. Patrick, J.; Zhang, J.; Artola-Zubillaga, X.: ¬An architecture and query language for a federation of heterogeneous dictionary databases (2000) 0.02
    0.01648203 = product of:
      0.03296406 = sum of:
        0.03296406 = product of:
          0.09889217 = sum of:
            0.09889217 = weight(_text_:j in 339) [ClassicSimilarity], result of:
              0.09889217 = score(doc=339,freq=4.0), product of:
                0.14227505 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044775832 = queryNorm
                0.69507736 = fieldWeight in 339, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.109375 = fieldNorm(doc=339)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  4. Warner, A.J.: Natural language processing (1987) 0.02
    0.016177353 = product of:
      0.032354705 = sum of:
        0.032354705 = product of:
          0.097064115 = sum of:
            0.097064115 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.097064115 = score(doc=337,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  5. Chen, L.; Fang, H.: ¬An automatic method for extracting innovative ideas based on the Scopus® database (2019) 0.02
    0.015328009 = product of:
      0.030656017 = sum of:
        0.030656017 = product of:
          0.09196805 = sum of:
            0.09196805 = weight(_text_:n in 5310) [ClassicSimilarity], result of:
              0.09196805 = score(doc=5310,freq=8.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.47637522 = fieldWeight in 5310, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5310)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The novelty of the knowledge claims in a research paper can be considered an evaluation criterion for papers, supplementing citations. To provide a foundation for research evaluation from the perspective of innovativeness, we propose an automatic approach for extracting innovative ideas from the abstracts of technology and engineering papers. The approach extracts N-grams as candidates based on part-of-speech tagging and checks them against the Scopus® database to determine whether they have ever been presented previously. Moreover, we discuss the distribution of innovative ideas across different abstract structures. To improve performance by excluding noisy N-grams, a list of stopwords and a list of research description characteristics were developed. We selected abstracts of articles published from 2011 to 2017 on the topic of semantic analysis as the experimental texts. Excluding noisy N-grams, considering the distribution of innovative ideas in abstracts, and suitably combining N-grams can effectively improve the performance of automatic innovative idea extraction. Unlike co-word and co-citation analysis, innovative-idea extraction aims to identify what distinguishes a paper from all previously published papers.
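    A rough sketch of the candidate-extraction step described above; the POS pattern, stopword list, and tagger choice are illustrative assumptions rather than the authors' setup, and the Scopus® novelty check is stubbed out.

      import nltk  # assumes the punkt and averaged_perceptron_tagger data are installed

      STOPWORDS = {"method", "approach", "paper", "result"}  # illustrative noise terms

      def candidate_ngrams(abstract, n=2):
          """Extract adjective/noun n-grams as innovative-idea candidates."""
          tagged = nltk.pos_tag(nltk.word_tokenize(abstract.lower()))
          grams = []
          for i in range(len(tagged) - n + 1):
              window = tagged[i:i + n]
              # keep windows of adjectives/nouns that end in a noun, e.g. "semantic parser"
              if (all(tag.startswith(("JJ", "NN")) for _, tag in window)
                      and window[-1][1].startswith("NN")
                      and not any(w in STOPWORDS for w, _ in window)):
                  grams.append(" ".join(w for w, _ in window))
          return grams

      def is_novel(ngram):
          """Stub for the prior-occurrence check the paper runs against Scopus®."""
          raise NotImplementedError

      print(candidate_ngrams("We propose a neural semantic parser for legal texts."))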
  6. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.02
    0.015318605 = product of:
      0.03063721 = sum of:
        0.03063721 = product of:
          0.045955814 = sum of:
            0.024723042 = weight(_text_:j in 1616) [ClassicSimilarity], result of:
              0.024723042 = score(doc=1616,freq=4.0), product of:
                0.14227505 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044775832 = queryNorm
                0.17376934 = fieldWeight in 1616, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
            0.021232774 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.021232774 = score(doc=1616,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002, and China became the second largest at-home Internet population in the world in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult a thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus generated by co-occurrence analysis and a Hopfield network can be used to produce additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by means of a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. In most cases, the direct translation of the input term can also be retrieved.
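    One plausible reading of the co-occurrence step described above, as a minimal sketch (the Hopfield-network term expansion is omitted, and the segment-aligned corpus format is an assumption):

      from collections import Counter
      from itertools import combinations

      def cooccurrence_weights(aligned_segments):
          """aligned_segments: term lists from aligned English/Chinese parallel text.
          Returns a symmetric association weight for each co-occurring term pair."""
          pair_counts, term_counts = Counter(), Counter()
          for terms in aligned_segments:
              uniq = sorted(set(terms))
              term_counts.update(uniq)
              pair_counts.update(combinations(uniq, 2))
          # normalize by the more frequent term so weights fall in (0, 1]
          return {(a, b): c / max(term_counts[a], term_counts[b])
                  for (a, b), c in pair_counts.items()}

      segments = [["court", "judge", "法院"], ["court", "appeal", "法院"]]
      weights = cooccurrence_weights(segments)
      print(weights[("court", "法院")])  # 1.0: the pair co-occurs in every segment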
  7. Rorvig, M.; Smith, M.M.; Uemura, A.: ¬The N-gram hypothesis applied to matched sets of visualized Japanese-English technical documents (1999) 0.02
    0.015173956 = product of:
      0.030347912 = sum of:
        0.030347912 = product of:
          0.09104373 = sum of:
            0.09104373 = weight(_text_:n in 6675) [ClassicSimilarity], result of:
              0.09104373 = score(doc=6675,freq=4.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.47158742 = fieldWeight in 6675, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6675)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Shape Recovery Analysis (SHERA), a new visual analytical technique, is applied to the N-gram hypothesis on matched Japanese-English technical documents supplied by the National Center for Science Information Systems (NACSIS) in Japan. The results of the SHERA study reveal compaction in the translation of Japanese subject terms into English subject terms. Surprisingly, the bigram approach to the Japanese data yields a remarkable similarity to the matching visualized English texts.
  8. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.01
    0.014155183 = product of:
      0.028310366 = sum of:
        0.028310366 = product of:
          0.0849311 = sum of:
            0.0849311 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.0849311 = score(doc=3164,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  9. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.014155183 = product of:
      0.028310366 = sum of:
        0.028310366 = product of:
          0.0849311 = sum of:
            0.0849311 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.0849311 = score(doc=4506,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  10. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.014155183 = product of:
      0.028310366 = sum of:
        0.028310366 = product of:
          0.0849311 = sum of:
            0.0849311 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.0849311 = score(doc=6672,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  11. New tools for human translators (1997) 0.01
    0.014155183 = product of:
      0.028310366 = sum of:
        0.028310366 = product of:
          0.0849311 = sum of:
            0.0849311 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.0849311 = score(doc=1179,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  12. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.01
    0.014155183 = product of:
      0.028310366 = sum of:
        0.028310366 = product of:
          0.0849311 = sum of:
            0.0849311 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.0849311 = score(doc=3117,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  13. Noel, J.: Syntax, semantics and pragmatics in the automatic analysis of texts (1980) 0.01
    0.0133194905 = product of:
      0.026638981 = sum of:
        0.026638981 = product of:
          0.07991694 = sum of:
            0.07991694 = weight(_text_:j in 7512) [ClassicSimilarity], result of:
              0.07991694 = score(doc=7512,freq=2.0), product of:
                0.14227505 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044775832 = queryNorm
                0.5617073 = fieldWeight in 7512, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.125 = fieldNorm(doc=7512)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  14. Cowie, J.; Lehnert, W.: Information extraction (1996) 0.01
    0.0133194905 = product of:
      0.026638981 = sum of:
        0.026638981 = product of:
          0.07991694 = sum of:
            0.07991694 = weight(_text_:j in 6827) [ClassicSimilarity], result of:
              0.07991694 = score(doc=6827,freq=2.0), product of:
                0.14227505 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044775832 = queryNorm
                0.5617073 = fieldWeight in 6827, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.125 = fieldNorm(doc=6827)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  15. Bellaachia, A.; Amor-Tijani, G.: Proper nouns in English-Arabic cross language information retrieval (2008) 0.01
    0.013274446 = product of:
      0.026548892 = sum of:
        0.026548892 = product of:
          0.07964668 = sum of:
            0.07964668 = weight(_text_:n in 2372) [ClassicSimilarity], result of:
              0.07964668 = score(doc=2372,freq=6.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.41255307 = fieldWeight in 2372, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2372)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Out-of-vocabulary words, mostly proper nouns and technical terms, are one main source of performance degradation in Cross Language Information Retrieval (CLIR) systems. These are words not found in the dictionary; bilingual dictionaries in general do not cover most proper nouns, which are usually primary keys in the query. As such words are spelling variants of each other in most languages, the common approach to finding the target language correspondents of the original query key is to apply an approximate string matching technique against the target database index. The n-gram technique has proved to be the most effective of these string matching techniques. An issue arises when the languages involved have different alphabets; transliteration is then applied, based on phonetic similarities between the languages. In this study, transliteration and the n-gram technique are combined to generate possible transliterations in an English-Arabic CLIR system. We refer to this technique as Transliteration N-Gram (TNG). We further enhance TNG by applying part-of-speech disambiguation to the set of transliterations, so that words with similar spelling but different meaning are excluded. Experimental results show that TNG gives promising results, and enhanced TNG further improves performance.
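    A toy illustration of the approximate-matching step in a transliteration pipeline like the one described (the transliteration generator itself and all names here are illustrative stand-ins):

      def bigrams(s):
          s = f" {s.lower()} "
          return {s[i:i + 2] for i in range(len(s) - 1)}

      def best_matches(transliteration, index_terms, k=3):
          """Rank target-language index terms by bigram Dice overlap with one
          machine-generated transliteration of an out-of-vocabulary query key."""
          def dice(a, b):
              ga, gb = bigrams(a), bigrams(b)
              return 2 * len(ga & gb) / (len(ga) + len(gb))
          return sorted(index_terms, key=lambda t: dice(transliteration, t), reverse=True)[:k]

      # the index, not the generator, decides which spelling variant is actually stored
      print(best_matches("qadhafi", ["gaddafi", "qaddafi", "cairo", "khartoum"]))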
  16. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.01
    0.013274446 = product of:
      0.026548892 = sum of:
        0.026548892 = product of:
          0.07964668 = sum of:
            0.07964668 = weight(_text_:n in 103) [ClassicSimilarity], result of:
              0.07964668 = score(doc=103,freq=6.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.41255307 = fieldWeight in 103, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=103)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds, but this work has not led to the acceptance of a definitive scheme, so devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, such as prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level comprising the three Aristotelian principles of similarity, contiguity and contrast.
  17. Frakes, W.B.: Stemming algorithms (1992) 0.01
    0.012262408 = product of:
      0.024524815 = sum of:
        0.024524815 = product of:
          0.073574446 = sum of:
            0.073574446 = weight(_text_:n in 3503) [ClassicSimilarity], result of:
              0.073574446 = score(doc=3503,freq=2.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.38110018 = fieldWeight in 3503, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3503)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Describes stemming algorithms - programs that relate morphologically similar indexing and search terms. Stemming is used to improve retrieval effectiveness and to reduce the size of indexing files. Several approaches to stemming are described - table lookup, affix removal, successor variety, and n-gram - and empirical studies of stemming are summarized. The Porter stemmer is described in detail, and a full implementation in C is presented.
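    For context, a minimal affix-removal stemmer in the spirit the abstract describes (a few illustrative English suffix rules; nothing like the full Porter algorithm, whose reference implementation is in C):

      # ordered longest-first so "ational" wins before shorter rules can fire
      SUFFIX_RULES = [("ational", "ate"), ("ization", "ize"), ("sses", "ss"),
                      ("ies", "y"), ("ing", ""), ("s", "")]

      def stem(word):
          """Strip the first matching suffix while keeping a minimum stem length."""
          for suffix, repl in SUFFIX_RULES:
              if word.endswith(suffix) and len(word) - len(suffix) + len(repl) >= 3:
                  return word[: len(word) - len(suffix)] + repl
          return word

      print([stem(w) for w in ["relational", "indexing", "ponies", "caresses"]])
      # -> ['relate', 'index', 'pony', 'caress']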
  18. Koppel, M.; Akiva, N.; Dagan, I.: Feature instability as a criterion for selecting potential style markers (2006) 0.01
    0.012262408 = product of:
      0.024524815 = sum of:
        0.024524815 = product of:
          0.073574446 = sum of:
            0.073574446 = weight(_text_:n in 6092) [ClassicSimilarity], result of:
              0.073574446 = score(doc=6092,freq=2.0), product of:
                0.19305801 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044775832 = queryNorm
                0.38110018 = fieldWeight in 6092, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6092)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  19. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    0.012133013 = product of:
      0.024266027 = sum of:
        0.024266027 = product of:
          0.07279808 = sum of:
            0.07279808 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.07279808 = score(doc=4483,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  20. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.01
    0.012133013 = product of:
      0.024266027 = sum of:
        0.024266027 = product of:
          0.07279808 = sum of:
            0.07279808 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.07279808 = score(doc=4888,freq=2.0), product of:
                0.15679733 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044775832 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22

Types

  • a 120
  • el 12
  • s 8
  • m 7
  • n 2
  • p 2
  • x 2