Search (166 results, page 1 of 9)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.08341907 = sum of:
      0.062196497 = product of:
        0.24878599 = sum of:
          0.24878599 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24878599 = score(doc=562,freq=2.0), product of:
              0.4426655 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052213363 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.021222569 = product of:
        0.042445138 = sum of:
          0.042445138 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042445138 = score(doc=562,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
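    The explain tree above can be checked by hand. In Lucene's ClassicSimilarity, each leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal Python sketch (the function name is ours, not a Lucene API) reproduces the 0.24878599 leaf:

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree.

    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (tf * idf * fieldNorm)
    with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)).
    """
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
    tf = math.sqrt(freq)
    query_weight = idf * query_norm        # 8.478011 * 0.052213363 = 0.4426655
    field_weight = tf * idf * field_norm   # 1.4142135 * 8.478011 * 0.046875 = 0.562018
    return query_weight * field_weight

# Values from the "_text_:3a in 562" leaf above:
score = classic_similarity_term_score(
    freq=2.0, doc_freq=24, max_docs=44218,
    query_norm=0.052213363, field_norm=0.046875)
print(score)  # ≈ 0.248786, matching the 0.24878599 leaf up to float32 rounding
```

    The same function reproduces every leaf in this result list, e.g. the 0.042445138 score of the `_text_:22` clause (freq=2.0, docFreq=3622, fieldNorm=0.046875).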
    
    Content
    Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
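    Each clause's leaf total is then scaled by its coord factor (matching subclauses / total subclauses), and the clause products are summed into the document score. As a purely arithmetic check of entry 1's 0.08341907 from the numbers above:

```python
# Leaf scores and coord factors copied from the explain output for doc 562:
clause_3a = 0.24878599 * 0.25    # _text_:3a leaf * coord(1/4)
clause_22 = 0.042445138 * 0.5    # _text_:22 leaf * coord(1/2)
total = clause_3a + clause_22
print(total)  # ≈ 0.08341907, the document score shown for entry 1
```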
  2. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.04
    0.040135235 = sum of:
      0.027755402 = product of:
        0.11102161 = sum of:
          0.11102161 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.11102161 = score(doc=3807,freq=14.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.25 = coord(1/4)
      0.012379832 = product of:
        0.024759663 = sum of:
          0.024759663 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.024759663 = score(doc=3807,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.5 = coord(1/2)
    
    Abstract
    Purpose
    Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is one such term and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected under strict criteria, including that definitions had to be unique instances. From 2006 onwards, the authors could not identify new unique definitions, only repeated uses of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach
    The aim of this paper is to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods.
    Findings
    By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One can therefore indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Research limitations/implications
    In total, 42 definitions were identified, spanning a period of 11 years. This represented the first use of KM through the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions considered to be repeats were therefore excluded as not being unique instances. The definitions listed are by no means complete and exhaustive; they are viewed outside the scope and context in which they were originally formulated and then used to review the key concepts in the definitions themselves.
    Social implications
    The discussion of KM content and the presentation of the method followed in this paper may have a few implications for future research in KM. First, the research validates ideas presented by the OECD in 2005 pertaining to KM. It also validates that, through the evolution of KM, the authors arrived at a description of KM that may be seen as standardised. If academics and practitioners refer to KM as the same construct and/or idea, it becomes possible to distinguish between what KM may or may not be.
    Originality/value
    By simplifying the terms used to define KM, focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are most common in how it has been defined over time. This will hopefully assist in reigniting discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
  3. Sparck Jones, K.; Galliers, J.R.: Evaluating natural language processing systems : an analysis and review (1996) 0.04
    0.040037964 = sum of:
      0.017983811 = product of:
        0.071935244 = sum of:
          0.071935244 = weight(_text_:authors in 2934) [ClassicSimilarity], result of:
            0.071935244 = score(doc=2934,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.30220953 = fieldWeight in 2934, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=2934)
        0.25 = coord(1/4)
      0.022054153 = product of:
        0.044108305 = sum of:
          0.044108305 = weight(_text_:k in 2934) [ClassicSimilarity], result of:
            0.044108305 = score(doc=2934,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 2934, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=2934)
        0.5 = coord(1/2)
    
    Abstract
    This comprehensive state-of-the-art book is the first devoted to the important and timely issue of evaluating NLP systems. It addresses the whole area of NLP system evaluation, including aims and scope, problems and methodology. The authors provide a wide-ranging and careful analysis of evaluation concepts, reinforced with extensive illustrations; they relate systems to their environments and develop a framework for proper evaluation. The discussion of principles is completed by a detailed review of practice and strategies in the field, covering both systems for specific tasks, like translation, and core language processors. The methodology lessons drawn from the analysis and review are applied in a series of example cases. A comprehensive bibliography, a subject index, and a term glossary are included.
  4. Kettunen, K.: Reductive and generative approaches to management of morphological variation of keywords in monolingual information retrieval : an overview (2009) 0.04
    0.040037964 = sum of:
      0.017983811 = product of:
        0.071935244 = sum of:
          0.071935244 = weight(_text_:authors in 2835) [ClassicSimilarity], result of:
            0.071935244 = score(doc=2835,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.30220953 = fieldWeight in 2835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=2835)
        0.25 = coord(1/4)
      0.022054153 = product of:
        0.044108305 = sum of:
          0.044108305 = weight(_text_:k in 2835) [ClassicSimilarity], result of:
            0.044108305 = score(doc=2835,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 2835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=2835)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this article is to discuss advantages and disadvantages of various means of managing morphological variation of keywords in monolingual information retrieval.
    Design/methodology/approach - The authors present a compilation of query results from 11 mostly European languages and a new general classification of the language-dependent techniques for managing morphological variation. Variants of the different techniques are compared in some detail in terms of retrieval effectiveness and other criteria.
    Findings - The main results are an overall comparison of reductive and generative keyword management methods in terms of retrieval effectiveness and other broader criteria, illustrated with typical IR results from 11 languages, together with a new classification of keyword management methods.
    Originality/value - The paper is of value to anyone who wants an overall picture of the keyword management techniques used in IR.
  5. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: ¬A comparison study of some Arabic root finding algorithms (2010) 0.04
    0.040037964 = sum of:
      0.017983811 = product of:
        0.071935244 = sum of:
          0.071935244 = weight(_text_:authors in 3457) [ClassicSimilarity], result of:
            0.071935244 = score(doc=3457,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.30220953 = fieldWeight in 3457, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=3457)
        0.25 = coord(1/4)
      0.022054153 = product of:
        0.044108305 = sum of:
          0.044108305 = weight(_text_:k in 3457) [ClassicSimilarity], result of:
            0.044108305 = score(doc=3457,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 3457, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=3457)
        0.5 = coord(1/2)
    
    Abstract
    Arabic has a complex structure, which makes it difficult to apply natural language processing (NLP). Much research on Arabic NLP (ANLP) exists; however, it is not as mature as that for other languages. Finding Arabic roots is an important step toward effective research on most ANLP applications. The authors studied and compared six root-finding algorithms, each with a reported success rate of over 90%. Because the algorithms had not been evaluated against the same testing corpus or benchmarking measures, the authors unified the testing process by implementing the algorithms from their published descriptions and building a corpus of 3823 triliteral roots, applying 73 triliteral patterns with 18 affixes, producing around 27.6 million words. They tested the algorithms against the generated corpus, obtained interesting results, and offer to share the corpus freely for benchmarking and ANLP research.
  6. Bowker, L.; Ciro, J.B.: Machine translation and global research : towards improved machine translation literacy in the scholarly community (2019) 0.04
    0.03745515 = sum of:
      0.011989206 = product of:
        0.047956824 = sum of:
          0.047956824 = weight(_text_:authors in 5970) [ClassicSimilarity], result of:
            0.047956824 = score(doc=5970,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.20147301 = fieldWeight in 5970, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=5970)
        0.25 = coord(1/4)
      0.025465943 = product of:
        0.050931886 = sum of:
          0.050931886 = weight(_text_:k in 5970) [ClassicSimilarity], result of:
            0.050931886 = score(doc=5970,freq=6.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.27325422 = fieldWeight in 5970, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.03125 = fieldNorm(doc=5970)
        0.5 = coord(1/2)
    
    Abstract
    In the global research community, English has become the main language of scholarly publishing in many disciplines. At the same time, online machine translation systems have become increasingly easy to access and use. Is this a researcher's match made in heaven, or the road to publication perdition? Here Lynne Bowker and Jairo Buitrago Ciro introduce the concept of machine translation literacy, a new kind of literacy for scholars and librarians in the digital age. For scholars, they explain how machine translation works, how it is (or could be) used for scholarly communication, and how both native and non-native English-speakers can write in a translation-friendly way in order to harness its potential. Native English speakers can continue to write in English, but expand the global reach of their research by making it easier for their peers around the world to access and understand their works, while non-native English speakers can write in their mother tongues, but leverage machine translation technology to help them produce draft publications in English. For academic librarians, the authors provide a framework for supporting researchers in all disciplines as they grapple with producing translation-friendly texts and using machine translation for scholarly communication - a form of support that will only become more important as campuses become increasingly international and as universities continue to strive to excel on the global stage. Machine Translation and Global Research is a must-read for scientists, researchers, students, and librarians eager to maximize the global reach and impact of any form of scholarly work.
    Classification
    BFP (FH K)
    Footnote
    Rez. in: JASIST 71(2020) no.10, S.1275-1278 (Krystyna K. Matusiak).
    GHBS
    BFP (FH K)
  7. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.04
    0.036063936 = product of:
      0.07212787 = sum of:
        0.07212787 = sum of:
          0.03675692 = weight(_text_:k in 5557) [ClassicSimilarity], result of:
            0.03675692 = score(doc=5557,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.19720423 = fieldWeight in 5557, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5557)
          0.03537095 = weight(_text_:22 in 5557) [ClassicSimilarity], result of:
            0.03537095 = score(doc=5557,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.19345059 = fieldWeight in 5557, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5557)
      0.5 = coord(1/2)
    
    Date
    26.12.2000 13:22:17
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  8. Heid, U.; Jauss, S.; Krüger, K.; Hohmann, A.: Term extraction with standard tools for corpus exploration : experience from German (1996) 0.03
    0.03118928 = product of:
      0.06237856 = sum of:
        0.06237856 = product of:
          0.12475712 = sum of:
            0.12475712 = weight(_text_:k in 6333) [ClassicSimilarity], result of:
              0.12475712 = score(doc=6333,freq=4.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.66933334 = fieldWeight in 6333, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6333)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    TKE'96: Terminology and knowledge engineering. Proceedings 4th International Congress on Terminology and Knowledge Engineering, 26.-28.8.1996, Wien. Ed.: C. Galinski u. K.-D. Schmitz
  9. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.031098248 = product of:
      0.062196497 = sum of:
        0.062196497 = product of:
          0.24878599 = sum of:
            0.24878599 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24878599 = score(doc=862,freq=2.0), product of:
                0.4426655 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052213363 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  10. Kay, M.; Sparck Jones, K.: Automated language processing (1971) 0.03
    0.029405536 = product of:
      0.058811072 = sum of:
        0.058811072 = product of:
          0.117622145 = sum of:
            0.117622145 = weight(_text_:k in 250) [ClassicSimilarity], result of:
              0.117622145 = score(doc=250,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.63105357 = fieldWeight in 250, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=250)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Bernth, A.; McCord, M.; Warburton, K.: Terminology extraction for global content management (2003) 0.03
    0.029405536 = product of:
      0.058811072 = sum of:
        0.058811072 = product of:
          0.117622145 = sum of:
            0.117622145 = weight(_text_:k in 4122) [ClassicSimilarity], result of:
              0.117622145 = score(doc=4122,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.63105357 = fieldWeight in 4122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=4122)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Warner, A.J.: Natural language processing (1987) 0.03
    0.02829676 = product of:
      0.05659352 = sum of:
        0.05659352 = product of:
          0.11318704 = sum of:
            0.11318704 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.11318704 = score(doc=337,freq=2.0), product of:
                0.1828423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052213363 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  13. Lewis, D.D.; Sparck Jones, K.: Natural language processing for information retrieval (1996) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 4144) [ClassicSimilarity], result of:
              0.102919385 = score(doc=4144,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 4144, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4144)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Dahlgren, K.: Naive semantics for natural language understanding (19??) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 5302) [ClassicSimilarity], result of:
              0.102919385 = score(doc=5302,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 5302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5302)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Humphreys, K.; Demetriou, G.; Gaizauskas, R.: Bioinformatics applications of information extraction from scientific journal articles (2000) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 4545) [ClassicSimilarity], result of:
              0.102919385 = score(doc=4545,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 4545, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4545)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Jones, K.: Linguistic searching versus relevance ranking : DR-LINK and TARGET (1999) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 6423) [ClassicSimilarity], result of:
              0.102919385 = score(doc=6423,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 6423, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6423)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Pirkola, A.; Hedlund, T.; Keskustalo, H.; Järvelin, K.: Dictionary-based cross-language information retrieval : problems, methods, and research findings (2001) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 3908) [ClassicSimilarity], result of:
              0.102919385 = score(doc=3908,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 3908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3908)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.024759663 = product of:
      0.049519327 = sum of:
        0.049519327 = product of:
          0.09903865 = sum of:
            0.09903865 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09903865 = score(doc=3164,freq=2.0), product of:
                0.1828423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  19. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.024759663 = product of:
      0.049519327 = sum of:
        0.049519327 = product of:
          0.09903865 = sum of:
            0.09903865 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09903865 = score(doc=4506,freq=2.0), product of:
                0.1828423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  20. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.024759663 = product of:
      0.049519327 = sum of:
        0.049519327 = product of:
          0.09903865 = sum of:
            0.09903865 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09903865 = score(doc=6672,freq=2.0), product of:
                0.1828423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19

Languages

  • e 121
  • d 41
  • m 2
  • chi 1
  • f 1

Types

  • a 132
  • m 22
  • el 11
  • s 11
  • x 3
  • p 2
  • d 1
