Search (120 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.083250985 = sum of:
      0.062071178 = product of:
        0.24828471 = sum of:
          0.24828471 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24828471 = score(doc=562,freq=2.0), product of:
              0.44177356 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05210816 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.021179808 = product of:
        0.042359617 = sum of:
          0.042359617 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042359617 = score(doc=562,freq=2.0), product of:
              0.1824739 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05210816 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
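    The indented breakdowns attached to each result are Lucene "explain" traces for the ClassicSimilarity (tf-idf) ranking: each clause multiplies a query weight (idf x queryNorm) by a field weight (sqrt(termFreq) x idf x fieldNorm), and the coord() factors scale each group by the fraction of query clauses it matches. The minimal Python sketch below recomputes the score of the first result from the numbers shown above; the helper name and the hard-coded constants are purely illustrative and not part of any search API.

      import math

      def classic_clause(term_freq, idf, query_norm, field_norm):
          """One clause of a Lucene ClassicSimilarity explanation:
          queryWeight = idf * queryNorm, fieldWeight = sqrt(tf) * idf * fieldNorm."""
          query_weight = idf * query_norm
          field_weight = math.sqrt(term_freq) * idf * field_norm
          return query_weight * field_weight

      # Clause for term "3a" in doc 562 (values copied from the trace above).
      w_3a = classic_clause(2.0, 8.478011, 0.05210816, 0.046875)   # ~0.24828471
      # Clause for term "22" in doc 562.
      w_22 = classic_clause(2.0, 3.5018296, 0.05210816, 0.046875)  # ~0.042359617

      # Apply the coord() factors shown in the trace, then sum the clauses.
      total = w_3a * 0.25 + w_22 * 0.5
      print(total)   # ~0.083250985, matching the displayed score up to float rounding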
  2. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.05
    0.053375304 = product of:
      0.10675061 = sum of:
        0.10675061 = sum of:
          0.057331055 = weight(_text_:i in 156) [ClassicSimilarity], result of:
            0.057331055 = score(doc=156,freq=2.0), product of:
              0.1965379 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.05210816 = queryNorm
              0.29170483 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
          0.049419552 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
            0.049419552 = score(doc=156,freq=2.0), product of:
              0.1824739 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05210816 = queryNorm
              0.2708308 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference on Conceptions of Library and Information Sciences, CoLIS 2005, Glasgow, UK, June 2005. Ed. by F. Crestani u. I. Ruthven
  3. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.05
    0.046606395 = product of:
      0.09321279 = sum of:
        0.09321279 = sum of:
          0.05791311 = weight(_text_:i in 4900) [ClassicSimilarity], result of:
            0.05791311 = score(doc=4900,freq=4.0), product of:
              0.1965379 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.05210816 = queryNorm
              0.29466638 = fieldWeight in 4900, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4900)
          0.03529968 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
            0.03529968 = score(doc=4900,freq=2.0), product of:
              0.1824739 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05210816 = queryNorm
              0.19345059 = fieldWeight in 4900, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4900)
      0.5 = coord(1/2)
    
    Content
    Contents: Manfred Klenner / Henriette Visser: Introduction - Khurshid Ahmad: Writing Linguistics: When I use a word it means what I choose it to mean - Jürgen Handke: 2000 and Beyond: The Potential of New Technologies in Linguistics - Jurij Apresjan / Igor Boguslavsky / Leonid Iomdin / Leonid Tsinman: Lexical Functions in NU: Possible Uses - Hubert Lehmann: Practical Machine Translation and Linguistic Theory - Karin Haenelt: A Contextbased Approach towards Content Processing of Electronic Documents - Petr Sgall / Eva Hajicová: Are Linguistic Frameworks Comparable? - Wolfgang Menzel: Theory and Applications in Computational Linguistics - Is there Common Ground? - Robert Porzel / Michael Strube: Towards Context-adaptive Natural Language Processing Systems - Nicoletta Calzolari: Language Resources in a Multilingual Setting: The European Perspective - Piek Vossen: Computational Linguistics for Theory and Practice.
  4. Solvberg, I.; Nordbo, I.; Aamodt, A.: Knowledge-based information retrieval (1991/92) 0.05
    0.04633049 = product of:
      0.09266098 = sum of:
        0.09266098 = product of:
          0.18532196 = sum of:
            0.18532196 = weight(_text_:i in 546) [ClassicSimilarity], result of:
              0.18532196 = score(doc=546,freq=4.0), product of:
                0.1965379 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.05210816 = queryNorm
                0.9429324 = fieldWeight in 546, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.125 = fieldNorm(doc=546)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Blair, D.C.: Information retrieval and the philosophy of language (2002) 0.04
    0.044725653 = sum of:
      0.01196505 = product of:
        0.0478602 = sum of:
          0.0478602 = weight(_text_:authors in 4283) [ClassicSimilarity], result of:
            0.0478602 = score(doc=4283,freq=2.0), product of:
              0.23755142 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05210816 = queryNorm
              0.20147301 = fieldWeight in 4283, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=4283)
        0.25 = coord(1/4)
      0.0327606 = product of:
        0.0655212 = sum of:
          0.0655212 = weight(_text_:i in 4283) [ClassicSimilarity], result of:
            0.0655212 = score(doc=4283,freq=8.0), product of:
              0.1965379 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.05210816 = queryNorm
              0.33337694 = fieldWeight in 4283, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.03125 = fieldNorm(doc=4283)
        0.5 = coord(1/2)
    
    Abstract
    Information retrieval - the retrieval, primarily, of documents or textual material - is fundamentally a linguistic process. At the very least we must describe what we want and match that description with descriptions of the information that is available to us. Furthermore, when we describe what we want, we must mean something by that description. This is a deceptively simple act, but such linguistic events have been the grist for philosophical analysis since Aristotle. Although there are complexities involved in referring to authors, document types, or other categories of information retrieval context, here I wish to focus on one of the most problematic activities in information retrieval: the description of the intellectual content of information items. And even though I take information retrieval to involve the description and retrieval of written text, what I say here is applicable to any information item whose intellectual content can be described for retrieval - books, documents, images, audio clips, video clips, scientific specimens, engineering schematics, and so forth. For convenience, though, I will refer only to the description and retrieval of documents. The description of intellectual content can go wrong in many obvious ways. We may describe what we want incorrectly; we may describe it correctly but in such general terms that its description is useless for retrieval; or we may describe what we want correctly, but misinterpret the descriptions of available information, and thereby match our description of what we want incorrectly. From a linguistic point of view, we can be misunderstood in the process of retrieval in many ways. Because the philosophy of language deals specifically with how we are understood and misunderstood, it should have some use for understanding the process of description in information retrieval. First, however, let us examine more closely the kinds of misunderstandings that can occur in information retrieval. We use language in searching for information in two principal ways. We use it to describe what we want and to discriminate what we want from other information that is available to us but that we do not want. Description and discrimination together articulate the goals of the information search process; they also delineate the two principal ways in which language can fail us in this process. Van Rijsbergen (1979) was the first to make this distinction, calling them "representation" and "discrimination."
  6. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.04
    0.040054366 = sum of:
      0.02769948 = product of:
        0.11079792 = sum of:
          0.11079792 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.11079792 = score(doc=3807,freq=14.0), product of:
              0.23755142 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05210816 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.25 = coord(1/4)
      0.012354888 = product of:
        0.024709776 = sum of:
          0.024709776 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.024709776 = score(doc=3807,freq=2.0), product of:
              0.1824739 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05210816 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.5 = coord(1/2)
    
    Abstract
    Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria, which included that definitions should be unique instances. From 2006 onwards, the authors could not identify new unique instances of definitions, only repeated usage of existing definition instances. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded to be KM, so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods as extended through Knowledge Discovery and Text Analysis methods.
    Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Research limitations/implications: In total, 42 definitions were identified, spanning a period of 11 years. This represented the first use of KM through to the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions that were considered to repeat were therefore subsequently excluded as not being unique instances. The definitions listed are by no means complete and exhaustive. The definitions are viewed outside the scope and context in which they were originally formulated and then used to review the key concepts in the definitions themselves.
    Social implications: The aforementioned discussion of KM content, together with the method followed in this paper, may have a few implications for future research in KM. First, the research validates ideas pertaining to KM presented by the OECD in 2005. It also validates that, through the evolution of KM, the authors arrived at a description of KM that may be seen as standardised. If academics and practitioners, for example, refer to KM as the same construct and/or idea, this has the potential, speculatively, to distinguish between what KM may or may not be.
    Originality/value: By simplifying the terms used to define KM and focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are most common in how it has been defined over time. This would hopefully assist in reigniting discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
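    The abstract above boils KM definitions down to the terms they share. A rough sketch of that kind of lexical analysis is given below; the sample definitions and the stop-word list are invented for illustration and are not the study's data or method.

      import re
      from collections import Counter

      # Hypothetical stand-ins for collected KM definitions (not the study's corpus).
      definitions = [
          "Knowledge management is the process by which people in an organisation codify and share information.",
          "KM refers to how an organisation leverages the knowledge of its people through defined processes.",
          "A process for capturing, sharing and leveraging contextualised information within an organisation.",
      ]

      STOP_WORDS = {"is", "the", "by", "which", "in", "an", "and", "of", "its",
                    "to", "how", "a", "for", "within", "refers", "through"}

      def content_words(text):
          """Lower-case the text, keep alphabetic tokens, and drop stop words."""
          return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP_WORDS]

      # Count in how many definitions each content word occurs (document frequency).
      df = Counter(w for d in definitions for w in set(content_words(d)))
      print(df.most_common(8))   # people-, process- and information-related terms dominate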
  7. Bátori, I.: ¬Der sprachliche Verarbeitungsprozeß als paradigmatischer Kern der linguistischen Datenverarbeitung (1982) 0.03
    0.03474787 = product of:
      0.06949574 = sum of:
        0.06949574 = product of:
          0.13899148 = sum of:
            0.13899148 = weight(_text_:i in 8422) [ClassicSimilarity], result of:
              0.13899148 = score(doc=8422,freq=4.0), product of:
                0.1965379 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.05210816 = queryNorm
                0.70719934 = fieldWeight in 8422, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8422)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Linguistische Datenverarbeitung: Versuch einer Standortbestimmung im Umfeld von Informationslinguistik und Künstlicher Intelligenz. Hrsg.: I. Bátori u.a
  8. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.031035589 = product of:
      0.062071178 = sum of:
        0.062071178 = product of:
          0.24828471 = sum of:
            0.24828471 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24828471 = score(doc=862,freq=2.0), product of:
                0.44177356 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05210816 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  9. Jones, I.; Cunliffe, D.; Tudhope, D.: Natural language processing and knowledge organization systems as an aid to retrieval (2004) 0.03
    0.030739006 = sum of:
      0.010469419 = product of:
        0.041877676 = sum of:
          0.041877676 = weight(_text_:authors in 2677) [ClassicSimilarity], result of:
            0.041877676 = score(doc=2677,freq=2.0), product of:
              0.23755142 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05210816 = queryNorm
              0.17628889 = fieldWeight in 2677, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2677)
        0.25 = coord(1/4)
      0.020269588 = product of:
        0.040539175 = sum of:
          0.040539175 = weight(_text_:i in 2677) [ClassicSimilarity], result of:
            0.040539175 = score(doc=2677,freq=4.0), product of:
              0.1965379 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.05210816 = queryNorm
              0.20626646 = fieldWeight in 2677, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2677)
        0.5 = coord(1/2)
    
    Content
    1. Introduction The need for research into the application of linguistic techniques in Information Retrieval (IR) in general, and a similar need in faceted Knowledge Organization Systems (KOS), has been indicated by various authors. Smeaton (1997) points out the inherent limitations of conventional approaches to IR based on "bags of words", mainly difficulties caused by lexical ambiguity in the words concerned, and goes on to suggest the possibility of using Natural Language Processing (NLP) in query formulation. Past experience with a faceted retrieval system highlighted the need for integrating the linguistic perspective in order to fully utilise the potential of a KOS (Tudhope et al., 2002). The present research seeks to address some of these needs in using NLP to improve the efficacy of KOS tools in query and retrieval systems. Syntactic parsing and part-of-speech tagging can substantially reduce lexical ambiguity through homograph disambiguation. Given the two strings "I table the motion" and "I put the motion on the table", for instance, the parser used in this research clearly indicates that 'table' in the first string is a verb, while 'table' in the second string is a noun, a distinction that would be missed in the "bag of words" approach. This syntactic disambiguation enables a more precise matching from free text to the controlled vocabulary of a KOS and vice versa. The use of a general linguistic resource, namely Roget's Thesaurus of English Words and Phrases (RTEWP), as an intermediary in this process, is investigated. The adaptation of the Link parser (Sleator & Temperley, 1993) to the purposes of the research is reported. The design and implementation of the early practical stages of the project are described, and the results of the initial experiments are presented and evaluated. Applications of the techniques developed are foreseen in the areas of query disambiguation, information retrieval and automatic indexing. In the first section of the paper a brief review of the literature and relevant current work in the field is presented. The second section includes reports on the development of algorithms, the construction of data sets and theoretical and experimental work undertaken to date. The third section evaluates the results obtained, and outlines directions for future research.
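    The introduction above uses the pair "I table the motion" / "I put the motion on the table" to show how part-of-speech information resolves a homograph. The paper adapts the Link parser for this; the sketch below only illustrates the same idea with an off-the-shelf tagger (NLTK is assumed here for illustration and is not the toolkit used in the study).

      # Minimal sketch; assumes NLTK plus its tokenizer and tagger resources
      # are installed (nltk.download() can fetch them on first use).
      import nltk

      for sentence in ["I table the motion", "I put the motion on the table"]:
          tokens = nltk.word_tokenize(sentence)
          tagged = nltk.pos_tag(tokens)   # (token, Penn Treebank tag) pairs
          print(tagged)

      # A context-sensitive tagger should assign "table" a verb tag (VB*) in the
      # first sentence and a noun tag (NN) in the second - the distinction that a
      # plain bag-of-words representation cannot make.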
  10. Pimenov, E.N.: Normativnost' i nekotorye problem razrabotki tezauruzov i drugikh lingvistiicheskikh sredstv IPS (2000) 0.03
    0.028956555 = product of:
      0.05791311 = sum of:
        0.05791311 = product of:
          0.11582622 = sum of:
            0.11582622 = weight(_text_:i in 3281) [ClassicSimilarity], result of:
              0.11582622 = score(doc=3281,freq=4.0), product of:
                0.1965379 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.05210816 = queryNorm
                0.58933276 = fieldWeight in 3281, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Feldman, S.: Find what I mean, not what I say : meaning-based search tools (2000) 0.03
    0.028956555 = product of:
      0.05791311 = sum of:
        0.05791311 = product of:
          0.11582622 = sum of:
            0.11582622 = weight(_text_:i in 4799) [ClassicSimilarity], result of:
              0.11582622 = score(doc=4799,freq=4.0), product of:
                0.1965379 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.05210816 = queryNorm
                0.58933276 = fieldWeight in 4799, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4799)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Linguistische Datenverarbeitung : Versuch einer Standortbestimmung im Umfeld von Informationslinguistik und künstlicher Intelligenz (1982) 0.03
    0.028665528 = product of:
      0.057331055 = sum of:
        0.057331055 = product of:
          0.11466211 = sum of:
            0.11466211 = weight(_text_:i in 5143) [ClassicSimilarity], result of:
              0.11466211 = score(doc=5143,freq=2.0), product of:
                0.1965379 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.05210816 = queryNorm
                0.58340967 = fieldWeight in 5143, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5143)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Editor
    Bátori, I. u.a.
  13. Warner, A.J.: Natural language processing (1987) 0.03
    0.028239746 = product of:
      0.05647949 = sum of:
        0.05647949 = product of:
          0.11295898 = sum of:
            0.11295898 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.11295898 = score(doc=337,freq=2.0), product of:
                0.1824739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05210816 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  14. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.024709776 = product of:
      0.049419552 = sum of:
        0.049419552 = product of:
          0.098839104 = sum of:
            0.098839104 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.098839104 = score(doc=3164,freq=2.0), product of:
                0.1824739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05210816 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  15. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.024709776 = product of:
      0.049419552 = sum of:
        0.049419552 = product of:
          0.098839104 = sum of:
            0.098839104 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.098839104 = score(doc=4506,freq=2.0), product of:
                0.1824739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05210816 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  16. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.024709776 = product of:
      0.049419552 = sum of:
        0.049419552 = product of:
          0.098839104 = sum of:
            0.098839104 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.098839104 = score(doc=6672,freq=2.0), product of:
                0.1824739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05210816 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  17. New tools for human translators (1997) 0.02
    0.024709776 = product of:
      0.049419552 = sum of:
        0.049419552 = product of:
          0.098839104 = sum of:
            0.098839104 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.098839104 = score(doc=1179,freq=2.0), product of:
                0.1824739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05210816 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  18. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.024709776 = product of:
      0.049419552 = sum of:
        0.049419552 = product of:
          0.098839104 = sum of:
            0.098839104 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.098839104 = score(doc=3117,freq=2.0), product of:
                0.1824739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05210816 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  19. ¬Der Student aus dem Computer (2023) 0.02
    0.024709776 = product of:
      0.049419552 = sum of:
        0.049419552 = product of:
          0.098839104 = sum of:
            0.098839104 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.098839104 = score(doc=1079,freq=2.0), product of:
                0.1824739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05210816 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  20. Kunze, C.; Wagner, A.: Anwendungsperspektive des GermaNet, eines lexikalisch-semantischen Netzes für das Deutsche (2001) 0.02
    0.024570452 = product of:
      0.049140904 = sum of:
        0.049140904 = product of:
          0.09828181 = sum of:
            0.09828181 = weight(_text_:i in 7456) [ClassicSimilarity], result of:
              0.09828181 = score(doc=7456,freq=2.0), product of:
                0.1965379 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.05210816 = queryNorm
                0.50006545 = fieldWeight in 7456, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7456)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Chancen und Perspektiven computergestützter Lexikographie. Hrsg.: I. Lemberg u.a

Languages

  • e 85
  • d 27
  • ru 7

Types

  • a 94
  • el 14
  • m 13
  • s 8
  • p 3
  • x 3
  • d 1

Classifications