Search (173 results, page 1 of 9)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10427822 = sum of:
      0.08302978 = product of:
        0.24908933 = sum of:
          0.24908933 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24908933 = score(doc=562,freq=2.0), product of:
              0.44320524 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05227703 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.021248447 = product of:
        0.042496894 = sum of:
          0.042496894 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042496894 = score(doc=562,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
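    The explain trees shown with each result are Lucene ClassicSimilarity (TF-IDF) debug output, and their arithmetic can be reproduced directly. A minimal Python sketch using the figures from the first result (document 562), where idf(t) = 1 + ln(maxDocs / (docFreq + 1)), tf(t) = sqrt(freq), and coord(m/n) scales each clause group by the fraction of matched clauses:

```python
import math

# Reconstruct Lucene ClassicSimilarity scoring for doc 562 above.
def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # tf(t in d) = sqrt(termFreq)
    return math.sqrt(freq)

query_norm = 0.05227703   # queryNorm from the explain output
field_norm = 0.046875     # per-field length normalization (fieldNorm)

# Term "3a" in doc 562: docFreq=24, maxDocs=44218, freq=2
idf_3a = idf(24, 44218)                       # ~8.478011
query_weight = idf_3a * query_norm            # ~0.44320524
field_weight = tf(2.0) * idf_3a * field_norm  # ~0.56201804
term_score = query_weight * field_weight      # ~0.24908933

# Term "22" in doc 562: docFreq=3622, freq=2
idf_22 = idf(3622, 44218)                     # ~3.5018296
score_22 = (idf_22 * query_norm) * (tf(2.0) * idf_22 * field_norm)

# coord() multiplies each clause group by matched/total clauses:
# 1 of 3 clauses matched in the first group, 1 of 2 in the second.
total = term_score * (1 / 3) + score_22 * (1 / 2)
print(round(total, 6))  # matches the 0.10427822 shown above (to float precision)
```

    The same four factors (tf, idf, queryNorm, fieldNorm) plus coord() account for every line of the remaining explain trees on this page; only docFreq, freq, and fieldNorm vary per document.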
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.10
    0.09713356 = sum of:
      0.08302978 = product of:
        0.24908933 = sum of:
          0.24908933 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.24908933 = score(doc=862,freq=2.0), product of:
              0.44320524 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05227703 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.33333334 = coord(1/3)
      0.014103786 = product of:
        0.028207572 = sum of:
          0.028207572 = weight(_text_:research in 862) [ClassicSimilarity], result of:
            0.028207572 = score(doc=862,freq=2.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.18912788 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.5 = coord(1/2)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
  3. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.05
    0.054925613 = product of:
      0.109851226 = sum of:
        0.109851226 = sum of:
          0.053188704 = weight(_text_:research in 8521) [ClassicSimilarity], result of:
            0.053188704 = score(doc=8521,freq=4.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.35662293 = fieldWeight in 8521, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0625 = fieldNorm(doc=8521)
          0.056662526 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
            0.056662526 = score(doc=8521,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.30952093 = fieldWeight in 8521, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=8521)
      0.5 = coord(1/2)
    
    Abstract
    Presents the state of the art in lexical choice research in text generation and machine translation. Discusses the existing implementations with respect to: the place of lexical choice in the overall generation rates; the information flow within the generation process and the consequences thereof for lexical choice; the internal organization of the lexical choice process; and the phenomena covered by lexical choice. Identifies possible future directions in lexical choice research
    Date
    31. 7.1996 9:22:19
  4. Godby, J.: WordSmith research project bridges gap between tokens and indexes (1998) 0.05
    0.048059914 = product of:
      0.09611983 = sum of:
        0.09611983 = sum of:
          0.04654012 = weight(_text_:research in 4729) [ClassicSimilarity], result of:
            0.04654012 = score(doc=4729,freq=4.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.31204507 = fieldWeight in 4729, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
          0.04957971 = weight(_text_:22 in 4729) [ClassicSimilarity], result of:
            0.04957971 = score(doc=4729,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.2708308 = fieldWeight in 4729, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
      0.5 = coord(1/2)
    
    Abstract
    Reports on an OCLC natural language processing research project to develop methods for identifying terminology in unstructured electronic text, especially material associated with new cultural trends and emerging subjects. Current OCLC production software can only identify single words as indexable terms in full text documents, so a major goal of the WordSmith project is to develop software that can automatically identify and intelligently organize phrases for use in database indexes. By analyzing user terminology from local newspapers in the USA, the latest cultural trends and technical developments, as well as personal and geographic names, have been drawn out. Notes that this new vocabulary can also be mapped into reference works
    Source
    OCLC newsletter. 1998, no.234, Jul/Aug, S.22-24
  5. Melby, A.: Some notes on 'The proper place of men and machines in language translation' (1997) 0.04
    0.04124427 = product of:
      0.08248854 = sum of:
        0.08248854 = sum of:
          0.03290883 = weight(_text_:research in 330) [ClassicSimilarity], result of:
            0.03290883 = score(doc=330,freq=2.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.22064918 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0546875 = fieldNorm(doc=330)
          0.04957971 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
            0.04957971 = score(doc=330,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.2708308 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=330)
      0.5 = coord(1/2)
    
    Abstract
    Responds to Kay, M.: The proper place of men and machines in language translation. Examines the appropriateness of machine translation (MT) under the following special circumstances: controlled domain-specific text and high-quality output; controlled domain-specific text and indicative output; dynamic general text and indicative output; and dynamic general text and high-quality output. MT is appropriate in the first three cases, but the fourth requires human translation. Examines how MT research could be more useful for aiding human translation
    Date
    31. 7.1996 9:22:19
  6. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.03
    0.03254739 = product of:
      0.06509478 = sum of:
        0.06509478 = sum of:
          0.04030492 = weight(_text_:research in 3807) [ClassicSimilarity], result of:
            0.04030492 = score(doc=3807,freq=12.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.27023894 = fieldWeight in 3807, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
          0.024789855 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.024789855 = score(doc=3807,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
      0.5 = coord(1/2)
    
    Abstract
    Purpose Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria, which included that definitions should be unique instances. From 2006 onwards, the authors could identify no new unique definition instances, only repeated use of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues. Design/methodology/approach The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods as extended through Knowledge Discovery and Text Analysis methods. Findings By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Research limitations/implications In total, 42 definitions were identified spanning a period of 11 years. This represented the first use of KM through the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions considered to repeat were therefore excluded as not being unique instances. The definitions listed are by no means complete and exhaustive. The definitions are viewed outside the scope and context in which they were originally formulated and then used to review the key concepts in the definitions themselves. Social implications The aforementioned discussion of KM content, together with the method presented in this paper, may have a few implications for future research in KM. First, the research validates ideas presented by the OECD in 2005 pertaining to KM. It also validates that, through the evolution of KM, the authors ended with a description of KM that may be seen as a standardised description. If academics and practitioners, for example, refer to KM as the same construct and/or idea, it has the potential, speculatively, to distinguish between what KM may or may not be. Originality/value By simplifying the terms used to define KM, and by focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are most common in how it has been defined over time. This should help reignite discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
  7. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.03
    0.029460195 = product of:
      0.05892039 = sum of:
        0.05892039 = sum of:
          0.023506312 = weight(_text_:research in 4900) [ClassicSimilarity], result of:
            0.023506312 = score(doc=4900,freq=2.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.15760657 = fieldWeight in 4900, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4900)
          0.03541408 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
            0.03541408 = score(doc=4900,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.19345059 = fieldWeight in 4900, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4900)
      0.5 = coord(1/2)
    
    Abstract
    The two seemingly conflicting tendencies, synergy and divergence, are both fundamental to the advancement of any science. Their interplay defines the demarcation line between application-oriented and theoretical research. The papers in this festschrift in honour of Peter Hellwig are geared to answer questions that arise from this insight: where does the discipline of Computational Linguistics currently stand, what has been achieved so far, and what should be done next. Given the complexity of such questions, no simple answers can be expected. However, each of the practitioners and researchers contributes, from their own perspective, a piece of insight into the overall picture of today's and tomorrow's computational linguistics.
  8. Fóris, A.: Network theory and terminology (2013) 0.03
    0.029460195 = product of:
      0.05892039 = sum of:
        0.05892039 = sum of:
          0.023506312 = weight(_text_:research in 1365) [ClassicSimilarity], result of:
            0.023506312 = score(doc=1365,freq=2.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.15760657 = fieldWeight in 1365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1365)
          0.03541408 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
            0.03541408 = score(doc=1365,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.19345059 = fieldWeight in 1365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1365)
      0.5 = coord(1/2)
    
    Abstract
    The paper aims to present the relations between network theory and terminology. The model of scale-free networks, which has recently been developed and since widely applied, can be used effectively in terminology research as well. Operation based on the principle of networks is a universal characteristic of complex systems. Networks are governed by general laws. The model of scale-free networks can be viewed as a statistical-probability model, and it can be described with mathematical tools. Its main feature is that "everything is connected to everything else," that is, every node is reachable (in a few steps) starting from any other node; this phenomenon is called "the small world phenomenon." The existence of a linguistic network and the general laws of the operation of networks enable us to place issues of language use in the complex system of relations that reveal the deeper connections between phenomena with the help of networks embedded in each other. The realization of the metaphor that language also has a network structure is the basis of the classification methods of the terminological system, and likewise of the ways of creating terminology databases, which serve the purpose of providing easy and versatile accessibility to specialised knowledge.
    Date
    2. 9.2014 21:22:48
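    The small-world property described in the abstract above, that every node is reachable from any other in a few steps, is easy to demonstrate on a toy scale-free graph. A minimal sketch only, not the paper's model: a Barabási-Albert-style preferential-attachment process (an assumption here, chosen as a standard scale-free generator) grows the network, and breadth-first search measures hop counts:

```python
import random
from collections import deque

random.seed(42)

# Grow a 500-node graph by preferential attachment: each new node links
# to an existing node chosen with probability proportional to its degree.
edges = {0: {1}, 1: {0}}
degree_pool = [0, 1]  # each node appears once per incident edge

for new in range(2, 500):
    target = random.choice(degree_pool)
    edges.setdefault(new, set()).add(target)
    edges[target].add(new)
    degree_pool += [new, target]

def hops(src, dst):
    # Breadth-first search: number of steps from src to dst (None if unreachable).
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

dists = [hops(0, n) for n in range(500)]
print(max(dists))  # small relative to the 500 nodes
```

    Every node is reachable, and the largest hop count stays far below the node count, which is the "everything is connected to everything else" behaviour the abstract refers to.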
  9. Warner, A.J.: Natural language processing (1987) 0.03
    0.028331263 = product of:
      0.056662526 = sum of:
        0.056662526 = product of:
          0.11332505 = sum of:
            0.11332505 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.11332505 = score(doc=337,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  10. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.03
    0.02664487 = product of:
      0.05328974 = sum of:
        0.05328974 = sum of:
          0.028499886 = weight(_text_:research in 1616) [ClassicSimilarity], result of:
            0.028499886 = score(doc=1616,freq=6.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.19108781 = fieldWeight in 1616, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
          0.024789855 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
            0.024789855 = score(doc=1616,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.1354154 = fieldWeight in 1616, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
      0.5 = coord(1/2)
    
    Abstract
    The information available in languages other than English in the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has been focusing on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines are widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.).
    However, research in crossing language boundaries, especially between European and Oriental languages, is still in its initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. In the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool to retrieve relevant terms, especially in a language that is not the same as that of the input term. The direct translation of the input term can also be retrieved in most of the cases.
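    The co-occurrence step of such thesaurus generation can be sketched compactly. A toy monolingual sketch only, with invented documents and terms; the paper's system works on an English/Chinese parallel corpus and refines candidates with a Hopfield network, neither of which is reproduced here:

```python
from collections import Counter
from itertools import combinations

# Hypothetical legal-domain documents, each reduced to a set of terms.
docs = [
    ["court", "judge", "ruling", "appeal"],
    ["court", "judge", "verdict"],
    ["ordinance", "court", "ruling"],
    ["judge", "appeal", "verdict"],
]

# Count how often each unordered term pair appears in the same document.
cooc = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1

def related(term, k=3):
    # Rank other terms by co-occurrence count with `term`:
    # candidate thesaurus relations the searcher's dictionary may lack.
    scores = Counter()
    for (a, b), n in cooc.items():
        if a == term:
            scores[b] += n
        elif b == term:
            scores[a] += n
    return [t for t, _ in scores.most_common(k)]

print(related("court"))
```

    In the real system the pairs would span languages (an English term co-occurring with a Chinese term in aligned documents), so `related()` would return cross-lingual expansion terms rather than monolingual ones.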
  11. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.024789855 = product of:
      0.04957971 = sum of:
        0.04957971 = product of:
          0.09915942 = sum of:
            0.09915942 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09915942 = score(doc=3164,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  12. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.024789855 = product of:
      0.04957971 = sum of:
        0.04957971 = product of:
          0.09915942 = sum of:
            0.09915942 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09915942 = score(doc=4506,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  13. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.024789855 = product of:
      0.04957971 = sum of:
        0.04957971 = product of:
          0.09915942 = sum of:
            0.09915942 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09915942 = score(doc=6672,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  14. New tools for human translators (1997) 0.02
    0.024789855 = product of:
      0.04957971 = sum of:
        0.04957971 = product of:
          0.09915942 = sum of:
            0.09915942 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.09915942 = score(doc=1179,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  15. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.024789855 = product of:
      0.04957971 = sum of:
        0.04957971 = product of:
          0.09915942 = sum of:
            0.09915942 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09915942 = score(doc=3117,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  16. ¬Der Student aus dem Computer (2023) 0.02
    0.024789855 = product of:
      0.04957971 = sum of:
        0.04957971 = product of:
          0.09915942 = sum of:
            0.09915942 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09915942 = score(doc=1079,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  17. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.021248447 = product of:
      0.042496894 = sum of:
        0.042496894 = product of:
          0.08499379 = sum of:
            0.08499379 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.08499379 = score(doc=4483,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  18. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.021248447 = product of:
      0.042496894 = sum of:
        0.042496894 = product of:
          0.08499379 = sum of:
            0.08499379 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08499379 = score(doc=4888,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  19. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.021248447 = product of:
      0.042496894 = sum of:
        0.042496894 = product of:
          0.08499379 = sum of:
            0.08499379 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.08499379 = score(doc=5429,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  20. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    0.01770704 = product of:
      0.03541408 = sum of:
        0.03541408 = product of:
          0.07082816 = sum of:
            0.07082816 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.07082816 = score(doc=1463,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19

Years

Languages

  • e 149
  • d 21
  • chi 1
  • el 1
  • m 1

Types

  • a 142
  • el 18
  • s 12
  • m 11
  • x 4
  • p 2
  • b 1
  • d 1
  • r 1

Classifications