Search (56 results, page 1 of 3)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09975362 = sum of:
      0.07942714 = product of:
        0.23828141 = sum of:
          0.23828141 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23828141 = score(doc=562,freq=2.0), product of:
              0.42397466 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05000874 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.02032648 = product of:
        0.04065296 = sum of:
          0.04065296 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04065296 = score(doc=562,freq=2.0), product of:
              0.17512208 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05000874 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
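
    The indented figures above each hit are Lucene explain traces using ClassicSimilarity, i.e. classic TF-IDF scoring. As a minimal standalone sketch (Python, not part of the search system), the snippet below recomputes the first hit's score purely from the factors printed in its trace (idf, queryNorm, fieldNorm, the term frequency, and the coord fractions); the helper name is illustrative.

      import math

      def classic_similarity_term_score(freq, idf, query_norm, field_norm, coord=1.0):
          """One term's contribution as printed in the trace: queryWeight = idf * queryNorm,
          fieldWeight = tf * idf * fieldNorm with tf = sqrt(freq), scaled by the coord factor."""
          tf = math.sqrt(freq)
          query_weight = idf * query_norm
          field_weight = tf * idf * field_norm
          return query_weight * field_weight * coord

      # Entry 1 (doc 562): the "3a" term under coord(1/3) plus the "22" term under coord(1/2).
      score = (
          classic_similarity_term_score(2.0, 8.478011, 0.05000874, 0.046875, coord=1 / 3)
          + classic_similarity_term_score(2.0, 3.5018296, 0.05000874, 0.046875, coord=1 / 2)
      )
      print(round(score, 8))  # ~0.0997536, matching the displayed 0.09975362 up to float rounding
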
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.03971357 = product of:
      0.07942714 = sum of:
        0.07942714 = product of:
          0.23828141 = sum of:
            0.23828141 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.23828141 = score(doc=862,freq=2.0), product of:
                0.42397466 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05000874 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Kim, W.; Wilbur, W.J.: Corpus-based statistical screening for content-bearing terms (2001) 0.03
    0.032885723 = product of:
      0.065771446 = sum of:
        0.065771446 = product of:
          0.13154289 = sum of:
            0.13154289 = weight(_text_:spaces in 5188) [ClassicSimilarity], result of:
              0.13154289 = score(doc=5188,freq=4.0), product of:
                0.32442674 = queryWeight, product of:
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.05000874 = queryNorm
                0.40546256 = fieldWeight in 5188, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5188)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Kim and Wilbur present three techniques for the algorithmic identification in text of content-bearing terms and phrases intended for human use as entry points or hyperlinks. Using a set of 1,075 terms from MEDLINE, rated on a zero-to-four scale from stop word to definite content word, they evaluate the ranked lists of their three methods based on their placement of content words in the top ranks. Data consist of the natural language elements of 304,057 MEDLINE records from 1996 and 173,252 Wall Street Journal records from the TIPSTER collection. Phrases are extracted by breaking at punctuation marks and stop words, normalized by lowercasing, replacing non-alphanumerics with spaces, and collapsing multiple spaces. In the "strength of context" approach each document is a vector of binary values for each word or word pair. The words or word pairs are removed from all documents, the Robertson-Sparck Jones relevance weight for each term is computed, negative weights are replaced with zero, those below a randomness threshold are ignored, and the remainder are summed for each document to yield a document score; the term is then assigned the average score of the documents in which it occurred, and the average of these word scores is assigned to the original phrase. The "frequency clumping" approach defines a random phrase as one whose distribution among documents is Poisson in character. A p-value, the probability that a phrase's frequency of occurrence would be equal to, or less than, the Poisson expectation, is computed, and the score assigned is the negative log of that value. In the "database comparison" approach, if a phrase occurring in a document allows prediction that the document is in MEDLINE rather than in the Wall Street Journal, it is considered to be content-bearing for MEDLINE. The score is computed by dividing the number of occurrences of the term in MEDLINE by its occurrences in the Journal and taking the product of all these values. The one hundred top- and bottom-ranked phrases that occurred in at least 500 documents were collected for each method; the union set had 476 phrases. A second selection was made of two-word phrases each occurring in only three documents, with a union of 599 phrases. A judge then rated the two sets of terms for subject specificity on a 0-to-4 scale. Precision was the average subject specificity of the first r ranks, recall the fraction of the subject-specific phrases in the first r ranks, and eleven-point average precision was used as a summary measure. All three methods move content-bearing terms forward in the lists, as does using the sum of the logs of the three methods' scores.
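
    Of the three methods, the "frequency clumping" score lends itself to a short sketch. The following is one plausible reading of the abstract rather than the authors' code: the expected document frequency of a randomly scattered phrase is an assumed Poisson-style estimate, and the score is the negative log of the probability of seeing a document frequency at least as small as the one observed (content-bearing phrases clump into fewer documents than chance predicts).

      import math

      def poisson_cdf(k, lam):
          """P(X <= k) for X ~ Poisson(lam), accumulated iteratively to avoid overflow."""
          pmf = math.exp(-lam)
          total = pmf
          for i in range(1, k + 1):
              pmf *= lam / i
              total += pmf
          return total

      def clumping_score(total_occurrences, doc_frequency, n_docs):
          """Assumed reading of the 'frequency clumping' score: -log P(DF <= observed)
          under a Poisson model; expected_df is an illustrative estimate, not from the paper."""
          expected_df = n_docs * (1.0 - math.exp(-total_occurrences / n_docs))
          p_value = poisson_cdf(doc_frequency, expected_df)
          return -math.log(p_value)

      # A phrase whose 400 occurrences are concentrated in 120 of 304,057 documents
      # scores much higher than one spread over 390 documents.
      print(clumping_score(400, 120, 304_057))
      print(clumping_score(400, 390, 304_057))
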
  4. Lee, K.H.; Ng, M.K.M.; Lu, Q.: Text segmentation for Chinese spell checking (1999) 0.03
    0.029067146 = product of:
      0.05813429 = sum of:
        0.05813429 = product of:
          0.11626858 = sum of:
            0.11626858 = weight(_text_:spaces in 3913) [ClassicSimilarity], result of:
              0.11626858 = score(doc=3913,freq=2.0), product of:
                0.32442674 = queryWeight, product of:
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.05000874 = queryNorm
                0.35838163 = fieldWeight in 3913, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Chinese spell checking is different from its counterparts for Western languages because Chinese words in texts are not separated by spaces. Chinese spell checking in this article refers to identifying the misuse of characters in text composition; in other words, it is error correction at the word level rather than at the character level. Before Chinese sentences are spell checked, the text is segmented into semantic units. Error detection can then be carried out on the segmented text based on a thesaurus and grammar rules. Segmentation is not a trivial process due to ambiguities in the Chinese language and errors in texts. Because it is not practical to define all Chinese words in a dictionary, words that are not predefined must also be dealt with. The number of word combinations increases exponentially with the length of the sentence. In this article, a Block-of-Combinations (BOC) segmentation method based on frequency of word usage is proposed to reduce the word combinations from exponential growth to linear growth. In experiments carried out on Hong Kong newspapers, BOC correctly resolves 10% more ambiguities than the Maximum Match segmentation method. To make the segmentation more suitable for spell checking, user interaction is also suggested.
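
    The exponential-versus-linear point in the abstract is easy to see in code. The sketch below is not the Block-of-Combinations method itself; it is a generic frequency-driven dynamic-programming segmenter (with an invented toy dictionary) that likewise avoids enumerating every combination of breakpoints.

      import math

      # Invented word-usage counts; the paper derives such frequencies from real corpora.
      FREQ = {"研究": 5000, "研究生": 800, "生命": 4000, "命": 1200, "起源": 900}

      def segment(text, freq=FREQ, max_len=4):
          """Best-scoring segmentation in O(n * max_len) steps instead of exponential search."""
          n = len(text)
          total = sum(freq.values())
          best = [float("-inf")] * (n + 1)   # best log-score of any segmentation of text[:i]
          back = [0] * (n + 1)               # position of the previous break on the best path
          best[0] = 0.0
          for i in range(1, n + 1):
              for j in range(max(0, i - max_len), i):
                  word = text[j:i]
                  if word in freq and best[j] > float("-inf"):
                      score = best[j] + math.log(freq[word] / total)
                      if score > best[i]:
                          best[i], back[i] = score, j
          words, i = [], n                   # walk the backpointers to recover the words
          while i > 0:
              words.append(text[back[i]:i])
              i = back[i]
          return list(reversed(words))

      print(segment("研究生命起源"))  # ['研究', '生命', '起源'] rather than ['研究生', '命', '起源']
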
  5. Wang, F.L.; Yang, C.C.: Mining Web data for Chinese segmentation (2007) 0.03
    0.029067146 = product of:
      0.05813429 = sum of:
        0.05813429 = product of:
          0.11626858 = sum of:
            0.11626858 = weight(_text_:spaces in 604) [ClassicSimilarity], result of:
              0.11626858 = score(doc=604,freq=2.0), product of:
                0.32442674 = queryWeight, product of:
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.05000874 = queryNorm
                0.35838163 = fieldWeight in 604, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=604)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Modern information retrieval systems use keywords within documents as indexing terms for the search of relevant documents. As Chinese is an ideographic, character-based language, the words in the texts are not delimited by white spaces. Indexing of Chinese documents is impossible without a proper segmentation algorithm. Many Chinese segmentation algorithms have been proposed in the past. Traditional segmentation algorithms cannot operate without a large dictionary or a large corpus of training data. Nowadays, the Web has become the largest corpus, which makes it ideal for Chinese segmentation. Although most search engines have problems in segmenting texts into proper words, they maintain huge databases of documents and of the frequencies of character sequences in those documents. Their databases are important potential resources for segmentation. In this paper, we propose a segmentation algorithm that mines Web data with the help of search engines. In addition, the Romanized pinyin of the Chinese language indicates word boundaries in the text; our algorithm is the first to utilize Romanized pinyin for segmentation. It is the first unified segmentation algorithm for the Chinese language from different geographical areas, and it is also domain independent because of the nature of the Web. Experiments have been conducted on the datasets of a recent Chinese segmentation competition. The results show that our algorithm outperforms the traditional algorithms in terms of precision and recall. Moreover, our algorithm can effectively deal with the problems of segmentation ambiguity, new word (unknown word) detection, and stop words.
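
    A rough illustration of the core idea of scoring candidate segmentations by Web-derived character-sequence frequencies: the counts below are invented stand-ins for what a search engine's document database might report, not a real API, and the smoothing constant and corpus size are arbitrary.

      import math

      # Invented Web frequencies of character sequences (the paper mines these via search engines).
      WEB_COUNTS = {"香港": 9_000_000, "香港人": 2_500_000, "港人": 400_000, "香": 20_000_000, "人": 80_000_000}

      def segmentation_score(words, counts=WEB_COUNTS, corpus_size=10**10):
          """Sum of log relative Web frequencies of the segments; unseen segments get count 1."""
          return sum(math.log(counts.get(w, 1) / corpus_size) for w in words)

      candidates = [["香港", "人"], ["香", "港人"], ["香港人"]]
      for c in candidates:
          print(c, round(segmentation_score(c), 2))
      # The frequent three-character unit ['香港人'] scores highest with these toy counts.
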
  6. Park, J.S.; O'Brien, J.C.; Cai, C.J.; Ringel Morris, M.; Liang, P.; Bernstein, M.S.: Generative agents : interactive simulacra of human behavior (2023) 0.03
    0.029067146 = product of:
      0.05813429 = sum of:
        0.05813429 = product of:
          0.11626858 = sum of:
            0.11626858 = weight(_text_:spaces in 972) [ClassicSimilarity], result of:
              0.11626858 = score(doc=972,freq=2.0), product of:
                0.32442674 = queryWeight, product of:
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.05000874 = queryNorm
                0.35838163 = fieldWeight in 972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.487401 = idf(docFreq=182, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=972)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
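
    The abstract names the memory operations (store experiences, synthesize reflections, retrieve to plan) without spelling out a retrieval rule. The sketch below is an assumed minimal memory stream, not the authors' implementation: the recency/importance/relevance weighting, the word-overlap relevance stand-in, and all names are illustrative.

      from dataclasses import dataclass, field
      import math
      import time

      @dataclass
      class Memory:
          text: str
          importance: float                        # e.g. rated 1-10 by the language model
          created: float = field(default_factory=time.time)

      class MemoryStream:
          """Append-only record of experiences with scored retrieval."""
          def __init__(self, decay_hours=24.0):
              self.memories: list[Memory] = []
              self.decay = decay_hours * 3600

          def observe(self, text, importance):
              self.memories.append(Memory(text, importance))

          def retrieve(self, query, k=3):
              # Score = recency * importance * relevance; relevance here is plain word
              # overlap, where a real system would use embedding similarity.
              q = set(query.lower().split())
              def score(m):
                  recency = math.exp(-(time.time() - m.created) / self.decay)
                  relevance = len(q & set(m.text.lower().split())) / (len(q) or 1)
                  return recency * m.importance * relevance
              return sorted(self.memories, key=score, reverse=True)[:k]

      stream = MemoryStream()
      stream.observe("Isabella is planning a Valentine's Day party at the cafe", importance=8)
      stream.observe("Ate breakfast and washed the dishes", importance=2)
      print([m.text for m in stream.retrieve("who is hosting the Valentine's Day party")])
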
  7. Warner, A.J.: Natural language processing (1987) 0.03
    0.027101975 = product of:
      0.05420395 = sum of:
        0.05420395 = product of:
          0.1084079 = sum of:
            0.1084079 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.1084079 = score(doc=337,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  8. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.02
    0.023714228 = product of:
      0.047428455 = sum of:
        0.047428455 = product of:
          0.09485691 = sum of:
            0.09485691 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09485691 = score(doc=3164,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  9. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.023714228 = product of:
      0.047428455 = sum of:
        0.047428455 = product of:
          0.09485691 = sum of:
            0.09485691 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09485691 = score(doc=4506,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  10. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.023714228 = product of:
      0.047428455 = sum of:
        0.047428455 = product of:
          0.09485691 = sum of:
            0.09485691 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09485691 = score(doc=6672,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  11. New tools for human translators (1997) 0.02
    0.023714228 = product of:
      0.047428455 = sum of:
        0.047428455 = product of:
          0.09485691 = sum of:
            0.09485691 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.09485691 = score(doc=1179,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  12. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.023714228 = product of:
      0.047428455 = sum of:
        0.047428455 = product of:
          0.09485691 = sum of:
            0.09485691 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09485691 = score(doc=3117,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  13. ¬Der Student aus dem Computer (2023) 0.02
    0.023714228 = product of:
      0.047428455 = sum of:
        0.047428455 = product of:
          0.09485691 = sum of:
            0.09485691 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09485691 = score(doc=1079,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  14. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.02032648 = product of:
      0.04065296 = sum of:
        0.04065296 = product of:
          0.08130592 = sum of:
            0.08130592 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.08130592 = score(doc=4483,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  15. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.02032648 = product of:
      0.04065296 = sum of:
        0.04065296 = product of:
          0.08130592 = sum of:
            0.08130592 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08130592 = score(doc=4888,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  16. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.02032648 = product of:
      0.04065296 = sum of:
        0.04065296 = product of:
          0.08130592 = sum of:
            0.08130592 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.08130592 = score(doc=5429,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  17. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    0.016938735 = product of:
      0.03387747 = sum of:
        0.03387747 = product of:
          0.06775494 = sum of:
            0.06775494 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.06775494 = score(doc=1463,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  18. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.016938735 = product of:
      0.03387747 = sum of:
        0.03387747 = product of:
          0.06775494 = sum of:
            0.06775494 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.06775494 = score(doc=5428,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  19. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.02
    0.016938735 = product of:
      0.03387747 = sum of:
        0.03387747 = product of:
          0.06775494 = sum of:
            0.06775494 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
              0.06775494 = score(doc=1693,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.38690117 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:37:18
  20. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.01
    0.0135509875 = product of:
      0.027101975 = sum of:
        0.027101975 = product of:
          0.05420395 = sum of:
            0.05420395 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
              0.05420395 = score(doc=8521,freq=2.0), product of:
                0.17512208 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05000874 = queryNorm
                0.30952093 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19

Languages

  • e 40
  • d 16

Types

  • a 44
  • el 6
  • m 5
  • s 3
  • p 2
  • x 2
  • d 1