Search (57 results, page 1 of 3)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09965296 = sum of:
      0.07934699 = product of:
        0.23804097 = sum of:
          0.23804097 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23804097 = score(doc=562,freq=2.0), product of:
              0.42354685 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04995828 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.02030597 = product of:
        0.04061194 = sum of:
          0.04061194 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04061194 = score(doc=562,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
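    The indented breakdowns in this list are Lucene ClassicSimilarity "explain" output, and every entry below follows the same pattern. As a reading aid, the dominant term weight of entry 1 reconstructs as follows (a worked example using entry 1's own numbers; the formulas are the standard Lucene TF-IDF definitions, with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1))):

    \begin{align*}
    \mathrm{idf} &= 1 + \ln\tfrac{44218}{24+1} \approx 8.478011\\
    \mathrm{tf} &= \sqrt{2} \approx 1.4142135\\
    \mathrm{queryWeight} &= \mathrm{idf}\cdot\mathrm{queryNorm} = 8.478011 \cdot 0.04995828 \approx 0.42354685\\
    \mathrm{fieldWeight} &= \mathrm{tf}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm} = 1.4142135 \cdot 8.478011 \cdot 0.046875 \approx 0.56201804\\
    \mathrm{weight} &= \mathrm{queryWeight}\cdot\mathrm{fieldWeight} \approx 0.23804097\\
    \mathrm{contribution} &= \mathrm{weight}\cdot\mathrm{coord}(1/3) \approx 0.07934699
    \end{align*}

    The two component contributions then sum to the entry score: 0.07934699 + 0.02030597 = 0.09965296.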
  2. Belbachir, F.; Boughanem, M.: Using language models to improve opinion detection (2018) 0.09
    0.08536626 = product of:
      0.17073251 = sum of:
        0.17073251 = product of:
          0.34146503 = sum of:
            0.34146503 = weight(_text_:opinion in 5044) [ClassicSimilarity], result of:
              0.34146503 = score(doc=5044,freq=26.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                1.0436088 = fieldWeight in 5044, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5044)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Opinion mining is one of the most important research tasks in the information retrieval research community. With the huge volume of opinionated data available on the Web, approaches must be developed to differentiate opinion from fact. In this paper, we present a lexicon-based approach for opinion retrieval. Generally, opinion retrieval consists of two stages: relevance to the query and opinion detection. In our work, we focus on the second stage, which consists of detecting opinionated documents. We compare the document to be analyzed with opinionated sources that contain subjective information. We hypothesize that a document with a strong similarity to opinionated sources is more likely to be opinionated itself. Typical lexicon-based approaches treat and choose their opinion sources according to their test collection, then calculate the opinion score based on the frequency of subjective terms in the document. In our work, we use different open opinion collections without any specific treatment and consider them as a reference collection. We then use language models to determine opinion scores. The analysis document and reference collection are represented by different language models (i.e., Dirichlet, Jelinek-Mercer and two-stage models). These language models are generally used in information retrieval to represent the relationship between documents and queries. However, in our study, we modify these language models to represent opinionated documents. We carry out several experiments using Text REtrieval Conference (TREC) Blogs 06 as our analysis collection and the Internet Movie Database (IMDB), Multi-Perspective Question Answering (MPQA) and CHESLY as our reference collection. To improve opinion detection, we study the impact of using different language models to represent the document and reference collection alongside different combinations of opinion and retrieval scores. We then use this data to deduce the best opinion detection models. Using the best models, our approach improves on the best baseline of TREC Blog (baseline4) by 30%.
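    The language-model comparison described in this abstract can be sketched compactly. The following is a minimal, illustrative sketch, not the authors' implementation: it scores a document against a reference opinion collection with Jelinek-Mercer smoothing; the smoothing weight lam and the function name are assumptions.

        import math
        from collections import Counter

        def jm_opinion_score(doc_tokens, opinion_tokens, background_tokens, lam=0.7):
            # Log-likelihood of the document under a Jelinek-Mercer-smoothed
            # language model of the reference opinion collection; a higher score
            # suggests the document resembles opinionated text. Both reference
            # collections are assumed non-empty.
            op, bg = Counter(opinion_tokens), Counter(background_tokens)
            op_len, bg_len = sum(op.values()), sum(bg.values())
            score = 0.0
            for term in doc_tokens:
                p = lam * (op[term] / op_len) + (1 - lam) * (bg[term] / bg_len)
                if p > 0:  # skip terms unseen in both collections
                    score += math.log(p)
            return score

    Documents would then be ranked by this score, optionally combined with a query-relevance score as the abstract describes.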
  3. Fernández, R.T.; Losada, D.E.: Effective sentence retrieval based on query-independent evidence (2012) 0.07
    0.071029015 = product of:
      0.14205803 = sum of:
        0.14205803 = product of:
          0.28411606 = sum of:
            0.28411606 = weight(_text_:opinion in 2728) [ClassicSimilarity], result of:
              0.28411606 = score(doc=2728,freq=8.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.86833495 = fieldWeight in 2728, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2728)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper we propose an effective sentence retrieval method that incorporates query-independent features into standard sentence retrieval models. To meet this aim, we apply a formal methodology and consider different query-independent features. In particular, we show that opinion-based features are promising. Opinion mining is an increasingly important research topic, but little is known about how to improve retrieval algorithms with opinion-based components. In this respect, we consider different kinds of opinion-based features to act as query-independent evidence and study whether their incorporation improves retrieval performance. On the other hand, information needs are usually related to people, locations or organizations. We hypothesize that using these named entities as query-independent features may also improve the sentence relevance estimation. Finally, the length of the retrieval unit has been shown to be an important component in different retrieval scenarios. We therefore include length-based features in our study. Our evaluation demonstrates that, either in isolation or in combination, these query-independent features substantially improve the performance of state-of-the-art sentence retrieval methods.
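    A common way to fold such query-independent evidence into a base retrieval score is a weighted combination. The sketch below is illustrative only; the specific features, the log-damping, and the weights are assumptions rather than the authors' fitted model.

        import math

        def combined_sentence_score(retrieval_score, opinion_score,
                                    entity_count, sentence_length,
                                    weights=(1.0, 0.3, 0.2, 0.1)):
            # Base relevance plus query-independent evidence: opinion strength,
            # named-entity presence, and a log-damped length prior.
            features = (retrieval_score, opinion_score,
                        math.log1p(entity_count), math.log1p(sentence_length))
            return sum(w * f for w, f in zip(weights, features))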
  4. Luo, Z.; Yu, Y.; Osborne, M.; Wang, T.: Structuring tweets for improving Twitter search (2015) 0.05
    0.05126078 = product of:
      0.10252156 = sum of:
        0.10252156 = product of:
          0.20504312 = sum of:
            0.20504312 = weight(_text_:opinion in 2335) [ClassicSimilarity], result of:
              0.20504312 = score(doc=2335,freq=6.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.62666684 = fieldWeight in 2335, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2335)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Spam and wildly varying documents make searching in Twitter challenging. Most Twitter search systems treat a Tweet as plain text when modeling relevance. However, a series of conventions allows users to Tweet in structured ways using a combination of different blocks of text. These blocks include plain text, hashtags, links, mentions, etc. Each block encodes a variety of communicative intent, and the sequence of these blocks captures changing discourse. Previous work shows that exploiting this structural information can improve retrieval of structured documents (e.g., web pages). In this study we utilize the structure of Tweets, induced by these blocks, for Twitter retrieval and Twitter opinion retrieval. For Twitter retrieval, a set of features derived from the blocks of text and their combinations is used in a learning-to-rank scenario. We show that structuring Tweets can achieve state-of-the-art performance. Our approach does not rely on social media features, but when we do add this additional information, performance improves significantly. For Twitter opinion retrieval, we explore the question of whether structural information derived from the body of Tweets and opinionatedness ratings of Tweets can improve performance. Experimental results show that retrieval using a novel unsupervised opinionatedness feature based on structuring Tweets achieves performance comparable to a supervised method using manually tagged Tweets. Topic-specific structured Tweet sets are shown to help with query-dependent opinion retrieval.
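    The block structure this study exploits can be recovered with simple pattern matching. A minimal sketch; the regular expression and the restriction to hashtags, mentions, and links are assumptions:

        import re

        BLOCK = re.compile(r"(#\w+)|(@\w+)|(https?://\S+)")

        def tweet_blocks(text):
            # Split a tweet into (block_type, text) pairs: hashtags, mentions,
            # links, and the plain-text runs between them.
            blocks, pos = [], 0
            for m in BLOCK.finditer(text):
                if m.start() > pos:
                    blocks.append(("text", text[pos:m.start()].strip()))
                kind = "hashtag" if m.group(1) else "mention" if m.group(2) else "link"
                blocks.append((kind, m.group(0)))
                pos = m.end()
            if pos < len(text):
                blocks.append(("text", text[pos:].strip()))
            return [b for b in blocks if b[1]]

    The resulting sequence of block types could then be converted into feature vectors for a learning-to-rank model.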
  5. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.039673496 = product of:
      0.07934699 = sum of:
        0.07934699 = product of:
          0.23804097 = sum of:
            0.23804097 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.23804097 = score(doc=862,freq=2.0), product of:
                0.42354685 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04995828 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  6. Costa-jussà, M.R.: How much hybridization does machine translation need? (2015) 0.04
    0.035514507 = product of:
      0.071029015 = sum of:
        0.071029015 = product of:
          0.14205803 = sum of:
            0.14205803 = weight(_text_:opinion in 2227) [ClassicSimilarity], result of:
              0.14205803 = score(doc=2227,freq=2.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.43416747 = fieldWeight in 2227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2227)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Opinion paper
  7. Warner, A.J.: Natural language processing (1987) 0.03
    0.02707463 = product of:
      0.05414926 = sum of:
        0.05414926 = product of:
          0.10829852 = sum of:
            0.10829852 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.10829852 = score(doc=337,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  8. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.0236903 = product of:
      0.0473806 = sum of:
        0.0473806 = product of:
          0.0947612 = sum of:
            0.0947612 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.0947612 = score(doc=3164,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  9. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.0236903 = product of:
      0.0473806 = sum of:
        0.0473806 = product of:
          0.0947612 = sum of:
            0.0947612 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.0947612 = score(doc=4506,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  10. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.0236903 = product of:
      0.0473806 = sum of:
        0.0473806 = product of:
          0.0947612 = sum of:
            0.0947612 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.0947612 = score(doc=6672,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  11. New tools for human translators (1997) 0.02
    0.0236903 = product of:
      0.0473806 = sum of:
        0.0473806 = product of:
          0.0947612 = sum of:
            0.0947612 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.0947612 = score(doc=1179,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  12. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.02
    0.0236903 = product of:
      0.0473806 = sum of:
        0.0473806 = product of:
          0.0947612 = sum of:
            0.0947612 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.0947612 = score(doc=3117,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  13. ¬Der Student aus dem Computer (2023) 0.02
    0.0236903 = product of:
      0.0473806 = sum of:
        0.0473806 = product of:
          0.0947612 = sum of:
            0.0947612 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.0947612 = score(doc=1079,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  14. Olsen, K.A.; Williams, J.G.: Spelling and grammar checking using the Web as a text repository (2004) 0.02
    0.023676338 = product of:
      0.047352675 = sum of:
        0.047352675 = product of:
          0.09470535 = sum of:
            0.09470535 = weight(_text_:opinion in 2891) [ClassicSimilarity], result of:
              0.09470535 = score(doc=2891,freq=2.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.28944498 = fieldWeight in 2891, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2891)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural languages are both complex and dynamic. They are in part formalized through dictionaries and grammar. Dictionaries attempt to provide definitions and examples of various usages for all the words in a language. Grammar, on the other hand, is the system of rules that defines the structure of a language and is concerned with the correct use and application of the language in speaking or writing. The fact that these two mechanisms lag behind the language as currently used is not a serious problem for those living in a language culture and talking their native language. However, the correct choice of words, expressions, and word relationships is much more difficult when speaking or writing in a foreign language. The basics of the grammar of a language may have been learned in school decades ago, and even then there were always several choices for the correct expression for an idea, fact, opinion, or emotion. Although many different parts of speech and their relationships can make for difficult language decisions, prepositions tend to be problematic for nonnative speakers of English, and, in reality, prepositions are a major problem in most languages. Does a speaker or writer say "in the West Coast" or "on the West Coast," or perhaps "at the West Coast"? In Norwegian, we are "in" a city, but "at" a place. But the distinction between cities and places is vague. To be absolutely correct, one really has to learn the right preposition for every single place. A simplistic way of resolving these language issues is to ask a native speaker. But even native speakers may disagree about the right choice of words. If there is disagreement, then one will have to ask more than one native speaker, treat his/her response as a vote for a particular choice, and perhaps choose the majority choice as the best possible alternative. In real life, such a procedure may be impossible or impractical, but in the electronic world, as we shall see, this is quite easy to achieve. Using the vast text repository of the Web, we may get a significant voting base for even the most detailed and distinct phrases. We shall start by introducing a set of examples to present our idea of using the text repository on the Web to aid in making the best word selection, especially for the use of prepositions. Then we will present a more general discussion of the possibilities and limitations of using the Web as an aid for correct writing.
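    The "voting" procedure sketched in this abstract reduces to comparing frequency counts of the candidate phrasings in a very large text collection. A minimal sketch, assuming a local corpus string stands in for the Web (the paper queries the Web itself) and hypothetical vote counts:

        def best_preposition(corpus, template, candidates=("in", "on", "at")):
            # Each occurrence of a filled-in phrase counts as one vote;
            # the majority choice wins, mirroring the procedure above.
            votes = {p: corpus.count(template.format(p)) for p in candidates}
            return max(votes, key=votes.get), votes

        # Example (hypothetical counts):
        # best_preposition(corpus, "{} the West Coast")
        # -> ("on", {"in": 41, "on": 378, "at": 9})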
  15. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.02030597 = product of:
      0.04061194 = sum of:
        0.04061194 = product of:
          0.08122388 = sum of:
            0.08122388 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.08122388 = score(doc=4483,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  16. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.02030597 = product of:
      0.04061194 = sum of:
        0.04061194 = product of:
          0.08122388 = sum of:
            0.08122388 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08122388 = score(doc=4888,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  17. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.02030597 = product of:
      0.04061194 = sum of:
        0.04061194 = product of:
          0.08122388 = sum of:
            0.08122388 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.08122388 = score(doc=5429,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  18. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    0.016921643 = product of:
      0.033843286 = sum of:
        0.033843286 = product of:
          0.06768657 = sum of:
            0.06768657 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.06768657 = score(doc=1463,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  19. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.016921643 = product of:
      0.033843286 = sum of:
        0.033843286 = product of:
          0.06768657 = sum of:
            0.06768657 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.06768657 = score(doc=5428,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  20. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.02
    0.016921643 = product of:
      0.033843286 = sum of:
        0.033843286 = product of:
          0.06768657 = sum of:
            0.06768657 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
              0.06768657 = score(doc=1693,freq=2.0), product of:
                0.17494538 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04995828 = queryNorm
                0.38690117 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:37:18

Languages

  • e 41
  • d 16

Types

  • a 45
  • el 5
  • m 5
  • s 3
  • p 2
  • x 2
  • d 1