Search (22 results, page 1 of 2)

  • year_i:[2010 TO 2020}
  • theme_ss:"Computerlinguistik"
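  The two filters above use Lucene/Solr range syntax: "[" marks an inclusive bound and "}" an exclusive one, so year_i:[2010 TO 2020} matches 2010 <= year < 2020, and the two bracket styles may be mixed. Below is a minimal sketch of reproducing this page against a Solr endpoint; the URL, core name, and free-text query are assumptions (the terms "b" and "22" are inferred from the explain trees further down), while the two fq filters are taken verbatim from this page.

      # Hypothetical reproduction of this results page against a Solr core.
      import requests

      SOLR_URL = "http://localhost:8983/solr/literature/select"  # assumed endpoint

      params = {
          "q": "b 22",                      # assumed query; the explain trees
                                            # below score the terms 'b' and '22'
          "fq": [
              "year_i:[2010 TO 2020}",      # inclusive lower, exclusive upper bound
              'theme_ss:"Computerlinguistik"',
          ],
          "rows": 20,                       # 20 hits per page (22 found, 2 pages)
          "debugQuery": "on",               # emits the per-hit score explanations
          "wt": "json",
      }

      hits = requests.get(SOLR_URL, params=params).json()
      print(hits["response"]["numFound"])   # 22, as in the header above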
  1. Sprachtechnologie : ein Überblick (2012) 0.02
    0.015781997 = product of:
      0.031563994 = sum of:
        0.031563994 = product of:
          0.06312799 = sum of:
            0.06312799 = weight(_text_:b in 1750) [ClassicSimilarity], result of:
              0.06312799 = score(doc=1750,freq=8.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.3914457 = fieldWeight in 1750, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1750)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
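    Each score breakdown on this page is a Lucene ClassicSimilarity (TF-IDF) explain tree. A minimal sketch of the arithmetic follows, using the values from hit 1; the same formula reproduces every other tree here (fieldNorm is Lucene's lossily quantized 1/sqrt(field length), and coord(1/2) reflects one of two query clauses matching). The helper function name is mine; only the numbers come from the tree above.

        import math

        def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm,
                               coords=(0.5, 0.5)):
            """Recompute a one-term ClassicSimilarity score from its explain tree."""
            idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.542962 = idf(...)
            tf = math.sqrt(freq)                             # 2.828427 = tf(freq=8.0)
            query_weight = idf * query_norm                  # 0.16126883 = queryWeight
            field_weight = tf * idf * field_norm             # 0.3914457 = fieldWeight
            score = query_weight * field_weight              # 0.06312799 = weight(_text_:b ...)
            for c in coords:                                 # 0.5 = coord(1/2), applied twice
                score *= c
            return score

        # Hit 1: freq=8.0, docFreq=3476, maxDocs=44218, fieldNorm=0.0390625
        print(classic_similarity(8.0, 3476, 44218, 0.0390625, 0.045518078))
        # -> 0.015781..., the top-level score of document 1750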
    
    Abstract
    For more than half a century there have been serious attempts, worth taking seriously, to process human language by machine. Machine translation and "natural" dialogue with computers were among the first ideas that staked out the field of what would later become computational linguistics, or language technology, and guided its agenda. Today this field, also called natural language processing (NLP), has diversified considerably: through the rapid development of computer science, much that was once unimaginable has become reality (e.g. automated telephone directory assistance), and some things once impossible have at least become feasible (e.g. handhelds with speech input and output acting as personal digital (information) assistants). Computational linguistics has a range of applications, some of which have made the leap into commercial use (e.g. dictation systems, text classification, machine translation). Natural language systems (NLS) of widely varying functionality (e.g. for answering arbitrary questions or generating complex texts) remain the subject of intensive research, even though the ambitious goals of the early days are far from being reached (and have accordingly been scaled back). Given the wide range of activity in computational linguistics and language technology, however, where natural language processing stands today is neither obvious nor easy to find out, for students of the field and all the more so for laypeople. One aim of this book is to improve the current state of the literature in this respect by compiling the specifically system-related aspects of computational linguistics into an overview of language technology.
  2. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.02
    0.015781997 = product of:
      0.031563994 = sum of:
        0.031563994 = product of:
          0.06312799 = sum of:
            0.06312799 = weight(_text_:b in 2027) [ClassicSimilarity], result of:
              0.06312799 = score(doc=2027,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.3914457 = fieldWeight in 2027, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2027)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Endres-Niggemeyer, B.: Thinkie: Lautes Denken mit Spracherkennung (mobil) (2013) 0.01
    0.013391469 = product of:
      0.026782937 = sum of:
        0.026782937 = product of:
          0.053565875 = sum of:
            0.053565875 = weight(_text_:b in 1145) [ClassicSimilarity], result of:
              0.053565875 = score(doc=1145,freq=4.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.3321527 = fieldWeight in 1145, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1145)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Thinking aloud is a well-established method for studying cognitive processes. It is used in many disciplines, e.g. to uncover what users experience when interacting with computer interfaces. After a brief explanation of thinking aloud, the app Thinkie is introduced. Thinkie is a mobile think-aloud solution for iPhone and iPad. The test subject records the audio on the iPhone; the speech recognition software Siri (http://www.apple.com/de/ios/siri/) transcribes it. In parallel, the session is filmed on the iPad or another device. On the iPad, the transcript can then be worked up and interpreted with the video in view. Thinkie moves the text files via a cloud collection; the videos are transferred with iTunes. Thinkie is not yet ready for practical use: the segments that Siri can process are still too short. That will change.
  4. Ludwig, B.; Reischer, J.: Informationslinguistik in Regensburg (2012) 0.01
    0.012625597 = product of:
      0.025251195 = sum of:
        0.025251195 = product of:
          0.05050239 = sum of:
            0.05050239 = weight(_text_:b in 555) [ClassicSimilarity], result of:
              0.05050239 = score(doc=555,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.31315655 = fieldWeight in 555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0625 = fieldNorm(doc=555)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    0.012334143 = product of:
      0.024668286 = sum of:
        0.024668286 = product of:
          0.04933657 = sum of:
            0.04933657 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.04933657 = score(doc=1490,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:30:24
  6. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: A bibliometric and network analysis of the field of computational linguistics (2016) 0.01
    0.011047398 = product of:
      0.022094795 = sum of:
        0.022094795 = product of:
          0.04418959 = sum of:
            0.04418959 = weight(_text_:b in 2764) [ClassicSimilarity], result of:
              0.04418959 = score(doc=2764,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.27401197 = fieldWeight in 2764, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2764)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Liu, P.J.; Saleh, M.; Pot, E.; Goodrich, B.; Sepassi, R.; Kaiser, L.; Shazeer, N.: Generating Wikipedia by summarizing long sequences (2018) 0.01
    0.011047398 = product of:
      0.022094795 = sum of:
        0.022094795 = product of:
          0.04418959 = sum of:
            0.04418959 = weight(_text_:b in 773) [ClassicSimilarity], result of:
              0.04418959 = score(doc=773,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.27401197 = fieldWeight in 773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=773)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: A comparison study of some Arabic root finding algorithms (2010) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 3457) [ClassicSimilarity], result of:
              0.037876792 = score(doc=3457,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 3457, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3457)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Perovsek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrac, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 2697) [ClassicSimilarity], result of:
              0.037876792 = score(doc=2697,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 2697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2697)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Ghazzawi, N.; Robichaud, B.; Drouin, P.; Sadat, F.: Automatic extraction of specialized verbal units (2018) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 4094) [ClassicSimilarity], result of:
              0.037876792 = score(doc=4094,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 4094, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4094)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Lu, C.; Bu, Y.; Wang, J.; Ding, Y.; Torvik, V.; Schnaars, M.; Zhang, C.: Examining scientific writing styles from the perspective of linguistic complexity : a cross-level moderation model (2019) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 5219) [ClassicSimilarity], result of:
              0.037876792 = score(doc=5219,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 5219, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5219)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Publishing articles in high-impact English journals is difficult for scholars around the world, especially for non-native English-speaking scholars (NNESs), most of whom struggle with proficiency in English. To uncover the differences in English scientific writing between native English-speaking scholars (NESs) and NNESs, we collected a large-scale data set containing more than 150,000 full-text articles published in PLoS between 2006 and 2015. We divided these articles into three groups according to the ethnic backgrounds of the first and corresponding authors, obtained by Ethnea, and examined the scientific writing styles in English from a two-fold perspective of linguistic complexity: (a) syntactic complexity, including measurements of sentence length and sentence complexity; and (b) lexical complexity, including measurements of lexical diversity, lexical density, and lexical sophistication. The observations suggest marginal differences between groups in syntactic and lexical complexity.
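    The complexity dimensions named above are straightforward to operationalize. A minimal sketch follows, using common textbook measures (mean sentence length for syntactic complexity; type-token ratio and lexical density for lexical complexity); these are not necessarily the exact variants used in the study, and the small function-word list is illustrative only.

        import re

        FUNCTION_WORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "are",
                          "was", "were", "that", "this", "for", "with", "on", "by"}

        def complexity_measures(text):
            """Mean sentence length, type-token ratio, and lexical density."""
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            tokens = re.findall(r"[a-z']+", text.lower())
            mean_sentence_length = len(tokens) / len(sentences)  # syntactic complexity
            type_token_ratio = len(set(tokens)) / len(tokens)    # lexical diversity
            content = [t for t in tokens if t not in FUNCTION_WORDS]
            lexical_density = len(content) / len(tokens)         # lexical density
            return mean_sentence_length, type_token_ratio, lexical_density

        print(complexity_measures("The results suggest marginal differences. "
                                  "Complexity varies across author groups."))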
  12. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.037002426 = score(doc=563,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 1.2013 19:22:47
  13. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.037002426 = score(doc=1848,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The goal of entity linking is to associate a reference to an entity, found in unstructured natural language content, with its entry in an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
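    For illustration, a minimal sketch of the linking decision follows: match a person reference from non-English text against an inventory of known entities, or return None (a NIL entity) when no resolution is believed to exist. The inventory, threshold, and string-similarity measure are stand-ins for the article's fully automated components.

        from difflib import SequenceMatcher

        KNOWN_ENTITIES = ["Angela Merkel", "Nelson Mandela", "Marie Curie"]  # hypothetical inventory

        def similarity(a, b):
            """Crude surface similarity between a mention and a candidate name."""
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def link_person(mention, threshold=0.75):
            """Return the best-matching known entity, or None (NIL) below threshold."""
            best = max(KNOWN_ENTITIES, key=lambda e: similarity(mention, e))
            return best if similarity(mention, best) >= threshold else None

        print(link_person("Angela Merkelova"))  # inflected form -> "Angela Merkel"
        print(link_person("Janez Novak"))       # unknown person -> None (NIL)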
  14. Fegley, B.D.; Torvik, V.I.: On the role of poetic versus nonpoetic features in "kindred" and diachronic poetry attribution (2012) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 488) [ClassicSimilarity], result of:
              0.031563994 = score(doc=488,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 488, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=488)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Author attribution studies have demonstrated remarkable success in applying orthographic and lexicographic features of text in a variety of discrimination problems. What might poetic features, such as syllabic stress and mood, contribute? We address this question in the context of two different attribution problems: (a) kindred: differentiate Langston Hughes' early poems from those of kindred poets and (b) diachronic: differentiate Hughes' early from his later poems. Using a diverse set of 535 generic text features, each categorized as poetic or nonpoetic, correlation-based greedy forward search ranked the features and a support vector machine classified the poems. A small subset of features (~10) achieved cross-validated precision and recall as high as 87%. Poetic features (rhyme patterns particularly) were nearly as effective as nonpoetic in kindred discrimination, but less effective diachronically. In other words, Hughes used both poetic and nonpoetic features in distinctive ways and his use of nonpoetic features evolved systematically while he continued to experiment with poetic features. These findings affirm qualitative studies attesting to structural elements from Black oral tradition and Black folk music (blues) and to the internal consistency of Hughes' early poetry.
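    A minimal sketch of the attribution pipeline described above follows: a greedy forward search ranks features by a simple correlation criterion, then a linear support vector machine is evaluated under cross-validation. The synthetic data and the naive merit function are placeholders; the study ranked 535 categorized text features with a correlation-based criterion.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 50))    # 120 poems x 50 generic text features
        y = rng.integers(0, 2, size=120)  # 0 = kindred poets, 1 = Hughes

        def greedy_forward_search(X, y, k=10):
            """Greedily add the feature whose inclusion best correlates with y."""
            selected, remaining = [], list(range(X.shape[1]))
            for _ in range(k):
                def merit(j):
                    signal = X[:, selected + [j]].mean(axis=1)  # naive combined signal
                    return abs(np.corrcoef(signal, y)[0, 1])
                best = max(remaining, key=merit)
                selected.append(best)
                remaining.remove(best)
            return selected

        features = greedy_forward_search(X, y, k=10)   # ~10 features, as in the study
        svm = SVC(kernel="linear")
        print(cross_val_score(svm, X[:, features], y, cv=5).mean())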
  15. Ye, Z.; He, B.; Wang, L.; Luo, T.: Utilizing term proximity for blog post retrieval (2013) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 1126) [ClassicSimilarity], result of:
              0.031563994 = score(doc=1126,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 1126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1126)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Malo, P.; Sinha, A.; Korhonen, P.; Wallenius, J.; Takala, P.: Good debt or bad debt : detecting semantic orientations in economic texts (2014) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 1226) [ClassicSimilarity], result of:
              0.031563994 = score(doc=1226,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 1226, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1226)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The use of robo-readers to analyze news texts is an emerging technology trend in computational finance. Recent research has developed sophisticated financial polarity lexicons for investigating how financial sentiments relate to future company performance. However, based on experience from fields that commonly analyze sentiment, it is well known that the overall semantic orientation of a sentence may differ from that of individual words. This article investigates how semantic orientations can be better detected in financial and economic news by accommodating the overall phrase-structure information and domain-specific use of language. Our three main contributions are the following: (a) a human-annotated finance phrase bank that can be used for training and evaluating alternative models; (b) a technique to enhance financial lexicons with attributes that help to identify expected direction of events that affect sentiment; and (c) a linearized phrase-structure model for detecting contextual semantic orientations in economic texts. The relevance of the newly added lexicon features and the benefit of using the proposed learning algorithm are demonstrated in a comparative study against general sentiment models as well as the popular word frequency models used in recent financial studies. The proposed framework is parsimonious and avoids the explosion in feature space caused by the use of conventional n-gram features.
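    Contribution (b) above rests on the observation that the orientation of a financial expression depends jointly on the concept and the direction of change. A minimal sketch of that idea follows; the toy lexicon and simple word adjacency are illustrative only, since the article combines such direction attributes with a linearized phrase-structure model.

        DIRECTION = {"increased": +1, "rose": +1, "decreased": -1, "fell": -1}
        # +1: an increase in this concept is good news; -1: an increase is bad news
        CONCEPT_POLARITY = {"profit": +1, "sales": +1, "debt": -1, "costs": -1}

        def phrase_orientation(phrase):
            """Combine concept polarity with movement direction into an orientation."""
            score = 1
            for w in phrase.lower().split():
                score *= CONCEPT_POLARITY.get(w, 1) * DIRECTION.get(w, 1)
            return "positive" if score > 0 else "negative"

        print(phrase_orientation("Debt decreased"))  # negative concept, down -> positive
        print(phrase_orientation("Profit fell"))     # positive concept, down -> negative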
  17. Symonds, M.; Bruza, P.; Zuccon, G.; Koopman, B.; Sitbon, L.; Turner, I.: Automatic query expansion : a structural linguistic perspective (2014) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 1338) [ClassicSimilarity], result of:
              0.031563994 = score(doc=1338,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 1338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Reyes Ayala, B.; Knudson, R.; Chen, J.; Cao, G.; Wang, X.: Metadata records machine translation combining multi-engine outputs with limited parallel data (2018) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 4010) [ClassicSimilarity], result of:
              0.031563994 = score(doc=4010,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 4010, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4010)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Agarwal, B.; Ramampiaro, H.; Langseth, H.; Ruocco, M.: A deep network model for paraphrase detection in short text messages (2018) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 5043) [ClassicSimilarity], result of:
              0.031563994 = score(doc=5043,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 5043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5043)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Fóris, A.: Network theory and terminology (2013) 0.01
    0.0077088396 = product of:
      0.015417679 = sum of:
        0.015417679 = product of:
          0.030835358 = sum of:
            0.030835358 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.030835358 = score(doc=1365,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 9.2014 21:22:48