Search (120 results, page 6 of 6)

  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  1. Kishida, K.: Term disambiguation techniques based on target document collection for cross-language information retrieval : an empirical comparison of performance between techniques (2007) 0.01
  2. Shaalan, K.; Raza, H.: NERA: Named Entity Recognition for Arabic (2009) 0.01
  3. Sankarasubramaniam, Y.; Ramanathan, K.; Ghosh, S.: Text summarization using Wikipedia (2014) 0.01
  4. Savoy, J.: Text representation strategies : an example with the State of the Union addresses (2016) 0.01
    Abstract
    Based on State of the Union addresses from 1790 to 2014 (225 speeches delivered by 42 presidents), this paper describes and evaluates different text representation strategies. To determine the most important words of a given text, the term frequencies (tf) or the tf-idf weighting scheme can be applied. Recently, latent Dirichlet allocation (LDA) has been proposed to define the topics included in a corpus. As another strategy, this study proposes to apply a vocabulary specificity measure (Z-score) to determine the most significantly overused word-types or short sequences of them. Our experiments show that the simple term frequency measure is not able to discriminate between specific terms associated with a document or a set of texts. Using the tf-idf or LDA approach, the selection requires some arbitrary decisions. Based on the term-specific measure (Z-score), the term selection has a clear theoretical basis. Moreover, the most significant sentences for each presidency can be determined. As another facet, we can visualize the dynamic evolution of the usage of some terms associated with their specificity measures. Finally, this technique can be employed to define the most important lexical leaders introducing terms overused by the k following presidencies.
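    A minimal sketch of the Z-score specificity measure described above, in Python. The binomial formulation below is the textbook version of the measure; the variable names and the example in the trailing comment are illustrative, not taken from the paper.

    import math
    from collections import Counter

    def z_scores(doc_tokens, corpus_tokens):
        # Z-score specificity: how significantly a word-type is overused in a
        # document relative to its frequency in the whole corpus (binomial model).
        doc = Counter(doc_tokens)
        corpus = Counter(corpus_tokens)
        n_doc = sum(doc.values())        # tokens in the document
        n_corpus = sum(corpus.values())  # tokens in the corpus
        scores = {}
        for word, tf in doc.items():
            p = corpus[word] / n_corpus  # corpus-wide probability of the word
            mean = n_doc * p             # expected occurrences in the document
            std = math.sqrt(n_doc * p * (1.0 - p))
            if std > 0:
                scores[word] = (tf - mean) / std
        return scores

    # Word-types with a large positive Z-score are significantly overused, e.g.:
    # sorted(z_scores(speech, all_speeches).items(), key=lambda kv: -kv[1])[:20]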
  5. Lian, T.; Yu, C.; Wang, W.; Yuan, Q.; Hou, Z.: Doctoral dissertations on tourism in China : a co-word analysis (2016) 0.01
    Abstract
    The aim of this paper is to map the foci of research in doctoral dissertations on tourism in China. In the paper, co-word analysis is applied, with keywords coming from six public dissertation databases, i.e. CDFD, Wanfang Data, NLC, CALIS, ISTIC, and NSTL, as well as some university libraries providing doctoral dissertations on tourism. Altogether we have examined 928 doctoral dissertations on tourism written between 1989 and 2013. Doctoral dissertations on tourism in China involve 36 first-level disciplines and 102 second-level disciplines. We collect the top 68 keywords of practical significance in tourism that are mentioned at least four times. These keywords are classified into 12 categories based on co-word analysis, including cluster analysis, strategic diagram analysis, and social network analysis. According to the strategic diagram of the 12 categories, we identify the mature and immature areas of tourism research. The social network maps show both the original co-occurrence matrix and a k-cores analysis of the binary matrix. The paper provides valuable insight into the study of tourism by analyzing doctoral dissertations on tourism in China.
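    The k-cores analysis mentioned in the abstract can be reproduced with standard graph tooling; a minimal sketch using networkx, with a hypothetical toy edge list standing in for the binary co-word matrix:

    import networkx as nx

    # Hypothetical keyword pairs standing in for the binary co-occurrence matrix
    # (an edge means two keywords co-occur in at least one dissertation).
    edges = [
        ("tourism development", "regional economy"),
        ("tourism development", "sustainable tourism"),
        ("sustainable tourism", "ecotourism"),
        ("ecotourism", "regional economy"),
        ("tourist behavior", "destination image"),
    ]
    G = nx.Graph(edges)

    # A k-core is the maximal subgraph in which every node has degree >= k;
    # core numbers single out the most densely interlinked keywords.
    print(nx.core_number(G))          # largest k at which each keyword survives
    print(sorted(nx.k_core(G, k=2)))  # keywords belonging to the 2-core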
  6. Lhadj, L.S.; Boughanem, M.; Amrouche, K.: Enhancing information retrieval through concept-based language modeling and semantic smoothing (2016) 0.01
  7. Järvelin, A.; Keskustalo, H.; Sormunen, E.; Saastamoinen, M.; Kettunen, K.: Information retrieval from historical newspaper collections in highly inflectional languages : a query expansion approach (2016) 0.01
  8. K., Vani; Gupta, D.: Unmasking text plagiarism using syntactic-semantic based natural language processing techniques : comparisons, analysis and challenges (2018) 0.01
  9. Soni, S.; Lerman, K.; Eisenstein, J.: Follow the leader : documents on the leading edge of semantic change get more citations (2021) 0.01
  10. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.01
  11. Laparra, E.; Binford-Walsh, A.; Emerson, K.; Miller, M.L.; López-Hoffman, L.; Currim, F.; Bethard, S.: Addressing structural hurdles for metadata extraction from environmental impact statements (2023) 0.01
  12. Fóris, A.: Network theory and terminology (2013) 0.01
    Date
    2. 9.2014 21:22:48
  13. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.01
    Content
    "Die Dresdner Wissenschaftler haben die semantischen Eigenschaften von Texten mathematisch untersucht, indem sie zehn verschiedene englische Texte in unterschiedlichen Formen kodierten. Dazu zählt unter anderem die englische Ausgabe von Leo Tolstois "Krieg und Frieden". Beispielsweise übersetzten die Forscher Buchstaben innerhalb eines Textes in eine Binär-Sequenz. Dazu ersetzten sie alle Vokale durch eine Eins und alle Konsonanten durch eine Null. Mit Hilfe weiterer mathematischer Funktionen beleuchteten die Wissenschaftler dabei verschiedene Ebenen des Textes, also sowohl einzelne Vokale, Buchstaben als auch ganze Wörter, die in verschiedenen Formen kodiert wurden. Innerhalb des ganzen Textes lassen sich so wiederkehrende Muster finden. Diesen Zusammenhang innerhalb des Textes bezeichnet man als Langzeitkorrelation. Diese gibt an, ob zwei Buchstaben an beliebig weit voneinander entfernten Textstellen miteinander in Verbindung stehen - beispielsweise gibt es wenn wir an einer Stelle einen Buchstaben "K" finden, eine messbare höhere Wahrscheinlichkeit den Buchstaben "K" einige Seiten später nochmal zu finden. "Es ist zu erwarten, dass wenn es in einem Buch an einer Stelle um Krieg geht, die Wahrscheinlichkeit hoch ist das Wort Krieg auch einige Seiten später zu finden. Überraschend ist es, dass wir die hohe Wahrscheinlichkeit auch auf der Buchstabenebene finden", so Altmann.
  14. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.01
  15. Kajanan, S.; Bao, Y.; Datta, A.; VanderMeer, D.; Dutta, K.: Efficient automatic search query formulation using phrase-level analysis (2014) 0.01
  16. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    Date
    22. 1.2018 11:32:44
  17. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.01
  18. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.01
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
  19. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy these needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research in crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using the Hopfield network, based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
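    A compact sketch of the activation-spreading idea behind the co-occurrence/Hopfield approach. The toy terms, the co-occurrence counts, and the single tanh transfer step below are illustrative assumptions, not the paper's actual network or weighting:

    import numpy as np

    # Hypothetical terms and co-occurrence counts from aligned English/Chinese text.
    terms = ["court", "justice", "ordinance", "法院", "司法", "條例"]
    C = np.array([
        [0, 8, 3, 9, 2, 1],
        [8, 0, 2, 3, 7, 1],
        [3, 2, 0, 1, 1, 8],
        [9, 3, 1, 0, 4, 2],
        [2, 7, 1, 4, 0, 2],
        [1, 1, 8, 2, 2, 0],
    ], dtype=float)
    W = C / C.sum(axis=1, keepdims=True)  # row-normalized association weights

    def suggest(term, rounds=2, top=3):
        # Activate the input term, spread activation through the weight matrix a
        # few times, and return the most strongly activated other terms.
        mu = np.zeros(len(terms))
        mu[terms.index(term)] = 1.0
        for _ in range(rounds):
            mu = np.tanh(W.T @ mu + mu)   # spread activation, squash with tanh
        ranked = sorted(zip(terms, mu), key=lambda kv: -kv[1])
        return [(t, round(float(a), 3)) for t, a in ranked if t != term][:top]

    print(suggest("court"))  # cross-lingual neighbors surface alongside English ones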
  20. Melzer, C.: Der Maschine anpassen : PC-Spracherkennung - Programme sind mittlerweile alltagsreif (2005) 0.01
    Date
    3. 5.1997 8:44:22

Languages

  • e 87
  • d 31
  • chi 1
  • f 1