Search (18 results, page 1 of 1)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  • × year_i:[2010 TO 2020}
  1. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.30
    0.29648927 = product of:
      0.59297854 = sum of:
        0.14616652 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14616652 = score(doc=563,freq=2.0), product of:
            0.26007444 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03067635 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14616652 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14616652 = score(doc=563,freq=2.0), product of:
            0.26007444 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03067635 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14616652 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14616652 = score(doc=563,freq=2.0), product of:
            0.26007444 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03067635 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14616652 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14616652 = score(doc=563,freq=2.0), product of:
            0.26007444 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03067635 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.008312443 = product of:
          0.02493733 = sum of:
            0.02493733 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.02493733 = score(doc=563,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.33333334 = coord(1/3)
      0.5 = coord(5/10)
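    The block above is Solr's "explain" output for Lucene's ClassicSimilarity (TF-IDF) scoring. As a rough, purely illustrative sketch (not Lucene's own code), the per-clause and document scores shown there can be reproduced as follows:

```python
import math

# Minimal sketch of the ClassicSimilarity arithmetic in the explain tree above;
# the constants are copied from that output, the helper itself is illustrative.
def classic_similarity(freq, idf, query_norm, field_norm):
    tf = math.sqrt(freq)                      # 1.4142135 for freq=2.0
    query_weight = idf * query_norm           # 0.26007444
    field_weight = tf * idf * field_norm      # 0.56201804 (fieldWeight)
    return query_weight * field_weight        # 0.14616652 per matching clause

term = classic_similarity(freq=2.0, idf=8.478011,
                          query_norm=0.03067635, field_norm=0.046875)
# Four "_text_:2f" clauses plus the small "_text_:22" clause, then coord(5/10):
doc_score = (4 * term + 0.008312443) * 0.5    # ~0.29648927
print(round(term, 8), round(doc_score, 8))
```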
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  2. Wu, H.; He, J.; Pei, Y.: Scientific impact at the topic level : a case study in computational linguistics (2010) 0.00
    0.0028607734 = product of:
      0.028607734 = sum of:
        0.028607734 = product of:
          0.0858232 = sum of:
            0.0858232 = weight(_text_:2010 in 4103) [ClassicSimilarity], result of:
              0.0858232 = score(doc=4103,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.5849073 = fieldWeight in 4103, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4103)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.11, S.2274-2287
    Year
    2010
  3. Grigonyte, G.: Building and evaluating domain ontologies : NLP contributions (2010) 0.00
    0.0028607734 = product of:
      0.028607734 = sum of:
        0.028607734 = product of:
          0.0858232 = sum of:
            0.0858232 = weight(_text_:2010 in 481) [ClassicSimilarity], result of:
              0.0858232 = score(doc=481,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.5849073 = fieldWeight in 481, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=481)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Content
    Also published as: Saarbrücken, Univ., Diss., 2010
    Year
    2010
  4. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: A comparison study of some Arabic root finding algorithms (2010) 0.00
    0.0024520915 = product of:
      0.024520915 = sum of:
        0.024520915 = product of:
          0.07356274 = sum of:
            0.07356274 = weight(_text_:2010 in 3457) [ClassicSimilarity], result of:
              0.07356274 = score(doc=3457,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.5013491 = fieldWeight in 3457, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3457)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.5, S.1015-1024
    Year
    2010
  5. Dolamic, L.; Savoy, J.: Retrieval effectiveness of machine translated queries (2010) 0.00
    0.0024520915 = product of:
      0.024520915 = sum of:
        0.024520915 = product of:
          0.07356274 = sum of:
            0.07356274 = weight(_text_:2010 in 4102) [ClassicSimilarity], result of:
              0.07356274 = score(doc=4102,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.5013491 = fieldWeight in 4102, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4102)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.11, S.2266-2273
    Year
    2010
  6. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.00
    0.0022159456 = product of:
      0.022159455 = sum of:
        0.022159455 = product of:
          0.066478364 = sum of:
            0.066478364 = weight(_text_:2010 in 4733) [ClassicSimilarity], result of:
              0.066478364 = score(doc=4733,freq=3.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.45306724 = fieldWeight in 4733, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4733)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Year
    2010
  7. Hmeidi, I.I.; Al-Shalabi, R.F.; Al-Taani, A.T.; Najadat, H.; Al-Hazaimeh, S.A.: A novel approach to the extraction of roots from Arabic words using bigrams (2010) 0.00
    0.0020434097 = product of:
      0.020434096 = sum of:
        0.020434096 = product of:
          0.06130229 = sum of:
            0.06130229 = weight(_text_:2010 in 3426) [ClassicSimilarity], result of:
              0.06130229 = score(doc=3426,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.41779095 = fieldWeight in 3426, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3426)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.3, S.583-591
    Year
    2010
  8. Keselman, A.; Rosemblat, G.; Kilicoglu, H.; Fiszman, M.; Jin, H.; Shin, D.; Rindflesch, T.C.: Adapting semantic natural language processing technology to address information overload in influenza epidemic management (2010) 0.00
    0.0020434097 = product of:
      0.020434096 = sum of:
        0.020434096 = product of:
          0.06130229 = sum of:
            0.06130229 = weight(_text_:2010 in 1312) [ClassicSimilarity], result of:
              0.06130229 = score(doc=1312,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.41779095 = fieldWeight in 1312, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1312)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2531-2543
    Year
    2010
  9. Rayson, P.; Piao, S.; Sharoff, S.; Evert, S.; Moiron, B.V.: Multiword expressions : hard going or plain sailing? (2015) 0.00
    0.001809312 = product of:
      0.01809312 = sum of:
        0.01809312 = product of:
          0.054279357 = sum of:
            0.054279357 = weight(_text_:2010 in 2918) [ClassicSimilarity], result of:
              0.054279357 = score(doc=2918,freq=2.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.36992785 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Language resources and evaluation. 44(2010) no.1, S.1-5
  10. Fang, L.; Tuan, L.A.; Hui, S.C.; Wu, L.: Syntactic based approach for grammar question retrieval (2018) 0.00
    0.0017626584 = product of:
      0.017626584 = sum of:
        0.017626584 = product of:
          0.052879747 = sum of:
            0.052879747 = weight(_text_:problem in 5086) [ClassicSimilarity], result of:
              0.052879747 = score(doc=5086,freq=6.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.4061259 = fieldWeight in 5086, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5086)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    With the popularity of online educational platforms, English learners can learn and practice no matter where they are and what they do. English grammar is one of the important components in learning English. Learning English grammar effectively requires students to practice questions containing focused grammar knowledge. In this paper, we study a novel problem of retrieving English grammar questions with similar grammatical focus. Since grammatical focus similarity is different from textual similarity or sentence syntactic similarity, existing approaches cannot be applied directly to our problem. To address this problem, we propose a syntactic based approach for English grammar question retrieval which can retrieve related grammar questions with similar grammatical focus effectively. In the proposed syntactic based approach, we first propose a new syntactic tree, namely the parse-key tree, to capture an English grammar question's grammatical focus. Next, we propose two kernel functions, namely the relaxed tree kernel and the part-of-speech order kernel, to compute the similarity between the parse-key trees of the query and of the grammar questions in the collection. Then, the retrieved grammar questions are ranked according to the similarity between the parse-key trees. In addition, if a query is submitted together with answer choices, conceptual similarity and textual similarity are also incorporated to further improve the retrieval accuracy. The performance results show that our proposed approach outperforms state-of-the-art methods based on statistical analysis and syntactic analysis.
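    The parse-key tree and the two kernels above are specific to the paper; purely as an illustration of kernel-style similarity over syntax trees, a toy node-matching kernel for ranking questions might look like the sketch below (class and function names are hypothetical, not the authors' implementation).

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                            # a syntactic tag such as "VP"
    children: list = field(default_factory=list)

def kernel(a: Node, b: Node) -> float:
    """Count label-matching node pairs, recursing over aligned children."""
    if a.label != b.label:
        return 0.0
    return 1.0 + sum(kernel(ca, cb) for ca, cb in zip(a.children, b.children))

def rank(query: Node, questions: dict) -> list:
    """Rank stored grammar questions by kernel similarity to the query tree."""
    return sorted(questions, key=lambda q: kernel(query, questions[q]),
                  reverse=True)

# Tiny usage example with hand-built trees:
q = Node("S", [Node("NP"), Node("VP", [Node("V"), Node("NP")])])
collection = {
    "q1": Node("S", [Node("NP"), Node("VP", [Node("V"), Node("NP")])]),
    "q2": Node("S", [Node("VP", [Node("V")])]),
}
print(rank(q, collection))                # q1 ranks above q2
```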
  11. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.00
    0.0016347278 = product of:
      0.016347278 = sum of:
        0.016347278 = product of:
          0.04904183 = sum of:
            0.04904183 = weight(_text_:2010 in 3948) [ClassicSimilarity], result of:
              0.04904183 = score(doc=3948,freq=5.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.33423275 = fieldWeight in 3948, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3948)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Aslib proceedings. 62(2010) nos.4/5, S.466-475
    Year
    2010
  12. Xinglin, L.: Automatic summarization method based on compound word recognition (2015) 0.00
    0.0012212053 = product of:
      0.012212053 = sum of:
        0.012212053 = product of:
          0.03663616 = sum of:
            0.03663616 = weight(_text_:problem in 1841) [ClassicSimilarity], result of:
              0.03663616 = score(doc=1841,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.28137225 = fieldWeight in 1841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1841)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    After analyzing today's main methods of automatic summarization, we find that they all ignore the weight of unknown words in a sentence. To overcome this problem, a method for automatic summarization based on compound word recognition is proposed. In this method, compound words in the text are first identified and the word segmentation is corrected accordingly. A keyword set is then extracted from the Chinese documents and sentence weights are calculated from the weights of that keyword set; because compound words are weighted with a separate formula, each sentence receives a corresponding total weight. Finally, the highest-weighted sentences are selected by percentage and output in their original order to form the summary. Experiments on the HIT IR-lab Text Summarization Corpus show that the proposed method achieves a precision of 76.51%, indicating that it is applicable to automatic summarization and performs well.
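    As a hedged sketch of the extractive scheme just described (weight sentences by the keywords they contain, give compound terms a larger weight, select a percentage of sentences and output them in original order), the function and weights below are illustrative assumptions rather than the paper's formulas.

```python
def summarize(sentences, keywords, compound_boost=2.0, ratio=0.3):
    # Weight a sentence by the keywords it contains; compound (multi-word)
    # terms get a larger weight, mimicking the separate formula described above.
    def weight(sentence):
        text = sentence.lower()
        return sum(compound_boost if " " in term else 1.0
                   for term in keywords if term in text)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: weight(sentences[i]), reverse=True)
    keep = max(1, int(len(sentences) * ratio))
    return [sentences[i] for i in sorted(ranked[:keep])]   # original order

sents = ["Compound word recognition improves word segmentation.",
         "The weather was fine that day.",
         "Sentence weights depend on the extracted keyword set."]
print(summarize(sents, ["compound word", "keyword", "segmentation"]))
```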
  13. Perovsek, M.; Kranjca, J.; Erjaveca, T.; Cestnika, B.; Lavraca, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.00
    0.0012212053 = product of:
      0.012212053 = sum of:
        0.012212053 = product of:
          0.03663616 = sum of:
            0.03663616 = weight(_text_:problem in 2697) [ClassicSimilarity], result of:
              0.03663616 = score(doc=2697,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.28137225 = fieldWeight in 2697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2697)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    Text mining and natural language processing are fast growing areas of research, with numerous applications in business, science and creative industries. This paper presents TextFlows, a web-based text mining and natural language processing platform supporting workflow construction, sharing and execution. The platform enables visual construction of text mining workflows through a web browser, and the execution of the constructed workflows on a processing cloud. This makes TextFlows an adaptable infrastructure for the construction and sharing of text processing workflows, which can be reused in various applications. The paper presents the implemented text mining and language processing modules, and describes some precomposed workflows. Their features are demonstrated on three use cases: comparison of document classifiers and of different part-of-speech taggers on a text categorization problem, and outlier detection in document corpora.
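    As a loose sketch of the workflow idea (reusable components chained and executed in sequence), not the TextFlows API itself, with hypothetical component names:

```python
STOPWORDS = {"the", "a", "of"}

def lowercase(doc):
    return doc.lower()

def tokenize(doc):
    return doc.split()

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def run_workflow(data, steps):
    # Feed each component's output into the next, like a visually built workflow.
    for step in steps:
        data = step(data)
    return data

print(run_workflow("The construction of text mining workflows",
                   [lowercase, tokenize, remove_stopwords]))
# ['construction', 'text', 'mining', 'workflows']
```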
  14. Anizi, M.; Dichy, J.: Improving information retrieval in Arabic through a multi-agent approach and a rich lexical resource (2011) 0.00
    0.0010176711 = product of:
      0.010176711 = sum of:
        0.010176711 = product of:
          0.03053013 = sum of:
            0.03053013 = weight(_text_:problem in 4738) [ClassicSimilarity], result of:
              0.03053013 = score(doc=4738,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.23447686 = fieldWeight in 4738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4738)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    This paper addresses the optimization of information retrieval in Arabic. The results derived from the expanding development of sites in Arabic are often spectacular. Nevertheless, several observations indicate that the responses remain disappointing, particularly upon comparing users' requests and quality of responses. One of the problems encountered by users is the loss of time when navigating between different URLs to find adequate responses. This, in many cases, is due to the absence of forms morphologically related to the research keyword. Such problems can be approached through a morphological analyzer drawing on the DIINAR.1 morpho-lexical resource. A second problem concerns the formulation of the query, which may prove ambiguous, as in everyday language. We then focus on contextual disambiguation based on a rich lexical resource that includes collocations and set expressions. The overall scheme of such a resource will only be hinted at here. Our approach leads to the elaboration of a multi-agent system, motivated by a need to solve problems encountered when using conventional methods of analysis, and to improve the results of queries thanks to a better collaboration between different levels of analysis. We suggest resorting to four agents: morphological, morpho-lexical, contextualization, and an interface agent. These agents 'negotiate' and 'cooperate' throughout the analysis process, starting from the submission of the initial query, and going on until an adequate query is obtained.
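    A hedged sketch of the cooperating-agents idea outlined above; the agent classes and the toy lexicon are hypothetical stand-ins, not the DIINAR.1 resource or the authors' system.

```python
class MorphologicalAgent:
    """Propose morphologically related forms of a keyword."""
    def __init__(self, lexicon):
        self.lexicon = lexicon             # root -> related surface forms
    def expand(self, keyword):
        return self.lexicon.get(keyword, [keyword])

class ContextualizationAgent:
    """Keep only expansions that fit the query's collocational context."""
    def __init__(self, collocations):
        self.collocations = collocations   # set of (form, context word) pairs
    def filter(self, forms, context):
        kept = [f for f in forms if (f, context) in self.collocations]
        return kept or forms               # fall back if nothing matches

class InterfaceAgent:
    """Coordinate the agents and assemble the final expanded query."""
    def __init__(self, morph, ctx):
        self.morph, self.ctx = morph, ctx
    def build_query(self, keyword, context):
        return " OR ".join(self.ctx.filter(self.morph.expand(keyword), context))

morph = MorphologicalAgent({"ktb": ["kitab", "kutub", "maktaba"]})
ctx = ContextualizationAgent({("maktaba", "library")})
print(InterfaceAgent(morph, ctx).build_query("ktb", "library"))   # "maktaba"
```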
  15. Muneer, I.; Sharjeel, M.; Iqbal, M.; Adeel Nawab, R.M.; Rayson, P.: CLEU - A Cross-language english-urdu corpus and benchmark for text reuse experiments (2019) 0.00
    0.0010176711 = product of:
      0.010176711 = sum of:
        0.010176711 = product of:
          0.03053013 = sum of:
            0.03053013 = weight(_text_:problem in 5299) [ClassicSimilarity], result of:
              0.03053013 = score(doc=5299,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.23447686 = fieldWeight in 5299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5299)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    Text reuse is becoming a serious issue in many fields, and research shows that it is much harder to detect when it occurs across languages. The recent rise in multi-lingual content on the Web has increased cross-language text reuse to an unprecedented scale. Although researchers have proposed methods to detect it, one major drawback is the unavailability of large-scale gold standard evaluation resources built on real cases. To overcome this problem, we propose a cross-language sentence/passage level text reuse corpus for the English-Urdu language pair. The Cross-Language English-Urdu Corpus (CLEU) has source text in English whereas the derived text is in Urdu. It contains in total 3,235 sentence/passage pairs manually tagged into three categories, that is, near copy, paraphrased copy, and independently written. Further, as a second contribution, we evaluate the Translation plus Mono-lingual Analysis method using three sets of experiments on the proposed dataset to highlight its usefulness. Evaluation results (f1=0.732 binary, f1=0.552 ternary classification) indicate that it is harder to detect cross-language real cases of text reuse, especially when the language pairs have unrelated scripts. The corpus is a useful benchmark resource for the future development and assessment of cross-language text reuse detection systems for the English-Urdu language pair.
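    As a minimal sketch of the Translation plus Mono-lingual Analysis idea (translate the Urdu side, then apply a mono-lingual overlap measure and threshold it into the three categories), where the translate() stub and the thresholds are assumptions rather than the authors' setup:

```python
def translate(urdu_text):
    # Stand-in for a real Urdu-to-English machine translation system.
    return urdu_text

def word_overlap(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))        # Jaccard overlap

def classify_reuse(source_en, derived_ur, near=0.6, para=0.3):
    score = word_overlap(source_en, translate(derived_ur))
    if score >= near:
        return "near copy"
    if score >= para:
        return "paraphrased copy"
    return "independently written"

print(classify_reuse("text reuse is becoming a serious issue",
                     "text reuse is a serious issue"))  # likely "near copy"
```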
  16. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, P.W.: Cross-language person-entity linking from 20 languages (2015) 0.00
    8.3124434E-4 = product of:
      0.008312443 = sum of:
        0.008312443 = product of:
          0.02493733 = sum of:
            0.02493733 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.02493733 = score(doc=1848,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
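    Purely as an illustration of the linking task described above (map a person reference either to an entry in a known inventory or to NIL), a naive normalized-string lookup could look like this; real systems use far richer matching plus the crowdsourced validation mentioned in the abstract.

```python
def normalize(name):
    return " ".join(name.lower().split())

def link(reference, inventory):
    """inventory: mapping from normalized person name to an entity identifier."""
    return inventory.get(normalize(reference), "NIL")

kb = {"ada lovelace": "Q7259", "alan turing": "Q7251"}
print(link("Alan  Turing", kb))   # "Q7251"
print(link("A. Turing", kb))      # "NIL" - would need fuzzier matching
```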
  17. Fóris, A.: Network theory and terminology (2013) 0.00
    6.927037E-4 = product of:
      0.0069270367 = sum of:
        0.0069270367 = product of:
          0.02078111 = sum of:
            0.02078111 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.02078111 = score(doc=1365,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    2. 9.2014 21:22:48
  18. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.00
    4.8489255E-4 = product of:
      0.0048489254 = sum of:
        0.0048489254 = product of:
          0.014546776 = sum of:
            0.014546776 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
              0.014546776 = score(doc=3807,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.1354154 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    20. 1.2015 18:30:22