Search (163 results, page 1 of 9)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.05
    0.04910157 = sum of:
      0.032478295 = product of:
        0.19486977 = sum of:
          0.19486977 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.19486977 = score(doc=562,freq=2.0), product of:
              0.34673223 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.040897828 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.16666667 = coord(1/6)
      0.016623272 = product of:
        0.033246543 = sum of:
          0.033246543 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.033246543 = score(doc=562,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.04
    0.03821131 = product of:
      0.07642262 = sum of:
        0.07642262 = sum of:
          0.03763498 = weight(_text_:c in 1361) [ClassicSimilarity], result of:
            0.03763498 = score(doc=1361,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.2667763 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
          0.038787637 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
            0.038787637 = score(doc=1361,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.2708308 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
      0.5 = coord(1/2)
    
    Date
    6. 1.1999 10:22:07
  3. Lu, C.; Bu, Y.; Wang, J.; Ding, Y.; Torvik, V.; Schnaars, M.; Zhang, C.: Examining scientific writing styles from the perspective of linguistic complexity : a cross-level moderation model (2019) 0.03
    0.032201186 = sum of:
      0.009390939 = product of:
        0.056345634 = sum of:
          0.056345634 = weight(_text_:authors in 5219) [ClassicSimilarity], result of:
            0.056345634 = score(doc=5219,freq=2.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.30220953 = fieldWeight in 5219, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5219)
        0.16666667 = coord(1/6)
      0.022810245 = product of:
        0.04562049 = sum of:
          0.04562049 = weight(_text_:c in 5219) [ClassicSimilarity], result of:
            0.04562049 = score(doc=5219,freq=4.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.32338172 = fieldWeight in 5219, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.046875 = fieldNorm(doc=5219)
        0.5 = coord(1/2)
    
    Abstract
    Publishing articles in high-impact English journals is difficult for scholars around the world, especially for non-native English-speaking scholars (NNESs), most of whom struggle with proficiency in English. To uncover the differences in English scientific writing between native English-speaking scholars (NESs) and NNESs, we collected a large-scale data set containing more than 150,000 full-text articles published in PLoS between 2006 and 2015. We divided these articles into three groups according to the ethnic backgrounds of the first and corresponding authors, obtained by Ethnea, and examined the scientific writing styles in English from a two-fold perspective of linguistic complexity: (a) syntactic complexity, including measurements of sentence length and sentence complexity; and (b) lexical complexity, including measurements of lexical diversity, lexical density, and lexical sophistication. The observations suggest marginal differences between groups in syntactic and lexical complexity.
  4. Schwarz, C.: Probleme der syntaktischen Indexierung (1986) 0.03
    0.030413661 = product of:
      0.060827322 = sum of:
        0.060827322 = product of:
          0.121654645 = sum of:
            0.121654645 = weight(_text_:c in 8180) [ClassicSimilarity], result of:
              0.121654645 = score(doc=8180,freq=4.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.8623513 = fieldWeight in 8180, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.125 = fieldNorm(doc=8180)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Informationslinguistische Texterschließung. Hrsg.: C. Schwarz u. G. Thurmair
  5. Chou, C.; Chu, T.: ¬An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.03
    0.029773585 = sum of:
      0.0109560955 = product of:
        0.06573657 = sum of:
          0.06573657 = weight(_text_:authors in 1139) [ClassicSimilarity], result of:
            0.06573657 = score(doc=1139,freq=2.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.35257778 = fieldWeight in 1139, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1139)
        0.16666667 = coord(1/6)
      0.01881749 = product of:
        0.03763498 = sum of:
          0.03763498 = weight(_text_:c in 1139) [ClassicSimilarity], result of:
            0.03763498 = score(doc=1139,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.2667763 = fieldWeight in 1139, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1139)
        0.5 = coord(1/2)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing in the Project Gutenberg collection, by suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
  6. Schwarz, C.: Natural language and information retrieval : Kommentierte Literaturliste zu Systemen, Verfahren und Tools (1986) 0.03
    0.026611952 = product of:
      0.053223904 = sum of:
        0.053223904 = product of:
          0.10644781 = sum of:
            0.10644781 = weight(_text_:c in 408) [ClassicSimilarity], result of:
              0.10644781 = score(doc=408,freq=4.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.7545574 = fieldWeight in 408, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.109375 = fieldNorm(doc=408)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Informationslinguistische Texterschließung. Hrsg.: C. Schwarz u. G. Thurmair
  7. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.02
    0.024190463 = sum of:
      0.014493553 = product of:
        0.086961314 = sum of:
          0.086961314 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.086961314 = score(doc=3807,freq=14.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.16666667 = coord(1/6)
      0.009696909 = product of:
        0.019393818 = sum of:
          0.019393818 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.019393818 = score(doc=3807,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.5 = coord(1/2)
    
    Abstract
    Purpose: Academic authors tend to define terms in ways that meet their own needs. Knowledge Management (KM) is one such term and is examined in this study. Lexicographical research identified the terms used by authors in academic outlets to define KM from 1996 to 2006. Data were collected under strict criteria, including that definitions had to be unique instances. From 2006 onwards the authors could identify no new unique definition instances, only repeated uses of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods.
    Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). From an academic point of view, KM therefore refers to people processing contextualised content.
    Research limitations/implications: In total, 42 definitions were identified, spanning a period of 11 years from the first use of KM to the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions considered to repeat were excluded as not being unique instances. The definitions listed are by no means complete and exhaustive; they are viewed outside the scope and context in which they were originally formulated and are used here to review the key concepts in the definitions themselves.
    Social implications: The discussion of KM content and the method followed in this paper carry a few implications for future research in KM. First, the research validates ideas on KM presented by the OECD in 2005. It also shows that, over the evolution of KM, the authors arrived at a description of KM that may be seen as standardised. If academics and practitioners refer to KM as the same construct and/or idea, it has the potential, speculatively, to distinguish between what KM may or may not be.
    Originality/value: By simplifying the terms used to define KM and focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are most common in how it has been defined over time. This should help to reignite discussion about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
  8. Warner, A.J.: Natural language processing (1987) 0.02
    0.022164363 = product of:
      0.044328727 = sum of:
        0.044328727 = product of:
          0.08865745 = sum of:
            0.08865745 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.08865745 = score(doc=337,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  9. Schwarz, C.: Linguistische Hilfsmittel beim Information Retrieval (1984) 0.02
    0.021505704 = product of:
      0.04301141 = sum of:
        0.04301141 = product of:
          0.08602282 = sum of:
            0.08602282 = weight(_text_:c in 545) [ClassicSimilarity], result of:
              0.08602282 = score(doc=545,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.6097744 = fieldWeight in 545, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.125 = fieldNorm(doc=545)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Schwarz, C.: Freitextrecherche: Grenzen und Möglichkeiten (1982) 0.02
    0.021505704 = product of:
      0.04301141 = sum of:
        0.04301141 = product of:
          0.08602282 = sum of:
            0.08602282 = weight(_text_:c in 1349) [ClassicSimilarity], result of:
              0.08602282 = score(doc=1349,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.6097744 = fieldWeight in 1349, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.125 = fieldNorm(doc=1349)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Barriere, C.: Building a concept hierarchy from corpus analysis (2004) 0.02
    0.021505704 = product of:
      0.04301141 = sum of:
        0.04301141 = product of:
          0.08602282 = sum of:
            0.08602282 = weight(_text_:c in 6787) [ClassicSimilarity], result of:
              0.08602282 = score(doc=6787,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.6097744 = fieldWeight in 6787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.125 = fieldNorm(doc=6787)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. McMahon, J.G.; Smith, F.J.: Improving statistical language model performance with automatically generated word hierarchies (1996) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.077575274 = score(doc=3164,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  13. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.077575274 = score(doc=4506,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  14. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.077575274 = score(doc=6672,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  15. New tools for human translators (1997) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.077575274 = score(doc=1179,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  16. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.077575274 = score(doc=3117,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  17. ¬Der Student aus dem Computer (2023) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.077575274 = score(doc=1079,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  18. Melzer, C.: ¬Der Maschine anpassen : PC-Spracherkennung - Programme sind mittlerweile alltagsreif (2005) 0.02
    0.019105654 = product of:
      0.03821131 = sum of:
        0.03821131 = sum of:
          0.01881749 = weight(_text_:c in 4044) [ClassicSimilarity], result of:
            0.01881749 = score(doc=4044,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.13338815 = fieldWeight in 4044, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4044)
          0.019393818 = weight(_text_:22 in 4044) [ClassicSimilarity], result of:
            0.019393818 = score(doc=4044,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.1354154 = fieldWeight in 4044, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4044)
      0.5 = coord(1/2)
    
    Date
    3. 5.1997 8:44:22
  19. Kocijan, K.: Visualizing natural language resources (2015) 0.02
    0.019008538 = product of:
      0.038017076 = sum of:
        0.038017076 = product of:
          0.07603415 = sum of:
            0.07603415 = weight(_text_:c in 2995) [ClassicSimilarity], result of:
              0.07603415 = score(doc=2995,freq=4.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5389696 = fieldWeight in 2995, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2995)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
  20. Salton, G.; Buckley, C.; Smith, M.: On the application of syntactic methodologies in automatic text analysis (1990) 0.02
    0.01881749 = product of:
      0.03763498 = sum of:
        0.03763498 = product of:
          0.07526996 = sum of:
            0.07526996 = weight(_text_:c in 7864) [ClassicSimilarity], result of:
              0.07526996 = score(doc=7864,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5335526 = fieldWeight in 7864, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7864)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

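The score breakdowns above are Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking: each matching query term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and each clause is then scaled by a coord factor that down-weights documents matching only some of the query's clauses. The following minimal Python sketch is an editorial illustration, not part of the search output; its constants are copied from result no. 1 (doc 562) above.

    import math

    def idf(doc_freq, max_docs):
        # Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                      # 1.4142135 for freq=2.0
        query_weight = idf(doc_freq, max_docs) * query_norm
        field_weight = tf * idf(doc_freq, max_docs) * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.040897828
    FIELD_NORM = 0.046875
    MAX_DOCS = 44218

    # "_text_:3a" clause scaled by coord(1/6); "_text_:22" clause by coord(1/2)
    score = (term_score(2.0, 24, MAX_DOCS, QUERY_NORM, FIELD_NORM) / 6
             + term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM) / 2)
    print(score)  # ~0.04910157, displayed as 0.05

Run as written, this prints approximately 0.04910157, matching the sum shown for the Hotho/Bloehdorn entry; rounding to two decimals gives the displayed 0.05.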
Languages

  • e 122
  • d 41

Types

  • a 134
  • m 18
  • el 12
  • s 12
  • x 3
  • p 2
  • d 1
