Search (80 results, page 1 of 4)

  • Active filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    0.2433096 = sum of:
      0.05978776 = product of:
        0.23915105 = sum of:
          0.23915105 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23915105 = score(doc=562,freq=2.0), product of:
              0.425522 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050191253 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.18352184 = sum of:
        0.1427205 = weight(_text_:q in 562) [ClassicSimilarity], result of:
          0.1427205 = score(doc=562,freq=2.0), product of:
            0.32872224 = queryWeight, product of:
              6.5493927 = idf(docFreq=171, maxDocs=44218)
              0.050191253 = queryNorm
            0.43416747 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5493927 = idf(docFreq=171, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.04080133 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.04080133 = score(doc=562,freq=2.0), product of:
            0.17576122 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050191253 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
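  Note: the indented breakdown under each hit is Lucene's "explain" output for its ClassicSimilarity (tf-idf) ranking model. As a reading aid, the following minimal Python sketch (not part of the search engine; queryNorm and fieldNorm are taken from the tree above rather than recomputed, since they depend on the full query and on index-time field statistics) reproduces the numbers shown for the first hit.

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf(t) = ln(maxDocs / (docFreq + 1)) + 1
        return math.log(max_docs / (doc_freq + 1)) + 1

    def tf(freq: float) -> float:
        # ClassicSimilarity: tf(t in d) = sqrt(freq)
        return math.sqrt(freq)

    max_docs = 44218
    query_norm = 0.050191253   # taken as given from the explain tree
    field_norm = 0.046875      # length normalization for doc 562

    # Term "3a": docFreq=24, freq=2.0 in document 562
    idf_3a = idf(24, max_docs)                    # 8.478011
    query_weight = idf_3a * query_norm            # 0.425522
    field_weight = tf(2.0) * idf_3a * field_norm  # 0.56201804
    score = query_weight * field_weight           # 0.23915105

    # coord(1/4) = 0.25: only 1 of 4 clauses in this sub-query matched,
    # so the partial score is scaled down accordingly.
    print(score, score * 0.25)  # approx. 0.23915105 and 0.05978776, as above

  The same factors (tf = sqrt(freq), idf, fieldNorm, and the coord(matching/total) clause penalty) account for every partial score in this result list.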
  2. He, Q.: Knowledge discovery through co-word analysis (1999) 0.08
    0.08325363 = product of:
      0.16650726 = sum of:
        0.16650726 = product of:
          0.33301452 = sum of:
            0.33301452 = weight(_text_:q in 6082) [ClassicSimilarity], result of:
              0.33301452 = score(doc=6082,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                1.0130575 = fieldWeight in 6082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6082)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.04
    0.038580883 = sum of:
      0.026680496 = product of:
        0.10672198 = sum of:
          0.10672198 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.10672198 = score(doc=3807,freq=14.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.25 = coord(1/4)
      0.011900389 = product of:
        0.023800777 = sum of:
          0.023800777 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.023800777 = score(doc=3807,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.5 = coord(1/2)
    
    Abstract
    Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is one such term and is examined in this study. Lexicographical research identified the KM terms used by authors in academic outlets from 1996 to 2006 to define KM. Data were collected under strict criteria, which included that definitions had to be unique instances. From 2006 onwards the authors could not identify new unique definition instances, only repeated uses of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider to be KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods.
    Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One could therefore say that KM, from an academic point of view, refers to people processing contextualised content.
    Research limitations/implications: In total, 42 definitions were identified, spanning a period of 11 years from the first use of KM to the estimated apex of term usage. From 2006 onwards definitions were used in repetition, and all definitions considered to be repetitions were subsequently excluded as not being unique instances. The definitions listed are by no means complete and exhaustive; they are viewed outside the scope and context in which they were originally formulated and are then used to review the key concepts in the definitions themselves.
    Social implications: The foregoing discussion of KM content, together with the method presented in this paper, has several implications for future research in KM. First, the research validates ideas on KM presented by the OECD in 2005. It also suggests that, through the evolution of KM, the field has arrived at what may be seen as a standardised description of KM. If academics and practitioners refer to KM as the same construct and/or idea, this has the potential, speculatively, to distinguish what KM may or may not be.
    Originality/value: By simplifying the terms used to define KM and focusing on the most common definitions, the paper assists in refocusing KM on the dimensions that are most common in how it has been defined over time. This should help reignite discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
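  The methodology this abstract describes (collecting unique definition instances, then extracting the concepts that recur across them) can be illustrated with a toy sketch. This is not the authors' pipeline; the definitions and concept stems below are invented placeholders.

    from collections import Counter

    # Illustrative placeholder definitions; the real data were KM definitions
    # collected from academic outlets, 1996-2006.
    definitions = [
        "KM is the process by which an organisation codifies and shares information.",
        "KM means people leveraging contextualised content within a process.",
        "KM is the process by which an organisation codifies and shares information.",  # repeat
    ]

    # Strict criterion from the abstract: keep unique instances only.
    unique_defs = list(dict.fromkeys(definitions))

    # Candidate defining concepts, matched here by a crude invented stem.
    stems = {"Person": "person", "Organisation": "organisation",
             "Codify": "codif", "Share": "shar", "Leverage": "leverag",
             "Process": "process", "Information": "information"}
    counts = Counter(concept
                     for d in unique_defs
                     for concept, stem in stems.items()
                     if stem in d.lower())
    print(counts.most_common())
    # -> [('Process', 2), ('Organisation', 1), ('Codify', 1), ...]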
  4. He, Q.: ¬A study of the strength indexes in co-word analysis (2000) 0.04
    0.035680126 = product of:
      0.07136025 = sum of:
        0.07136025 = product of:
          0.1427205 = sum of:
            0.1427205 = weight(_text_:q in 111) [ClassicSimilarity], result of:
              0.1427205 = score(doc=111,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.43416747 = fieldWeight in 111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.02989388 = product of:
      0.05978776 = sum of:
        0.05978776 = product of:
          0.23915105 = sum of:
            0.23915105 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.23915105 = score(doc=862,freq=2.0), product of:
                0.425522 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050191253 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  6. Lee, K.H.; Ng, M.K.M.; Lu, Q.: Text segmentation for Chinese spell checking (1999) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 3913) [ClassicSimilarity], result of:
              0.11893375 = score(doc=3913,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 3913, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Helbig, H.; Gnörlich, C.; Leveling, J.: Natürlichsprachlicher Zugang zu Informationsanbietern im Internet und zu lokalen Datenbanken (2000) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 5558) [ClassicSimilarity], result of:
              0.11893375 = score(doc=5558,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 5558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5558)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The creation of a natural language interface (NLI) that allows a user to formulate queries to information providers in his or her native language is one of the most interesting challenges in information retrieval and natural language processing. This contribution describes methods for translating natural language queries into expressions of formal retrieval languages, both for information resources on the Internet and for local databases. The methods presented are part of the information retrieval system LINAS, which was developed at the FernUniversität Hagen to offer users natural language access to scientific and technical information held locally and distributed across the Internet. The LINAS system differs from other systems and natural language interfaces (cf. OSIRIS, or the earlier systems INTELLECT and Q&A) through the explicit inclusion of background knowledge and special dialog models in the translation process. Moreover, the system aims at a complete understanding of the natural language text, whereas other systems typically only search the input for keywords or particular grammatical patterns. A particular focus of LINAS lies in the representation and evaluation of the semantic relations between the concepts given in the user query.
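  As a toy illustration of the translation step described above (not LINAS itself, which additionally uses background knowledge and dialog models), the following sketch maps a natural-language question to a formal retrieval expression by simple keyword filtering; the stopword list and the field name "text" are invented for the example.

    def nl_to_query(question: str) -> str:
        # A real NLI would parse the sentence and apply background knowledge;
        # this sketch merely drops function words and ANDs the content terms.
        stopwords = {"which", "what", "are", "is", "the", "about", "on",
                     "find", "me", "papers"}
        terms = (w.strip("?,.").lower() for w in question.split())
        content = [t for t in terms if t and t not in stopwords]
        return " AND ".join(f'text:"{t}"' for t in content)

    print(nl_to_query("Which papers are about Chinese term extraction?"))
    # -> text:"chinese" AND text:"term" AND text:"extraction"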
  8. Li, W.; Wong, K.-F.; Yuan, C.: Toward automatic Chinese temporal information extraction (2001) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 6029) [ClassicSimilarity], result of:
              0.11893375 = score(doc=6029,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 6029, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6029)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Over the past few years, temporal information processing and temporal database management have increasingly become hot topics. Nevertheless, only a few researchers have investigated these areas in the Chinese language. This lays down the objective of our research: to exploit Chinese language processing techniques for temporal information extraction and concept reasoning. In this article, we first study the mechanism for expressing time in Chinese. On the basis of the study, we then design a general frame structure for maintaining the extracted temporal concepts and propose a system for extracting time-dependent information from Hong Kong financial news. In the system, temporal knowledge is represented by different types of temporal concepts (TTC) and different temporal relations, including absolute and relative relations, which are used to correlate action times with reference times. In analyzing a sentence, the algorithm first determines the situation related to the verb; this in turn identifies the type of temporal concept associated with the verb. After that, the relevant temporal information is extracted and the temporal relations are derived. These relations link the relevant concept frames together in chronological order, which in turn provides the knowledge to fulfill users' queries, e.g., for question-answering (i.e., Q&A) applications.
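  A minimal sketch of the frame structure idea in this abstract (not the authors' system; the field names and sample values are invented) might look like this:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TemporalFrame:
        verb: str
        concept_type: str               # e.g. "absolute" or "relative"
        action_time: Optional[str]      # normalized time of the action
        reference_time: Optional[str]   # time the expression is anchored to
        relation: Optional[str] = None  # e.g. "before", "after", "same-day"

    frames = [
        TemporalFrame("announce", "absolute", "1998-04-01", None),
        TemporalFrame("rise", "relative", None, "1998-04-01", "after"),
    ]
    # Linking frames in chronological order (action time, falling back to the
    # reference time) is what supports the Q&A-style queries mentioned above.
    frames.sort(key=lambda f: f.action_time or f.reference_time or "")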
  9. Yang, Y.; Lu, Q.; Zhao, T.: ¬A delimiter-based general approach for Chinese term extraction (2009) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 3315) [ClassicSimilarity], result of:
              0.11893375 = score(doc=3315,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 3315, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3315)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Li, Q.; Chen, Y.P.; Myaeng, S.-H.; Jin, Y.; Kang, B.-Y.: Concept unification of terms in different languages via web mining for Information Retrieval (2009) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 4215) [ClassicSimilarity], result of:
              0.11893375 = score(doc=4215,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 4215, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4215)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Lian, T.; Yu, C.; Wang, W.; Yuan, Q.; Hou, Z.: Doctoral dissertations on tourism in China : a co-word analysis (2016) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 3178) [ClassicSimilarity], result of:
              0.11893375 = score(doc=3178,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 3178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3178)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Xiang, R.; Chersoni, E.; Lu, Q.; Huang, C.-R.; Li, W.; Long, Y.: Lexical data augmentation for sentiment analysis (2021) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 392) [ClassicSimilarity], result of:
              0.11893375 = score(doc=392,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=392)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Warner, A.J.: Natural language processing (1987) 0.03
    0.027200889 = product of:
      0.054401778 = sum of:
        0.054401778 = product of:
          0.108803555 = sum of:
            0.108803555 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.108803555 = score(doc=337,freq=2.0), product of:
                0.17576122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050191253 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  14. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.023800777 = product of:
      0.047601555 = sum of:
        0.047601555 = product of:
          0.09520311 = sum of:
            0.09520311 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09520311 = score(doc=3164,freq=2.0), product of:
                0.17576122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050191253 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  15. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.023800777 = product of:
      0.047601555 = sum of:
        0.047601555 = product of:
          0.09520311 = sum of:
            0.09520311 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09520311 = score(doc=4506,freq=2.0), product of:
                0.17576122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050191253 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  16. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.023800777 = product of:
      0.047601555 = sum of:
        0.047601555 = product of:
          0.09520311 = sum of:
            0.09520311 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09520311 = score(doc=6672,freq=2.0), product of:
                0.17576122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050191253 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  17. New tools for human translators (1997) 0.02
    0.023800777 = product of:
      0.047601555 = sum of:
        0.047601555 = product of:
          0.09520311 = sum of:
            0.09520311 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.09520311 = score(doc=1179,freq=2.0), product of:
                0.17576122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050191253 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  18. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.023800777 = product of:
      0.047601555 = sum of:
        0.047601555 = product of:
          0.09520311 = sum of:
            0.09520311 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09520311 = score(doc=3117,freq=2.0), product of:
                0.17576122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050191253 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  19. ¬Der Student aus dem Computer (2023) 0.02
    0.023800777 = product of:
      0.047601555 = sum of:
        0.047601555 = product of:
          0.09520311 = sum of:
            0.09520311 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09520311 = score(doc=1079,freq=2.0), product of:
                0.17576122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050191253 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  20. Hoenkamp, E.; Bruza, P.D.; Song, D.; Huang, Q.: ¬An effective approach to verbose queries using a limited dependencies language model (2009) 0.02
    0.023786752 = product of:
      0.047573503 = sum of:
        0.047573503 = product of:
          0.095147006 = sum of:
            0.095147006 = weight(_text_:q in 2122) [ClassicSimilarity], result of:
              0.095147006 = score(doc=2122,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.28944498 = fieldWeight in 2122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2122)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • e (English) 63
  • d (German) 17

Types

  • a 64
  • m 9
  • el 6
  • s 4
  • p 2
  • x 2
  • d 1
