Search (5 results, page 1 of 1)

  • Filter: theme_ss:"Computerlinguistik"
  1. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.03
    0.026301075 = product of:
      0.065752685 = sum of:
        0.041615978 = weight(_text_:study in 6752) [ClassicSimilarity], result of:
          0.041615978 = score(doc=6752,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.2873863 = fieldWeight in 6752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.02413671 = product of:
          0.04827342 = sum of:
            0.04827342 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.04827342 = score(doc=6752,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    6. 3.1997 16:22:15
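  The indented trees under each entry are Lucene "explain" output for ClassicSimilarity (TF-IDF) ranking. As a reading aid, the following minimal Python sketch recomputes the score of entry 1 from the constants shown in its tree; the helper term_score is our own illustration, not a Lucene API.

      # Recompute the ClassicSimilarity score shown for entry 1 (doc 6752).
      # All constants (idf, queryNorm, fieldNorm, coord factors) are copied
      # from the explain tree above.
      from math import sqrt

      def term_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                # queryWeight = idf * queryNorm
          field_weight = sqrt(freq) * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      QUERY_NORM = 0.044537213
      study  = term_score(freq=2.0, idf=3.2514048, query_norm=QUERY_NORM, field_norm=0.0625)
      term22 = term_score(freq=2.0, idf=3.5018296, query_norm=QUERY_NORM, field_norm=0.0625)

      # "22" matched 1 of 2 clauses in its sub-query -> coord(1/2) = 0.5;
      # overall, 2 of 5 query clauses matched -> coord(2/5) = 0.4.
      score = (study + term22 * 0.5) * 0.4
      print(score)   # ~0.0263, matching the 0.026301075 in the tree (displayed as 0.03)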
  2. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.02
    0.02301344 = product of:
      0.0575336 = sum of:
        0.03641398 = weight(_text_:study in 156) [ClassicSimilarity], result of:
          0.03641398 = score(doc=156,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.251463 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.021119623 = product of:
          0.042239245 = sum of:
            0.042239245 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.042239245 = score(doc=156,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The present study investigates the ability of a bibliometric-based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8. 3.2007 19:55:22
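  The method in entry 2's abstract combines document co-citation analysis, citation context analysis, and noun phrase parsing. Only the last step is illustrated below, as a minimal sketch using spaCy's noun_chunks; the two sample citation contexts (from periodontology) and the "appears in at least two contexts" cut-off are our own assumptions, not the authors' procedure.

      # Illustrative noun-phrase extraction from citation contexts (final step of
      # the pipeline only; not the authors' full bibliometric method).
      # Requires: pip install spacy && python -m spacy download en_core_web_sm
      from collections import Counter
      import spacy

      nlp = spacy.load("en_core_web_sm")

      # Hypothetical citation contexts, i.e. sentences surrounding a citation marker.
      contexts = [
          "Guided tissue regeneration was compared with open flap debridement [12].",
          "Probing depth improved more after guided tissue regeneration than after surgery alone [12].",
      ]

      counts = Counter()
      for ctx in contexts:
          for chunk in nlp(ctx).noun_chunks:      # noun phrases found by the parser
              counts[chunk.text.lower()] += 1

      # Phrases recurring across several contexts become candidate thesaurus terms.
      candidates = [phrase for phrase, n in counts.most_common() if n >= 2]
      print(candidates)   # e.g. ['guided tissue regeneration']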
  3. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.02
    0.02301344 = product of:
      0.0575336 = sum of:
        0.03641398 = weight(_text_:study in 3840) [ClassicSimilarity], result of:
          0.03641398 = score(doc=3840,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.251463 = fieldWeight in 3840, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3840)
        0.021119623 = product of:
          0.042239245 = sum of:
            0.042239245 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.042239245 = score(doc=3840,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Linguistics is the scientific study of language, with an emphasis on language as spoken by human beings in everyday settings. It has a long history of interdisciplinarity, both internally and in its contributions to other fields, including information science. A linguistic perspective is beneficial to information science in many ways, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context, two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented to illustrate the approach taken under a linguistic perspective.
    Date
    27. 8.2011 14:22:33
  4. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.02
    0.019725805 = product of:
      0.049314514 = sum of:
        0.031211983 = weight(_text_:study in 4436) [ClassicSimilarity], result of:
          0.031211983 = score(doc=4436,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.21553972 = fieldWeight in 4436, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.046875 = fieldNorm(doc=4436)
        0.018102532 = product of:
          0.036205065 = sum of:
            0.036205065 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.036205065 = score(doc=4436,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the trade-off is between speed and translation performance, and what form the translated result is presented in. About 100,000 Web pages translated in the last 4 months of 1997 are used for a quantitative study of online and real-time Web page translation.
    Date
    16. 2.2000 14:22:39
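  Entry 4's abstract describes query translation via a bilingual dictionary plus a monolingual corpus used to choose among translation candidates. The toy sketch below shows only that general idea (pick, per source term, the candidate that co-occurs most with the other terms' candidates); the mini dictionary and co-occurrence counts are invented, and this is not the MTIR implementation.

      # Toy illustration of dictionary-based query translation with corpus-based
      # candidate selection (the general idea only, not the MTIR system).
      from itertools import product

      # Hypothetical bilingual dictionary: each Chinese query term -> English candidates.
      bilingual_dict = {
          "银行": ["bank", "riverbank"],
          "利率": ["interest rate"],
      }

      # Hypothetical co-occurrence counts from a monolingual English corpus.
      cooccurrence = {
          ("bank", "interest rate"): 120,
          ("riverbank", "interest rate"): 2,
      }

      def cooc(a, b):
          return cooccurrence.get((a, b), 0) + cooccurrence.get((b, a), 0)

      def translate_query(terms):
          """Pick one candidate per term so that the chosen candidates co-occur most."""
          best, best_score = None, -1
          for combo in product(*(bilingual_dict[t] for t in terms)):
              score = sum(cooc(a, b) for i, a in enumerate(combo) for b in combo[i + 1:])
              if score > best_score:
                  best, best_score = combo, score
          return list(best)

      print(translate_query(["银行", "利率"]))   # ['bank', 'interest rate']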
  5. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.01
    0.01150672 = product of:
      0.0287668 = sum of:
        0.01820699 = weight(_text_:study in 3807) [ClassicSimilarity], result of:
          0.01820699 = score(doc=3807,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.1257315 = fieldWeight in 3807, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3807)
        0.010559811 = product of:
          0.021119623 = sum of:
            0.021119623 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
              0.021119623 = score(doc=3807,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.1354154 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is one such term and is examined in this study. Lexicographical research identified the KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria, which included that definitions should be unique instances. From 2006 onwards, the authors could identify no new unique definition instances, only repetitive usage of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods.
    Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Date
    20. 1.2015 18:30:22
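  As a last illustration, the lexical-analysis idea in entry 5's abstract can be mimicked by counting the content words shared across collected KM definitions; the two sample definitions and the stopword list below are invented, and this is not the authors' lexicographical/Knowledge Discovery procedure.

      # Toy illustration of lexical analysis over collected definitions (not the
      # authors' method). Sample definitions and stopwords are invented.
      import re
      from collections import Counter

      definitions = [
          "Knowledge management is the process by which an organisation codifies and shares information among its people.",
          "Knowledge management means leveraging the knowledge of people and processes to make organisational information usable.",
      ]

      stopwords = {"the", "is", "by", "which", "an", "and", "its", "of", "to", "make", "among", "means"}

      counts = Counter(
          word
          for text in definitions
          for word in re.findall(r"[a-z]+", text.lower())
          if word not in stopwords
      )

      # Terms recurring across definitions point at the concepts used to define KM
      # (knowledge, management, people and information recur in this toy sample).
      print(counts.most_common(8))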