Search (7 results, page 1 of 1)

  • theme_ss:"Informetrie"
  • theme_ss:"Computerlinguistik"
  1. He, Q.: Knowledge discovery through co-word analysis (1999)
    Language: e
  2. Ahonen, H.: Knowledge discovery in documents by extracting frequent word sequences (1999)
    Language: e
  3. He, Q.: A study of the strength indexes in co-word analysis (2000)
    Abstract
    Co-word analysis is a technique for detecting the knowledge structure of scientific literature and mapping the dynamics of a research field. It counts the co-occurrences of term pairs, computes the strength between them, and maps the field by inserting terms and their linkages into a graphical structure according to the strength values. Previous co-word studies have used two indexes to measure the strength between term pairs in order to identify the major areas of a research field: the inclusion index (I) and the equivalence index (E). This study conducts two co-word analysis experiments, one with each index, and compares the results. The results show that, owing to the difference in their computation, index I is more likely to identify general subject areas in a research field, while index E is more likely to identify subject areas at more specific levels.
    Language: e
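The two strength indexes named in the abstract above can be sketched from raw occurrence counts. This is a minimal illustration using the formulas as they are commonly defined in the co-word literature (the function and variable names are ours, not the paper's): with c_ij the co-occurrence count of terms i and j, and c_i, c_j their individual occurrence counts.

```python
def inclusion_index(c_ij: int, c_i: int, c_j: int) -> float:
    """I = c_ij / min(c_i, c_j).

    Normalizes by the rarer term, so a specific term fully
    'included' in a general one scores high; per the abstract,
    I tends to surface general subject areas.
    """
    return c_ij / min(c_i, c_j)

def equivalence_index(c_ij: int, c_i: int, c_j: int) -> float:
    """E = c_ij^2 / (c_i * c_j).

    Symmetric product of the two conditional probabilities
    P(i|j) and P(j|i); per the abstract, E tends to surface
    subject areas at more specific levels.
    """
    return (c_ij * c_ij) / (c_i * c_j)

# Illustrative counts: terms occurring 10 and 40 times, co-occurring 8 times.
print(inclusion_index(8, 10, 40))    # 0.8
print(equivalence_index(8, 10, 40))  # 0.16
```

Note how E penalizes the frequency imbalance that I ignores: the same pair scores 0.8 on inclusion but only 0.16 on equivalence, which is the computational difference the abstract attributes the two indexes' different behavior to.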
  4. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: A bibliometric and network analysis of the field of computational linguistics (2016)
    Language: e
  5. Moohebat, M.; Raj, R.G.; Kareem, S.B.A.; Thorleuchter, D.: Identifying ISI-indexed articles by their lexical usage : a text analysis approach (2015)
    Language: e
  6. Levin, M.; Krawczyk, S.; Bethard, S.; Jurafsky, D.: Citation-based bootstrapping for large-scale author disambiguation (2012)
    Language: e
  7. Chen, L.; Fang, H.: An automatic method for extracting innovative ideas based on the Scopus® database (2019)
    Language: e