Search (7 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • theme_ss:"Informetrie"
  1. Moohebat, M.; Raj, R.G.; Kareem, S.B.A.; Thorleuchter, D.: Identifying ISI-indexed articles by their lexical usage : a text analysis approach (2015) 0.01
    0.013986527 = product of:
      0.027973054 = sum of:
        0.027973054 = product of:
          0.04195958 = sum of:
            0.038397755 = weight(_text_:k in 1664) [ClassicSimilarity], result of:
              0.038397755 = score(doc=1664,freq=2.0), product of:
                0.16225883 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04545348 = queryNorm
                0.23664509 = fieldWeight in 1664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1664)
            0.003561823 = weight(_text_:s in 1664) [ClassicSimilarity], result of:
              0.003561823 = score(doc=1664,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.072074346 = fieldWeight in 1664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1664)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
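
    The tree above is Lucene's ClassicSimilarity explanation: each matching term contributes queryWeight (idf × queryNorm) multiplied by fieldWeight (sqrt(termFreq) × idf × fieldNorm), and the per-term scores are summed and scaled by the coord factors. A minimal Python sketch, not the search engine's own code and with the constants simply copied from the tree above, reproduces the 0.013986527 shown for this entry:

      def term_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity: score(t, d) = queryWeight(t) * fieldWeight(t, d)
          query_weight = idf * query_norm                  # idf(t) * queryNorm
          field_weight = (freq ** 0.5) * idf * field_norm  # tf = sqrt(termFreq)
          return query_weight * field_weight

      QUERY_NORM = 0.04545348   # queryNorm, shared by every term in the query
      FIELD_NORM = 0.046875     # fieldNorm(doc=1664)

      k = term_score(2.0, 3.569778, QUERY_NORM, FIELD_NORM)   # ~0.038397755 for _text_:k
      s = term_score(2.0, 1.0872376, QUERY_NORM, FIELD_NORM)  # ~0.003561823 for _text_:s

      # coord(2/3): 2 of 3 optional clauses matched; coord(1/2): 1 of 2 top-level clauses
      print((k + s) * (2.0 / 3.0) * 0.5)  # ~0.013986527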
    
    Abstract
    This research develops an architecture for investigating whether lexical divergences exist between articles categorized as Institute for Scientific Information (ISI)-indexed and non-ISI-indexed and, if such a difference is discovered, for proposing the best available classification method. Based on a collection of ISI- and non-ISI-indexed articles in the areas of business and computer science, three classification models are trained. A sensitivity analysis is applied to demonstrate the impact of words in different syntactic forms on the classification decision. The results demonstrate that the lexical domains of ISI and non-ISI articles are distinguishable by machine learning techniques. Our findings indicate that the support vector machine identifies ISI-indexed articles in both disciplines with higher precision than do the Naïve Bayesian and K-Nearest Neighbors techniques.
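
    As a rough illustration of the kind of comparison the abstract describes, and not the authors' actual pipeline, the following minimal scikit-learn sketch trains the three classifier types on TF-IDF features; compare_classifiers and its arguments are hypothetical names, and the corpora of ISI- and non-ISI-indexed article texts would have to be supplied by the caller:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      def compare_classifiers(train_texts, train_labels, test_texts, test_labels):
          # train_texts / test_texts: lists of article texts (hypothetical inputs);
          # labels: 1 for ISI-indexed, 0 for non-ISI-indexed.
          results = {}
          for name, clf in [("SVM", LinearSVC()),
                            ("Naive Bayes", MultinomialNB()),
                            ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
              pipe = make_pipeline(TfidfVectorizer(), clf)
              pipe.fit(train_texts, train_labels)
              results[name] = pipe.score(test_texts, test_labels)  # mean accuracy
          return results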
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.3, S.501-511
  2. He, Q.: Knowledge discovery through co-word analysis (1999) 0.00
    0.0013851533 = product of:
      0.0027703065 = sum of:
        0.0027703065 = product of:
          0.00831092 = sum of:
            0.00831092 = weight(_text_:s in 6082) [ClassicSimilarity], result of:
              0.00831092 = score(doc=6082,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.16817348 = fieldWeight in 6082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6082)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Library trends. 48(1999) no.1, S.133-159
  3. Ahonen, H.: Knowledge discovery in documents by extracting frequent word sequences (1999) 0.00
    0.0013851533 = product of:
      0.0027703065 = sum of:
        0.0027703065 = product of:
          0.00831092 = sum of:
            0.00831092 = weight(_text_:s in 6088) [ClassicSimilarity], result of:
              0.00831092 = score(doc=6088,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.16817348 = fieldWeight in 6088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6088)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Library trends. 48(1999) no.1, S.160-181
  4. Levin, M.; Krawczyk, S.; Bethard, S.; Jurafsky, D.: Citation-based bootstrapping for large-scale author disambiguation (2012) 0.00
    8.5684157E-4 = product of:
      0.0017136831 = sum of:
        0.0017136831 = product of:
          0.005141049 = sum of:
            0.005141049 = weight(_text_:s in 246) [ClassicSimilarity], result of:
              0.005141049 = score(doc=246,freq=6.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.10403037 = fieldWeight in 246, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=246)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.5, S.1030-1047
  5. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: A bibliometric and network analysis of the field of computational linguistics (2016) 0.00
    6.9257664E-4 = product of:
      0.0013851533 = sum of:
        0.0013851533 = product of:
          0.00415546 = sum of:
            0.00415546 = weight(_text_:s in 2764) [ClassicSimilarity], result of:
              0.00415546 = score(doc=2764,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.08408674 = fieldWeight in 2764, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2764)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.683-706
  6. He, Q.: A study of the strength indexes in co-word analysis (2000) 0.00
    5.936372E-4 = product of:
      0.0011872743 = sum of:
        0.0011872743 = product of:
          0.003561823 = sum of:
            0.003561823 = weight(_text_:s in 111) [ClassicSimilarity], result of:
              0.003561823 = score(doc=111,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.072074346 = fieldWeight in 111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=111)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Pages
    S.77-82
  7. Chen, L.; Fang, H.: An automatic method for extracting innovative ideas based on the Scopus® database (2019) 0.00
    4.9469766E-4 = product of:
      9.893953E-4 = sum of:
        9.893953E-4 = product of:
          0.0029681858 = sum of:
            0.0029681858 = weight(_text_:s in 5310) [ClassicSimilarity], result of:
              0.0029681858 = score(doc=5310,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.060061958 = fieldWeight in 5310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5310)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization. 46(2019) no.3, S.171-186