Search (3 results, page 1 of 1)

  • author_ss:"Zuccala, A."
  • theme_ss:"Informetrie"
  1. Zuccala, A.; Guns, R.; Cornacchia, R.; Bod, R.: Can we rank scholarly book publishers? : a bibliometric experiment with the field of history (2015) 0.01
    0.009816773 = product of:
      0.019633546 = sum of:
        0.019633546 = product of:
          0.039267093 = sum of:
            0.039267093 = weight(_text_:r in 2037) [ClassicSimilarity], result of:
              0.039267093 = score(doc=2037,freq=6.0), product of:
                0.12397416 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.037451506 = queryNorm
                0.3167361 = fieldWeight in 2037, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2037)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
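The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown for the term `r` in doc 2037. As a rough sketch, the per-term score can be reproduced from those numbers; the function name and parameter layout below are illustrative, not Lucene's actual API, and Lucene computes in 32-bit floats, so the last digits differ slightly:

```python
import math

# Sketch of Lucene ClassicSimilarity (TF-IDF) scoring, reconstructed from
# the explain tree for result 1. Illustrative only, not Lucene's real API.
def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coord=1.0):
    tf = math.sqrt(freq)                               # 2.4494898 for freq=6.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.3102584 for docFreq=4387
    query_weight = idf * query_norm                    # 0.12397416
    field_weight = tf * idf * field_norm               # 0.3167361 (fieldWeight)
    return coord * field_weight * query_weight

score = classic_similarity(freq=6.0, doc_freq=4387, max_docs=44218,
                           query_norm=0.037451506, field_norm=0.0390625,
                           coord=0.5 * 0.5)           # the two coord(1/2) factors
print(f"{score:.9f}")  # close to the displayed 0.009816773
```

The same function with `freq=2.0` reproduces the 0.0057 scores of results 2 and 3, which differ from result 1 only in term frequency.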
    
  2. Rousseau, R.; Zuccala, A.: A classification of author co-citations : definitions and search strategies (2004) 0.01
    0.0056677163 = product of:
      0.0113354325 = sum of:
        0.0113354325 = product of:
          0.022670865 = sum of:
            0.022670865 = weight(_text_:r in 2266) [ClassicSimilarity], result of:
              0.022670865 = score(doc=2266,freq=2.0), product of:
                0.12397416 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.037451506 = queryNorm
                0.18286766 = fieldWeight in 2266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2266)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Zuccala, A.; Someren, M. van; Bellen, M. van: A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014) 0.01
    0.0056677163 = product of:
      0.0113354325 = sum of:
        0.0113354325 = product of:
          0.022670865 = sum of:
            0.022670865 = weight(_text_:r in 1530) [ClassicSimilarity], result of:
              0.022670865 = score(doc=1530,freq=2.0), product of:
                0.12397416 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.037451506 = queryNorm
                0.18286766 = fieldWeight in 1530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1530)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A theory of "megacitation" is introduced and used in an experiment to demonstrate how a qualitative scholarly book review can be converted into a weighted bibliometric indicator. We employ a manual human-coding approach to classify book reviews in the field of history based on reviewers' assessments of a book author's scholarly credibility (SC) and writing style (WS). In total, 100 book reviews were selected from the American Historical Review and coded for their positive/negative valence on these two dimensions. Most were coded as positive (68% for SC and 47% for WS), and there was also a small positive correlation between SC and WS (r = 0.2). We then constructed a classifier, combining both manual design and machine learning, to categorize sentiment-based sentences in history book reviews. The machine classifier produced a matched accuracy (matched to the human coding) of approximately 75% for SC and 64% for WS. WS was found to be more difficult to classify by machine than SC because of the reviewers' use of more subtle language. With further training data, a machine-learning approach could be useful for automatically classifying a large number of history book reviews at once. Weighted megacitations can be especially valuable if they are used in conjunction with regular book/journal citations, and "libcitations" (i.e., library holding counts) for a comprehensive assessment of a book/monograph's scholarly impact.
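The abstract's two headline numbers, matched accuracy (machine labels vs. human coding) and the SC/WS correlation (r = 0.2), are standard computations. A minimal stdlib sketch, with invented labels and codes since the paper's actual data are not given here:

```python
# Illustrative sketch of the evaluation metrics named in the abstract.
# All labels/values below are invented examples, not the study's data.

def matched_accuracy(human, machine):
    """Fraction of machine labels that match the human coding."""
    return sum(h == m for h, m in zip(human, machine)) / len(human)

def pearson_r(x, y):
    """Pearson correlation between two equal-length code sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical valence labels for four reviews:
human   = ["pos", "pos", "neg", "pos"]
machine = ["pos", "neg", "neg", "pos"]
print(matched_accuracy(human, machine))  # 0.75
```

With 1/0 codings for positive/negative SC and WS valence, `pearson_r` yields the kind of small correlation (r = 0.2) the study reports between the two dimensions.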