Search (2 results, page 1 of 1)
author_ss:"Zuccala, A."
-
Rousseau, R.; Zuccala, A.: A classification of author co-citations : definitions and search strategies (2004)
0.02
0.021102749 = product of:
  0.042205498 = sum of:
    0.042205498 = product of:
      0.16882199 = sum of:
        0.16882199 = weight(_text_:author's in 2266) [ClassicSimilarity], result of:
          0.16882199 = score(doc=2266,freq=4.0), product of:
            0.3215584 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.047849856 = queryNorm
            0.52501196 = fieldWeight in 2266, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2266)
      0.25 = coord(1/4)
    0.5 = coord(1/2)
- Abstract
- The term author co-citation is defined and classified according to four distinct forms: the pure first-author co-citation, the pure author co-citation, the general author co-citation, and the special co-author/co-citation. Each form can be used to obtain one count in an author co-citation study, based on a binary counting rule, which either recognizes the co-citedness of two authors in a given reference list (1) or does not (0). Most studies using author co-citations have relied solely on first-author co-citation counts as evidence of an author's oeuvre or body of work contributed to a research field. In this article, we argue that an author's contribution to a selected field of study should not be limited, but should be based on his/her complete list of publications, regardless of author ranking. We discuss the implications associated with using each co-citation form and show where simple first-author co-citations fit within our classification scheme. Examples are given to substantiate each author co-citation form defined in our classification, including a set of sample Dialog(TM) searches using references extracted from the SciSearch database.
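The binary counting rule described in the abstract can be sketched in a few lines: two authors are co-cited (count 1) when both appear somewhere in the same reference list, regardless of author ranking (the "general author co-citation" form), and not (0) otherwise. This is a minimal illustration under assumed data structures, not the authors' actual search procedure; all names and reference lists below are invented.

```python
from itertools import combinations
from collections import Counter

def co_citation_counts(reference_lists):
    """Count, for each author pair, the number of reference lists in
    which both authors appear (binary rule: at most 1 per list)."""
    counts = Counter()
    for ref_list in reference_lists:
        # Collect every author of every cited work, not just first authors.
        authors = {a for work in ref_list for a in work}
        for pair in combinations(sorted(authors), 2):
            counts[pair] += 1  # co-cited in this list: 1, not term frequency
    return counts

# Each reference list is a list of cited works; each work is its author list.
refs = [
    [["Rousseau"], ["Zuccala", "Someren"]],   # reference list 1
    [["Zuccala"], ["Rousseau", "Bellen"]],    # reference list 2
]
counts = co_citation_counts(refs)
# ("Rousseau", "Zuccala") occurs in both lists -> count of 2
```

Restricting `authors` to only the first author of each cited work would instead yield the pure first-author co-citation counts the abstract contrasts with.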
-
Zuccala, A.; Someren, M. van; Bellen, M. van: A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014)
0.01
0.014921897 = product of:
  0.029843794 = sum of:
    0.029843794 = product of:
      0.11937518 = sum of:
        0.11937518 = weight(_text_:author's in 1530) [ClassicSimilarity], result of:
          0.11937518 = score(doc=1530,freq=2.0), product of:
            0.3215584 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.047849856 = queryNorm
            0.3712395 = fieldWeight in 1530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1530)
      0.25 = coord(1/4)
    0.5 = coord(1/2)
- Abstract
- A theory of "megacitation" is introduced and used in an experiment to demonstrate how a qualitative scholarly book review can be converted into a weighted bibliometric indicator. We employ a manual human-coding approach to classify book reviews in the field of history based on reviewers' assessments of a book author's scholarly credibility (SC) and writing style (WS). In total, 100 book reviews were selected from the American Historical Review and coded for their positive/negative valence on these two dimensions. Most were coded as positive (68% for SC and 47% for WS), and there was also a small positive correlation between SC and WS (r = 0.2). We then constructed a classifier, combining both manual design and machine learning, to categorize sentiment-based sentences in history book reviews. The machine classifier produced a matched accuracy (matched to the human coding) of approximately 75% for SC and 64% for WS. WS was found to be more difficult to classify by machine than SC because of the reviewers' use of more subtle language. With further training data, a machine-learning approach could be useful for automatically classifying a large number of history book reviews at once. Weighted megacitations can be especially valuable if they are used in conjunction with regular book/journal citations, and "libcitations" (i.e., library holding counts) for a comprehensive assessment of a book/monograph's scholarly impact.
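The "matched accuracy" figures quoted in the abstract (about 75% for SC, 64% for WS) are simply the share of reviews on which the machine label agrees with the human code. A minimal sketch, with invented labels purely for illustration:

```python
def matched_accuracy(human, machine):
    """Fraction of items where the machine label matches the human code."""
    assert len(human) == len(machine), "one label per coded review"
    matches = sum(h == m for h, m in zip(human, machine))
    return matches / len(human)

# Hypothetical valence codes for four reviews on one dimension (e.g. SC).
human_sc   = ["pos", "pos", "neg", "pos"]   # manual human coding
machine_sc = ["pos", "neg", "neg", "pos"]   # machine classifier output
matched_accuracy(human_sc, machine_sc)      # 3 of 4 agree -> 0.75
```

The same computation run separately on the WS labels would give the second accuracy figure; the abstract attributes the lower WS score to reviewers' more subtle language on that dimension.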