Search (5 results, page 1 of 1)

  • author_ss:"Zuccala, A."
  • theme_ss:"Informetrie"
  1. Zuccala, A.; Someren, M. van; Bellen, M. van: A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014) 0.00
    0.0033826875 = product of:
      0.006765375 = sum of:
        0.006765375 = product of:
          0.01353075 = sum of:
            0.01353075 = weight(_text_:a in 1530) [ClassicSimilarity], result of:
              0.01353075 = score(doc=1530,freq=32.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.25478977 = fieldWeight in 1530, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1530)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
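
    The breakdown above is standard Lucene/Solr ClassicSimilarity (TF-IDF) explain output for the query term "a". As a minimal sketch, assuming the usual ClassicSimilarity formulas (tf = sqrt(freq), with a coord factor per matched clause), the reported score can be reproduced from the listed factors:

    ```python
    import math

    # Factors copied from the explain tree for result 1 (doc 1530).
    freq = 32.0              # termFreq of "a" in the field
    idf = 1.153047           # idf(docFreq=37942, maxDocs=44218)
    query_norm = 0.046056706
    field_norm = 0.0390625
    coord = 0.5              # coord(1/2), applied at two levels of the query

    tf = math.sqrt(freq)                      # 5.656854
    query_weight = idf * query_norm           # 0.053105544
    field_weight = tf * idf * field_norm      # 0.25478977
    term_score = query_weight * field_weight  # 0.01353075

    final_score = term_score * coord * coord  # ~0.0033826875, as reported above
    print(final_score)
    ```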
    
    Abstract
    A theory of "megacitation" is introduced and used in an experiment to demonstrate how a qualitative scholarly book review can be converted into a weighted bibliometric indicator. We employ a manual human-coding approach to classify book reviews in the field of history based on reviewers' assessments of a book author's scholarly credibility (SC) and writing style (WS). In total, 100 book reviews were selected from the American Historical Review and coded for their positive/negative valence on these two dimensions. Most were coded as positive (68% for SC and 47% for WS), and there was also a small positive correlation between SC and WS (r = 0.2). We then constructed a classifier, combining both manual design and machine learning, to categorize sentiment-based sentences in history book reviews. The machine classifier produced a matched accuracy (matched to the human coding) of approximately 75% for SC and 64% for WS. WS was found to be more difficult to classify by machine than SC because of the reviewers' use of more subtle language. With further training data, a machine-learning approach could be useful for automatically classifying a large number of history book reviews at once. Weighted megacitations can be especially valuable if they are used in conjunction with regular book/journal citations, and "libcitations" (i.e., library holding counts) for a comprehensive assessment of a book/monograph's scholarly impact.
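
    As an illustration only (not the authors' actual coding scheme or classifier, and with invented training sentences), a sentence-level valence classifier of the kind described could be sketched with an off-the-shelf TF-IDF plus logistic-regression pipeline:

    ```python
    # Minimal sketch of a sentence-level classifier for one dimension (e.g. SC);
    # hypothetical training data, not the study's coded reviews.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    sentences = [
        "The author demonstrates impressive command of the archival sources.",
        "The argument rests on a careless reading of the primary evidence.",
        "A deeply researched and authoritative account.",
        "The book's claims are poorly substantiated.",
    ]
    labels = ["positive", "negative", "positive", "negative"]  # human-coded valence

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(sentences, labels)

    print(clf.predict(["The scholarship here is meticulous and persuasive."]))
    ```
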
    Type
    a
  2. Zuccala, A.; Guns, R.; Cornacchia, R.; Bod, R.: Can we rank scholarly book publishers? : a bibliometric experiment with the field of history (2015) 0.00
    0.0029294936 = product of:
      0.005858987 = sum of:
        0.005858987 = product of:
          0.011717974 = sum of:
            0.011717974 = weight(_text_:a in 2037) [ClassicSimilarity], result of:
              0.011717974 = score(doc=2037,freq=24.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22065444 = fieldWeight in 2037, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2037)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a publisher ranking study based on a citation data grant from Elsevier, specifically, book titles cited in Scopus history journals (2007-2011) and matching metadata from WorldCat® (i.e., OCLC numbers, ISBN codes, publisher records, and library holding counts). Using both resources, we have created a unique relational database designed to compare citation counts to books with international library holdings or libcitations for scholarly book publishers. First, we construct a ranking of the top 500 publishers and explore descriptive statistics at the level of publisher type (university, commercial, other) and country of origin. We then identify the top 50 university presses and commercial houses based on total citations and mean citations per book (CPB). In a third analysis, we present a map of directed citation links between journals and book publishers. American and British presses/publishing houses tend to dominate the work of library collection managers and citing scholars; however, a number of specialist publishers from Europe are included. Distinct clusters from the directed citation map indicate a certain degree of regionalism and subject specialization, where some journals produced in languages other than English tend to cite books published by the same parent press. Bibliometric rankings convey only a small part of how the actual structure of the publishing field has evolved; hence, challenges lie ahead for developers of new citation indices for books and bibliometricians interested in measuring book and publisher impacts.
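
    As a rough sketch of the ranking arithmetic only (invented counts, not the Scopus/WorldCat data used in the study), total citations and mean citations per book (CPB) can be aggregated per publisher and sorted:

    ```python
    from collections import defaultdict

    # Hypothetical (publisher, citations-to-one-book) records.
    book_citations = [
        ("Oxford University Press", 12), ("Oxford University Press", 3),
        ("Cambridge University Press", 8), ("Brill", 5), ("Brill", 1),
        ("Routledge", 7),
    ]

    totals = defaultdict(int)   # total citations per publisher
    counts = defaultdict(int)   # number of cited books per publisher
    for publisher, cites in book_citations:
        totals[publisher] += cites
        counts[publisher] += 1

    # Rank by total citations; CPB = mean citations per book.
    for pub in sorted(totals, key=totals.get, reverse=True):
        print(f"{pub}: total={totals[pub]}, CPB={totals[pub] / counts[pub]:.2f}")
    ```
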
    Type
    a
  3. Rousseau, R.; Zuccala, A.: A classification of author co-citations : definitions and search strategies (2004) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 2266) [ClassicSimilarity], result of:
              0.009567685 = score(doc=2266,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 2266, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2266)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The term author co-citation is defined and classified according to four distinct forms: the pure first-author co-citation, the pure author co-citation, the general author co-citation, and the special co-author/co-citation. Each form can be used to obtain one count in an author co-citation study, based on a binary counting rule, which either recognizes the co-citedness of two authors in a given reference list (1) or does not (0). Most studies using author co-citations have relied solely on first-author co-citation counts as evidence of an author's oeuvre or body of work contributed to a research field. In this article, we argue that an author's contribution to a selected field of study should not be limited in this way, but should be based on his/her complete list of publications, regardless of author ranking. We discuss the implications associated with using each co-citation form and show where simple first-author co-citations fit within our classification scheme. Examples are given to substantiate each author co-citation form defined in our classification, including a set of sample Dialog(TM) searches using references extracted from the SciSearch database. A short sketch of the binary counting rule follows below.
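
    The binary counting rule lends itself to a compact sketch (illustrative author sets, not real reference lists): each reference list contributes at most one co-citation per author pair, no matter how many of their works it cites.

    ```python
    # Binary counting rule: a pair of authors is either co-cited (1) or not (0)
    # within a single reference list; counts are then summed over lists.
    from itertools import combinations
    from collections import Counter

    # Each reference list is reduced to the set of cited authors (any rank).
    reference_lists = [
        {"Rousseau", "Zuccala", "White"},
        {"Rousseau", "Zuccala"},
        {"White", "McCain", "Zuccala"},
    ]

    cocitations = Counter()
    for authors in reference_lists:
        for pair in combinations(sorted(authors), 2):
            cocitations[pair] += 1  # at most once per list, since authors is a set

    print(cocitations[("Rousseau", "Zuccala")])  # 2: co-cited in two lists
    ```
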
    Type
    a
  4. Zuccala, A.: Author cocitation analysis is to intellectual structure as Web colink analysis is to ... ? (2006) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 6008) [ClassicSimilarity], result of:
              0.00894975 = score(doc=6008,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 6008, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6008)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Author Cocitation Analysis (ACA) and Web Colink Analysis (WCA) are examined as sister techniques in the related fields of bibliometrics and webometrics. Comparisons are made between the two techniques based on their data retrieval, mapping, and interpretation procedures, using mathematics as the subject in focus. An ACA is carried out and interpreted for a group of participants (authors) involved in an Isaac Newton Institute (2000) workshop, "Singularity Theory and Its Applications to Wave Propagation Theory and Dynamical Systems", and compared/contrasted with a WCA for a list of international mathematics research institute home pages on the Web. Although the practice of ACA may be used to inform a WCA, the two techniques do not share many elements in common. The most important departure between ACA and WCA exists at the interpretive stage, when ACA maps become meaningful in light of citation theory, and WCA maps require interpretation based on hyperlink theory. Much of the research concerning link theory and motivations for linking is still new; therefore further studies based on colinking are needed, mainly map-based studies, to understand what makes a Web colink structure meaningful. A colink counting sketch is given below.
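
    By analogy with co-citation counting, the colink side of the comparison can be sketched as counting, for each pair of institute sites, the third-party pages that link to both; the site names and inlink sets below are invented placeholders:

    ```python
    from itertools import combinations

    # Hypothetical inlink data: site -> set of external pages linking to it.
    inlinks = {
        "institute-a.example": {"page1", "page2", "page3"},
        "institute-b.example": {"page2", "page3"},
        "institute-c.example": {"page3", "page4"},
    }

    # Two sites are co-linked whenever some third page links to both of them.
    for site1, site2 in combinations(sorted(inlinks), 2):
        colink_count = len(inlinks[site1] & inlinks[site2])
        print(site1, site2, colink_count)
    ```
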
    Type
    a
  5. Zuccala, A.; Leeuwen, T. van: Book reviews in humanities research evaluations (2011) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 4771) [ClassicSimilarity], result of:
              0.00894975 = score(doc=4771,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 4771, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4771)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Bibliometric evaluations of research outputs in the social sciences and humanities are challenging due to limitations associated with Web of Science data; however, background literature has shown that scholars are interested in stimulating improvements. We give special attention to book reviews processed by Web of Science history and literature journals, focusing on two types: Type I (i.e., reference to the book only) and Type II (i.e., reference to the book and other scholarly sources). Bibliometric data are collected and analyzed for a large set of reviews (1981-2009) to observe general publication patterns and patterns of citedness and co-citedness with books under review. Results show that reviews giving reference only to the book (Type I) are published more frequently, while reviews referencing the book and other works (Type II) are more likely to be cited. The referencing culture of the humanities makes it difficult to understand patterns of co-citedness between books and review articles without further in-depth content analyses. Overall, citation counts to book reviews are typically low, but our data showed that they are scholarly and do play a role in the scholarly communication system. In the disciplines of history and literature, where book reviews are prominent, counting the number and type of reviews that a scholar produces throughout his/her career is a positive step forward in research evaluations. We propose a new set of journal quality indicators for the purpose of monitoring the scholarly influence of book reviews. The Type I / Type II distinction is sketched below.
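
    As a small illustrative sketch (hypothetical records, not the Web of Science data analysed in the study), the Type I / Type II distinction reduces to whether a review's reference list contains anything beyond the book under review:

    ```python
    # Type I: the review references only the book under review.
    # Type II: the review also references other scholarly sources.
    def review_type(references, reviewed_book):
        other_sources = [r for r in references if r != reviewed_book]
        return "Type II" if other_sources else "Type I"

    print(review_type(["Book X"], "Book X"))               # Type I
    print(review_type(["Book X", "Article Y"], "Book X"))  # Type II
    ```
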
    Type
    a