Search (8 results, page 1 of 1)

  • author_ss:"Zuccala, A."
  1. Zuccala, A.; Someren, M. van; Bellen, M. van: A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014) 0.00
    Abstract
    A theory of "megacitation" is introduced and used in an experiment to demonstrate how a qualitative scholarly book review can be converted into a weighted bibliometric indicator. We employ a manual human-coding approach to classify book reviews in the field of history based on reviewers' assessments of a book author's scholarly credibility (SC) and writing style (WS). In total, 100 book reviews were selected from the American Historical Review and coded for their positive/negative valence on these two dimensions. Most were coded as positive (68% for SC and 47% for WS), and there was also a small positive correlation between SC and WS (r = 0.2). We then constructed a classifier, combining both manual design and machine learning, to categorize sentiment-based sentences in history book reviews. The machine classifier produced a matched accuracy (matched to the human coding) of approximately 75% for SC and 64% for WS. WS was found to be more difficult to classify by machine than SC because of the reviewers' use of more subtle language. With further training data, a machine-learning approach could be useful for automatically classifying a large number of history book reviews at once. Weighted megacitations can be especially valuable if they are used in conjunction with regular book/journal citations, and "libcitations" (i.e., library holding counts) for a comprehensive assessment of a book/monograph's scholarly impact.
    Type
    a
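The abstract above describes coding review sentences for positive/negative valence and reporting a "matched accuracy" against the human coding. A minimal, purely illustrative sketch of that idea, using an invented keyword lexicon and invented sample sentences (the paper's actual classifier combined manual design with machine learning):

```python
# Hypothetical sketch: keyword-vote valence coding of review sentences and
# "matched accuracy" against human codes. Lexicon and sample data are invented.
POSITIVE = {"lucid", "authoritative", "masterful", "engaging", "rigorous"}
NEGATIVE = {"muddled", "careless", "tedious", "unconvincing"}

def code_valence(sentence: str) -> str:
    """Return 'positive' or 'negative' by a simple keyword vote."""
    words = set(sentence.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return "positive" if pos >= neg else "negative"

def matched_accuracy(sentences, human_codes):
    """Share of sentences where the machine code matches the human code."""
    machine = [code_valence(s) for s in sentences]
    return sum(m == h for m, h in zip(machine, human_codes)) / len(sentences)

sentences = [
    "A lucid and rigorous treatment of the period.",
    "The argument is unconvincing and the prose is muddled.",
    "An engaging narrative.",
    "A careless handling of the sources.",
]
human = ["positive", "negative", "positive", "negative"]
print(matched_accuracy(sentences, human))  # 1.0 on this toy sample
```

On real review prose, subtle language (as the abstract notes for writing style) would defeat a lexicon this crude; the sketch only shows how matched accuracy is computed.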
  2. Zuccala, A.; Guns, R.; Cornacchia, R.; Bod, R.: Can we rank scholarly book publishers? : a bibliometric experiment with the field of history (2015) 0.00
    Abstract
    This is a publisher ranking study based on a citation data grant from Elsevier, specifically, book titles cited in Scopus history journals (2007-2011) and matching metadata from WorldCat® (i.e., OCLC numbers, ISBN codes, publisher records, and library holding counts). Using both resources, we have created a unique relational database designed to compare citation counts to books with international library holdings or libcitations for scholarly book publishers. First, we construct a ranking of the top 500 publishers and explore descriptive statistics at the level of publisher type (university, commercial, other) and country of origin. We then identify the top 50 university presses and commercial houses based on total citations and mean citations per book (CPB). In a third analysis, we present a map of directed citation links between journals and book publishers. American and British presses/publishing houses tend to dominate the work of library collection managers and citing scholars; however, a number of specialist publishers from Europe are included. Distinct clusters from the directed citation map indicate a certain degree of regionalism and subject specialization, where some journals produced in languages other than English tend to cite books published by the same parent press. Bibliometric rankings convey only a small part of how the actual structure of the publishing field has evolved; hence, challenges lie ahead for developers of new citation indices for books and bibliometricians interested in measuring book and publisher impacts.
    Type
    a
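The ranking step described above, ordering publishers by total citations and by mean citations per book (CPB), can be sketched as follows. Publisher names and counts are invented; the two orderings need not agree:

```python
# Hypothetical sketch: aggregate citation counts per publisher, then rank by
# total citations and by mean citations per book (CPB). Data are invented.
from collections import defaultdict

books = [  # (publisher, citations to this book)
    ("Cambridge UP", 40), ("Cambridge UP", 10),
    ("Oxford UP", 30), ("Oxford UP", 0), ("Oxford UP", 30),
    ("Brill", 12),
]

totals, counts = defaultdict(int), defaultdict(int)
for publisher, cites in books:
    totals[publisher] += cites
    counts[publisher] += 1

cpb = {p: totals[p] / counts[p] for p in totals}  # mean citations per book
by_total = sorted(totals, key=totals.get, reverse=True)
by_cpb = sorted(cpb, key=cpb.get, reverse=True)

print(by_total)  # ['Oxford UP', 'Cambridge UP', 'Brill']
print(by_cpb)    # ['Cambridge UP', 'Oxford UP', 'Brill']
```

Here Oxford UP leads on totals (60 vs. 50) while Cambridge UP leads on CPB (25 vs. 20), which is why the study reports both indicators.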
  3. Zuccala, A.: Modeling the invisible college (2006) 0.00
    Abstract
    This article addresses the invisible college concept with the intent of developing a consensus regarding its definition. Emphasis is placed on the term as it was defined and used in Derek de Solla Price's work (1963, 1986) and reviewed on the basis of its thematic progress in past research over the years. Special attention is given to Lievrouw's (1990) article concerning the structure versus social process problem to show that both conditions are essential to the invisible college and may be reconciled. A new definition of the invisible college is also introduced, including a proposed research model. With this model, researchers are encouraged to study the invisible college by focusing on three critical components: the subject specialty, the scientists as social actors, and the information use environment (IUE).
    Type
    a
  4. Zuccala, A.; Thelwall, M.; Oppenheim, C.; Dhiensa, R.: Web intelligence analyses of digital libraries : a case study of the National electronic Library for Health (NeLH) (2007) 0.00
    Abstract
    Purpose - The purpose of this paper is to explore the use of LexiURL as a Web intelligence tool for collecting and analysing links to digital libraries, focusing specifically on the National electronic Library for Health (NeLH). Design/methodology/approach - The Web intelligence techniques in this study are a combination of link analysis (web structure mining), web server log file analysis (web usage mining), and text analysis (web content mining), utilizing the power of commercial search engines and drawing upon the information science fields of bibliometrics and webometrics. LexiURL is a computer program designed to calculate summary statistics for lists of links or URLs. Its output is a series of standard reports, for example listing and counting all of the different domain names in the data. Findings - Link data, when analysed together with user transaction log files (i.e. Web referring domains), can provide insights into who is using a digital library and when, and who could be using the digital library if they are "surfing" a particular part of the Web; in this case any site that is linked to or colinked with the NeLH. This study found that the NeLH was embedded in a multifaceted Web context, including many governmental, educational, commercial and organisational sites, with the most interesting being sites from the .edu domain, representing American universities. Not many links directed to the NeLH were followed on September 25, 2005 (the date of the log file analysis and link extraction analysis), which means that users who access the digital library have been arriving at the site via only a few select links, bookmarks and search engine searches, or non-electronic sources. Originality/value - A number of studies concerning digital library users have been carried out using log file analysis as a research tool. Log files focus on real-time user transactions, while LexiURL can be used to extract links and colinks associated with a digital library's growing Web network. This Web network is not recognized often enough, and can be a useful indication of where potential users are surfing, even if they have not yet specifically visited the NeLH site.
    Type
    a
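The core LexiURL report described above, listing and counting the different domain names in a list of links, can be sketched in a few lines. The inlink URLs below are invented stand-ins for the kind of .edu/.gov.uk/.org sites the study discusses:

```python
# A minimal sketch of the domain-counting summary report described above:
# reduce a list of inlink URLs to counts per domain name. URLs are invented.
from collections import Counter
from urllib.parse import urlparse

inlinks = [
    "http://www.example.edu/medlib/links.html",
    "http://www.example.edu/courses/ehealth.html",
    "http://health.example.gov.uk/portal",
    "http://blog.example.org/2005/09/nelh",
]

domains = Counter(urlparse(url).netloc for url in inlinks)
for domain, n in domains.most_common():
    print(domain, n)  # e.g. "www.example.edu 2" first
```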
  5. Rousseau, R.; Zuccala, A.: A classification of author co-citations : definitions and search strategies (2004) 0.00
    Abstract
    The term author co-citation is defined and classified according to four distinct forms: the pure first-author co-citation, the pure author co-citation, the general author co-citation, and the special co-author/co-citation. Each form can be used to obtain one count in an author co-citation study, based on a binary counting rule, which either recognizes the co-citedness of two authors in a given reference list (1) or does not (0). Most studies using author co-citations have relied solely on first-author co-citation counts as evidence of an author's oeuvre or body of work contributed to a research field. In this article, we argue that an author's contribution to a selected field of study should not be limited, but should be based on his/her complete list of publications, regardless of author ranking. We discuss the implications associated with using each co-citation form and show where simple first-author co-citations fit within our classification scheme. Examples are given to substantiate each author co-citation form defined in our classification, including a set of sample Dialog(TM) searches using references extracted from the SciSearch database.
    Type
    a
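The binary counting rule described above can be sketched directly: a pair of authors receives one count for a given reference list if both are cited anywhere in it (1), and none otherwise (0), regardless of author ranking. The author names and reference lists below are invented:

```python
# Sketch of the binary co-citation counting rule: at most one count per
# author pair per reference list. Names and lists are invented.
from itertools import combinations
from collections import Counter

# Each entry is the set of all authors (any rank) cited in one paper's
# reference list; using a set enforces the binary (0/1) rule per list.
reference_lists = [
    {"Price", "Crane", "Mullins"},
    {"Price", "Crane"},
    {"Price", "Zuccala"},
]

cocitations = Counter()
for authors in reference_lists:
    for pair in combinations(sorted(authors), 2):
        cocitations[pair] += 1

print(cocitations[("Crane", "Price")])  # 2 (co-cited in two lists)
```

Under this general form, "Crane" and "Price" are co-cited whether or not either is a first author; restricting `reference_lists` to first authors only would yield the pure first-author form.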
  6. Zuccala, A.: Author cocitation analysis is to intellectual structure as Web colink analysis is to ... ? (2006) 0.00
    Abstract
    Author Cocitation Analysis (ACA) and Web Colink Analysis (WCA) are examined as sister techniques in the related fields of bibliometrics and webometrics. Comparisons are made between the two techniques based on their data retrieval, mapping, and interpretation procedures, using mathematics as the subject in focus. An ACA is carried out and interpreted for a group of participants (authors) involved in an Isaac Newton Institute (2000) workshop, Singularity Theory and Its Applications to Wave Propagation Theory and Dynamical Systems, and compared/contrasted with a WCA for a list of international mathematics research institute home pages on the Web. Although the practice of ACA may be used to inform a WCA, the two techniques do not share many elements in common. The most important departure between ACA and WCA exists at the interpretive stage, when ACA maps become meaningful in light of citation theory, and WCA maps require interpretation based on hyperlink theory. Much of the research concerning link theory and motivations for linking is still new; therefore further studies based on colinking are needed, mainly map-based studies, to understand what makes a Web colink structure meaningful.
    Type
    a
  7. Zuccala, A.; Leeuwen, T. van: Book reviews in humanities research evaluations (2011) 0.00
    Abstract
    Bibliometric evaluations of research outputs in the social sciences and humanities are challenging due to limitations associated with Web of Science data; however, background literature has shown that scholars are interested in stimulating improvements. We give special attention to book reviews processed by Web of Science history and literature journals, focusing on two types: Type I (i.e., reference to book only) and Type II (i.e., reference to book and other scholarly sources). Bibliometric data are collected and analyzed for a large set of reviews (1981-2009) to observe general publication patterns and patterns of citedness and co-citedness with books under review. Results show that reviews giving reference only to the book (Type I) are published more frequently, while reviews referencing the book and other works (Type II) are more likely to be cited. The referencing culture of the humanities makes it difficult to understand patterns of co-citedness between books and review articles without further in-depth content analyses. Overall, citation counts to book reviews are typically low, but our data showed that they are scholarly and do play a role in the scholarly communication system. In the disciplines of history and literature, where book reviews are prominent, counting the number and type of reviews that a scholar produces throughout his/her career is a positive step forward in research evaluations. We propose a new set of journal quality indicators for the purpose of monitoring their scholarly influence.
    Type
    a
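The Type I/Type II distinction defined above reduces to a simple rule on a review's reference list: only the book under review means Type I; the book plus any other scholarly source means Type II. A minimal sketch with invented sample records:

```python
# Sketch of the Type I / Type II review classification described above.
# A review citing only the reviewed book is Type I; one also citing other
# scholarly sources is Type II. Sample titles are invented.
def review_type(references: list, reviewed_book: str) -> str:
    others = [r for r in references if r != reviewed_book]
    return "Type II" if others else "Type I"

print(review_type(["Book X"], "Book X"))               # Type I
print(review_type(["Book X", "Article Y"], "Book X"))  # Type II
```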
  8. Zuccala, A.; Breum, M.; Bruun, K.; Wunsch, B.T.: Metric assessments of books as families of works (2018) 0.00
    Type
    a