Search (8 results, page 1 of 1)

  • author_ss:"Zuccala, A."
  1. Zuccala, A.: Author cocitation analysis is to intellectual structure as Web colink analysis is to ... ? (2006) 0.01
    0.011763023 = product of:
      0.05489411 = sum of:
        0.034870304 = weight(_text_:web in 6008) [ClassicSimilarity], result of:
          0.034870304 = score(doc=6008,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.36057037 = fieldWeight in 6008, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6008)
        0.0050448296 = weight(_text_:information in 6008) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=6008,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 6008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6008)
        0.014978974 = weight(_text_:retrieval in 6008) [ClassicSimilarity], result of:
          0.014978974 = score(doc=6008,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 6008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6008)
      0.21428572 = coord(3/14)
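The explain tree above can be reproduced by hand. A minimal sketch (not the search engine's actual code) of how Lucene's ClassicSimilarity arrives at the 0.011763023 score for document 6008: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq); the per-term scores are summed and scaled by coord (matched terms / total query terms).

```python
import math

# Constants taken directly from the explain output above.
QUERY_NORM = 0.029633347
FIELD_NORM = 0.0390625  # fieldNorm(doc=6008)

def term_score(freq: float, idf: float) -> float:
    """One term's contribution: queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * FIELD_NORM  # tf = sqrt(freq)
    return query_weight * field_weight

# (term, termFreq, idf) triples from the three weight(...) branches above.
terms = [
    ("web",         8.0, 3.2635105),
    ("information", 2.0, 1.7554779),
    ("retrieval",   2.0, 3.024915),
]

raw = sum(term_score(freq, idf) for _, freq, idf in terms)
score = raw * (3 / 14)  # coord(3/14): 3 of 14 query terms matched

print(f"{raw:.8f}")    # sum of per-term scores, ~0.05489411
print(f"{score:.9f}")  # final document score, ~0.011763023
```

Running this recovers both the inner sum (0.05489411) and the reported document score (0.011763023), confirming that the listing's numbers follow the classic tf-idf formula.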
    
    Abstract
    Author Cocitation Analysis (ACA) and Web Colink Analysis (WCA) are examined as sister techniques in the related fields of bibliometrics and webometrics. Comparisons are made between the two techniques based on their data retrieval, mapping, and interpretation procedures, using mathematics as the subject in focus. An ACA is carried out and interpreted for a group of participants (authors) involved in an Isaac Newton Institute (2000) workshop, Singularity Theory and Its Applications to Wave Propagation Theory and Dynamical Systems, and compared and contrasted with a WCA for a list of international mathematics research institute home pages on the Web. Although the practice of ACA may be used to inform a WCA, the two techniques do not share many elements in common. The most important departure between ACA and WCA exists at the interpretive stage, when ACA maps become meaningful in light of citation theory, and WCA maps require interpretation based on hyperlink theory. Much of the research concerning link theory and motivations for linking is still new; therefore further studies based on colinking are needed, mainly map-based studies, to understand what makes a Web colink structure meaningful.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.11, S.1487-1502
  2. Zuccala, A.; Thelwall, M.; Oppenheim, C.; Dhiensa, R.: Web intelligence analyses of digital libraries : a case study of the National electronic Library for Health (NeLH) (2007) 0.01
    0.007479083 = product of:
      0.052353576 = sum of:
        0.04831771 = weight(_text_:web in 838) [ClassicSimilarity], result of:
          0.04831771 = score(doc=838,freq=24.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.49962097 = fieldWeight in 838, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=838)
        0.0040358636 = weight(_text_:information in 838) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=838,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 838, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=838)
      0.14285715 = coord(2/14)
    
    Abstract
    Purpose - The purpose of this paper is to explore the use of LexiURL as a Web intelligence tool for collecting and analysing links to digital libraries, focusing specifically on the National electronic Library for Health (NeLH). Design/methodology/approach - The Web intelligence techniques in this study are a combination of link analysis (web structure mining), web server log file analysis (web usage mining), and text analysis (web content mining), utilizing the power of commercial search engines and drawing upon the information science fields of bibliometrics and webometrics. LexiURL is a computer program designed to calculate summary statistics for lists of links or URLs. Its output is a series of standard reports, for example listing and counting all of the different domain names in the data. Findings - Link data, when analysed together with user transaction log files (i.e. Web referring domains), can provide insights into who is using a digital library and when, and who could be using the digital library if they are "surfing" a particular part of the Web; in this case any site that is linked to or colinked with the NeLH. This study found that the NeLH was embedded in a multifaceted Web context, including many governmental, educational, commercial and organisational sites, with the most interesting being sites from the .edu domain, representing American universities. Not many links directed to the NeLH were followed on September 25, 2005 (the date of the log file analysis and link extraction analysis), which means that users who access the digital library have been arriving at the site via only a few select links, bookmarks and search engine searches, or non-electronic sources. Originality/value - A number of studies concerning digital library users have been carried out using log file analysis as a research tool. Log files focus on real-time user transactions, while LexiURL can be used to extract links and colinks associated with a digital library's growing Web network. This Web network is not recognized often enough, and can be a useful indication of where potential users are surfing, even if they have not yet specifically visited the NeLH site.
  3. Zuccala, A.; Leeuwen, T. van: Book reviews in humanities research evaluations (2011) 0.00
    0.004243123 = product of:
      0.029701859 = sum of:
        0.02465703 = weight(_text_:web in 4771) [ClassicSimilarity], result of:
          0.02465703 = score(doc=4771,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 4771, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4771)
        0.0050448296 = weight(_text_:information in 4771) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4771,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4771, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4771)
      0.14285715 = coord(2/14)
    
    Abstract
    Bibliometric evaluations of research outputs in the social sciences and humanities are challenging due to limitations associated with Web of Science data; however, background literature has shown that scholars are interested in stimulating improvements. We give special attention to book reviews processed by Web of Science history and literature journals, focusing on two types: Type I (i.e., reference to book only) and Type II (i.e., reference to book and other scholarly sources). Bibliometric data are collected and analyzed for a large set of reviews (1981-2009) to observe general publication patterns and patterns of citedness and co-citedness with books under review. Results show that reviews giving reference only to the book (Type I) are published more frequently while reviews referencing the book and other works (Type II) are more likely to be cited. The referencing culture of the humanities makes it difficult to understand patterns of co-citedness between books and review articles without further in-depth content analyses. Overall, citation counts to book reviews are typically low, but our data showed that they are scholarly and do play a role in the scholarly communication system. In the disciplines of history and literature, where book reviews are prominent, counting the number and type of reviews that a scholar produces throughout his/her career is a positive step forward in research evaluations. We propose a new set of journal quality indicators for the purpose of monitoring their scholarly influence.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.10, S.1979-1991
  4. Zuccala, A.; Breum, M.; Bruun, K.; Wunsch, B.T.: Metric assessments of books as families of works (2018) 0.00
    0.0028605436 = product of:
      0.020023804 = sum of:
        0.0050448296 = weight(_text_:information in 4018) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4018,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4018)
        0.014978974 = weight(_text_:retrieval in 4018) [ClassicSimilarity], result of:
          0.014978974 = score(doc=4018,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 4018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4018)
      0.14285715 = coord(2/14)
    
    Abstract
    We describe the intellectual and physical properties of books as manifestations, expressions, and works and assess the current indexing and metadata structure of monographs in the Book Citation Index (BKCI). Our focus is on the interrelationship of these properties in light of the Functional Requirements for Bibliographic Records (FRBR). Data pertaining to monographs were collected from the Danish PURE repository system as well as the BKCI (2005-2015) via their International Standard Book Numbers (ISBNs). Each ISBN was then matched to the same ISBN and family-related ISBNs cataloged in two additional databases: OCLC-WorldCat and Goodreads. With the retrieval of all family-related ISBNs, we were able to determine the number of monograph expressions present in the BKCI and their collective relationship to one work. Our results show that the majority of missing expressions from the BKCI are emblematic (i.e., first editions of monographs) and that both the indexing and metadata structure of this commercial database could significantly improve with the introduction of distinct expression IDs (i.e., for every distinct edition) and unifying work-related IDs. This improved metadata structure would support the collection of more accurate publication and citation counts for monographs and has implications for developing new indicators based on bibliographic levels.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.1, S.146-157
  5. Zuccala, A.: Modeling the invisible college (2006) 0.00
    7.134467E-4 = product of:
      0.009988253 = sum of:
        0.009988253 = weight(_text_:information in 3350) [ClassicSimilarity], result of:
          0.009988253 = score(doc=3350,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 3350, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3350)
      0.071428575 = coord(1/14)
    
    Abstract
    This article addresses the invisible college concept with the intent of developing a consensus regarding its definition. Emphasis is placed on the term as it was defined and used in Derek de Solla Price's work (1963, 1986) and reviewed on the basis of its thematic progress in past research over the years. Special attention is given to Lievrouw's (1990) article concerning the structure versus social process problem to show that both conditions are essential to the invisible college and may be reconciled. A new definition of the invisible college is also introduced, including a proposed research model. With this model, researchers are encouraged to study the invisible college by focusing on three critical components: the subject specialty, the scientists as social actors, and the information use environment (IUE).
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.2, S.152-168
  6. Rousseau, R.; Zuccala, A.: ¬A classification of author co-citations : definitions and search strategies (2004) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 2266) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2266,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2266, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2266)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.6, S.513-529
  7. Zuccala, A.; Someren, M. van; Bellen, M. van: ¬A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 1530) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=1530,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 1530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1530)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.11, S.2248-2260
  8. Zuccala, A.; Guns, R.; Cornacchia, R.; Bod, R.: Can we rank scholarly book publishers? : a bibliometric experiment with the field of history (2015) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 2037) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2037,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2037, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2037)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.7, S.1333-1347