Search (7 results, page 1 of 1)

  • author_ss:"Wilson, C.S."
  • theme_ss:"Informetrie"
  1. Wilson, C.S.: Defining subject collections for informetric analyses : the effect of varying the subject aboutness level (1998) 0.00
    Abstract
    Examines how several commonly measured properties of subject literatures vary as an important factor in the compilation of subject collections (the amount which a document 'says' about a subject) is varied. This document property has been expressed in formal terms and given a simple measure for the one subject examined, the research topic of Bradford's law of scattering. It is found that lowering the level of subject aboutness required for admission to a collection produces a large increase in the size of the collection obtained, and an appreciable change in some size-related properties.
    Type
    a
  2. Bhavnani, S.K.; Wilson, C.S.: Information scattering (2009) 0.00
    Abstract
    Information scattering is an often observed phenomenon related to information collections where there are a few sources that have many items of relevant information about a topic, while most sources have only a few. This entry discusses the original discovery of the phenomenon, the types of information scattering observed across many different information collections, methods that have been used to analyze the phenomenon, explanations for why and how information scattering occurs, and how these results have informed the design of systems and search strategies. The entry concludes with future challenges related to building computational models to more precisely describe the process of information scatter, and algorithms which help users to gather highly scattered information.
    Type
    a
  3. Hood, W.W.; Wilson, C.S.: Overlap in bibliographic databases (2003) 0.00
    Abstract
    Bibliographic databases contain surrogates for a particular subset of the complete set of literature; some databases are very narrow in their scope, while others are multidisciplinary. These databases overlap in their coverage of the literature to a greater or lesser extent. The topic of Fuzzy Set Theory is examined to determine the overlap of coverage in the databases that index this topic. It was found that about 63% of records in the data set are unique to only one database, and the remaining 37% are duplicated in from two to 12 different databases. The overlap distribution is found to conform to a Lotka-type plot. The records with maximum overlap are identified; however, further work is needed to determine the significance of the high level of overlap in these records. The unique records are plotted using a Bradford-type form of data presentation and are found to conform (visually) to a hyperbolic distribution. The extent and causes of intra-database duplication (records duplicated within a single database) are also examined. Finally, the overlap in the top databases in the dataset was examined, and a high correlation was found between overlapping records and overlapping DIALOG OneSearch categories.
    Type
    a
  4. White, H.D.; Boell, S.K.; Yu, H.; Davis, M.; Wilson, C.S.; Cole, F.T.H.: Libcitations : a measure for comparative assessment of book publications in the humanities and social sciences (2009) 0.00
    Abstract
    Bibliometric measures for evaluating research units in the book-oriented humanities and social sciences are underdeveloped relative to those available for journal-oriented science and technology. We therefore present a new measure designed for book-oriented fields: the libcitation count. This is a count of the libraries holding a given book, as reported in a national or international union catalog. As librarians decide what to acquire for the audiences they serve, they jointly constitute an instrument for gauging the cultural impact of books. Their decisions are informed by knowledge not only of audiences but also of the book world (e.g., the reputations of authors and the prestige of publishers). From libcitation counts, measures can be derived for comparing research units. Here, we imagine a match-up between the departments of history, philosophy, and political science at the University of New South Wales and the University of Sydney in Australia. We chose the 12 books from each department that had the highest libcitation counts in the Libraries Australia union catalog during 2000 to 2006. We present each book's raw libcitation count, its rank within its Library of Congress (LC) class, and its LC-class normalized libcitation score. The latter is patterned on the item-oriented field normalized citation score used in evaluative bibliometrics. Summary statistics based on these measures allow the departments to be compared for cultural impact. Our work has implications for programs such as Excellence in Research for Australia and the Research Assessment Exercise in the United Kingdom. It also has implications for data mining in OCLC's WorldCat.
    Type
    a
  5. Hood, W.W.; Wilson, C.S.: The scatter of documents over databases in different subject domains : how many databases are needed? (2001) 0.00
    Abstract
    The distribution of bibliographic records in online bibliographic databases is examined using 14 different search topics. These topics were searched using the DIALOG database host, using as many suitable databases as possible. The presence of duplicate records in the searches was taken into consideration in the analysis, and the problem of lexical ambiguity in at least one search topic is discussed. The study answers questions such as how many databases are needed in a multifile search for particular topics, and what coverage will be achieved using a certain number of databases. The distribution of the percentages of records retrieved over a number of databases for 13 of the 14 search topics roughly fell into three groups: (1) high concentration of records in one database with about 80% coverage in five to eight databases; (2) moderate concentration in one database with about 80% coverage in seven to 10 databases; and (3) low concentration in one database with about 80% coverage in 16 to 19 databases. The study conforms with earlier results, but shows that the number of databases needed for searches with varying complexities of search strategies is much more topic-dependent than previous studies would indicate.
    Type
    a
  6. Hood, W.W.; Wilson, C.S.: The relationship of records in multiple databases to their usage or citedness (2005) 0.00
    Abstract
    Papers in journals are indexed in bibliographic databases in varying degrees of overlap. The question has been raised as to whether papers that appear in multiple databases (highly overlapping) are in any way more significant (such as being more highly cited) than papers that are indexed in few databases. This paper uses a dataset from fuzzy set theory to compare low-overlap papers with high-overlap ones, and finds that more highly overlapping papers are in fact more highly cited.
    Type
    a
  7. Wilson, C.S.; Tenopir, C.: Local citation analysis, publishing and reading patterns : using multiple methods to evaluate faculty use of an academic library's research collection (2008) 0.00
    Type
    a