Search (12 results, page 1 of 1)

  • author_ss:"Wilson, C.S."
  1. Bhavnani, S.K.; Wilson, C.S.: Information scattering (2009) 0.01
    0.010803735 = product of:
      0.027009336 = sum of:
        0.009535614 = weight(_text_:a in 3816) [ClassicSimilarity], result of:
          0.009535614 = score(doc=3816,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 3816, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3816)
        0.017473722 = product of:
          0.034947444 = sum of:
            0.034947444 = weight(_text_:information in 3816) [ClassicSimilarity], result of:
              0.034947444 = score(doc=3816,freq=20.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.42933714 = fieldWeight in 3816, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3816)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Information scattering is an often observed phenomenon related to information collections where there are a few sources that have many items of relevant information about a topic, while most sources have only a few. This entry discusses the original discovery of the phenomenon, the types of information scattering observed across many different information collections, methods that have been used to analyze the phenomenon, explanations for why and how information scattering occurs, and how these results have informed the design of systems and search strategies. The entry concludes with future challenges related to building computational models to more precisely describe the process of information scatter, and algorithms which help users to gather highly scattered information.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
    Type
    a
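
    The relevance explanation above (and those in the entries that follow) uses Lucene's classic TF-IDF similarity: each weight(...) clause scores queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = sqrt(freq) x idf x fieldNorm, and the clause sums are scaled by the coord factors. The sketch below reproduces the score of result 1 from the figures shown; the helper name is illustrative, not a Lucene API.

    import math

    def classic_tfidf_clause(freq, idf, query_norm, field_norm):
        # One weight(...) clause: queryWeight * fieldWeight, where
        # queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm.
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.046368346   # queryNorm shared by all clauses above
    FIELD_NORM = 0.0546875     # fieldNorm(doc=3816)

    # weight(_text_:a in 3816): freq=8, idf=1.153047
    score_a = classic_tfidf_clause(8.0, 1.153047, QUERY_NORM, FIELD_NORM)
    # weight(_text_:information in 3816): freq=20, idf=1.7554779, scaled by coord(1/2)
    score_info = classic_tfidf_clause(20.0, 1.7554779, QUERY_NORM, FIELD_NORM) * 0.5

    total = (score_a + score_info) * 0.4   # top-level coord(2/5)
    print(round(total, 9))                 # ~0.010803735, matching result 1
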
  2. D'Ambra, J.; Wilson, C.S.: Use of the World Wide Web for international travel : integrating the construct of uncertainty in information seeking and the Task-Technology Fit (TTF) Model (2004) 0.01
    0.008038533 = product of:
      0.020096332 = sum of:
        0.0076151006 = weight(_text_:a in 1135) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=1135,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 1135, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1135)
        0.01248123 = product of:
          0.02496246 = sum of:
            0.02496246 = weight(_text_:information in 1135) [ClassicSimilarity], result of:
              0.02496246 = score(doc=1135,freq=20.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30666938 = fieldWeight in 1135, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1135)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this study, we attempt to evaluate the performance of the World Wide Web as an information resource in the domain of international travel. The theoretical framework underpinning our approach recognizes the contribution of models of information seeking behavior and of information systems in explaining World Wide Web usage as an information resource. Specifically, a model integrating the construct of uncertainty in information seeking and the task-technology fit model is presented. To test the integrated model, 217 travelers participated in a questionnaire-based empirical study. Our results confirm that richer (or enhanced) models are required to evaluate the broad context of World Wide Web (the Web) usage as an information resource. Use of the Web for travel tasks, for uncertainty reduction, as an information resource, and for mediation all have a significant impact on users' perception of performance, explaining 46% of the variance. Additionally, our study contributes to the testing and validation of metrics for use of the Web as an information resource in a specific domain.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.8, S.731-742
    Type
    a
  3. Hood, W.W.; Wilson, C.S.: Solving problems in library and information science using Fuzzy set theory (2002) 0.01
    0.007891519 = product of:
      0.019728797 = sum of:
        0.009138121 = weight(_text_:a in 814) [ClassicSimilarity], result of:
          0.009138121 = score(doc=814,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 814, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=814)
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 814) [ClassicSimilarity], result of:
              0.02118135 = score(doc=814,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 814, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=814)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Various mathematical tools and theories have found application in Library and Information Science (LIS). One of these is Fuzzy Set Theory (FST). FST is a generalization of classical Set Theory, designed to better model situations where membership of a set is not discrete but is "fuzzy." The theory dates from 1965, when Lotfi Zadeh published his seminal paper on the topic. As well as mathematical developments and extensions of the theory itself, there have been many applications of FST to such diverse areas as medical diagnoses and washing machines. The theory has also found application in a number of aspects of LIS. Information Retrieval (IR) is one area where FST can prove useful; this paper reviews IR applications of FST. Another major area of Information Science in which FST has found application is Informetrics; these studies are also reviewed. A few examples of the use of this theory in non-LIS domains are also examined.
    Footnote
    Article in a special issue "Current theory in library and information science"
    Type
    a
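
    The entry above surveys applications of Fuzzy Set Theory to information retrieval and informetrics. As a minimal illustration of the core idea (graded membership in [0, 1] combined with Zadeh's standard min/max operators), the sketch below is a toy example; the documents and membership degrees are invented.

    # Graded (fuzzy) membership: each document belongs to a set such as
    # "relevant to fuzzy retrieval" to a degree in [0, 1], not just 0 or 1.
    relevant_to_fuzzy = {"doc1": 0.9, "doc2": 0.4, "doc3": 0.0}
    relevant_to_ir    = {"doc1": 0.7, "doc2": 0.8, "doc3": 0.3}

    # Standard Zadeh operators: AND = min, OR = max, NOT = 1 - membership.
    def fuzzy_and(a, b): return {d: min(a[d], b[d]) for d in a}
    def fuzzy_or(a, b):  return {d: max(a[d], b[d]) for d in a}

    print(fuzzy_and(relevant_to_fuzzy, relevant_to_ir))  # {'doc1': 0.7, 'doc2': 0.4, 'doc3': 0.0}
    print(fuzzy_or(relevant_to_fuzzy, relevant_to_ir))   # {'doc1': 0.9, 'doc2': 0.8, 'doc3': 0.3}
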
  4. Hood, W.; Wilson, C.S.: Indexing terms in the LISA database on CD-ROM (1994) 0.01
    0.005898641 = product of:
      0.014746603 = sum of:
        0.0100103095 = weight(_text_:a in 7293) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=7293,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 7293, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=7293)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 7293) [ClassicSimilarity], result of:
              0.009472587 = score(doc=7293,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 7293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7293)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper summarizes the findings of a recent study on the indexing practices used in the LISA database. The indexing terms (DE), the date each record was added to the file (DA), the accession number (AN), and the classification code (CC) of each record were extracted from the complete CD-ROM database. Adjustments to standardize the DE terms were made, and the adjusted data set was analyzed for the average number of headings per record per DA year, the rank-frequency and rank-size distributions of DE classes, and the frequency distribution of the number of DEs per record per DA. The results show that a large number of headings are used once or twice over the whole database. The years in which DE terms first appeared were analyzed. A comparison of the use of a particular classification code with the DE terms in each record was also undertaken. There was a strong but not total match between the CCs and DEs used. Some attention is given to the chain indexing procedure used by LISA to account for the pattern of DE usage. The concluding section looks at scope for further research on LISA and other databases.
    Source
    Information processing and management. 30(1994) no.3, S.327-342
    Type
    a
  5. Hood, W.W.; Wilson, C.S.: Overlap in bibliographic databases (2003) 0.01
    0.005182888 = product of:
      0.012957219 = sum of:
        0.009010308 = weight(_text_:a in 1868) [ClassicSimilarity], result of:
          0.009010308 = score(doc=1868,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 1868, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1868)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 1868) [ClassicSimilarity], result of:
              0.007893822 = score(doc=1868,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 1868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1868)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Bibliographic databases contain surrogates to a particular subset of the complete set of literature; some databases are very narrow in their scope, while others are multidisciplinary. These databases overlap in their coverage of the literature to a greater or lesser extent. The topic of Fuzzy Set Theory is examined to determine the overlap of coverage in the databases that index this topic. It was found that about 63% of records in the data set are unique to only one database, and the remaining 37% are duplicated in from two to 12 different databases. The overlap distribution is found to conform to a Lotka-type plot. The records with maximum overlap are identified; however, further work is needed to determine the significance of the high level of overlap in these records. The unique records are plotted using a Bradford-type form of data presentation and are found to conform (visually) to a hyperbolic distribution. The extent and causes of intra-database duplication (records duplicated within the same database) are also examined. Finally, the overlap in the top databases in the dataset was examined, and a high correlation was found between overlapping records and overlapping DIALOG OneSearch categories.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.12, S.1091-1103
    Type
    a
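
    The study above reports that about 63% of records on Fuzzy Set Theory are unique to one database, with the remainder duplicated in two to 12 databases. A minimal sketch of that overlap count is given below, assuming records have already been matched across databases; the record-to-database mapping shown is invented for illustration.

    from collections import Counter

    # Hypothetical matched records: record id -> set of databases indexing it.
    records = {
        "r1": {"INSPEC"},
        "r2": {"INSPEC", "LISA"},
        "r3": {"LISA"},
        "r4": {"INSPEC", "LISA", "ERIC"},
        "r5": {"ERIC"},
    }

    # Overlap distribution: how many records appear in exactly k databases.
    overlap = Counter(len(dbs) for dbs in records.values())
    unique_share = overlap[1] / len(records)

    print(dict(overlap))           # e.g. {1: 3, 2: 1, 3: 1}
    print(round(unique_share, 2))  # share of records unique to one database (0.63 in the study)
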
  6. White, H.D.; Boell, S.K.; Yu, H.; Davis, M.; Wilson, C.S.; Cole, F.T.H.: Libcitations : a measure for comparative assessment of book publications in the humanities and social sciences (2009) 0.01
    0.005182888 = product of:
      0.012957219 = sum of:
        0.009010308 = weight(_text_:a in 2846) [ClassicSimilarity], result of:
          0.009010308 = score(doc=2846,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 2846, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2846)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 2846) [ClassicSimilarity], result of:
              0.007893822 = score(doc=2846,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 2846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2846)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Bibliometric measures for evaluating research units in the book-oriented humanities and social sciences are underdeveloped relative to those available for journal-oriented science and technology. We therefore present a new measure designed for book-oriented fields: the libcitation count. This is a count of the libraries holding a given book, as reported in a national or international union catalog. As librarians decide what to acquire for the audiences they serve, they jointly constitute an instrument for gauging the cultural impact of books. Their decisions are informed by knowledge not only of audiences but also of the book world (e.g., the reputations of authors and the prestige of publishers). From libcitation counts, measures can be derived for comparing research units. Here, we imagine a match-up between the departments of history, philosophy, and political science at the University of New South Wales and the University of Sydney in Australia. We chose the 12 books from each department that had the highest libcitation counts in the Libraries Australia union catalog during 2000 to 2006. We present each book's raw libcitation count, its rank within its Library of Congress (LC) class, and its LC-class normalized libcitation score. The latter is patterned on the item-oriented field normalized citation score used in evaluative bibliometrics. Summary statistics based on these measures allow the departments to be compared for cultural impact. Our work has implications for programs such as Excellence in Research for Australia and the Research Assessment Exercise in the United Kingdom. It also has implications for data mining in OCLC's WorldCat.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.6, S.1083-1096
    Type
    a
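
    The entry above derives an LC-class normalized libcitation score patterned on the item-oriented field-normalized citation score. One plausible reading of that normalization (an assumption, not the paper's stated formula) is a book's raw libcitation count divided by the mean count of books in its Library of Congress class; the holdings figures below are invented.

    from statistics import mean

    # Hypothetical union-catalog holdings: (book, LC class, libcitation count).
    books = [
        ("Book A", "DU", 120), ("Book B", "DU", 80), ("Book C", "DU", 40),
        ("Book D", "JA", 300), ("Book E", "JA", 100),
    ]

    # Mean libcitation count per LC class.
    class_means = {lc: mean(n for _, c, n in books if c == lc)
                   for lc in {c for _, c, _ in books}}

    # Normalized score: raw count relative to the mean of its LC class.
    for title, lc, n in books:
        print(title, lc, n, round(n / class_means[lc], 2))
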
  7. D'Ambra, J.; Wilson, C.S.; Akter, S.: Application of the task-technology fit model to structure and evaluate the adoption of E-books by Academics (2013) 0.01
    0.005093954 = product of:
      0.012734884 = sum of:
        0.005898632 = weight(_text_:a in 529) [ClassicSimilarity], result of:
          0.005898632 = score(doc=529,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 529, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=529)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 529) [ClassicSimilarity], result of:
              0.013672504 = score(doc=529,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 529, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=529)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Increasingly, e-books are becoming alternatives to print books in academic libraries, thus providing opportunities to assess how well the use of e-books meets the requirements of academics. This study uses the task-technology fit (TTF) model to explore the interrelationships of e-books, the affordances offered by smart readers, the information needs of academics, and the "fit" of technology to tasks as well as performance. We propose that the adoption of e-books will be dependent on how academics perceive the fit of this new medium to the tasks they undertake as well as what added-value functionality is delivered by the information technology that delivers the content. The study used content analysis and an online survey, administered to the faculty in Medicine, Science and Engineering at the University of New South Wales, to identify the attributes of a TTF construct of e-books in academic settings. Using exploratory factor analysis, preliminary findings confirmed annotation, navigation, and output as the core dimensions of the TTF construct. The results of confirmatory factor analysis using partial least squares path modeling supported the overall TTF model in reflecting significant positive impact of task, technology, and individual characteristics on TTF for e-books in academic settings; it also confirmed significant positive impact of TTF on individuals' performance and use, and impact of using e-books on individual performance. Our research makes two contributions: the development of an e-book TTF construct and the testing of that construct in a model validating the efficacy of the TTF framework in measuring perceived fit of e-books to academic tasks.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, S.48-64
    Type
    a
  8. Hood, W.W.; Wilson, C.S.: The relationship of records in multiple databases to their usage or citedness (2005) 0.00
    0.0049073496 = product of:
      0.012268374 = sum of:
        0.0067426977 = weight(_text_:a in 3680) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=3680,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 3680, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3680)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 3680) [ClassicSimilarity], result of:
              0.011051352 = score(doc=3680,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 3680, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3680)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Papers in journals are indexed in bibliographic databases in varying degrees of overlap. The question has been raised as to whether papers that appear in multiple databases (highly overlapping) are in any way more significant (such as being more highly cited) than papers that are indexed in few databases. This paper uses a dataset from fuzzy set theory to compare low overlap papers with high overlap ones, and finds that more highly overlapping papers are in fact more highly cited.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.9, S.1004-1007
    Type
    a
  9. Hood, W.W.; Wilson, C.S.: The scatter of documents over databases in different subject domains : how many databases are needed? (2001) 0.00
    0.004303226 = product of:
      0.010758064 = sum of:
        0.0068111527 = weight(_text_:a in 6936) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=6936,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 6936, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6936)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 6936) [ClassicSimilarity], result of:
              0.007893822 = score(doc=6936,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 6936, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6936)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The distribution of bibliographic records in on-line bibliographic databases is examined using 14 different search topics. These topics were searched using the DIALOG database host, and using as many suitable databases as possible. The presence of duplicate records in the searches was taken into consideration in the analysis, and the problem with lexical ambiguity in at least one search topic is discussed. The study answers questions such as how many databases are needed in a multifile search for particular topics, and what coverage will be achieved using a certain number of databases. The distribution of the percentages of records retrieved over a number of databases for 13 of the 14 search topics roughly fell into three groups: (1) high concentration of records in one database with about 80% coverage in five to eight databases; (2) moderate concentration in one database with about 80% coverage in seven to 10 databases; and (3) low concentration in one database with about 80% coverage in 16 to 19 databases. The study conforms with earlier results, but shows that the number of databases needed for searches with varying complexities of search strategies is much more topic-dependent than previous studies would indicate.
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.14, S.1242-1254
    Type
    a
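
    The study above asks how many databases are needed to reach a given coverage of a topic's literature. The sketch below is one way to pose that question over a deduplicated result set, greedily adding the database that contributes the most uncovered records until a target share (e.g. 80%) is reached; the data and the greedy strategy are illustrative assumptions, not the paper's method.

    # Hypothetical deduplicated result set: record id -> databases indexing it.
    records = {
        1: {"A"}, 2: {"A"}, 3: {"A", "B"}, 4: {"B"}, 5: {"B", "C"},
        6: {"C"}, 7: {"C"}, 8: {"D"}, 9: {"A", "D"}, 10: {"E"},
    }

    def databases_needed(records, target=0.8):
        # Greedily pick the database adding the most uncovered records until
        # the target share of records is covered; return the pick order.
        covered, picks = set(), []
        remaining = {d for dbs in records.values() for d in dbs}
        while len(covered) / len(records) < target and remaining:
            best = max(remaining, key=lambda d: sum(1 for r, dbs in records.items()
                                                    if d in dbs and r not in covered))
            covered |= {r for r, dbs in records.items() if best in dbs}
            remaining.remove(best)
            picks.append((best, round(len(covered) / len(records), 2)))
        return picks

    print(databases_needed(records))  # e.g. [('A', 0.4), ('C', 0.7), ('B', 0.8)]
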
  10. Fattahi, R.; Wilson, C.S.; Cole, F.: An alternative approach to natural language query expansion in search engines : text analysis of non-topical terms in Web documents (2008) 0.00
    0.0035052493 = product of:
      0.008763123 = sum of:
        0.0048162127 = weight(_text_:a in 2106) [ClassicSimilarity], result of:
          0.0048162127 = score(doc=2106,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.090081796 = fieldWeight in 2106, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2106)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 2106) [ClassicSimilarity], result of:
              0.007893822 = score(doc=2106,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 2106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2106)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents a new approach to query expansion in search engines through the use of general non-topical terms (NTTs) and domain-specific semi-topical terms (STTs). NTTs and STTs can be used in conjunction with topical terms (TTs) to improve precision in retrieval results. In Phase I, 20 topical queries in two domains (Health and the Social Sciences) were carried out in Google, and 800 pages from the query results were textually analysed. Of the 1442 NTTs and STTs identified, 15% were shared between the two domains; 62% were NTTs and 38% were STTs; and approximately 64% occurred before, while 36% occurred after, their respective TTs. Findings of Phase II showed that query expansion through NTTs (or STTs), particularly in the 'exact title' and URL search options, resulted in more precise and manageable results. Statistically significant differences were found between Health and the Social Sciences vis-à-vis keyword and 'exact phrase' search results; however, there were no significant differences in exact title and URL search results. The ratio of exact phrase, exact title, and URL search result frequencies to keyword search result frequencies also showed statistically significant differences between the two domains. Our findings suggest that web searching could be greatly enhanced by combining NTTs (and STTs) with TTs in an initial query. Additionally, search results would improve if queries were restricted to the exact title or URL search options. Finally, we suggest the development and implementation of knowledge-based lists of NTTs (and STTs) by both general and specialized search engines to aid query expansion.
    Source
    Information processing and management. 44(2008) no.4, S.1503-1516
    Type
    a
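
    The entry above proposes pairing topical terms (TTs) with non-topical or semi-topical terms (NTTs/STTs), and restricting queries to exact-title or URL matches. The sketch below builds such query strings using Google-style intitle:/inurl: operators as one possible realization; the operator mapping and the example terms are assumptions for illustration, not the paper's term lists or syntax.

    def expand_query(topical, modifiers, scope=None):
        # Combine a topical term (TT) with NTT/STT modifiers, optionally
        # restricting the match to the page title or the URL.
        queries = []
        for m in modifiers:
            phrase = f'"{m} {topical}"'                            # NTT/STT placed before the TT
            if scope == "title":
                phrase = f"intitle:{phrase}"                       # exact-title style restriction
            elif scope == "url":
                phrase = f"inurl:{m}-{topical.replace(' ', '-')}"  # URL-style restriction
            queries.append(phrase)
        return queries

    # Illustrative NTTs/STTs for a health topic.
    print(expand_query("diabetes treatment", ["introduction to", "guidelines for"]))
    print(expand_query("diabetes treatment", ["introduction"], scope="title"))
    print(expand_query("diabetes treatment", ["guidelines"], scope="url"))
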
  11. Wilson, C.S.; Tenopir, C.: Local citation analysis, publishing and reading patterns : using multiple methods to evaluate faculty use of an academic library's research collection (2008) 0.00
    0.002940995 = product of:
      0.007352487 = sum of:
        0.0034055763 = weight(_text_:a in 1960) [ClassicSimilarity], result of:
          0.0034055763 = score(doc=1960,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.06369744 = fieldWeight in 1960, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1960)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 1960) [ClassicSimilarity], result of:
              0.007893822 = score(doc=1960,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 1960, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1960)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1393-1408
    Type
    a
  12. Wilson, C.S.: Defining subject collections for informetric analyses : the effect of varying the subject aboutness level (1998) 0.00
    0.0023357389 = product of:
      0.011678694 = sum of:
        0.011678694 = weight(_text_:a in 1035) [ClassicSimilarity], result of:
          0.011678694 = score(doc=1035,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.21843673 = fieldWeight in 1035, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1035)
      0.2 = coord(1/5)
    
    Abstract
    Examines how several commonly measured properties of subject literatures vary as an important factor in the compilation of subject collections (the amount which a document 'says' about a subject) is varied. This document property has been expressed in formal terms and given a simple measure for the one subject examined, the research topic of Bradford's law of scattering. It is found that lowering the level of subject aboutness required for admission to a collection produces a large increase in the size of the collection obtained, and an appreciable change in some size-related properties.
    Type
    a