Search (77 results, page 1 of 4)

  • × theme_ss:"Informetrie"
  • × year_i:[2000 TO 2010}
  1. Hood, W.W.; Wilson, C.S.: The scatter of documents over databases in different subject domains : how many databases are needed? (2001) 0.05
    0.052648105 = product of:
      0.13162026 = sum of:
        0.07827286 = weight(_text_:line in 6936) [ClassicSimilarity], result of:
          0.07827286 = score(doc=6936,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.30979243 = fieldWeight in 6936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6936)
        0.053347398 = weight(_text_:bibliographic in 6936) [ClassicSimilarity], result of:
          0.053347398 = score(doc=6936,freq=4.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30414405 = fieldWeight in 6936, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6936)
      0.4 = coord(2/5)
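    The indented breakdown above is Lucene's ClassicSimilarity explain output for this hit: each matching term contributes a tf x idf weight scaled by the query and field norms, and the sum over matching terms is multiplied by the coordination factor. A minimal Python sketch, purely illustrative and not part of the catalogue record, that reproduces the first term weight and the final score from the factors listed:

    ```python
    import math

    # Factors copied from the explain tree above for the term "line" in doc 6936.
    freq = 2.0                 # termFreq: "line" occurs twice in the indexed field
    idf = 5.6078424            # idf = 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 441)
    query_norm = 0.045055166
    field_norm = 0.0390625

    tf = math.sqrt(freq)                       # 1.4142135
    query_weight = idf * query_norm            # 0.25266227
    field_weight = tf * idf * field_norm       # 0.30979243
    line_weight = query_weight * field_weight  # 0.07827286

    # The matching term weights are summed and scaled by coord(2/5) = 0.4,
    # since only 2 of the 5 query terms occur in this record.
    score = (line_weight + 0.053347398) * 0.4  # 0.052648105, the figure shown for this hit
    print(line_weight, score)
    ```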
    
    Abstract
    The distribution of bibliographic records in on-line bibliographic databases is examined using 14 different search topics. These topics were searched using the DIALOG database host, and using as many suitable databases as possible. The presence of duplicate records in the searches was taken into consideration in the analysis, and the problem with lexical ambiguity in at least one search topic is discussed. The study answers questions such as how many databases are needed in a multifile search for particular topics, and what coverage will be achieved using a certain number of databases. The distribution of the percentages of records retrieved over a number of databases for 13 of the 14 search topics roughly fell into three groups: (1) high concentration of records in one database with about 80% coverage in five to eight databases; (2) moderate concentration in one database with about 80% coverage in seven to 10 databases; and (3) low concentration in one database with about 80% coverage in 16 to 19 databases. The study does conform with earlier results, but shows that the number of databases needed for searches with varying complexities of search strategies is much more topic-dependent than previous studies would indicate.
  2. Marion, L.S.; McCain, K.W.: Contrasting views of software engineering journals : author cocitation choices and indexer vocabulary assignments (2001) 0.05
    0.04639807 = product of:
      0.11599517 = sum of:
        0.07827286 = weight(_text_:line in 5767) [ClassicSimilarity], result of:
          0.07827286 = score(doc=5767,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.30979243 = fieldWeight in 5767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5767)
        0.03772231 = weight(_text_:bibliographic in 5767) [ClassicSimilarity], result of:
          0.03772231 = score(doc=5767,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 5767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5767)
      0.4 = coord(2/5)
    
    Abstract
    We explore the intellectual subject structure and research themes in software engineering through the identification and analysis of a core journal literature. We examine this literature via two expert perspectives: that of the author, who identified significant work by citing it (journal cocitation analysis), and that of the professional indexer, who tags published work with subject terms to facilitate retrieval from a bibliographic database (subject profile analysis). The data sources are SCISEARCH (the on-line version of Science Citation Index) and INSPEC (a database covering software engineering, computer science, and information systems). We use data visualization tools (cluster analysis, multidimensional scaling, and PFNets) to show the "intellectual maps" of software engineering. Cocitation and subject profile analyses demonstrate that software engineering is a distinct interdisciplinary field, valuing practical and applied aspects, and spanning a subject continuum from "programming-in-the-small" to "programming-in-the-large." This continuum mirrors the software development life cycle by taking the operating system or major application from initial programming through project management, implementation, and maintenance. Object orientation is an integral but distinct subject area in software engineering. Key differences are the importance of management and programming: (1) cocitation analysis emphasizes project management and systems development; (2) programming techniques/languages are more influential in subject profiles; (3) cocitation profiles place object-oriented journals separately and centrally, while the subject profile analysis locates these journals with the programming/languages group.
  3. Egghe, L.: A rationale for the Hirsch-index rank-order distribution and a comparison with the impact factor rank-order distribution (2009) 0.03
    0.030994475 = product of:
      0.15497237 = sum of:
        0.15497237 = weight(_text_:line in 3124) [ClassicSimilarity], result of:
          0.15497237 = score(doc=3124,freq=4.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.6133578 = fieldWeight in 3124, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3124)
      0.2 = coord(1/5)
    
    Abstract
    We present a rationale for the Hirsch-index rank-order distribution and prove that it is a power law (hence a straight line in the log-log scale). This is confirmed by experimental data of Pyykkö and by data produced in this article on 206 mathematics journals. This distribution is of a completely different nature than the impact factor (IF) rank-order distribution which (as proved in a previous article) is S-shaped. This is also confirmed by our example. Only in the log-log scale of the h-index distribution do we notice a concave deviation of the straight line for higher ranks. This phenomenon is discussed.
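    The observation that a power law appears as a straight line in the log-log scale follows from taking logarithms: if f(r) = C * r^(-a), then log f(r) = log C - a * log r. A minimal sketch that fits such a line to synthetic rank-order data (numpy and the generated values are assumptions for illustration, not the article's data):

    ```python
    import numpy as np

    # Synthetic rank-order values following f(r) = C * r**(-a), with mild noise.
    rng = np.random.default_rng(0)
    ranks = np.arange(1, 207)                          # e.g. 206 journals, as in the study
    values = 3.5 * ranks ** -0.9 * rng.lognormal(0, 0.05, ranks.size)

    # In log-log coordinates the relation is linear: log f = log C - a * log r,
    # so an ordinary straight-line fit recovers the exponent.
    slope, intercept = np.polyfit(np.log(ranks), np.log(values), 1)
    print(f"estimated exponent a = {-slope:.2f}, C = {np.exp(intercept):.2f}")
    ```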
  4. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.02
    0.024256555 = product of:
      0.12128277 = sum of:
        0.12128277 = sum of:
          0.084656656 = weight(_text_:searching in 2742) [ClassicSimilarity], result of:
            0.084656656 = score(doc=2742,freq=6.0), product of:
              0.18226127 = queryWeight, product of:
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.045055166 = queryNorm
              0.46447968 = fieldWeight in 2742, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.046875 = fieldNorm(doc=2742)
          0.03662612 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
            0.03662612 = score(doc=2742,freq=2.0), product of:
              0.15777552 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045055166 = queryNorm
              0.23214069 = fieldWeight in 2742, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2742)
      0.2 = coord(1/5)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings for improving the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  5. Bensman, S.J.; Leydesdorff, L.: Definition and identification of journals as bibliographic and subject entities : librarianship versus ISI Journal Citation Reports methods and their effect on citation measures (2009) 0.02
    0.022176098 = product of:
      0.11088049 = sum of:
        0.11088049 = weight(_text_:bibliographic in 2840) [ClassicSimilarity], result of:
          0.11088049 = score(doc=2840,freq=12.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.63215154 = fieldWeight in 2840, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=2840)
      0.2 = coord(1/5)
    
    Abstract
    This paper explores the ISI Journal Citation Reports (JCR) bibliographic and subject structures through Library of Congress (LC) and American research libraries cataloging and classification methodology. The 2006 Science Citation Index JCR Behavioral Sciences subject category journals are used as an example. From the library perspective, the main fault of the JCR bibliographic structure is that the JCR mistakenly identifies journal title segments as journal bibliographic entities, seriously affecting journal rankings by total cites and the impact factor. In respect to JCR subject structure, the title segment, which constitutes the JCR bibliographic basis, is posited as the best bibliographic entity for the citation measurement of journal subject relationships. Through factor analysis and other methods, the JCR subject categorization of journals is tested against their LC subject headings and classification. The finding is that JCR and library journal subject analyses corroborate, clarify, and correct each other.
  6. Morris, S.A.; Yen, G.; Wu, Z.; Asnake, B.: Time line visualization of research fronts (2003) 0.02
    0.021916403 = product of:
      0.10958201 = sum of:
        0.10958201 = weight(_text_:line in 1452) [ClassicSimilarity], result of:
          0.10958201 = score(doc=1452,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.4337094 = fieldWeight in 1452, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1452)
      0.2 = coord(1/5)
    
  7. Shibata, N.; Kajikawa, Y.; Takeda, Y.; Matsushima, K.: Comparative study on methods of detecting research fronts using different types of citation (2009) 0.02
    0.021193277 = product of:
      0.05298319 = sum of:
        0.03772231 = weight(_text_:bibliographic in 2743) [ClassicSimilarity], result of:
          0.03772231 = score(doc=2743,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 2743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2743)
        0.015260884 = product of:
          0.030521767 = sum of:
            0.030521767 = weight(_text_:22 in 2743) [ClassicSimilarity], result of:
              0.030521767 = score(doc=2743,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.19345059 = fieldWeight in 2743, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2743)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this article, we performed a comparative study to investigate the performance of methods for detecting emerging research fronts. Three types of citation network (co-citation, bibliographic coupling, and direct citation) were tested in three research domains: gallium nitride (GaN), complex network (CNW), and carbon nanotube (CNT). Three types of citation network were constructed for each research domain, and the papers in those domains were divided into clusters to detect the research front. We evaluated the performance of each type of citation network in detecting a research front by using the following measures of papers in the cluster: visibility, measured by normalized cluster size; speed, measured by average publication year; and topological relevance, measured by density. Direct citation, which could detect large and young emerging clusters earlier, shows the best performance in detecting a research front, and co-citation shows the worst. Additionally, in direct citation networks, the clustering coefficient was the largest, which suggests that the content similarity of papers connected by direct citations is the greatest and that direct citation networks have the least risk of missing emerging research domains because core papers are included in the largest component.
    Date
    22. 3.2009 17:52:50
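    The three network types compared in this study can all be derived from the same raw citation lists: a direct citation links a citing paper to a cited one, bibliographic coupling links two papers that cite a common reference, and co-citation links two papers cited together by the same citing paper. A minimal sketch with hypothetical paper identifiers (plain Python, purely illustrative):

    ```python
    from itertools import combinations
    from collections import Counter

    # Hypothetical raw data: citing paper -> set of cited papers.
    citations = {
        "P1": {"A", "B", "C"},
        "P2": {"B", "C"},
        "P3": {"A", "D"},
    }

    # Direct citation: an edge from each citing paper to each paper it cites.
    direct = {(src, dst) for src, refs in citations.items() for dst in refs}

    # Bibliographic coupling: two citing papers share at least one reference;
    # the edge weight is the number of shared references.
    coupling = Counter()
    for p, q in combinations(citations, 2):
        shared = len(citations[p] & citations[q])
        if shared:
            coupling[(p, q)] = shared

    # Co-citation: two cited papers appear in the same reference list;
    # the edge weight is the number of papers citing both.
    cocitation = Counter()
    for refs in citations.values():
        for a, b in combinations(sorted(refs), 2):
            cocitation[(a, b)] += 1

    print(direct, coupling, cocitation, sep="\n")
    ```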
  8. Niemi, T.; Hirvonen, L.; Järvelin, K.: Multidimensional data model and query language for informetrics (2003) 0.02
    0.018785488 = product of:
      0.093927436 = sum of:
        0.093927436 = weight(_text_:line in 1753) [ClassicSimilarity], result of:
          0.093927436 = score(doc=1753,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.37175092 = fieldWeight in 1753, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.046875 = fieldNorm(doc=1753)
      0.2 = coord(1/5)
    
    Abstract
    Multidimensional data analysis or On-line analytical processing (OLAP) offers a single subject-oriented source for analyzing summary data based on various dimensions. We demonstrate that the OLAP approach gives a promising starting point for advanced analysis and comparison among summary data in informetrics applications. At the moment there is no single precise, commonly accepted logical/conceptual model for multidimensional analysis. This is because the requirements of applications vary considerably. We develop a conceptual/logical multidimensional model for supporting the complex and unpredictable needs of informetrics. Summary data are considered with respect to some dimensions. By changing dimensions the user may construct other views on the same summary data. We develop a multidimensional query language whose basic idea is to support the definition of views in a way that is natural and intuitive for lay users in the informetrics area. We show that this view-oriented query language has great expressive power and that its degree of declarativity is greater than in contemporary operation-oriented or SQL (Structured Query Language)-like OLAP query languages.
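    The OLAP idea described here, the same summary data viewed along interchangeable dimensions, can be illustrated with an ordinary cross-tabulation. A minimal sketch using pandas; the library choice, column names, and figures are assumptions for illustration, not the authors' query language:

    ```python
    import pandas as pd

    # Hypothetical publication records: one row per paper.
    papers = pd.DataFrame({
        "year":    [2001, 2001, 2002, 2002, 2002],
        "journal": ["JASIST", "Scientometrics", "JASIST", "JASIST", "Scientometrics"],
        "country": ["US", "BE", "FI", "US", "BE"],
    })

    # One view of the summary data: paper counts by (year, journal).
    by_journal = pd.crosstab(papers["year"], papers["journal"])

    # Changing a dimension yields another view of the same data: counts by (year, country).
    by_country = pd.crosstab(papers["year"], papers["country"])

    print(by_journal, by_country, sep="\n\n")
    ```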
  9. Ahlgren, P.; Jarneving, B.; Rousseau, R.: Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient (2003) 0.02
    0.016954621 = product of:
      0.042386554 = sum of:
        0.030177847 = weight(_text_:bibliographic in 5171) [ClassicSimilarity], result of:
          0.030177847 = score(doc=5171,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.17204987 = fieldWeight in 5171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=5171)
        0.012208707 = product of:
          0.024417413 = sum of:
            0.024417413 = weight(_text_:22 in 5171) [ClassicSimilarity], result of:
              0.024417413 = score(doc=5171,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.15476047 = fieldWeight in 5171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5171)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Ahlgren, Jarneving, and Rousseau review accepted procedures for author co-citation analysis, first pointing out that since in the raw data matrix the row and column values are identical, i.e. the co-citation count of two authors, there is no clear choice for diagonal values. They suggest the number of times an author has been co-cited with himself, excluding self-citation, rather than the common treatment as zeros or as missing values. When the matrix is converted to a similarity matrix, the normal procedure is to create a matrix of Pearson's r coefficients between data vectors. Ranking by r, by co-citation frequency, and by intuition can easily yield three different orders. It would seem necessary that adding zeros to the matrix should not affect the value or the relative order of similarity measures, but it is shown that this is not the case with Pearson's r. Using 913 bibliographic descriptions from the Web of Science of articles from JASIS and Scientometrics, authors' names were extracted and edited, and 12 information retrieval authors and 12 bibliometric authors, each group from the top 100 most cited, were selected. Co-citation and r-value (diagonal elements treated as missing) matrices were constructed, and then reconstructed in expanded form. Adding zeros can both change the r value and the ordering of the authors based upon that value. A chi-squared distance measure would not violate these requirements, nor would the cosine coefficient. It is also argued that co-citation data are ordinal data, since there is no assurance of an absolute zero number of co-citations, and thus Pearson is not appropriate. The number of ties in co-citation data makes the use of the Spearman rank-order coefficient problematic.
    Date
    9. 7.2006 10:22:35
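    The central objection above, that padding co-citation profiles with additional zeros changes Pearson's r while a measure such as the cosine is unaffected, is easy to check numerically. A minimal sketch with made-up author profiles (numpy assumed; the figures are not from the article):

    ```python
    import numpy as np

    def pearson(x, y):
        return np.corrcoef(x, y)[0, 1]

    def cosine(x, y):
        return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

    # Two made-up co-citation profiles for a pair of authors.
    x = np.array([4.0, 2.0, 7.0, 1.0])
    y = np.array([3.0, 5.0, 6.0, 2.0])

    # Append zero counts to both profiles (e.g. further authors co-cited with
    # neither): the cosine is unchanged, Pearson's r is not.
    xz = np.concatenate([x, np.zeros(8)])
    yz = np.concatenate([y, np.zeros(8)])

    print(pearson(x, y), pearson(xz, yz))   # the two r values differ
    print(cosine(x, y), cosine(xz, yz))     # the two cosine values are identical
    ```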
  10. Cronin, B.: Semiotics and evaluative bibliometrics (2000) 0.02
    0.015088923 = product of:
      0.07544462 = sum of:
        0.07544462 = weight(_text_:bibliographic in 4542) [ClassicSimilarity], result of:
          0.07544462 = score(doc=4542,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.43012467 = fieldWeight in 4542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.078125 = fieldNorm(doc=4542)
      0.2 = coord(1/5)
    
    Abstract
    The reciprocal relationship between bibliographic references and citations in the context of the scholarly communication system is examined. Semiotic analysis of referencing behaviours and citation counting reveals the complexity of prevailing sign systems and associated symbolic practices.
  11. Morris, S.A.: Manifestation of emerging specialties in journal literature : a growth model of papers, references, exemplars, bibliographic coupling, cocitation, and clustering coefficient distribution (2005) 0.01
    0.013067392 = product of:
      0.06533696 = sum of:
        0.06533696 = weight(_text_:bibliographic in 4338) [ClassicSimilarity], result of:
          0.06533696 = score(doc=4338,freq=6.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.3724989 = fieldWeight in 4338, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4338)
      0.2 = coord(1/5)
    
    Abstract
    A model is presented of the manifestation of the birth and development of a scientific specialty in a collection of journal papers. The proposed model, Cumulative Advantage by Paper with Exemplars (CAPE), is an adaptation of Price's cumulative advantage model (D. Price, 1976). Two modifications are made: (a) references are cited in groups by paper, and (b) the model accounts for the generation of highly cited exemplar references immediately after the birth of the specialty. This simple growth process mimics many characteristic features of real collections of papers, including the structure of the paper-to-reference matrix, the reference-per-paper distribution, the paper-per-reference distribution, the bibliographic coupling distribution, the cocitation distribution, the bibliographic coupling clustering coefficient distribution, and the temporal distribution of exemplar references. The model yields a great deal of insight into the process that produces the connectedness and clustering of a collection of articles and references. Two examples are presented and successfully modeled: a collection of 131 articles on MEMS RF (microelectromechanical systems radio frequency) switches, and a collection of 901 articles on the subject of complex networks.
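    The cumulative advantage mechanism that CAPE adapts from Price can be sketched in a few lines: each new paper cites a group of references, and the chance that an existing reference is cited again grows with how often it has already been cited. The sketch below is a generic toy version of that process, not the CAPE model itself; the paper count, group size, exemplar seeding, and new-reference rate are arbitrary assumptions:

    ```python
    import random
    from collections import Counter

    random.seed(1)

    refs_per_paper = 5                            # each new paper cites a group of 5 references
    citation_counts = Counter({f"exemplar_{i}": 10 for i in range(3)})  # early, highly cited exemplars
    next_ref_id = 0

    for paper in range(200):                      # simulate 200 papers entering the specialty
        pool = list(citation_counts)
        weights = [citation_counts[r] + 1 for r in pool]   # cumulative advantage: rich get richer
        cited = set()
        for _ in range(refs_per_paper):
            if pool and random.random() < 0.8:    # mostly cite existing references...
                cited.add(random.choices(pool, weights=weights)[0])
            else:                                 # ...sometimes introduce a brand-new reference
                cited.add(f"ref_{next_ref_id}")
                next_ref_id += 1
        for r in cited:
            citation_counts[r] += 1

    print(citation_counts.most_common(5))         # the exemplars typically stay far ahead
    ```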
  12. Bensman, S.J.: Urquhart's and Garfield's laws : the British controversy over their validity (2001) 0.01
    0.012523659 = product of:
      0.06261829 = sum of:
        0.06261829 = weight(_text_:line in 6026) [ClassicSimilarity], result of:
          0.06261829 = score(doc=6026,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.24783395 = fieldWeight in 6026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.03125 = fieldNorm(doc=6026)
      0.2 = coord(1/5)
    
    Abstract
    The British controversy over the validity of Urquhart's and Garfield's Laws during the 1970s constitutes an important episode in the formulation of the probability structure of human knowledge. This controversy took place within the historical context of the convergence of two scientific revolutions-the bibliometric and the biometric-that had been launched in Britain. The preceding decades had witnessed major breakthroughs in understanding the probability distributions underlying the use of human knowledge. Two of the most important of these breakthroughs were the laws posited by Donald J. Urquhart and Eugene Garfield, who played major roles in establishing the institutional bases of the bibliometric revolution. For his part, Urquhart began his realization of S. C. Bradford's concept of a national science library by analyzing the borrowing of journals on interlibrary loan from the Science Museum Library in 1956. He found that 10% of the journals accounted for 80% of the loans and formulated Urquhart's Law, by which the interlibrary use of a journal is a measure of its total use. This law underlay the operations of the National Lending Library for Science and Technology (NLLST), which Urquhart founded. The NLLST became the British Library Lending Division (BLLD) and ultimately the British Library Document Supply Centre (BLDSC). In contrast, Garfield did a study of 1969 journal citations as part of the process of creating the Science Citation Index (SCI), formulating his Law of Concentration, by which the bulk of the information needs in science can be satisfied by a relatively small, multidisciplinary core of journals. This law became the operational principle of the Institute for Scientific Information created by Garfield. A study at the BLLD under Urquhart's successor, Maurice B. Line, found low correlations of NLLST use with SCI citations, and publication of this study started a major controversy, during which both laws were called into question. The study was based on the faulty use of the Spearman rank correlation coefficient, and the controversy over it was instrumental in causing B. C. Brookes to investigate bibliometric laws as probabilistic phenomena and begin to link the bibliometric with the biometric revolution. This paper concludes with a resolution of the controversy by means of a statistical technique that incorporates Brookes' criticism of the Spearman rank-correlation method and demonstrates the mutual supportiveness of the two laws.
  13. Moed, H.F.; Luwel, M.; Nederhof, A.J.: Towards research performance in the humanities (2002) 0.01
    0.012523659 = product of:
      0.06261829 = sum of:
        0.06261829 = weight(_text_:line in 820) [ClassicSimilarity], result of:
          0.06261829 = score(doc=820,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.24783395 = fieldWeight in 820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.03125 = fieldNorm(doc=820)
      0.2 = coord(1/5)
    
    Abstract
    This paper describes a general methodology for developing bibliometric performance indicators. Such a description provides a framework or paradigm for application-oriented research in the field of evaluative quantitative science and technology studies, particularly in the humanities and social sciences. It is based on our study of scholarly output in the field of Law at the four major universities in Flanders, the Dutch speaking part of Belgium. The study illustrates that bibliometrics is much more than conducting citation analyses based on the indexes produced by the Institute for Scientific Information (ISI), since citation data do not play a role in the study. Interaction with scholars in the fields under consideration and openness in the presentation of the quantitative outcomes are the basic features of the methodology. Bibliometrics should be used as an instrument to create a mirror. While not a direct reflection, this study provides a thorough analysis of how scholars in the humanities and social sciences structure their activities and their research output. This structure can be examined empirically from the point of view of its consistency and the degree of consensus among scholars. Relevant issues can be raised that are worth considering in more detail in followup studies, and conclusions from our empirical materials may illuminate such issues. We argue that the principal aim of the development and application of bibliometric indicators is to stimulate a debate among scholars in the field under investigation on the nature of scholarly quality, its principal dimensions, and operationalizations. This aim provides a criterion of "productivity" of the development process. We further contend that librarians are not infrequently requested to provide assistance in collecting data related to research performance assessments, and that the methodology described in the paper aims at offering a general framework for such activities, and can be used by librarians as a line of action whenever they become involved.
  14. Zhao, L.: How librarians used e-resources : an analysis of citations in CCQ (2006) 0.01
    0.012071139 = product of:
      0.060355693 = sum of:
        0.060355693 = weight(_text_:bibliographic in 5766) [ClassicSimilarity], result of:
          0.060355693 = score(doc=5766,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.34409973 = fieldWeight in 5766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=5766)
      0.2 = coord(1/5)
    
    Abstract
    How are library professionals who do research about bibliographic organization using electronic resources (e-resources) in their journal articles? Are they keeping pace with the use of e-resources outside the library world? What are the e-resources most used in their research? This article aims to address these and other questions by analyzing bibliographical references/notes in articles in Cataloging and Classification Quarterly (CCQ) for every other year from 1994 to 2004.
  15. Hood, W.W.; Wilson, C.S.: Overlap in bibliographic databases (2003) 0.01
    0.01066948 = product of:
      0.053347398 = sum of:
        0.053347398 = weight(_text_:bibliographic in 1868) [ClassicSimilarity], result of:
          0.053347398 = score(doc=1868,freq=4.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30414405 = fieldWeight in 1868, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1868)
      0.2 = coord(1/5)
    
    Abstract
    Bibliographic databases contain surrogates for a particular subset of the complete set of literature; some databases are very narrow in their scope, while others are multidisciplinary. These databases overlap in their coverage of the literature to a greater or lesser extent. The topic of Fuzzy Set Theory is examined to determine the overlap of coverage in the databases that index this topic. It was found that about 63% of records in the data set are unique to only one database, and the remaining 37% are duplicated in anywhere from two to 12 different databases. The overlap distribution is found to conform to a Lotka-type plot. The records with maximum overlap are identified; however, further work is needed to determine the significance of the high level of overlap in these records. The unique records are plotted using a Bradford-type form of data presentation and are found to conform (visually) to a hyperbolic distribution. The extent and causes of intra-database duplication (records duplicated in the one database) are also examined. Finally, the overlap in the top databases in the dataset was examined, and a high correlation was found between overlapping records and overlapping DIALOG OneSearch categories.
  16. Walters, W.H.; Wilder, E.I.: Bibliographic index coverage of a multidisciplinary field (2003) 0.01
    0.01066948 = product of:
      0.053347398 = sum of:
        0.053347398 = weight(_text_:bibliographic in 2114) [ClassicSimilarity], result of:
          0.053347398 = score(doc=2114,freq=4.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30414405 = fieldWeight in 2114, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2114)
      0.2 = coord(1/5)
    
    Abstract
    Walters and Wilder describe the literature of later-life migration, a multi-disciplinary topic, and evaluate its bibliographic coverage in seven disciplinary and five multi-disciplinary databases. Multiple database searches and reviews of the references of found items discovered over 500 papers published between January 1990 and December 2000. These were then read to determine if later-life migration was their central focus, and to select those which presented noteworthy findings, innovative approaches, or covered topics unseen elsewhere, and which were also understandable to a broad readership and generally available. One hundred and fifty-five journal articles met these criteria and are the focus of the study. The core journals of sociology, economics, and demography are not major contributors, but three gerontology journals are in the top five. The top two journals have broad coverage, but the others tend to concentrate on one of five themes. The top five journals account for 40% of papers and the top twelve for 70%. Of nine papers cited 30 or more times, seven appeared in the top 12 contributing journals. The median article in the study was indexed by six of the twelve databases, and 12% were indexed by more than seven databases. The correlation between citation and number of databases indexing a paper is very low. Social Sciences Citation Index provides 73% coverage. Typical overlap in the 12 databases is about 45%.
  17. Vaughan, L.; Shaw, D.: Bibliographic and Web citations : what is the difference? (2003) 0.01
    0.01066948 = product of:
      0.053347398 = sum of:
        0.053347398 = weight(_text_:bibliographic in 5176) [ClassicSimilarity], result of:
          0.053347398 = score(doc=5176,freq=4.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30414405 = fieldWeight in 5176, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5176)
      0.2 = coord(1/5)
    
    Abstract
    Vaughan and Shaw look at the relationship between traditional citation and Web citation (not hyperlinks but rather textual mentions of published papers). Using English-language research journals in ISI's 2000 Journal Citation Report - Information and Library Science category - 1209 full-length papers published in 1997 in 46 journals were identified. Each was searched in Social Sciences Citation Index and on the Web using Google phrase search by entering the title in quotation marks, followed where necessary, for distinction, by sub-titles, authors' names, and journal title words. After removing obvious false drops, the number of web sites was recorded for comparison with the SSCI counts. A second sample from 1992 was also collected for examination. There were a total of 16,371 web citations to the selected papers. The four top- and bottom-ranked journals were then examined, and every third citation to every third paper was selected and classified as to source type, domain, and country of origin. Web counts are much higher than ISI citation counts. Of the 46 journals from 1997, 26 demonstrated a significant correlation between Web and traditional citation counts, and 11 of the 15 in the 1992 sample also showed significant correlation. Journal impact factor in 1998 and 1999 correlated significantly with average Web citations per journal in the 1997 data, but at a low level. Thirty percent of web citations come from other papers posted on the web and 30 percent from listings of web-based bibliographic services, while twelve percent come from class reading lists. High web citation journals often have web-accessible tables of contents.
  18. Zhao, D.; Strotmann, A.: Evolution of research activities and intellectual influences in information science 1996-2005 : introducing author bibliographic-coupling analysis (2008) 0.01
    0.01066948 = product of:
      0.053347398 = sum of:
        0.053347398 = weight(_text_:bibliographic in 2384) [ClassicSimilarity], result of:
          0.053347398 = score(doc=2384,freq=4.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30414405 = fieldWeight in 2384, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2384)
      0.2 = coord(1/5)
    
    Abstract
    Author cocitation analysis (ACA) has frequently been applied over the last two decades for mapping the intellectual structure of a research field as represented by its authors. However, what is mapped in ACA is actually the structure of intellectual influences on a research field as perceived by its active authors. In this exploratory paper, by contrast, we introduce author bibliographic-coupling analysis (ABCA) as a method to map the research activities of active authors themselves for a more realistic picture of the current state of research in a field. We choose the information science (IS) field and study its intellectual structure both in terms of current research activities as seen from ABCA and in terms of intellectual influences on its research as shown from ACA. We examine how these two aspects of the intellectual structure of the IS field are related, and how they both developed during the first decade of the Web, 1996-2005. We find that these two citation-based author-mapping methods complement each other, and that, in combination, they provide a more comprehensive view of the intellectual structure of the IS field than either of them can provide on its own.
  19. Pulgarin, A.; Gil-Leiva, I.: Bibliometric analysis of the automatic indexing literature : 1956-2000 (2004) 0.01
    0.010562247 = product of:
      0.05281123 = sum of:
        0.05281123 = weight(_text_:bibliographic in 2566) [ClassicSimilarity], result of:
          0.05281123 = score(doc=2566,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30108726 = fieldWeight in 2566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2566)
      0.2 = coord(1/5)
    
    Abstract
    We present a bibliometric study of a corpus of 839 bibliographic references about automatic indexing, covering the period 1956-2000. We analyse the distribution of authors and works, the obsolescence and its dispersion, and the distribution of the literature by topic, year, and source type. We conclude that: (i) there has been a constant interest on the part of researchers; (ii) the most studied topics were the techniques and methods employed and the general aspects of automatic indexing; (iii) the productivity of the authors does fit a Lotka distribution (Dmax=0.02 and critical value=0.054); (iv) the annual aging factor is 95%; and (v) the dispersion of the literature is low.
  20. Hood, W.W.; Wilson, C.S.: ¬The relationship of records in multiple databases to their usage or citedness (2005) 0.01
    0.010562247 = product of:
      0.05281123 = sum of:
        0.05281123 = weight(_text_:bibliographic in 3680) [ClassicSimilarity], result of:
          0.05281123 = score(doc=3680,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30108726 = fieldWeight in 3680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3680)
      0.2 = coord(1/5)
    
    Abstract
    Papers in journals are indexed in bibliographic databases in varying degrees of overlap. The question has been raised as to whether papers that appear in multiple databases (highly overlapping) are in any way more significant (such as being more highly cited) than papers that are indexed in few databases. This paper uses a dataset from fuzzy set theory to compare low overlap papers with high overlap ones, and finds that more highly overlapping papers are in fact more highly cited.

Languages

  • e 72
  • d 5

Types

  • a 76
  • el 1
  • m 1
