Search (499 results, page 1 of 25)

  • theme_ss:"Informetrie"
  1. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.14
    Content
    See: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
  2. Niemi, T.; Hirvonen, L.; Järvelin, K.: Multidimensional data model and query language for informetrics (2003) 0.03
    Abstract
    Multidimensional data analysis, or On-line Analytical Processing (OLAP), offers a single subject-oriented source for analyzing summary data based on various dimensions. We demonstrate that the OLAP approach gives a promising starting point for advanced analysis and comparison among summary data in informetrics applications. At the moment there is no single precise, commonly accepted logical/conceptual model for multidimensional analysis, because the requirements of applications vary considerably. We develop a conceptual/logical multidimensional model for supporting the complex and unpredictable needs of informetrics. Summary data are considered with respect to some dimensions. By changing dimensions, the user may construct other views on the same summary data. We develop a multidimensional query language whose basic idea is to support the definition of views in a way that is natural and intuitive for lay users in the informetrics area. We show that this view-oriented query language has great expressive power and that its degree of declarativity is greater than in contemporary operation-oriented or SQL (Structured Query Language)-like OLAP query languages.
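The view mechanism described in this abstract can be sketched with toy data: summary data kept as facts over dimensions, with different views produced by re-aggregating over a chosen subset of dimensions. The dimension names and figures below are illustrative, not from the article, and plain Python aggregation stands in only loosely for the authors' query language.

```python
from collections import defaultdict

# Toy summary data for an informetric "cube": one row per
# (author, year, journal) cell, with a paper count as the measure.
facts = [
    ("Niemi",    2002, "JASIST", 3),
    ("Niemi",    2003, "IP&M",   1),
    ("Jarvelin", 2002, "JASIST", 2),
    ("Jarvelin", 2003, "JASIST", 4),
]

DIMS = {"author": 0, "year": 1, "journal": 2}

def view(facts, *dims):
    """Aggregate the same summary data over a chosen set of dimensions."""
    out = defaultdict(int)
    for row in facts:
        key = tuple(row[DIMS[d]] for d in dims)
        out[key] += row[3]
    return dict(out)

by_year = view(facts, "year")                         # one view of the data
by_author_journal = view(facts, "author", "journal")  # another view of the same data
```

Changing the dimension list yields a different view without touching the underlying summary data, which is the core of the OLAP idea the paper builds on.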
  3. Zheng, X.; Sun, A.: Collecting event-related tweets from twitter stream (2019) 0.03
    Abstract
    Twitter provides a channel for collecting and publishing instant information on major events like natural disasters. However, the information flow on Twitter is of great volume, and for a specific event, messages collected from the Twitter stream based on either a location constraint or predefined keywords contain a lot of noise. In this article, we propose a method to achieve both high precision and high recall in collecting event-related tweets. Our method involves an automatic keyword generation component and an event-related tweet identification component. For keyword generation, we consider three properties of candidate keywords, namely relevance, coverage, and evolvement. The keyword updating mechanism enables our method to track the main topics of tweets along event development. To minimize annotation effort in identifying event-related tweets, we adopt active learning and incorporate multiple-instance learning, which assigns labels to bags instead of instances (that is, individual tweets). Through experiments on two real-world events, we demonstrate the superiority of our method against state-of-the-art alternatives.
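The precision/recall trade-off this method targets can be illustrated with a minimal sketch; the tweet ids below are hypothetical, not the paper's data.

```python
def precision_recall(collected, relevant):
    """Precision and recall of a collected tweet set against event-related ground truth."""
    collected, relevant = set(collected), set(relevant)
    tp = len(collected & relevant)
    precision = tp / len(collected) if collected else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# A keyword filter collects noise tweets (4, 5) and misses event tweet 3.
p, r = precision_recall(collected={1, 2, 4, 5}, relevant={1, 2, 3})
```

Broad keyword sets raise recall at the cost of precision, and vice versa; the paper's keyword-updating and active-learning components aim at pushing both at once.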
  4. Zhao, D.; Strotmann, A.: Counting first, last, or all authors in citation analysis : a comprehensive comparison in the highly collaborative stem cell research field (2011) 0.03
    Abstract
    How can citation analysis take into account the highly collaborative nature and unique research and publication culture of biomedical research fields? This study explores this question by introducing last-author citation counting and comparing it with traditional first-author counting and theoretically optimal all-author counting in the stem cell research field for the years 2004-2009. For citation ranking, last-author counting, which is directly supported by Scopus but not by ISI databases, appears to approximate all-author counting quite well in a field where heads of research labs are traditionally listed as last authors; first-author counting, however, does not. For field mapping, we find that author co-citation analyses based on different counting methods all produce similar overall intellectual structures of a research field, but the detailed structures and minor specialties revealed differ to varying degrees and thus require great caution to interpret. This is true especially when authors are selected into the analysis based on citedness, because author selection is found to have a greater effect on mapping results than does the choice of co-citation counting method. Findings are based on a comprehensive, high-quality dataset extracted in several steps from PubMed and Scopus and subjected to automatic reference and author-name disambiguation.
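The three counting schemes compared above can be sketched on invented data; the author names, bylines, and citation counts below are hypothetical, chosen only to show why last-author counting can approximate all-author counting where lab heads are listed last.

```python
from collections import Counter

# Toy cited papers: (byline in order, citations received). "Chen" heads a lab
# and is listed last, as is customary in many biomedical fields.
papers = [
    (["Kim", "Lopez", "Chen"], 10),
    (["Lopez", "Chen"], 6),
    (["Kim"], 4),
]

def author_counts(papers, mode):
    """Credit each paper's citations to its first author, last author, or all authors."""
    c = Counter()
    for authors, cites in papers:
        credited = {"first": authors[:1], "last": authors[-1:], "all": authors}[mode]
        for a in credited:
            c[a] += cites
    return c

first = author_counts(papers, "first")  # misses the lab head entirely
last = author_counts(papers, "last")    # credits Chen exactly as all-author counting does
allc = author_counts(papers, "all")
```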
  5. Tonta, Y.: Scholarly communication and the use of networked information sources (1996) 0.03
    Abstract
    Examines the use of networked information sources in scholarly communication. Networked information sources are defined broadly to cover: documents and images stored on electronic network hosts; data files; newsgroups; listservs; online information services and electronic periodicals. Reports results of a survey to determine how heavily, if at all, networked information sources are cited in scholarly printed periodicals published in 1993 and 1994. 27 printed periodicals, representing a wide range of subjects and the most influential periodicals in their fields, were identified through the Science Citation Index and Social Science Citation Index Journal Citation Reports. 97 articles were selected for further review and references, footnotes and bibliographies were checked for references to networked information sources. Only 2 articles were found to contain such references. Concludes that, although networked information sources facilitate scholars' work to a great extent during the research process, scholars have yet to incorporate such sources in the bibliographies of their published articles
    Source
    IFLA journal. 22(1996) no.3, S.240-245
  6. Alonso, S.; Cabrerizo, F.J.; Herrera-Viedma, E.; Herrera, F.: WoS query partitioner : a tool to retrieve very large numbers of items from the Web of Science using different source-based partitioning approaches (2010) 0.03
    Abstract
    Thomson Reuters' Web of Science (WoS) is undoubtedly a great tool for scientometric purposes. It allows one to retrieve and compute different measures, such as the total number of papers that satisfy a particular condition; however, it is also well known that this tool imposes several restrictions that make obtaining certain results difficult. One of those constraints is that the tool does not offer the total count of documents in a dataset if it is larger than 100,000 items. In this article, we propose and analyze different approaches that involve partitioning the search space (using the Source field) to retrieve item counts for very large datasets from the WoS. The proposed techniques improve previous approaches: They do not need any extra information about the retrieved dataset (thus allowing completely automatic procedures to retrieve the results), they are designed to avoid many of the restrictions imposed by the WoS, and they can be easily applied to almost any query. Finally, a description of WoS Query Partitioner, a freely available online interactive tool that implements those techniques, is presented.
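A minimal sketch of the partitioning idea, with a stand-in for the WoS count interface: whenever a query's reported count hits the cap, split it into disjoint sub-queries on the Source field and sum the pieces. Source names and counts below are invented, and a real implementation would refine partitions further if a single source alone exceeded the cap.

```python
LIMIT = 100_000  # the cap above which the tool stops reporting exact totals

# Stand-in for WoS: true per-source counts, with the reported total capped.
SOURCES = {"A": 70_000, "B": 60_000, "C": 50_000}

def reported_count(query):
    """What the tool reports for a query over a set of source keys."""
    true_total = sum(SOURCES[s] for s in query)
    return min(true_total, LIMIT)

def partition(query):
    """Split a query into disjoint sub-queries on the Source field."""
    return list(query)

def total_count(query):
    """Recursively partition whenever the reported count hits the cap."""
    n = reported_count(query)
    if n < LIMIT or len(query) == 1:
        return n
    return sum(total_count(q) for q in partition(query))

total = total_count("ABC")  # recovers the true total despite the cap
```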
  7. White, H.D.; Zuccala, A.A.: Libcitations, worldcat, cultural impact, and fame (2018) 0.03
    Abstract
    Just as citations to a book can be counted, so can that book's libcitations: the number of libraries in a consortium that hold it. These holdings counts per title can be obtained from the consortium's union catalog, such as OCLC's WorldCat. Librarians seeking to serve their customers well must be attuned to various kinds of merit in books. The result in WorldCat is a great variation in the libcitations particular books receive. The higher a title's count (or percentile), the more famous it is, either absolutely or within a subject class. Degree of fame also indicates cultural impact, allowing that further documentation of impact may be needed. Using WorldCat data, we illustrate high, medium, and low degrees of fame with 170 titles published during 1990-1995 or 2001-2006 and spanning the 10 main Dewey classes. We use their total libcitation counts or their counts from members of the Association of Research Libraries, or both, as of late 2011. Our analysis of their fame draws on the recognizability of their authors, the extent to which they and their authors are covered by Wikipedia, and whether they have movie or TV versions. Ordinal scales based on Wikipedia coverage and on libcitation counts are very significantly associated.
  8. Thelwall, M.: Extracting macroscopic information from Web links (2001) 0.03
    Abstract
    Much has been written about the potential and pitfalls of macroscopic Web-based link analysis, yet there have been no studies that have provided clear statistical evidence that any of the proposed calculations can produce results over large areas of the Web that correlate with phenomena external to the Internet. This article attempts to provide such evidence through an evaluation of Ingwersen's (1998) proposed external Web Impact Factor (WIF) for the original use of the Web: the interlinking of academic research. In particular, it studies the case of the relationship between academic hyperlinks and research activity for universities in Britain, a country chosen for its variety of institutions and the existence of an official government rating exercise for research. After reviewing the numerous reasons why link counts may be unreliable, it demonstrates that four different WIFs do, in fact, correlate with the conventional academic research measures. The WIF delivering the greatest correlation with research rankings was the ratio of Web pages with links pointing at research-based pages to faculty numbers. The scarcity of links to electronic academic papers in the data set suggests that, in contrast to citation analysis, this WIF is measuring the reputations of universities and their scholars, rather than the quality of their publications
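The best-performing WIF above, pages with inbound links per faculty member, is a simple ratio; a sketch on hypothetical figures (not Thelwall's data) shows how it yields a ranking that can then be correlated with official research ratings.

```python
def wif(inlink_pages, faculty):
    """External Web Impact Factor variant: count of pages with links pointing
    at an institution's research pages, divided by its faculty numbers."""
    return inlink_pages / faculty

# Hypothetical figures for three universities: (inlink pages, faculty).
universities = {"U1": (5200, 400), "U2": (1800, 300), "U3": (900, 250)}
wifs = {u: wif(pages, staff) for u, (pages, staff) in universities.items()}

# The resulting ranking is what gets compared against research ratings.
ranking = sorted(wifs, key=wifs.get, reverse=True)
```

Normalizing by faculty size is what keeps the measure from simply rewarding large institutions, which is why this variant correlated best in the study.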
  9. Ajiferuke, I.; Lu, K.; Wolfram, D.: A comparison of citer and citation-based measure outcomes for multiple disciplines (2010) 0.02
    Abstract
    Author research impact was examined based on citer analysis (the number of citers as opposed to the number of citations) for 90 highly cited authors grouped into three broad subject areas. Citer-based outcome measures were also compared with more traditional citation-based measures for levels of association. The authors found that there are significant differences in citer-based outcomes among the three broad subject areas examined and that there is a high degree of correlation between citer and citation-based measures for all measures compared, except for two outcomes calculated for the social sciences. Citer-based measures do produce slightly different rankings of authors based on citer counts when compared to more traditional citation counts. Examples are provided. Citation measures may not adequately address the influence, or reach, of an author because citations usually do not address the origin of the citation beyond self-citations.
    Date
    28. 9.2010 12:54:22
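The citer-versus-citation distinction examined above comes down to counting distinct citing authors rather than citing papers; a sketch on invented data:

```python
# Citations received by one author: (citing author, citing paper).
citations = [
    ("Smith", "p1"), ("Smith", "p2"), ("Smith", "p3"),  # one repeat citer
    ("Jones", "p4"),
]

citation_count = len(citations)                   # volume: number of citing papers
citer_count = len({who for who, _ in citations})  # reach: distinct citing authors
```

Here one enthusiastic citer inflates the citation count while the citer count stays low, which is the kind of influence-versus-volume difference the study measures.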
  10. Liu, D.-R.; Shih, M.-J.: Hybrid-patent classification based on patent-network analysis (2011) 0.02
    Abstract
    Effective patent management is essential for organizations to maintain their competitive advantage. The classification of patents is a critical part of patent management and industrial analysis. This study proposes a hybrid-patent-classification approach that combines a novel patent-network-based classification method with three conventional classification methods to analyze query patents and predict their classes. The novel patent network contains various types of nodes that represent different features extracted from patent documents. The nodes are connected based on the relationship metrics derived from the patent metadata. The proposed classification method predicts a query patent's class by analyzing all reachable nodes in the patent network and calculating their relevance to the query patent. It then classifies the query patent with a modified k-nearest neighbor classifier. To further improve the approach, we combine it with content-based, citation-based, and metadata-based classification methods to develop a hybrid-classification approach. We evaluate the performance of the hybrid approach on a test dataset of patent documents obtained from the U.S. Patent and Trademark Office, and compare its performance with that of the three conventional methods. The results demonstrate that the proposed hybrid approach yields more accurate class predictions than the three conventional methods.
    Date
    22. 1.2011 13:04:21
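The final k-nearest-neighbor step described above can be roughly sketched as a vote among the nodes most relevant to the query patent; the relevance scores and patent classes below are invented, and the paper's network-based relevance metric is not reproduced here.

```python
from collections import Counter

def knn_class(relevance, label, k=3):
    """Vote among the k network nodes most relevant to the query patent."""
    top = sorted(range(len(relevance)), key=relevance.__getitem__, reverse=True)[:k]
    return Counter(label[i] for i in top).most_common(1)[0][0]

# Hypothetical relevance of reachable nodes to a query patent, and their classes.
predicted = knn_class([0.9, 0.8, 0.2, 0.7], ["H01", "H01", "G06", "G06"])
```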
  11. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.02
    Abstract
    Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, 2013). In this paper, we conceptualize P100 and propose an improvement which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
    Date
    22. 8.2014 17:05:18
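A percentile-style citation rank in the spirit of P100 can be sketched as the share of reference-set papers with fewer citations; this is a simplified stand-in, not Bornmann and Mutz's exact definition, which treats ties and the scale's endpoints more carefully.

```python
def percentile_rank(cites, reference):
    """Percentage of reference-set papers with fewer citations than `cites`."""
    below = sum(1 for c in reference if c < cites)
    return 100.0 * below / len(reference)

# A paper with 10 citations, ranked against a hypothetical reference set.
rank = percentile_rank(10, [0, 2, 5, 10, 25])
```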
  12. Bensman, S.J.: Urquhart's and Garfield's laws : the British controversy over their validity (2001) 0.02
    Abstract
    The British controversy over the validity of Urquhart's and Garfield's Laws during the 1970s constitutes an important episode in the formulation of the probability structure of human knowledge. This controversy took place within the historical context of the convergence of two scientific revolutions, the bibliometric and the biometric, that had been launched in Britain. The preceding decades had witnessed major breakthroughs in understanding the probability distributions underlying the use of human knowledge. Two of the most important of these breakthroughs were the laws posited by Donald J. Urquhart and Eugene Garfield, who played major roles in establishing the institutional bases of the bibliometric revolution. For his part, Urquhart began his realization of S. C. Bradford's concept of a national science library by analyzing the borrowing of journals on interlibrary loan from the Science Museum Library in 1956. He found that 10% of the journals accounted for 80% of the loans and formulated Urquhart's Law, by which the interlibrary use of a journal is a measure of its total use. This law underlay the operations of the National Lending Library for Science and Technology (NLLST), which Urquhart founded. The NLLST became the British Library Lending Division (BLLD) and ultimately the British Library Document Supply Centre (BLDSC). In contrast, Garfield did a study of 1969 journal citations as part of the process of creating the Science Citation Index (SCI), formulating his Law of Concentration, by which the bulk of the information needs in science can be satisfied by a relatively small, multidisciplinary core of journals. This law became the operational principle of the Institute for Scientific Information created by Garfield. A study at the BLLD under Urquhart's successor, Maurice B. Line, found low correlations of NLLST use with SCI citations, and publication of this study started a major controversy, during which both laws were called into question. The study was based on the faulty use of the Spearman rank correlation coefficient, and the controversy over it was instrumental in causing B. C. Brookes to investigate bibliometric laws as probabilistic phenomena and begin to link the bibliometric with the biometric revolution. This paper concludes with a resolution of the controversy by means of a statistical technique that incorporates Brookes' criticism of the Spearman rank-correlation method and demonstrates the mutual supportiveness of the two laws.
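Urquhart's 10%/80% concentration finding can be reproduced on toy data; the loan counts below are hypothetical, chosen only to show the kind of skew he observed in the Science Museum Library data.

```python
def top_share(loans, fraction=0.10):
    """Share of all loans supplied by the top `fraction` of journals."""
    ranked = sorted(loans, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical, heavily skewed interlibrary-loan counts for ten journals.
loans = [400, 30, 20, 15, 10, 8, 6, 5, 4, 2]
share = top_share(loans)  # here the top 10% of journals supply 80% of loans
```

It is exactly this concentration that makes interlibrary use a workable proxy for total use, which is the content of Urquhart's Law.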
  13. Wang, S.; Ma, Y.; Mao, J.; Bai, Y.; Liang, Z.; Li, G.: Quantifying scientific breakthroughs by a novel disruption indicator based on knowledge entities (2023) 0.02
    Abstract
    Compared to previous studies that generally detect scientific breakthroughs based on citation patterns, this article proposes a knowledge entity-based disruption indicator by quantifying the change of knowledge directly created and inspired by scientific breakthroughs to their evolutionary trajectories. Two groups of analytic units, including MeSH terms and their co-occurrences, are employed independently by the indicator to measure the change of knowledge. The effectiveness of the proposed indicators was evaluated against the four datasets of scientific breakthroughs derived from four recognition trials. In terms of identifying scientific breakthroughs, the proposed disruption indicator based on MeSH co-occurrences outperforms that based on MeSH terms and three earlier citation-based disruption indicators. It is also shown that in our indicator, measuring the change of knowledge inspired by the focal paper in its evolutionary trajectory is a larger contributor than measuring the change created by the focal paper. Our study not only offers empirical insights into the conceptual understanding of scientific breakthroughs but also provides a practical disruption indicator for scientists and science management agencies searching for valuable research.
    Date
    22. 1.2023 18:37:33
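The citation-based disruption indicators that this entry benchmarks against can be sketched compactly. A hedged toy implementation of the standard disruption index (contrasting citing papers that bypass the focal paper's references with those that cite them too); the example data are invented:

```python
# Disruption index over toy citation data:
#   n_i = later papers citing the focal paper but none of its references,
#   n_j = later papers citing both the focal paper and its references,
#   n_k = later papers citing the references while ignoring the focal paper,
#   D   = (n_i - n_j) / (n_i + n_j + n_k),  ranging from -1 to +1.

def disruption(focal_refs, citing_papers):
    """focal_refs: set of ids the focal paper cites.
    citing_papers: list of (cites_focal, refs) pairs for later papers,
    where refs is the set of ids each later paper cites."""
    n_i = n_j = n_k = 0
    for cites_focal, refs in citing_papers:
        overlaps = bool(focal_refs & refs)
        if cites_focal and not overlaps:
            n_i += 1  # builds on the focal paper alone -> disruptive signal
        elif cites_focal and overlaps:
            n_j += 1  # cites focal work and its sources -> consolidating
        elif overlaps:
            n_k += 1  # uses the prior work, bypassing the focal paper
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

focal_refs = {"r1", "r2"}          # invented reference ids
citing = [
    (True, {"x"}),                 # cites focal only
    (True, {"r1"}),                # cites focal and a reference
    (False, {"r2"}),               # cites a reference only
]
print(disruption(focal_refs, citing))
```

The entry's own indicator replaces these citation counts with changes in knowledge entities (MeSH terms and co-occurrences), which this sketch does not attempt to model.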
  14. Vieira, E.S.; Cabral, J.A.S.; Gomes, J.A.N.F.: Definition of a model based on bibliometric indicators for assessing applicants to academic positions (2014) 0.02
    Abstract
    A model based on a set of bibliometric indicators is proposed for the prediction of the ranking of applicants to an academic position as produced by a committee of peers. The results show that a very small number of indicators may lead to a robust prediction of about 75% of the cases. We start with 12 indicators to build a few composite indicators by factor analysis. Following a discrete choice model, we arrive at 3 comparatively good predictive models. We conclude that these models have a surprisingly good predictive power and may help peers in their selection process.
    Date
    18. 3.2014 18:22:21
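The modelling idea in this entry, collapsing several bibliometric indicators into a composite score that reproduces a committee ranking, can be illustrated with a toy sketch. The indicator names, weights, and applicant data below are invented; the paper derives its composites via factor analysis and a discrete choice model rather than the fixed weighted z-scores used here.

```python
# Rank applicants by a weighted sum of standardized bibliometric indicators.

applicants = {
    "A": {"papers": 25, "citations": 400, "h_index": 12},
    "B": {"papers": 40, "citations": 150, "h_index": 9},
    "C": {"papers": 10, "citations": 600, "h_index": 14},
}

def zscores(values):
    """Standardize a list of indicator values (population std)."""
    m = sum(values) / len(values)
    s = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - m) / s if s else 0.0 for v in values]

def composite_ranking(applicants, weights):
    names = list(applicants)
    z = {ind: zscores([applicants[n][ind] for n in names]) for ind in weights}
    scores = {
        n: sum(w * z[ind][i] for ind, w in weights.items())
        for i, n in enumerate(names)
    }
    return sorted(names, key=lambda n: scores[n], reverse=True)

print(composite_ranking(applicants, {"papers": 0.2, "citations": 0.4, "h_index": 0.4}))
```

The interesting empirical question in the paper is how few indicators such a model needs before its ranking matches the peers' ranking in about 75% of cases.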
  15. Zitt, M.; Lelu, A.; Bassecoulard, E.: Hybrid citation-word representations in science mapping : Portolan charts of research fields? (2011) 0.02
    Abstract
    The mapping of scientific fields, based on principles established in the seventies, has recently shown a remarkable development, and applications are now booming with progress in computing efficiency. We examine here the convergence of two thematic mapping approaches, citation-based and word-based, which rely on quite different sociological backgrounds. A corpus in the nanoscience field was broken down into research themes, using the same clustering technique on the 2 networks separately. The tool for comparison is the table of intersections of the M clusters (here M=50) built on either side. A classical visual exploitation of such contingency tables is based on correspondence analysis. We investigate a rearrangement of the intersection table (block modeling), resulting in a pseudo-map. The interest of this representation for confronting the two breakdowns is discussed. The amount of convergence found is, in our view, a strong argument in favor of the reliability of bibliometric mapping. However, the outcomes are not convergent to the degree where they can be substituted for each other. Differences highlight the complementarity between approaches based on different networks. In contrast with the strong informetric posture found in recent literature, where lexical and citation markers are considered miscible tokens, the framework proposed here does not mix the two elements at an early stage, in compliance with their contrasted logic.
    Date
    8. 1.2011 18:22:50
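The comparison device described in this entry, the table of intersections of the clusters built on either side, is straightforward to construct. A minimal sketch with invented cluster assignments (two clusters per side here rather than the paper's M=50):

```python
# Contingency (intersection) table of two clusterings of the same corpus:
# cell (r, c) counts documents placed in citation-cluster r and word-cluster c.

from collections import Counter

def intersection_table(part_a, part_b):
    """part_a, part_b: dicts mapping document id -> cluster label."""
    cells = Counter((part_a[d], part_b[d]) for d in part_a)
    rows = sorted({a for a, _ in cells})
    cols = sorted({b for _, b in cells})
    return [[cells[(r, c)] for c in cols] for r in rows]

cit = {1: "c1", 2: "c1", 3: "c2", 4: "c2", 5: "c2"}  # citation-based clusters (invented)
lex = {1: "w1", 2: "w1", 3: "w1", 4: "w2", 5: "w2"}  # word-based clusters (invented)
print(intersection_table(cit, lex))
```

Correspondence analysis and the block-modeling rearrangement the authors investigate both operate on exactly this table; a strongly diagonal(izable) table is what "convergence" of the two breakdowns means.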
  16. Fujigaki, Y.: ¬The citation system : citation networks as repeatedly focusing on difference, continuous re-evaluation, and as persistent knowledge accumulation (1998) 0.02
    Abstract
    States that claims of a lack of theories of citation are also indicative of a great need for a theory which links science dynamics and measurement. There is a wide gap between qualitative (science dynamics) and quantitative (measurement) approaches. To link them, proposes the use of the citation system, which potentially bridges the gap between measurement and epistemology by applying system theory to the publication system.
  17. Zhang, Y.: ¬The impact of Internet-based electronic resources on formal scholarly communication in the area of library and information science : a citation analysis (1998) 0.02
    Abstract
    Internet-based electronic resources are growing dramatically, but there have been no empirical studies evaluating the impact of e-sources, as a whole, on formal scholarly communication. Reports results of an investigation into how much e-sources have been used in formal scholarly communication, using a case study in the area of Library and Information Science (LIS) during the period 1994 to 1996. 4 citation-based indicators were used in the study of the impact measurement. Concludes that, compared with the impact of print sources, the impact of e-sources on formal scholarly communication in LIS is small, as measured by e-sources cited, and does not increase significantly by year, even though there is observable growth of this impact across the years. It is found that periodical format is related to the rate of citing e-sources: articles in electronic periodicals are more likely to cite e-sources than are print periodical articles. However, once authors cite electronic resources, there is no significant difference in the number of references per article by periodical format or by year. Suggests that, at this stage, citing e-sources may depend on authors rather than on the periodical format in which authors choose to publish.
    Date
    30. 1.1999 17:22:22
  18. Mommoh, O.M.: Subject analysis of post-graduate theses in library, archival and information science at Ahmadu Bello University, Zaria (1995/96) 0.02
    Abstract
    Reports results of a bibliometric study of 111 theses accepted by the Department of Library and Information Science, Ahmadu Bello University, Zaria, Nigeria, between 1977 and 1992. The analysis was based on year, type and degree awarded, subject, type of library and geographical area. Concludes that the highest number of submissions occurred in 1991; overall, 108 MLS theses (97.29%) and 3 PhD theses (2.71%) were accepted. Libraries and readers was the subject with the greatest concentration of theses, while the academic library was the most discussed type of library.
    Source
    Library focus. 13/14(1995/96), S.22-25
  19. He, Z.-L.: International collaboration does not have greater epistemic authority (2009) 0.01
    Abstract
    The consistent finding that internationally coauthored papers are more heavily cited has led to a tacit agreement among politicians and scientists that international collaboration in scientific research should be particularly promoted. However, existing studies of research collaboration suffer from a major weakness in that the Thomson Reuters Web of Science until recently did not link author names with affiliation addresses. The general approach has been to hierarchically code papers into international paper, national paper, or local paper based on the address information. This hierarchical coding scheme severely understates the level and contribution of local or national collaboration on an internationally coauthored paper. In this research, I code collaboration variables by hand checking each paper in the sample, use two measures of a paper's impact, and try several regression models. I find that both international collaboration and local collaboration are positively and significantly associated with a paper's impact, but international collaboration does not have more epistemic authority than local collaboration. This result suggests that previous findings based on hierarchical coding might be misleading.
    Date
    26. 9.2009 11:22:05
  20. Tang, L.; Hu, G.; Liu, W.: Funding acknowledgment analysis : queries and caveats (2017) 0.01
    Abstract
    Thomson Reuters's Web of Science (WoS) began systematically collecting acknowledgment information in August 2008. Since then, bibliometric analysis of funding acknowledgment (FA) has been growing and has aroused intense interest and attention from both academia and policy makers. Examining the distribution of FA by citation index database, by language, and by acknowledgment type, we noted coverage limitations and potential biases in each analysis. We argue that despite its great value, bibliometric analysis of FA should be used with caution.
