Search (569 results, page 1 of 29)

  • Filter: theme_ss:"Informetrie"
  • Filter: year_i:[2010 TO 2020}
  1. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.11
    Content
    See: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/
  2. Mayernik, M.S.; Hart, D.L.; Maull, K.E.; Weber, N.M.: Assessing and tracing the outcomes and impact of research infrastructures (2017) 0.06
    Abstract
    Recent policy shifts on the part of funding agencies and journal publishers are causing changes in the acknowledgment and citation behaviors of scholars. A growing emphasis on open science and reproducibility is changing how authors cite and acknowledge "research infrastructures": entities that are used as inputs to or as underlying foundations for scholarly research, including data sets, software packages, computational models, observational platforms, and computing facilities. At the same time, stakeholder interest in quantitative understanding of impact is spurring increased collection and analysis of metrics related to use of research infrastructures. This article reviews work spanning several decades on tracing and assessing the outcomes and impacts from these kinds of research infrastructures. We discuss how research infrastructures are identified and referenced by scholars in the research literature and how those references are being collected and analyzed for the purposes of evaluating impact. Synthesizing common features of a wide range of studies, we identify notable challenges that impede the analysis of impact metrics for research infrastructures and outline key open research questions that can guide future research and applications related to such metrics.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.6, S.1341-1359
  3. Jiang, Z.; Liu, X.; Chen, Y.: Recovering uncaptured citations in a scholarly network : a two-step citation analysis to estimate publication importance (2016) 0.03
    Abstract
    The citation relationships between publications, which are significant for assessing the importance of scholarly components within a network, have been used for various scientific applications. Missing citation metadata in scholarly databases, however, create problems for classical citation-based ranking algorithms and challenge the performance of citation-based retrieval systems. In this research, we utilize a two-step citation analysis method to investigate the importance of publications for which citation information is partially missing. First, we calculate the importance of the author and then use that importance to estimate the publication importance for some selected articles. To evaluate this method, we designed a simulation experiment, "random citation-missing", to test the two-step citation analysis that we carried out with the Association for Computing Machinery (ACM) Digital Library (DL). In this experiment, we simulated different scenarios in a large-scale scientific digital library, from high-quality citation data to very poor-quality data. The results show that a two-step citation analysis can effectively uncover the importance of publications in different situations. More importantly, we found that the optimized impact from the importance of an author (first step) increases exponentially as the quality of citation data decreases. The findings from this study can further enhance citation-based publication-ranking algorithms for real-world applications.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1722-1735
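    Note
    A minimal sketch of the two-step idea described in this record: author importance is first aggregated from publications whose citation data are intact, and that value is then used to estimate the importance of a publication whose citations are missing. The mean-based aggregation and the field names are illustrative assumptions, not the authors' exact formulation.

```python
from statistics import mean

# Publications with known citation counts ("authors" and "citations" are illustrative fields).
known = [
    {"authors": ["A. Author", "B. Author"], "citations": 40},
    {"authors": ["A. Author"], "citations": 10},
    {"authors": ["C. Author"], "citations": 5},
]

# Step 1: author importance = mean citations of that author's publications with known data.
def author_importance(known_pubs):
    per_author = {}
    for pub in known_pubs:
        for a in pub["authors"]:
            per_author.setdefault(a, []).append(pub["citations"])
    return {a: mean(counts) for a, counts in per_author.items()}

# Step 2: estimate a publication with missing citation data from its authors' importance
# (a deliberately simple stand-in for the paper's estimation step).
def estimate_publication_importance(pub_authors, importance):
    return mean(importance.get(a, 0.0) for a in pub_authors)

importance = author_importance(known)
print(estimate_publication_importance(["A. Author", "C. Author"], importance))
```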
  4. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.03
    Abstract
    Purpose - The purpose of this paper is to analyse the global scientific outputs of ontology research, an important emerging discipline that has huge potential to improve information understanding, organization, and management. Design/methodology/approach - This study collected literature published during 1900-2012 from the Web of Science database. The bibliometric analysis was performed from authorial, institutional, national, spatiotemporal, and topical aspects. Basic statistical analysis, visualization of geographic distribution, co-word analysis, and a new index were applied to the selected data. Findings - Characteristics of publication outputs suggested that ontology research has entered a stage of rapid growth, along with increased participation and collaboration. The authors identified the leading authors, institutions, nations, and articles in ontology research. Authors came mainly from North America, Europe, and East Asia. The USA took the lead, while China grew fastest. Four major categories of frequently used keywords were identified: applications in the Semantic Web, applications in bioinformatics, philosophy theories, and common supporting technology. Semantic Web research played a core role, and gene ontology study was well developed. The study focus of ontology has shifted from philosophy to information science. Originality/value - This is the first study to quantify global research patterns and trends in ontology, which might provide a potential guide for future research. The new index provides an alternative way to evaluate the multidisciplinary influence of researchers.
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
    Source
    Aslib journal of information management. 67(2015) no.1, S.27-54
  5. Norris, M.; Oppenheim, C.: The h-index : a broad review of a new bibliometric indicator (2010) 0.03
    Abstract
    Purpose - This review aims to show, broadly, how the h-index has become a subject of widespread debate, how it has spawned many variants and diverse applications since it was first introduced in 2005, and some of the issues in its use. Design/methodology/approach - The review drew on a range of material published in 1,990 or so sources since 2005. From these sources, a number of themes were identified and discussed, ranging from the h-index's advantages to which citation database might be selected for its calculation. Findings - The analysis shows how the h-index has quickly established itself as a major subject of interest in the field of bibliometrics. Study of the index ranges from its mathematical underpinning to a range of variants perceived to address the index's shortcomings. The review illustrates how widely the index has been applied but also how care must be taken in its application. Originality/value - The use of bibliometric indicators to measure research performance continues, with the h-index as its latest addition. The use of the h-index, its variants, and the many applications to which it has been put is still at the exploratory stage. The review shows the breadth and diversity of this research and the need to verify the veracity of the h-index by more studies.
    Date
    8. 1.2011 19:22:13
    Source
    Journal of documentation. 66(2010) no.5, S.681-705
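    Note
    For reference, the h-index reviewed above is conventionally defined as the largest h such that an author has h publications with at least h citations each; the definition is standard but not restated in the abstract. A minimal computation, with an invented citation list:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

    The same counting scheme reappears below as the "h-strength" (item 18), applied to link weights instead of per-paper citation counts.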
  6. Koulouri, X.; Ifrim, C.; Wallace, M.; Pop, F.: Making sense of citations (2017) 0.03
    Abstract
    To date, the analysis of citations has been aimed mainly at exploring different ways to count them, such as the total count, the h-index, or the s-index, in order to quantify a researcher's overall contribution and impact. In this work we show how the consideration of the structured metadata that accompany citations, such as the publication outlet in which they have appeared, can lead to a considerably more insightful understanding of the ways in which a researcher has impacted the work of others.
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; 10151
  7. Hennemann, S.: Evaluating the performance of geographical locations within scientific networks using an aggregation-randomization-re-sampling approach (ARR) (2012) 0.03
    Abstract
    Knowledge creation and dissemination in science and technology systems are perceived as prerequisites for socioeconomic development. The efficiency of creating new knowledge is considered to have a geographical component, that is, some regions are more capable in terms of scientific knowledge production than others. This article presents a method of using a network representation of scientific interaction to assess the relative efficiency of regions with diverse boundaries in channeling knowledge through a science system. In a first step, a weighted aggregate of the betweenness centrality is produced from empirical data (aggregation). The subsequent randomization of this empirical network produces the necessary null model for significance testing and normalization (randomization). This step is repeated to provide greater confidence about the results (re-sampling). The results are robust estimates for the relative regional efficiency of brokering knowledge, which is discussed along with cross-sectional and longitudinal empirical examples. The network representation acts as a straightforward metaphor of conceptual ideas from economic geography and neighboring disciplines. However, the procedure is not limited to centrality measures, nor is it limited to geographical aggregates. Therefore, it offers a wide range of applications for scientometrics and beyond.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.12, S.2393-2404
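    Note
    A simplified sketch of the aggregation-randomization-re-sampling logic described above, using networkx: compute betweenness centrality on the empirical network, then repeatedly rebuild a degree-preserving randomized null model and use it to normalize the empirical value. The aggregation of nodes to geographical regions and the exact statistics of the ARR procedure are omitted; only the empirical-versus-null comparison is illustrated, and the z-score normalization is an assumption.

```python
import networkx as nx
from statistics import mean, stdev

def normalized_betweenness(G, node, n_null=20, seed=42):
    """Compare a node's betweenness centrality against degree-preserving random rewirings."""
    empirical = nx.betweenness_centrality(G)[node]
    null_values = []
    for i in range(n_null):
        H = G.copy()
        # Randomization step: rewire edges while preserving the degree sequence.
        nx.double_edge_swap(H, nswap=2 * H.number_of_edges(), max_tries=10**5, seed=seed + i)
        null_values.append(nx.betweenness_centrality(H)[node])
    # Re-sampling: repeated null models yield a mean and spread for normalization.
    spread = stdev(null_values) or 1.0
    return (empirical - mean(null_values)) / spread

G = nx.karate_club_graph()  # stand-in for an empirical scientific collaboration network
print(normalized_betweenness(G, node=0))
```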
  8. Wan, X.; Liu, F.: Are all literature citations equally important? : automatic citation strength estimation and its applications (2014) 0.03
    Abstract
    Literature citation analysis plays a very important role in bibliometrics and scientometrics, underpinning measures such as the Science Citation Index (SCI) impact factor and the h-index. Existing citation analysis methods assume that all citations in a paper are equally important, and they simply count the number of citations. Here we argue that the citations in a paper are not equally important and that some citations are more important than others. We use a strength value to assess the importance of each citation and propose a regression method with a few useful features for automatically estimating the strength value of each citation. Evaluation results on a manually labeled data set in the computer science field show that the estimated values can achieve good correlation with human-labeled values. We further apply the estimated citation strength values to evaluating paper influence and author influence, and the preliminary evaluation results demonstrate the usefulness of the citation strength values.
    Date
    22. 8.2014 17:12:35
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.9, S.1929-1938
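    Note
    The regression idea described above can be sketched as follows: each citation is represented by a small feature vector and a model is fitted against human-labeled strength values. The specific features and the use of plain linear regression are illustrative assumptions; the paper's actual feature set and model are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features per citation: number of in-text mentions,
# mentioned in the introduction (0/1), total words in the citing sentences.
X = np.array([
    [1, 0, 12],
    [4, 1, 60],
    [2, 0, 25],
    [6, 1, 90],
])
# Human-labeled strength values for the same citations (invented for illustration).
y = np.array([0.2, 0.8, 0.4, 0.9])

model = LinearRegression().fit(X, y)
new_citation = np.array([[3, 1, 40]])
print(model.predict(new_citation))  # estimated citation strength
```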
  9. Torres-Salinas, D.; Robinson-García, N.: The time for bibliometric applications (2016) 0.03
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.1014-1015
  10. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: Science mapping software tools : review, analysis, and cooperative study among tools (2011) 0.02
    Abstract
    Science mapping aims to build bibliometric maps that describe how specific disciplines, scientific domains, or research fields are conceptually, intellectually, and socially structured. Different techniques and software tools have been proposed to carry out science mapping analysis. The aim of this article is to review, analyze, and compare some of these software tools, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.7, S.1382-1402
  11. Zitt, M.; Lelu, A.; Bassecoulard, E.: Hybrid citation-word representations in science mapping : Portolan charts of research fields? (2011) 0.02
    Abstract
    The mapping of scientific fields, based on principles established in the seventies, has recently shown remarkable development, and applications are now booming with progress in computing efficiency. We examine here the convergence of two thematic mapping approaches, citation-based and word-based, which rely on quite different sociological backgrounds. A corpus in the nanoscience field was broken down into research themes, using the same clustering technique on the 2 networks separately. The tool for comparison is the table of intersections of the M clusters (here M=50) built on either side. A classical visual exploitation of such contingency tables is based on correspondence analysis. We investigate a rearrangement of the intersection table (block modeling), resulting in a pseudo-map. The value of this representation for confronting the two breakdowns is discussed. The amount of convergence found is, in our view, a strong argument in favor of the reliability of bibliometric mapping. However, the outcomes are not convergent to the degree that they could be substituted for each other. Differences highlight the complementarity between approaches based on different networks. In contrast with the strong informetric posture found in recent literature, where lexical and citation markers are considered as miscible tokens, the framework proposed here does not mix the two elements at an early stage, in keeping with their contrasting logics.
    Date
    8. 1.2011 18:22:50
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.19-39
  12. Callahan, A.; Hockema, S.; Eysenbach, G.: Contextual cocitation : augmenting cocitation analysis and its applications (2010) 0.02
    Abstract
    In this work, a novel method of cocitation analysis, coined contextual cocitation analysis, is introduced and described in comparison to traditional methods of cocitation analysis. Equations for quantifying contextual cocitation strength are introduced and their implications explored using theoretical examples alongside the application of contextual cocitation to a series of BioMed Central publications and their cited resources. Based on this work, the implications of contextual cocitation for understanding the granularity of the relationships created between cited published research and methods for its analysis are discussed. Future applications and improvements of this work, including its extended application to the published research of multiple disciplines, are then presented with rationales for their inclusion.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1130-1143
  13. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: SciMAT: A new science mapping analysis software tool (2012) 0.02
    Abstract
    This article presents a new open-source software tool, SciMAT, which performs science mapping analysis within a longitudinal framework. It provides different modules that help the analyst carry out all the steps of the science mapping workflow. In addition, SciMAT offers three key features that set it apart from other science mapping software tools: (a) a powerful preprocessing module to clean the raw bibliographical data, (b) the use of bibliometric measures to study the impact of each studied element, and (c) a wizard to configure the analysis.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.8, S.1609-1630
  14. Ridenour, L.: Practical applications of citation analysis to examine interdisciplinary knowledge (2016) 0.02
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
  15. Egghe, L.; Guns, R.: Applications of the generalized law of Benford to informetric data (2012) 0.02
    Abstract
    In a previous work (Egghe, 2011), the first author showed that Benford's law (describing the logarithmic distribution of the numbers 1, 2, ..., 9 as first digits of data in decimal form) is related to the classical law of Zipf with exponent 1. The work of Campanario and Coslado (2011), however, shows that Benford's law does not always fit practical data in a statistical sense. In this article, we use a generalization of Benford's law related to the general law of Zipf with exponent θ > 0. Using data from Campanario and Coslado, we apply nonlinear least squares to determine the optimal θ and show that this generalized law of Benford fits the data better than the classical law of Benford.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.8, S.1662-1665
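    Note
    For orientation, the classical first-digit law referenced above, and one common generalization tied to a Zipf-type exponent, can be written as follows. The exponent symbol was garbled in the source record, so θ is used throughout as a stand-in; the generalized form shown is the standard one from the first-digit literature and is offered as a plausible reading of the abstract, not a quotation from the paper.

```latex
% Classical Benford law for the first digit d = 1, ..., 9:
P(d) = \log_{10}\!\left(1 + \frac{1}{d}\right)

% A common generalization tied to a Zipf-type law with exponent \theta > 0, \theta \neq 1;
% it reduces to the classical law as \theta \to 1:
P_{\theta}(d) = \frac{(d+1)^{1-\theta} - d^{1-\theta}}{10^{1-\theta} - 1}
```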
  16. Abramo, G.; D'Angelo, C.A.; Viel, F.: Assessing the accuracy of the h- and g-indexes for measuring researchers' productivity (2013) 0.02
    Abstract
    Bibliometric indicators are increasingly used in support of decisions about recruitment, career advancement, rewards, and selective funding for scientists. Given the importance of the applications, bibliometricians are obligated to carry out empirical testing of the robustness of the indicators in simulations of real contexts. In this work, we compare the results of national-scale research assessments at the individual level based on the following three different indexes: the h-index, the g-index, and "fractional scientific strength" (FSS), an indicator previously proposed by the authors. For each index, we construct and compare ranking lists of all Italian academic researchers working in the hard sciences during the period 2001-2005. The analysis quantifies the shifts in rank that occur when researchers' productivity rankings by simple indicators such as the h- or g-index are compared with those based on the more accurate FSS.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1224-1234
  17. Ferrara, E.; Romero, A.E.: Scientific impact evaluation and the effect of self-citations : mitigating the bias by discounting the h-index (2013) 0.02
    Abstract
    In this article, we propose a measure to assess scientific impact that discounts self-citations and does not require any prior knowledge of their distribution among publications. This index can be applied to both researchers and journals. In particular, we show that it fills a gap left by the h-index and similar measures, which do not take into account the effect of self-citations when evaluating the impact of authors or journals. We provide 2 real-world examples: first, we evaluate the research impact of the most productive scholars in computer science (according to DBLP Computer Science Bibliography, Universität Trier, Trier, Germany); then we revisit the impact of the journals ranked in the Computer Science Applications section of the SCImago Journal & Country Rank ranking service (Consejo Superior de Investigaciones Científicas, University of Granada, Extremadura, Madrid, Spain). We observe how self-citations, in many cases, affect the rankings obtained according to different measures (including the h-index and ch-index), and show how the proposed measure mitigates this effect.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2332-2339
  18. Zhao, S.X.; Zhang, P.L.; Li, J.; Tan, A.M.; Ye, F.Y.: Abstracting the core subnet of weighted networks based on link strengths (2014) 0.01
    Abstract
    Most measures of networks are based on the nodes, although links are also elementary units in networks and represent interesting social or physical connections. In this work we suggest an option for exploring networks, called the h-strength, with explicit focus on links and their strengths. The h-strength and its extensions can naturally simplify a complex network to a small and concise subnetwork (h-subnet) while retaining the most important links and the core structure. Its applications in 2 typical information networks, the paper cocitation network of a topic (the h-index) and 5 scientific collaboration networks in the field of "water resources," suggest that the h-strength and its extensions could be a useful choice for abstracting, simplifying, and visualizing a complex network. Moreover, we observe that the 2 informetric models, the Glänzel-Schubert model and the Hirsch model, roughly hold in the context of the h-strength for the collaboration networks.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.5, S.984-994
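    Note
    Reading the h-index analogy above literally, the h-strength of a weighted network would be the largest h such that the network contains h links each of strength at least h. The sketch below assumes exactly that definition, which is an interpretation of the abstract rather than the paper's formal statement.

```python
def h_strength(link_strengths):
    """Largest h such that h links have strength >= h (an h-index over link weights)."""
    strengths = sorted(link_strengths, reverse=True)
    # For a descending sequence the qualifying ranks form a prefix, so counting suffices.
    return sum(1 for rank, s in enumerate(strengths, start=1) if s >= rank)

# Invented link weights of a small collaboration or co-citation network.
weights = [9, 7, 5, 4, 4, 2, 1]
print(h_strength(weights))  # -> 4
```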
  19. Thelwall, M.: Web indicators for research evaluation : a practical guide (2016) 0.01
    Abstract
    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics, as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.
  20. Thelwall, M.: Mendeley readership altmetrics for medical articles : an analysis of 45 fields (2016) 0.01
    Abstract
    Medical research is highly funded and often expensive and so is particularly important to evaluate effectively. Nevertheless, citation counts may accrue too slowly for use in some formal and informal evaluations. It is therefore important to investigate whether alternative metrics could be used as substitutes. This article assesses whether one such altmetric, Mendeley readership counts, correlates strongly with citation counts across all medical fields, whether the relationship is stronger if student readers are excluded, and whether they are distributed similarly to citation counts. Based on a sample of 332,975 articles from 2009 in 45 medical fields in Scopus, citation counts correlated strongly (about 0.7; 78% of articles had at least one reader) with Mendeley readership counts (from the new version 1 applications programming interface [API]) in almost all fields, with one minor exception, and the correlations tended to decrease slightly when student readers were excluded. Readership followed either a lognormal or a hooked power law distribution, whereas citations always followed a hooked power law, showing that the two may have underlying differences.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.8, S.1962-1972
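    Note
    The field-level correlations reported above are the kind of figure obtained from a rank correlation over paired citation and readership counts per article; whether the paper used Spearman or another coefficient is not stated in the abstract, and the numbers below are invented for illustration.

```python
from scipy.stats import spearmanr

# Invented paired counts for a handful of articles in one medical field.
citations = [12, 0, 3, 45, 7, 1, 22, 5]
mendeley_readers = [20, 1, 4, 60, 10, 0, 30, 9]

rho, p_value = spearmanr(citations, mendeley_readers)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```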

Languages

  • e 555
  • d 12

Types

  • a 557
  • el 11
  • m 7
  • s 3
  • x 1