Search (229 results, page 1 of 12)

  • year_i:[2010 TO 2020}
  • theme_ss:"Informetrie"
  1. Herb, U.; Beucke, D.: ¬Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.07
    0.07437435 = product of:
      0.2974974 = sum of:
        0.2974974 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.2974974 = score(doc=2188,freq=2.0), product of:
            0.39700332 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046827413 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.25 = coord(1/4)
    
    Content
    See: https://www.leibniz-science20.de%2Fforschung%2Fprojekte%2Faltmetrics-in-verschiedenen-wissenschaftsdisziplinen%2F&ei=2jTgVaaXGcK4Udj1qdgB&usg=AFQjCNFOPdONj4RKBDf9YDJOLuz3lkGYlg&sig2=5YI3KWIGxBmk5_kv0P_8iQ.
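    The nested breakdown above (repeated for every result on this page) is Lucene's ClassicSimilarity explain output. As a rough guide to how the figures combine, the sketch below recomputes the score for result 1 from the quantities shown (tf, idf, queryNorm, fieldNorm, coord). The formulas are the standard TF-IDF ones used by ClassicSimilarity; the variable names are ours and are not part of the index.

      import math

      # Values taken from the explain output for result 1 (term "_text_:2f", doc 2188).
      freq, doc_freq, max_docs = 2.0, 24, 44218
      query_norm, field_norm = 0.046827413, 0.0625
      matched_clauses, total_clauses = 1, 4                  # coord(1/4)

      tf = math.sqrt(freq)                                   # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))        # 8.478011
      query_weight = idf * query_norm                        # 0.39700332
      field_weight = tf * idf * field_norm                   # 0.7493574
      term_score = query_weight * field_weight               # 0.2974974
      score = term_score * (matched_clauses / total_clauses) # 0.07437435

      print(round(score, 8))

    The same pattern repeats for every entry below: each matching term contributes queryWeight x fieldWeight, the contributions are summed, and coord scales the sum by the fraction of query clauses that matched.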
  2. Fiala, D.: Bibliometric analysis of CiteSeer data for countries (2012) 0.05
    0.050731372 = product of:
      0.101462744 = sum of:
        0.07602732 = weight(_text_:data in 2742) [ClassicSimilarity], result of:
          0.07602732 = score(doc=2742,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.513453 = fieldWeight in 2742, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2742) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2742,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article describes the results of our analysis of the data from the CiteSeer digital library. First, we examined the data from the point of view of source top-level Internet domains from which the data were collected. Second, we measured country shares in publications indexed by CiteSeer and compared them to those based on mainstream bibliographic data from the Web of Science and Scopus. And third, we concentrated on analyzing publications and their citations aggregated by countries. This way, we generated rankings of the most influential countries in computer science using several non-recursive as well as recursive methods such as citation counts or PageRank. We conclude that even if East Asian countries are underrepresented in CiteSeer, its data may well be used along with other conventional bibliographic databases for comparing the computer science research productivity and performance of countries.
    Source
    Information processing and management. 48(2012) no.2, S.242-253
  3. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.05
    0.050071426 = product of:
      0.10014285 = sum of:
        0.062076043 = weight(_text_:data in 1239) [ClassicSimilarity], result of:
          0.062076043 = score(doc=1239,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 1239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=1239)
        0.038066804 = product of:
          0.07613361 = sum of:
            0.07613361 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.07613361 = score(doc=1239,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    18. 3.2014 19:13:22
  4. Kronegger, L.; Mali, F.; Ferligoj, A.; Doreian, P.: Classifying scientific disciplines in Slovenia : a study of the evolution of collaboration structures (2015) 0.04
    0.040554725 = product of:
      0.08110945 = sum of:
        0.062076043 = weight(_text_:data in 1639) [ClassicSimilarity], result of:
          0.062076043 = score(doc=1639,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 1639, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1639)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 1639) [ClassicSimilarity], result of:
              0.038066804 = score(doc=1639,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 1639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1639)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    We explore classifying scientific disciplines, including their temporal features, by focusing on their collaboration structures over time. Bibliometric data for Slovenian researchers registered at the Slovenian Research Agency were used. These data were obtained from the Slovenian National Current Research Information System. We applied a recently developed hierarchical clustering procedure for symbolic data to the coauthorship structure of scientific disciplines. To track temporal changes, we divided the data for the period 1986-2010 into five 5-year time periods. The analysis revealed five clusters of scientific disciplines in the Slovene science system that, in large measure, correspond with the official national classification of sciences. However, there were also some significant differences, pointing to the need for a dynamic classification system of sciences to better characterize them. Implications stemming from these results, especially with regard to classifying scientific disciplines, understanding the collaborative structure of science, and research and development policies, are discussed.
    Date
    21. 1.2015 14:55:22
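    Kronegger et al. use a clustering procedure developed specifically for symbolic data, which is not reproduced here. Purely as a loose illustration of the general workflow, the sketch below applies ordinary agglomerative (Ward) clustering to an invented disciplines-by-period feature matrix and cuts the tree into five clusters; the data, features, and linkage choice are placeholders, not the authors' method.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(0)
      # Hypothetical features: one row per discipline, columns = coauthorship
      # indicators aggregated over the five 5-year periods (placeholder data).
      features = rng.random((30, 10))

      Z = linkage(features, method="ward")               # agglomerative clustering
      labels = fcluster(Z, t=5, criterion="maxclust")    # cut into 5 clusters
      print(labels)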
  5. Liu, Y.; Rousseau, R.: Towards a representation of diffusion and interaction of scientific ideas : the case of fiber optics communication (2012) 0.04
    0.040442396 = product of:
      0.08088479 = sum of:
        0.051210128 = weight(_text_:data in 2723) [ClassicSimilarity], result of:
          0.051210128 = score(doc=2723,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 2723, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2723)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 2723) [ClassicSimilarity], result of:
              0.05934933 = score(doc=2723,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 2723, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2723)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The research question studied in this contribution is how to find an adequate representation to describe the diffusion of scientific ideas over time. We claim that citation data, at least of articles that act as concept symbols, can be considered to contain this information. As a case study we show how the founding article by Nobel Prize winner Kao illustrates the evolution of the field of fiber optics communication. We use a continuous description of discrete citation data in order to accentuate turning points and breakthroughs in the history of this field. Applying the principles explained in this contribution, informetrics may reveal the trajectories along which science is developing.
    Source
    Information processing and management. 48(2012) no.4, S.791-801
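    Liu and Rousseau work with a continuous description of discrete yearly citation counts to highlight turning points. One simple way to do something similar, shown only as an illustration, is to spline-interpolate the counts and look for sign changes in the derivative; the citation series below is invented, and the choice of a cubic spline is ours, not the paper's.

      import numpy as np
      from scipy.interpolate import CubicSpline

      years = np.arange(2000, 2011)
      cites = np.array([1, 3, 4, 9, 15, 14, 18, 30, 28, 35, 40])  # invented counts

      spline = CubicSpline(years, cites)
      grid = np.linspace(years[0], years[-1], 500)
      slope = spline(grid, 1)              # first derivative of the fitted curve

      # Turning points: where the smoothed growth rate changes sign.
      turns = grid[np.where(np.diff(np.sign(slope)) != 0)]
      print(turns)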
  6. Yang, S.; Han, R.; Ding, J.; Song, Y.: ¬The distribution of Web citations (2012) 0.04
    0.03959743 = product of:
      0.07919486 = sum of:
        0.053759433 = weight(_text_:data in 2735) [ClassicSimilarity], result of:
          0.053759433 = score(doc=2735,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 2735, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2735) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2735,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2735)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A substantial amount of research has focused on the persistence or availability of Web citations. The present study analyzes Web citation distributions. Web citations are defined as the mentions of the URLs of Web pages (Web resources) as references in academic papers. The present paper primarily focuses on the analysis of the URLs of Web citations and uses three sets of data, namely, Set 1 from the Humanities and Social Science Index in China (CSSCI, 1998-2009), Set 2 from the publications of two international computer science societies, Communications of the ACM and IEEE Computer (1995-1999), and Set 3 from the medical science database, MEDLINE, of the National Library of Medicine (1994-2006). Web citation distributions are investigated based on Web site types, Web page types, URL frequencies, URL depths, URL lengths, and year of article publication. Results show significant differences in the Web citation distributions among the three data sets. However, when the URLs of Web citations with the same hostnames are aggregated, the distributions in the three data sets are consistent with the power law (the Lotka function).
    Source
    Information processing and management. 48(2012) no.4, S.779-790
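    Yang et al. find that, once Web citations are aggregated by hostname, the frequency distribution is consistent with a power law (the Lotka function). A quick diagnostic on one's own URL list is to aggregate by hostname and fit the rank-frequency data on a log-log scale. The least-squares shortcut below is only a rough check, not the estimation method of the paper, and the URLs are placeholders.

      from collections import Counter
      from urllib.parse import urlparse
      import numpy as np

      urls = [  # placeholder Web citations
          "http://example.org/a", "http://example.org/b",
          "http://example.org/c", "http://data.gov/x",
          "http://data.gov/y", "http://acm.org/p",
      ]
      hosts = Counter(urlparse(u).hostname for u in urls)

      # Rank-frequency data: frequency of the r-th most cited hostname.
      freqs = np.array(sorted(hosts.values(), reverse=True), dtype=float)
      ranks = np.arange(1, len(freqs) + 1, dtype=float)

      # Slope of log(freq) vs log(rank) approximates the power-law exponent.
      slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
      print(hosts, slope)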
  7. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: ¬The relative influences of government funding and international collaboration on citation impact (2019) 0.04
    0.036396418 = product of:
      0.072792836 = sum of:
        0.053759433 = weight(_text_:data in 4681) [ClassicSimilarity], result of:
          0.053759433 = score(doc=4681,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 4681, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4681)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
              0.038066804 = score(doc=4681,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 4681, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4681)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
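    Leydesdorff et al. relate citation impact to funding and collaboration with negative binomial regression. The sketch below shows the general shape of such a model in statsmodels on synthetic data; the variables, coefficients, and data are placeholders, and the study's actual design (FWCI, OECD funding, 2-year lag) is not reproduced.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 200
      funding = rng.random(n)        # placeholder: government funding share
      intl_collab = rng.random(n)    # placeholder: international collaboration share
      citations = rng.poisson(lam=np.exp(0.5 + 1.0 * intl_collab))  # synthetic counts

      X = sm.add_constant(np.column_stack([funding, intl_collab]))
      model = sm.GLM(citations, X, family=sm.families.NegativeBinomial())
      result = model.fit()
      print(result.params)           # constant, funding, collaboration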
  8. Mingers, J.; Macri, F.; Petrovici, D.: Using the h-index to measure the quality of journals in the field of business and management (2012) 0.03
    0.03466491 = product of:
      0.06932982 = sum of:
        0.043894395 = weight(_text_:data in 2741) [ClassicSimilarity], result of:
          0.043894395 = score(doc=2741,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 2741, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2741)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2741) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2741,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2741)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper considers the use of the h-index as a measure of a journal's research quality and contribution. We study a sample of 455 journals in business and management, all of which are included in the ISI Web of Science (WoS) and the Association of Business Schools' peer-review journal ranking list. The h-index is compared both with traditional impact factors and with the peer-review judgements. We also consider two sources of citation data - the WoS itself and Google Scholar. The conclusions are that the h-index is preferable to the impact factor for a variety of reasons, especially the selective coverage of the impact factor and the fact that it disadvantages journals that publish many papers. Google Scholar is also preferred to WoS as a data source. However, the paper notes that it is not sufficient to use any single metric to properly evaluate research achievements.
    Source
    Information processing and management. 48(2012) no.2, S.234-241
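    The h-index used by Mingers et al. has a simple operational definition: a journal (or author) has index h if h of its papers have received at least h citations each. A minimal computation, assuming the citation counts are already in hand, looks like this:

      def h_index(citation_counts):
          """Largest h such that h papers have at least h citations each."""
          counts = sorted(citation_counts, reverse=True)
          h = 0
          for rank, cites in enumerate(counts, start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      print(h_index([25, 8, 5, 3, 3, 1]))   # -> 3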
  9. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.03
    0.0314639 = product of:
      0.0629278 = sum of:
        0.043894395 = weight(_text_:data in 656) [ClassicSimilarity], result of:
          0.043894395 = score(doc=656,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 656, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=656)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
              0.038066804 = score(doc=656,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=656)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalizing the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles (and percentile rank classes) can be analyzed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account on the one hand the distribution of percentiles over the publications in the sets (universities here) and on the other hand concentrate on the range of publications with the highest citation impact - that is, the range that is usually of most interest in the evaluation of scientific performance.
    Date
    22. 3.2013 19:44:17
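    Bornmann's argument rests on percentiles and percentile rank classes of citation counts. As a small illustration (not the paper's own evaluation procedure), the sketch below converts the raw citation counts of a publication set into percentile ranks within a reference set and bins them into common rank classes; the reference set, thresholds, and class labels are placeholders.

      from scipy.stats import percentileofscore

      reference_set = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]  # placeholder field baseline
      publications = [2, 13, 55]                             # placeholder university set

      for cites in publications:
          pct = percentileofscore(reference_set, cites)      # percentile rank, 0-100
          if pct >= 90:
              rank_class = "top 10%"
          elif pct >= 50:
              rank_class = "top 50%"
          else:
              rank_class = "bottom 50%"
          print(cites, round(pct, 1), rank_class)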
  10. Crespo, J.A.; Herranz, N.; Li, Y.; Ruiz-Castillo, J.: ¬The effect on citation inequality of differences in citation practices at the web of science subject category level (2014) 0.03
    0.02950487 = product of:
      0.05900974 = sum of:
        0.03657866 = weight(_text_:data in 1291) [ClassicSimilarity], result of:
          0.03657866 = score(doc=1291,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 1291, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.022431081 = product of:
          0.044862162 = sum of:
            0.044862162 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.044862162 = score(doc=1291,freq=4.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.27358043 = fieldWeight in 1291, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article studies the impact of differences in citation practices at the subfield, or Web of Science subject category level, using the model introduced in Crespo, Li, and Ruiz-Castillo (2013a), according to which the number of citations received by an article depends on its underlying scientific influence and the field to which it belongs. We use the same Thomson Reuters data set of about 4.4 million articles used in Crespo et al. (2013a) to analyze 22 broad fields. The main results are the following: First, when the classification system goes from 22 fields to 219 subfields the effect on citation inequality of differences in citation practices increases from ~14% at the field level to 18% at the subfield level. Second, we estimate a set of exchange rates (ERs) over a wide [660, 978] citation quantile interval to express the citation counts of articles into the equivalent counts in the all-sciences case. In the fractional case, for example, we find that in 187 of 219 subfields the ERs are reliable in the sense that the coefficient of variation is smaller than or equal to 0.10. Third, in the fractional case the normalization of the raw data using the ERs (or subfield mean citations) as normalization factors reduces the importance of the differences in citation practices from 18% to 3.8% (3.4%) of overall citation inequality. Fourth, the results in the fractional case are essentially replicated when we adopt a multiplicative approach.
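    The normalization step described here amounts, in its simplest form, to dividing an article's raw citation count by a subfield-level factor (an exchange rate, or simply the subfield's mean citation rate). A stripped-down version with invented numbers, shown only to make the mechanics concrete:

      # Placeholder raw citation counts per Web of Science subfield.
      citations = {
          "Information Science": [4, 9, 15, 2],
          "Cell Biology": [40, 90, 150, 20],
      }

      # Use the subfield mean as the normalization factor (one choice of "exchange rate").
      normalized = {
          field: [c / (sum(vals) / len(vals)) for c in vals]
          for field, vals in citations.items()
      }
      print(normalized)   # counts become comparable across subfields after scaling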
  11. Torres-Salinas, D.; Gorraiz, J.; Robinson-Garcia, N.: ¬The insoluble problems of books : what does Altmetric.com have to offer? (2018) 0.03
    0.029478844 = product of:
      0.05895769 = sum of:
        0.046268754 = weight(_text_:data in 4633) [ClassicSimilarity], result of:
          0.046268754 = score(doc=4633,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.31247756 = fieldWeight in 4633, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=4633)
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 4633) [ClassicSimilarity], result of:
              0.025377871 = score(doc=4633,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 4633, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4633)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose The purpose of this paper is to analyze the capabilities, functionalities and appropriateness of Altmetric.com as a data source for the bibliometric analysis of books in comparison to PlumX. Design/methodology/approach The authors perform an exploratory analysis of the metrics that the Altmetric Explorer for Institutions platform offers for books. The authors use two distinct data sets of books. On the one hand, the authors analyze the Book Collection included in Altmetric.com. On the other hand, the authors use Clarivate's Master Book List to analyze Altmetric.com's capabilities to download and merge data with external databases. Finally, the authors compare the findings with those obtained in a previous study performed in PlumX. Findings Altmetric.com combines and systematically tracks a set of data sources, linked by DOI identifiers, to retrieve metadata for books, with Google Books as its main provider. It also retrieves information from commercial publishers and from some Open Access initiatives, including those led by university libraries, such as Harvard Library. The authors find issues with linkages between records and mentions, as well as ISBN discrepancies. Furthermore, the authors find that automatic bots greatly affect Wikipedia mentions of books. The comparison with PlumX suggests that neither of these tools provides a complete picture of the social attention generated by books; they are complementary rather than comparable tools. Practical implications This study targets different audiences that can benefit from the findings. First, bibliometricians and researchers who seek alternative sources for bibliometric analyses of books, with a special focus on the Social Sciences and Humanities. Second, librarians and research managers, who are the main clients to whom these tools are directed. Third, Altmetric.com itself, as well as other altmetric providers, who might gain a better understanding of the limitations users encounter and improve this promising tool. Originality/value This is the first study to analyze Altmetric.com's functionalities and capabilities for providing metric data for books and to compare results from this platform with those obtained via PlumX.
    Date
    20. 1.2015 18:30:22
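    One practical step mentioned in the abstract is merging downloaded Altmetric.com book data with an external list (Clarivate's Master Book List) via shared identifiers. A generic pandas sketch of such a merge, with invented column names and records (not the authors' actual field layout), might look like this:

      import pandas as pd

      altmetric_books = pd.DataFrame({
          "doi": ["10.1000/aaa", "10.1000/bbb"],
          "mentions": [12, 3],
      })
      master_book_list = pd.DataFrame({
          "doi": ["10.1000/aaa", "10.1000/ccc"],
          "isbn": ["978-0-00-000000-1", "978-0-00-000000-2"],
      })

      # Left join on DOI keeps every Altmetric record and exposes unmatched ones as NaN.
      merged = altmetric_books.merge(master_book_list, on="doi", how="left")
      print(merged)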
  12. Ding, Y.: Applying weighted PageRank to author citation networks (2011) 0.03
    0.029208332 = product of:
      0.058416665 = sum of:
        0.036211025 = weight(_text_:data in 4188) [ClassicSimilarity], result of:
          0.036211025 = score(doc=4188,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 4188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4188)
        0.022205638 = product of:
          0.044411276 = sum of:
            0.044411276 = weight(_text_:22 in 4188) [ClassicSimilarity], result of:
              0.044411276 = score(doc=4188,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.2708308 = fieldWeight in 4188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4188)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field, and data from 1956-2008 were collected from the Web of Science. Weighted PageRank, with citation and publication counts as weighting vectors, was calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures.
    Date
    22. 1.2011 13:02:21
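    Ding computes weighted PageRank on author citation networks. The networkx sketch below shows the general mechanics on a toy directed graph; the graph, edge weights, and damping factor are placeholders rather than the study's actual configuration, and the weighting-by-publication variant is not shown.

      import networkx as nx

      G = nx.DiGraph()
      # Hypothetical author citation network: A cites B 5 times, etc.
      G.add_weighted_edges_from([
          ("A", "B", 5), ("A", "C", 2),
          ("B", "C", 7), ("C", "A", 1),
      ])

      # With weight="weight", link contributions are proportional to edge weights.
      wpr = nx.pagerank(G, alpha=0.85, weight="weight")
      print(sorted(wpr.items(), key=lambda kv: -kv[1]))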
  13. Zhao, M.; Yan, E.; Li, K.: Data set mentions and citations : a content analysis of full-text publications (2018) 0.03
    0.027977297 = product of:
      0.11190919 = sum of:
        0.11190919 = weight(_text_:data in 4008) [ClassicSimilarity], result of:
          0.11190919 = score(doc=4008,freq=26.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.75578237 = fieldWeight in 4008, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4008)
      0.25 = coord(1/4)
    
    Abstract
    This study provides evidence of data set mentions and citations in multiple disciplines based on a content analysis of 600 publications in PLoS One. We find that data set mentions and citations varied greatly among disciplines in terms of how data sets were collected, referenced, and curated. While a majority of articles provided free access to data, formal ways of data attribution such as DOIs and data citations were used in a limited number of articles. In addition, data reuse took place in less than 30% of the publications that used data, suggesting that researchers are still inclined to create and use their own data sets, rather than reusing previously curated data. This paper provides a comprehensive understanding of how data sets are used in science and helps institutions and publishers make useful data policies.
  14. Park, H.; You, S.; Wolfram, D.: Informal data citation for data sharing and reuse is more common than formal data citation in biomedical fields (2018) 0.03
    0.027433997 = product of:
      0.10973599 = sum of:
        0.10973599 = weight(_text_:data in 4544) [ClassicSimilarity], result of:
          0.10973599 = score(doc=4544,freq=36.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.7411056 = fieldWeight in 4544, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4544)
      0.25 = coord(1/4)
    
    Abstract
    Data citation, where products of research such as data sets, software, and tissue cultures are shared and acknowledged, is becoming more common in the era of Open Science. Currently, the practice of formal data citation-where data references are included alongside bibliographic references in the reference section of a publication-is uncommon. We examine the prevalence of data citation, documenting data sharing and reuse, in a sample of full text articles from the biological/biomedical sciences, the fields with the most public data sets available documented by the Data Citation Index (DCI). We develop a method that combines automated text extraction with human assessment for revealing candidate occurrences of data sharing and reuse by using terms that are most likely to indicate their occurrence. The analysis reveals that informal data citation in the main text of articles is far more common than formal data citations in the references of articles. As a result, data sharers do not receive documented credit for their data contributions in a similar way as authors do for their research articles because informal data citations are not recorded in sources such as the DCI. Ongoing challenges for the study of data citation are also outlined.
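    Park, You and Wolfram combine automated text extraction with human assessment, using terms likely to signal data sharing or reuse. A toy version of the automated pass, with an invented cue-phrase list and regular expressions (not the study's actual term list), could be:

      import re

      SIGNAL_PHRASES = [           # placeholder cue terms, not the study's full list
          r"data (are|is) available",
          r"deposited in",
          r"accession (number|code)",
          r"\b10\.\d{4,9}/\S+",    # DOI-like strings
      ]

      def candidate_data_mentions(text):
          """Return sentences containing any data-sharing cue, for manual review."""
          hits = []
          for sentence in re.split(r"(?<=[.!?])\s+", text):
              if any(re.search(p, sentence, flags=re.IGNORECASE) for p in SIGNAL_PHRASES):
                  hits.append(sentence)
          return hits

      sample = ("Sequence data are available at GenBank under accession number X123. "
                "We thank the funders.")
      print(candidate_data_mentions(sample))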
  15. Wan, X.; Liu, F.: Are all literature citations equally important? : automatic citation strength estimation and its applications (2014) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 1350) [ClassicSimilarity], result of:
          0.031038022 = score(doc=1350,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 1350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1350)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 1350) [ClassicSimilarity], result of:
              0.038066804 = score(doc=1350,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 1350, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1350)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Literature citation analysis plays a very important role in bibliometrics and scientometrics, underpinning indicators such as the Science Citation Index (SCI) impact factor and the h-index. Existing citation analysis methods assume that all citations in a paper are equally important, and they simply count the number of citations. Here we argue that the citations in a paper are not equally important and some citations are more important than others. We use a strength value to assess the importance of each citation and propose to use a regression method with a few useful features for automatically estimating the strength value of each citation. Evaluation results on a manually labeled data set in the computer science field show that the estimated values correlate well with human-labeled values. We further apply the estimated citation strength values to evaluating paper influence and author influence, and the preliminary evaluation results demonstrate the usefulness of the citation strength values.
    Date
    22. 8.2014 17:12:35
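    Wan and Liu estimate a per-citation strength value by regressing on a few features. As a schematic only, with invented features (e.g. how often the reference is mentioned, whether it appears in the methods section) and synthetic labels standing in for the human annotations:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      n = 100
      mention_count = rng.integers(1, 6, n)        # placeholder feature
      in_methods = rng.integers(0, 2, n)           # placeholder feature
      strength = 0.3 * mention_count + 0.5 * in_methods + rng.normal(0, 0.1, n)

      X = np.column_stack([mention_count, in_methods])
      model = LinearRegression().fit(X, strength)  # generic stand-in regressor
      print(model.coef_, model.intercept_)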
  16. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.02
    0.02414805 = product of:
      0.0482961 = sum of:
        0.02586502 = weight(_text_:data in 2590) [ClassicSimilarity], result of:
          0.02586502 = score(doc=2590,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 2590, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2590)
        0.022431081 = product of:
          0.044862162 = sum of:
            0.044862162 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
              0.044862162 = score(doc=2590,freq=4.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.27358043 = fieldWeight in 2590, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2590)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The purpose of this paper is to analyse the global scientific outputs of ontology research, an important emerging discipline that has huge potential to improve information understanding, organization, and management. Design/methodology/approach - This study collected literature published during 1900-2012 from the Web of Science database. The bibliometric analysis was performed from authorial, institutional, national, spatiotemporal, and topical aspects. Basic statistical analysis, visualization of geographic distribution, co-word analysis, and a new index were applied to the selected data. Findings - Characteristics of publication outputs suggested that ontology research has entered the soaring stage, along with increased participation and collaboration. The authors identified the leading authors, institutions, nations, and articles in ontology research. Authors were predominantly from North America, Europe, and East Asia. The USA took the lead, while China grew fastest. Four major categories of frequently used keywords were identified: applications in the Semantic Web, applications in bioinformatics, philosophy theories, and common supporting technology. Semantic Web research played a core role, and gene ontology research was well developed. The study focus of ontology has shifted from philosophy to information science. Originality/value - This is the first study to quantify global research patterns and trends in ontology, which might provide a guide for future research. The new index provides an alternative way to evaluate the multidisciplinary influence of researchers.
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
  17. Lievers, W.B.; Pilkey, A.K.: Characterizing the frequency of repeated citations : the effects of journal, subject area, and self-citation (2012) 0.02
    0.023530604 = product of:
      0.04706121 = sum of:
        0.02586502 = weight(_text_:data in 2725) [ClassicSimilarity], result of:
          0.02586502 = score(doc=2725,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 2725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2725)
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 2725) [ClassicSimilarity], result of:
              0.042392377 = score(doc=2725,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 2725, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2725)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Previous studies have repeatedly demonstrated that the relevance of a citing document is related to the number of times the source document is cited within it. Despite the ease with which electronic documents would permit the incorporation of this information into citation-based document search and retrieval systems, the possibilities of repeated citations remain untapped. Part of this under-utilization may be due to the fact that very little is known regarding the pattern of repeated citations in scholarly literature or how this pattern may vary as a function of journal, academic discipline or self-citation. The current research addresses these unanswered questions in order to facilitate the future incorporation of repeated citation information into document search and retrieval systems. Using data mining of electronic texts, the citation characteristics of nine different journals, covering three different academic fields (economics, computing, and medicine & biology), were characterized. It was found that the frequency (f) with which a reference is cited N or more times within a document is consistent across the sampled journals and academic fields. Self-citation causes an increase in frequency, and this effect becomes more pronounced for large N. The objectivity, automatability, and insensitivity of repeated citations to journal and discipline present powerful opportunities for improving citation-based document search.
    Source
    Information processing and management. 48(2012) no.6, S.1116-1123
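    The quantity Lievers and Pilkey characterize - the frequency with which a reference is cited N or more times within one document - reduces to counting in-text citation occurrences per reference. A toy count over a list of extracted citation keys (invented here) illustrates the bookkeeping:

      from collections import Counter

      # In-text citation keys extracted from one article (placeholder data).
      in_text_citations = ["Smith2001", "Lee2005", "Smith2001", "Smith2001", "Kao1966"]

      per_reference = Counter(in_text_citations)

      def f_at_least(n, counts):
          """Fraction of references cited at least n times in the document."""
          return sum(1 for c in counts.values() if c >= n) / len(counts)

      print(per_reference)
      print(f_at_least(2, per_reference))   # -> one third of references are repeated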
  18. Vaughan, L.; Yang, R.: Web data as academic and business quality estimates : a comparison of three data sources (2012) 0.02
    0.023314415 = product of:
      0.09325766 = sum of:
        0.09325766 = weight(_text_:data in 452) [ClassicSimilarity], result of:
          0.09325766 = score(doc=452,freq=26.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.6298187 = fieldWeight in 452, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=452)
      0.25 = coord(1/4)
    
    Abstract
    Earlier studies found that web hyperlink data contain various types of information, ranging from academic to political, that can be used to analyze a variety of social phenomena. Specifically, the numbers of inlinks to academic websites are associated with academic performance, while the counts of inlinks to company websites correlate with business variables. However, the scarcity of sources from which to collect inlink data in recent years has required us to seek new data sources. The recent demise of the inlink search function of Yahoo! made this need more pressing. Different alternative variables or data sources have been proposed. This study compared three types of web data to determine which are better as academic and business quality estimates and what the relationships among the three data sources are. The study found that Alexa inlink and Google URL citation data can replace Yahoo! inlink data and that the former is better than the latter. Alexa is even better than Yahoo!, which has been the main data source in recent years. The unique nature of Alexa data could explain its relative advantages over other data sources.
  19. Vaughan, L.; Ninkov, A.: ¬A new approach to web co-link analysis (2018) 0.02
    0.022399765 = product of:
      0.08959906 = sum of:
        0.08959906 = weight(_text_:data in 4256) [ClassicSimilarity], result of:
          0.08959906 = score(doc=4256,freq=24.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.60511017 = fieldWeight in 4256, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4256)
      0.25 = coord(1/4)
    
    Abstract
    Numerous web co-link studies have analyzed a wide variety of websites ranging from those in the academic and business arena to those dealing with politics and governments. Such studies uncover rich information about these organizations. In recent years, however, there has been a dearth of co-link analysis, mainly due to the lack of sources from which co-link data can be collected directly. Although several commercial services such as Alexa provide inlink data, none provide co-link data. We propose a new approach to web co-link analysis that can alleviate this problem so that researchers can continue to mine the valuable information contained in co-link data. The proposed approach has two components: (a) generating co-link data from inlink data using a computer program; (b) analyzing co-link data at the site level in addition to the page level that previous co-link analyses have used. The site-level analysis has the potential of expanding co-link data sources. We tested this proposed approach by analyzing a group of websites focused on vaccination using Moz inlink data. We found that the approach is feasible, as we were able to generate co-link data from inlink data and analyze the co-link data with multidimensional scaling.
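    The first component of Vaughan and Ninkov's approach - generating co-link data from inlink data - can be stated compactly: two target sites are co-linked by every source site that links to both of them. A small sketch with made-up inlink lists (the site names are placeholders, and the actual study used Moz inlink data):

      from collections import Counter
      from itertools import combinations

      # Placeholder inlink data: target site -> set of linking (source) sites.
      inlinks = {
          "vaxfacts.org": {"cdc.gov", "who.int", "news.com"},
          "immunize.org": {"cdc.gov", "who.int"},
          "skeptics.net": {"news.com"},
      }

      colinks = Counter()
      for site_a, site_b in combinations(inlinks, 2):
          shared_sources = inlinks[site_a] & inlinks[site_b]
          colinks[(site_a, site_b)] = len(shared_sources)

      print(colinks)   # e.g. ('vaxfacts.org', 'immunize.org') -> 2 co-links

    The resulting pairwise counts are what a site-level analysis (for example with multidimensional scaling) would then take as input.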
  20. Lamb, I.; Larson, C.: Shining a light on scientific data : building a data catalog to foster data sharing and reuse (2016) 0.02
    0.021947198 = product of:
      0.08778879 = sum of:
        0.08778879 = weight(_text_:data in 3195) [ClassicSimilarity], result of:
          0.08778879 = score(doc=3195,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5928845 = fieldWeight in 3195, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3195)
      0.25 = coord(1/4)
    
    Abstract
    The scientific community's growing eagerness to make research data available to the public provides libraries - with our expertise in metadata and discovery - an interesting new opportunity. This paper details the in-house creation of a "data catalog" which describes datasets ranging from population-level studies like the US Census to small, specialized datasets created by researchers at our own institution. Based on Symfony2 and Solr, the data catalog provides a powerful search interface to help researchers locate the data that can help them, and an administrative interface so librarians can add, edit, and manage metadata elements at will. This paper will outline the successes, failures, and total redos that culminated in the current manifestation of our data catalog.

Languages

  • e 222
  • d 6

Types

  • a 222
  • m 6
  • el 5
  • s 3