Search (78 results, page 1 of 4)

  • × year_i:[2010 TO 2020}
  • × theme_ss:"Informetrie"
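The two active filters above are Solr filter queries (the `_i` / `_ss` field suffixes are common Solr dynamic-field conventions), and `[2010 TO 2020}` is Solr range syntax: 2010 inclusive, 2020 exclusive. A hypothetical sketch of the request parameters behind a page like this; the query terms are inferred from the matched fields in the score breakdowns below and are an assumption, as is the use of `debugQuery`:

```python
# Hypothetical reconstruction of the Solr request parameters behind this
# result page; the fq values are taken from the page, the q terms are guessed.
params = {
    "q": "systematic indexing 22",   # assumed query terms (inferred, not shown on the page)
    "fq": [
        "year_i:[2010 TO 2020}",     # range: 2010 inclusive .. 2020 exclusive
        'theme_ss:"Informetrie"',
    ],
    "rows": 20,
    "debugQuery": "true",            # asks Solr for per-document score explanations
}
```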
  1. Thelwall, M.: Are Mendeley reader counts high enough for research evaluations when articles are published? (2017) 0.11
    0.10902387 = product of:
      0.1635358 = sum of:
        0.08966068 = weight(_text_:systematic in 3806) [ClassicSimilarity], result of:
          0.08966068 = score(doc=3806,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 3806, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3806)
        0.07387512 = sum of:
          0.04021717 = weight(_text_:indexing in 3806) [ClassicSimilarity], result of:
            0.04021717 = score(doc=3806,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.21146181 = fieldWeight in 3806, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3806)
          0.033657953 = weight(_text_:22 in 3806) [ClassicSimilarity], result of:
            0.033657953 = score(doc=3806,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.19345059 = fieldWeight in 3806, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3806)
      0.6666667 = coord(2/3)
    
    Abstract
     Purpose: Mendeley reader counts have been proposed as early indicators for the impact of academic publications. The purpose of this paper is to assess whether there are enough Mendeley readers for research evaluation purposes during the month when an article is first published.
     Design/methodology/approach: Average Mendeley reader counts were compared to the average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016.
     Findings: Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times more than the average Scopus citation count.
     Research limitations/implications: Other disciplines may use Mendeley more or less than the ten investigated here. The results are dependent on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases.
     Practical implications: Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents, but are not high enough to differentiate between the impacts of typical individual articles.
     Originality/value: This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
    Date
    20. 1.2015 18:30:22
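The indented breakdown under each result is Lucene `explain` output for ClassicSimilarity (tf-idf) scoring. A minimal sketch that reproduces the numbers for the `systematic` term in result 1, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1); `queryNorm` and `fieldNorm` are taken as given from the output above:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: ln(maxDocs / (docFreq + 1)) + 1
    return math.log(max_docs / (doc_freq + 1)) + 1

def tf(freq):
    # ClassicSimilarity tf: square root of the term frequency
    return math.sqrt(freq)

# Values taken directly from the explain output for doc 3806 above
query_norm = 0.049684696
field_norm = 0.0390625

idf_systematic = idf(395, 44218)                      # ~5.715473
query_weight = idf_systematic * query_norm            # ~0.28397155
field_weight = tf(2.0) * idf_systematic * field_norm  # ~0.31573826
term_score = query_weight * field_weight              # ~0.08966068

# The document score multiplies the summed term scores by a coordination
# factor (matched query clauses / total query clauses), here 2/3:
doc_score = (0.08966068 + 0.07387512) * (2 / 3)       # ~0.10902387
```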
  2. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.07
    0.07099311 = product of:
      0.10648966 = sum of:
        0.08966068 = weight(_text_:systematic in 4186) [ClassicSimilarity], result of:
          0.08966068 = score(doc=4186,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 4186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4186)
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.033657953 = score(doc=4186,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set, including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs considerably between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for classification.
    Date
    22. 1.2011 12:51:07
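Fractional counting, as described in the abstract above, weights each citation by the citing document's number of references, so that impact is normalized in terms of the citing source. A minimal sketch (function name and data shape are illustrative, not from the paper):

```python
def fractional_citations(citing_ref_counts):
    # Each citing document contributes 1 / (its number of cited references)
    # instead of a whole count of 1.
    return sum(1.0 / n_refs for n_refs in citing_ref_counts)

# A paper cited by three documents with 10, 20 and 50 references:
score = fractional_citations([10, 20, 50])  # 0.1 + 0.05 + 0.02 = 0.17
```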
  3. Shah, T.A.; Gul, S.; Gaur, R.C.: Authors self-citation behaviour in the field of Library and Information Science (2015) 0.05
    0.049695175 = product of:
      0.07454276 = sum of:
        0.06276248 = weight(_text_:systematic in 2597) [ClassicSimilarity], result of:
          0.06276248 = score(doc=2597,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.22101676 = fieldWeight in 2597, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2597)
        0.011780283 = product of:
          0.023560567 = sum of:
            0.023560567 = weight(_text_:22 in 2597) [ClassicSimilarity], result of:
              0.023560567 = score(doc=2597,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.1354154 = fieldWeight in 2597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2597)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Purpose: The purpose of this paper is to analyse author self-citation behaviour in the field of Library and Information Science, along with the various factors governing that behaviour.
     Design/methodology/approach: The 2012 edition of the Social Science Citation Index was consulted for the selection of LIS journals. Under the subject heading "Information Science and Library Science" there were 84 journals, and of these 12 were selected for the study based on systematic sampling. The study was confined to original research and review articles published in the selected journals in 2009. The main reason to choose 2009 was to obtain at least five years (2009-2013) of citation data from the Web of Science Core Collection (excluding the Book Citation Index) and the SciELO Citation Index. A citation was treated as a self-citation whenever one of the authors of the citing and cited paper was common, i.e., the sets of co-authors of the citing and the cited paper are not disjoint. To minimize the risk of homonyms, spelling variants and misspellings in authors' names, the authors compared full author names in citing and cited articles.
     Findings: A positive correlation between the number of authors and the total number of citations exists, with no correlation between the number of authors and the number/share of self-citations, i.e., self-citations are not affected by the number of co-authors in a paper. Articles produced in collaboration attract more self-citations than articles produced by a single author. There is no statistically significant variation in citation counts (total and self-citations) in works that result from different types of collaboration. A strong and statistically significant positive correlation exists between total citation count and the frequency of self-citations. No relation could be ascertained between total citation count and the proportion of self-citations. Authors tend to cite more of their recent works than the works of other authors. Total citation count and number of self-citations are positively correlated with the impact factor of the source publication, and the correlation coefficient for total citations is much higher than that for self-citations. A negative correlation exists between impact factor and the share of self-citations. Of particular note is that the correlation in all cases is weak.
     Research limitations/implications: The research provides an understanding of author self-citations in the field of LIS.
     Originality/value: Readers are encouraged to further the study by taking a larger sample into account, tracing citations also from the Book Citation Index (WoS), and comparing the results with other allied subjects so as to validate the robustness of the findings of this study.
    Date
    20. 1.2015 18:30:22
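The self-citation rule in the abstract above (a citation is a self-citation whenever the citing and cited author sets are not disjoint) maps directly onto a set operation. A minimal sketch, assuming author names have already been normalized as the authors describe:

```python
def is_self_citation(citing_authors, cited_authors):
    # Self-citation iff at least one author appears on both papers,
    # i.e. the two co-author sets are not disjoint.
    return not set(citing_authors).isdisjoint(cited_authors)

# Shared co-author "Gul, S." makes this a self-citation:
is_self_citation(["Shah, T.A.", "Gul, S."], ["Gul, S.", "Gaur, R.C."])  # True
```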
  4. Leydesdorff, L.; Opthof, T.: Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations (2010) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 4107) [ClassicSimilarity], result of:
          0.10759281 = score(doc=4107,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 4107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=4107)
      0.33333334 = coord(1/3)
    
    Abstract
     Impact factors (and similar measures such as the Scimago Journal Rankings) suffer from two problems: (a) citation behavior varies among fields of science and therefore leads to systematic differences, and (b) there are no statistics to inform us whether differences are significant. The recently introduced "source normalized impact per paper" (SNIP) indicator of Scopus tries to remedy the first of these two problems, but a number of normalization decisions are involved, which makes it impossible to test for significance. Using fractional counting of citations (based on the assumption that impact is proportionate to the number of references in the citing documents), citations can be contextualized at the paper level and the aggregated impacts of sets can be tested for their significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not so much lower than that of Molecular Cell (0.386), despite a fivefold difference between their impact factors (2.793 and 13.156, respectively).
  5. Zhang, C.; Bu, Y.; Ding, Y.; Xu, J.: Understanding scientific collaboration : homophily, transitivity, and preferential attachment (2018) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 4011) [ClassicSimilarity], result of:
          0.10759281 = score(doc=4011,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 4011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=4011)
      0.33333334 = coord(1/3)
    
    Abstract
     Scientific collaboration is essential in solving problems and breeding innovation. Coauthor network analysis has long been used to study scholars' collaborations, but these studies have not simultaneously taken different collaboration features into consideration. In this paper, we present a systematic approach to analyzing how the probability that two authors will cooperate differs under the effects of homophily, transitivity, and preferential attachment. Exponential random graph models (ERGMs) are applied in this research. We find that the different types of publications an author has written play diverse roles in his/her collaborations. An author's tendency to form new collaborations with his/her coauthors' collaborators is strong: the more coauthors an author had before, the more new collaborators he/she will attract. We demonstrate that considering authors' attributes and homophily effects, as well as the transitivity and preferential attachment effects of the coauthorship network in which they are embedded, helps us gain a comprehensive understanding of scientific collaboration.
  6. Thelwall, M.: ¬A comparison of link and URL citation counting (2011) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 4533) [ClassicSimilarity], result of:
          0.08966068 = score(doc=4533,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 4533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4533)
      0.33333334 = coord(1/3)
    
    Abstract
     Purpose: Link analysis is an established topic within webometrics. It normally uses counts of links between sets of web sites or to sets of web sites. These link counts are derived from web crawlers or commercial search engines, with the latter being the only alternative for some investigations. This paper compares link counts with URL citation counts in order to assess whether the latter could be a replacement for the former if the major search engines withdraw their advanced hyperlink search facilities.
     Design/methodology/approach: URL citation counts are compared with link counts for a variety of data sets used in previous webometric studies.
     Findings: The results show a high degree of correlation between the two, but with URL citations being much less numerous, at least outside academia and business.
     Research limitations/implications: The results cover a small selection of 15 case studies and so the findings are only indicative. Significant differences between results indicate that the difference between link counts and URL citation counts will vary between webometric studies.
     Practical implications: Should link searches be withdrawn, then link analyses of less well linked non-academic, non-commercial sites would be seriously weakened, although citations based on e-mail addresses could help to make citations more numerous than links for some business and academic contexts.
     Originality/value: This is the first systematic study of the difference between link counts and URL citation counts in a variety of contexts, and it shows that there are significant differences between the two.
  7. Haustein, S.; Peters, I.; Sugimoto, C.R.; Thelwall, M.; Larivière, V.: Tweeting biomedicine : an analysis of tweets and citations in the biomedical literature (2014) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 1229) [ClassicSimilarity], result of:
          0.08966068 = score(doc=1229,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 1229, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1229)
      0.33333334 = coord(1/3)
    
    Abstract
    Data collected by social media platforms have been introduced as new sources for indicators to help measure the impact of scholarly research in ways that are complementary to traditional citation analysis. Data generated from social media activities can be used to reflect broad types of impact. This article aims to provide systematic evidence about how often Twitter is used to disseminate information about journal articles in the biomedical sciences. The analysis is based on 1.4 million documents covered by both PubMed and Web of Science and published between 2010 and 2012. The number of tweets containing links to these documents was analyzed and compared to citations to evaluate the degree to which certain journals, disciplines, and specialties were represented on Twitter and how far tweets correlate with citation impact. With less than 10% of PubMed articles mentioned on Twitter, its uptake is low in general but differs between journals and specialties. Correlations between tweets and citations are low, implying that impact metrics based on tweets are different from those based on citations. A framework using the coverage of articles and the correlation between Twitter mentions and citations is proposed to facilitate the evaluation of novel social-media-based metrics.
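Studies like the one above typically compare tweet counts to citation counts with a rank correlation, since both distributions are highly skewed. A minimal Spearman sketch using only the standard library (no tie handling, for simplicity; the study itself may use a different implementation):

```python
def pearson(x, y):
    # Plain Pearson product-moment correlation
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman = Pearson correlation of the rank-transformed values.
    # Assumes no ties, which real tweet/citation data would need to handle.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

# Illustrative (made-up) per-article tweet and citation counts:
rho = spearman([0, 2, 15, 1, 40], [3, 1, 30, 2, 80])  # 0.6
```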
  8. Xu, L.: Research synthesis methods and library and information science : shared problems, limited diffusion (2016) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 3057) [ClassicSimilarity], result of:
          0.08966068 = score(doc=3057,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 3057, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3057)
      0.33333334 = coord(1/3)
    
    Abstract
     Interests of researchers who engage with research synthesis methods (RSM) intersect with library and information science (LIS) research and practice. This intersection is described by a summary of conceptualizations of research synthesis in a diverse set of research fields and in the context of Swanson's (1986) discussion of undiscovered public knowledge. Through a selective literature review, research topics that intersect with LIS and RSM are outlined. Topics identified include open access, information retrieval, bias and research information ethics, referencing practices, citation patterns, and data science. Subsequently, bibliometrics and topic modeling are used to present a systematic overview of the visibility of RSM in LIS. This analysis indicates that RSM became visible in LIS in the 1980s. Overall, LIS research has drawn substantially from general and internal medicine, the field's own literature, and business; and is drawn on by health and medical sciences, computing, and business. Through this analytical overview, it is confirmed that research synthesis is more visible in the health and medical literature in LIS, but it is suggested that LIS, as a meta-science, has the potential to make substantive contributions to a broader variety of fields in the context of topics related to research synthesis methods.
  9. Barnes, C.S.: ¬The construct validity of the h-index (2016) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 3165) [ClassicSimilarity], result of:
          0.08966068 = score(doc=3165,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 3165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3165)
      0.33333334 = coord(1/3)
    
    Abstract
     Purpose: The purpose of this paper is to show how bibliometrics would benefit from a stronger programme of construct validity.
     Design/methodology/approach: The value of the construct validity concept is demonstrated by applying this approach to the evaluation of the h-index, a widely used metric.
     Findings: The paper demonstrates that the h-index comprehensively fails any test of construct validity. In simple terms, the metric does not measure what it purports to measure. This conclusion suggests that the current popularity of the h-index as a topic for bibliometric research represents wasted effort, which might have been avoided if researchers had adopted the approach suggested in this paper.
     Research limitations/implications: This study is based on the analysis of a single bibliometric concept.
     Practical implications: The conclusion that the h-index fails any test in terms of construct validity implies that the widespread use of this metric within the higher education sector as a management tool represents poor practice, and almost certainly results in the misallocation of resources.
     Social implications: This paper suggests that the current enthusiasm for the h-index within the higher education sector is misplaced. The implication is that universities, grant funding bodies and faculty administrators should abandon the use of the h-index as a management tool. Such a change would have a significant effect on current hiring, promotion and tenure practices within the sector, as well as current attitudes towards the measurement of academic performance.
     Originality/value: The originality of the paper lies in the systematic application of the concept of construct validity to bibliometric enquiry.
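For reference, the h-index that this paper evaluates is straightforward to compute: the largest h such that at least h of an author's papers have at least h citations each. A minimal sketch:

```python
def h_index(citations):
    # h is the largest number such that at least h papers
    # have at least h citations each.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations: four papers have >= 4 citations,
# but not five papers with >= 5, so h = 4.
h_index([10, 8, 5, 4, 3])  # 4
```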
  10. Walters, W.H.; Linvill, A.C.: Bibliographic index coverage of open-access journals in six subject areas (2011) 0.02
    0.02462504 = product of:
      0.07387512 = sum of:
        0.07387512 = sum of:
          0.04021717 = weight(_text_:indexing in 4635) [ClassicSimilarity], result of:
            0.04021717 = score(doc=4635,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.21146181 = fieldWeight in 4635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4635)
          0.033657953 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
            0.033657953 = score(doc=4635,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.19345059 = fieldWeight in 4635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4635)
      0.33333334 = coord(1/3)
    
    Abstract
     We investigate the extent to which open-access (OA) journals and articles in biology, computer science, economics, history, medicine, and psychology are indexed in each of 11 bibliographic databases. We also look for variations in index coverage by journal subject, journal size, publisher type, publisher size, date of first OA issue, region of publication, language of publication, publication fee, and citation impact factor. Two databases, Biological Abstracts and PubMed, provide very good coverage of the OA journal literature, indexing 60 to 63% of all OA articles in their disciplines. Five databases provide moderately good coverage (22-41%), and four provide relatively poor coverage (0-12%). OA articles in biology journals, English-only journals, high-impact journals, and journals that charge publication fees of $1,000 or more are especially likely to be indexed. Conversely, articles from OA publishers in Africa, Asia, or Central/South America are especially unlikely to be indexed. Four of the 11 databases index commercially published articles at a substantially higher rate than articles published by universities, scholarly societies, nonprofit publishers, or governments. Finally, three databases (EBSCO Academic Search Complete, ProQuest Research Library, and Wilson OmniFile) provide less comprehensive coverage of OA articles than of articles in comparable subscription journals.
  11. Abdelkareem, M.A.A.: In terms of publication index, what indicator is the best for researchers indexing, Google Scholar, Scopus, Clarivate or others? (2018) 0.02
    0.020983277 = product of:
      0.06294983 = sum of:
        0.06294983 = product of:
          0.12589966 = sum of:
            0.12589966 = weight(_text_:indexing in 4548) [ClassicSimilarity], result of:
              0.12589966 = score(doc=4548,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6619802 = fieldWeight in 4548, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4548)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     I believe that Google Scholar is the most popular academic indexing tool for researchers and citations. However, some other indexing institutions may be more professional than Google Scholar, though not as popular. Other indexing websites, like Scopus and Clarivate, provide more statistical figures for scholars, institutions or even journals. In terms of publication citations, Google Scholar always shows higher citation counts for a paper than other indexing websites, since Google Scholar considers most publication platforms and can thus easily count the citations, while other databases only count citations coming from journals that are already indexed in their own databases.
  12. Stuart, D.: Web metrics for library and information professionals (2014) 0.02
    0.020920826 = product of:
      0.06276248 = sum of:
        0.06276248 = weight(_text_:systematic in 2274) [ClassicSimilarity], result of:
          0.06276248 = score(doc=2274,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.22101676 = fieldWeight in 2274, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
      0.33333334 = coord(1/3)
    
    Content
     1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book
     2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results
     3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation
     4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis
     5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis
     6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis
     7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context
     8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler
     9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics
  13. Chaves Guimarães, J.A.; Tennis, J.T.: Constant pioneers : the citation frontiers of indexing theory in the ISKO international proceedings (2012) 0.02
    0.018575516 = product of:
      0.055726547 = sum of:
        0.055726547 = product of:
          0.11145309 = sum of:
            0.11145309 = weight(_text_:indexing in 818) [ClassicSimilarity], result of:
              0.11145309 = score(doc=818,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5860202 = fieldWeight in 818, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=818)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a citation analysis of indexing research in the ISKO Proceedings. Understanding that there are different traditions of research into indexing, we look for evidence of this in the citing and cited authors. Three areas of cited and citing authors surface, after applying Price's elitism analysis, each roughly corresponding to geographic distributions.
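The "Price's elitism analysis" applied in the abstract above is commonly read as Price's square-root law: the elite of a population of n contributors is roughly its sqrt(n) most prolific members. A minimal sketch under that assumed reading (the function name and data shape are illustrative):

```python
import math

def price_elite(author_counts):
    # Assumed reading of Price's elitism: take the sqrt(n) most prolific
    # of n authors, where author_counts maps author -> contribution count.
    n = len(author_counts)
    k = max(1, round(math.sqrt(n)))
    return sorted(author_counts, key=author_counts.get, reverse=True)[:k]

# Four authors -> an elite of sqrt(4) = 2:
price_elite({"a": 10, "b": 5, "c": 2, "d": 1})  # ["a", "b"]
```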
  14. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.02
    Abstract
    One of the solutions to help scientists filter the most relevant publications and, thus, to stay current on developments in their fields during the transition from "little science" to "big science" was the introduction of citation indexing as a Wellsian "World Brain" (Garfield, 1964) of scientific information: It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable (Garfield, 1955, p. 108). In retrospect, citation indexing can be perceived as a pre-social-web version of crowdsourcing, as it is based on the concept that the community of citing authors outperforms indexers in highlighting cognitive links between papers, particularly on the level of specific ideas and concepts (Garfield, 1983). Over the last 50 years, citation analysis and, more generally, bibliometric methods have developed from information retrieval tools into research evaluation metrics, where they are presumed to make scientific funding more efficient and effective (Moed, 2006). However, the dominance of bibliometric indicators in research evaluation has also led to significant goal displacement (Merton, 1957) and the oversimplification of notions of "research productivity" and "scientific quality", creating adverse effects such as salami publishing, honorary authorships, citation cartels, and misuse of indicators (Binswanger, 2015; Cronin and Sugimoto, 2014; Frey and Osterloh, 2006; Haustein and Larivière, 2015; Weingart, 2005).
    Date
    20. 1.2015 18:30:22
  15. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.01
    Date
    18. 3.2014 19:13:22
  16. Pellack, L.J.; Kappmeyer, L.O.: ¬The ripple effect of women's name changes in indexing, citation, and authority control (2011) 0.01
    Abstract
    This study investigated name changes of women authors to determine how they were represented in indexes and cited references, and to identify problem areas. A secondary purpose of the study was to investigate whether indexing services were using authority control and how this influenced search results. The works of eight library science authors who had published under multiple names were examined. The researchers compared author names as they appeared on title pages of publications versus in four online databases and in bibliographies by checking 380 publications and 1,159 citations. Author names were correctly provided 81.22% of the time in indexing services and 90.94% of the time in citation lists. The lowest accuracy (54.55%) occurred when limiting to publications found in Library Literature. The highest accuracy (94.18%) occurred with works published before a surname changed. Author names in indexes and citations correctly matched names on journal articles more often than for any other type of publication. Indexes and citation style manuals treated author names in multiple ways, often altering names substantially from how they appear on the title page. Recommendations are made for changes in editorial styles by indexing services and by the authors themselves to help alleviate future confusion in author name searching.
  17. MacRoberts, M.H.; MacRoberts, B.R.: Problems of citation analysis : a study of uncited and seldom-cited influences (2010) 0.01
    Theme
    Citation indexing
  18. Hellqvist, B.: Referencing in the humanities and its implications for citation analysis (2010) 0.01
    Theme
    Citation indexing
  19. Klein, A.: Von der Schneeflocke zur Lawine : Möglichkeiten der Nutzung freier Zitationsdaten in Bibliotheken [From snowflake to avalanche : possibilities for using free citation data in libraries] (2017) 0.01
    Theme
    Citation indexing
  20. MacRoberts, M.H.; MacRoberts, B.R.: ¬The mismeasure of science : citation analysis (2018) 0.01
    Theme
    Citation indexing

Languages

  • e 74
  • d 4

Types

  • a 75
  • el 2
  • m 2
  • s 1