Search (73 results, page 1 of 4)

  • theme_ss:"Informetrie"
  • year_i:[2010 TO 2020}
  1. Herb, U.; Beucke, D.: ¬Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.04
    0.03696644 = product of:
      0.22179863 = sum of:
        0.22179863 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.22179863 = score(doc=2188,freq=2.0), product of:
            0.2959851 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03491209 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.16666667 = coord(1/6)
    
    Content
    Cf.: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
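The indented figures under each hit are Lucene "explain" output for its classic TF-IDF similarity. As a sketch of how the top score arises, assuming Lucene's ClassicSimilarity formulas (tf = √freq, idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight × fieldWeight × coord; the function name below is illustrative):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    """Recompute a single-term Lucene ClassicSimilarity score."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24
    query_weight = idf * query_norm                  # 0.2959851
    field_weight = tf * idf * field_norm             # 0.7493574
    return query_weight * field_weight * coord

# Values taken from the explain tree of result 1 (term "2f", doc 2188);
# one of six query clauses matched, hence coord = 1/6.
score = classic_similarity(freq=2.0, doc_freq=24, max_docs=44218,
                           query_norm=0.03491209, field_norm=0.0625,
                           coord=1 / 6)
# score comes out close to the 0.03696644 reported above
```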
  2. Pellack, L.J.; Kappmeyer, L.O.: ¬The ripple effect of women's name changes in indexing, citation, and authority control (2011) 0.03
    0.028218701 = product of:
      0.084656104 = sum of:
        0.031560984 = weight(_text_:searching in 4347) [ClassicSimilarity], result of:
          0.031560984 = score(doc=4347,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 4347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4347)
        0.053095117 = product of:
          0.106190234 = sum of:
            0.106190234 = weight(_text_:manuals in 4347) [ClassicSimilarity], result of:
              0.106190234 = score(doc=4347,freq=2.0), product of:
                0.25905544 = queryWeight, product of:
                  7.4202213 = idf(docFreq=71, maxDocs=44218)
                  0.03491209 = queryNorm
                0.40991318 = fieldWeight in 4347, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4202213 = idf(docFreq=71, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4347)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
This study investigated name changes of women authors to determine how they were represented in indexes and cited references and to identify problem areas. A secondary purpose was to investigate whether indexing services were using authority control and how this influenced search results. The works of eight library science authors who had published under multiple names were examined. The researchers compared author names as they appeared on the title pages of publications with their forms in four online databases and in bibliographies, checking 380 publications and 1,159 citations. Author names were given correctly 81.22% of the time in indexing services and 90.94% of the time in citation lists. The lowest accuracy (54.55%) occurred when limiting to publications found in Library Literature. The highest accuracy (94.18%) occurred with works published before a surname changed. Author names in indexes and citations correctly matched names on journal articles more often than for any other type of publication. Indexes and citation style manuals treated author names in multiple ways, often altering names substantially from how they appear on the title page. Recommendations are made for changes in editorial styles by indexing services and by authors themselves to help alleviate future confusion in author name searching.
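The explain tree of result 2 shows how Lucene combines several matched clauses: clause weights are summed and scaled by coord(matched/total), and a nested disjunction carries its own coord factor. A minimal re-derivation, with the weights copied from the tree for doc 4347 and assuming ClassicSimilarity semantics:

```python
# Clause weights as reported in the explain output for doc 4347.
w_searching = 0.031560984        # weight(_text_:searching in 4347)
w_manuals = 0.106190234          # weight(_text_:manuals in 4347)

# The "manuals" weight sits inside a nested disjunction where one of
# two sub-clauses matched, so it is scaled by coord(1/2).
inner = w_manuals * (1 / 2)      # 0.053095117

# At the top level, two of six query clauses matched: coord(2/6).
score = (w_searching + inner) * (2 / 6)
# score comes out close to the 0.028218701 reported above
```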
  3. Costas, R.; Zahedi, Z.; Wouters, P.: ¬The thematic orientation of publications mentioned on social media : large-scale disciplinary comparison of social media metrics with citations (2015) 0.01
    0.013372278 = product of:
      0.08023366 = sum of:
        0.08023366 = sum of:
          0.056583133 = weight(_text_:etc in 2598) [ClassicSimilarity], result of:
            0.056583133 = score(doc=2598,freq=2.0), product of:
              0.18910104 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.03491209 = queryNorm
              0.2992217 = fieldWeight in 2598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
          0.023650533 = weight(_text_:22 in 2598) [ClassicSimilarity], result of:
            0.023650533 = score(doc=2598,freq=2.0), product of:
              0.1222562 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03491209 = queryNorm
              0.19345059 = fieldWeight in 2598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The purpose of this paper is to analyze the disciplinary orientation of scientific publications that were mentioned on different social media platforms, focussing on their differences and similarities with citation counts. Design/methodology/approach - Social media metrics and readership counts, associated with 500,216 publications and their citation data from the Web of Science database, were collected from Altmetric.com and Mendeley. Results are presented through descriptive statistical analyses together with science maps generated with VOSviewer. Findings - The results confirm Mendeley as the most prevalent social media source with similar characteristics to citations in their distribution across fields and their density in average values per publication. The humanities, natural sciences, and engineering disciplines have a much lower presence of social media metrics. Twitter has a stronger focus on general medicine and social sciences. Other sources (blog, Facebook, Google+, and news media mentions) are more prominent in regards to multidisciplinary journals. Originality/value - This paper reinforces the relevance of Mendeley as a social media source for analytical purposes from a disciplinary perspective, being particularly relevant for the social sciences (together with Twitter). Key implications for the use of social media metrics on the evaluation of research performance (e.g. the concentration of some social media metrics, such as blogs, news items, etc., around multidisciplinary journals) are identified.
    Date
    20. 1.2015 18:30:22
  4. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.01
    0.00867725 = product of:
      0.02603175 = sum of:
        0.01893659 = weight(_text_:searching in 3809) [ClassicSimilarity], result of:
          0.01893659 = score(doc=3809,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.13408373 = fieldWeight in 3809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.0070951595 = product of:
          0.014190319 = sum of:
            0.014190319 = weight(_text_:22 in 3809) [ClassicSimilarity], result of:
              0.014190319 = score(doc=3809,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.116070345 = fieldWeight in 3809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3809)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
One of the solutions to help scientists filter the most relevant publications and, thus, to stay current on developments in their fields during the transition from "little science" to "big science" was the introduction of citation indexing as a Wellsian "World Brain" (Garfield, 1964) of scientific information: It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable (Garfield, 1955, p. 108). In retrospect, citation indexing can be perceived as a pre-social web version of crowdsourcing, as it is based on the concept that the community of citing authors outperforms indexers in highlighting cognitive links between papers, particularly on the level of specific ideas and concepts (Garfield, 1983). Over the last 50 years, citation analysis and, more generally, bibliometric methods have developed from information retrieval tools to research evaluation metrics, where they are presumed to make scientific funding more efficient and effective (Moed, 2006). However, the dominance of bibliometric indicators in research evaluation has also led to significant goal displacement (Merton, 1957) and the oversimplification of notions of "research productivity" and "scientific quality", creating adverse effects such as salami publishing, honorary authorships, citation cartels, and misuse of indicators (Binswanger, 2015; Cronin and Sugimoto, 2014; Frey and Osterloh, 2006; Haustein and Larivière, 2015; Weingart, 2005).
    Date
    20. 1.2015 18:30:22
  5. He, J.; Ping, Q.; Lou, W.; Chen, C.: PaperPoles : facilitating adaptive visual exploration of scientific publications by citation links (2019) 0.01
    0.007438996 = product of:
      0.044633973 = sum of:
        0.044633973 = weight(_text_:searching in 5326) [ClassicSimilarity], result of:
          0.044633973 = score(doc=5326,freq=4.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.3160384 = fieldWeight in 5326, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5326)
      0.16666667 = coord(1/6)
    
    Abstract
    Finding relevant publications is a common task. Typically, a researcher browses through a list of publications and traces additional relevant publications. When relevant publications are identified, the list may be expanded by the citation links of the relevant publications. The information needs of researchers may change as they go through such iterative processes. The exploration process quickly becomes cumbersome as the list expands. Most existing academic search systems tend to be limited in terms of the extent to which searchers can adapt their search as they proceed. In this article, we introduce an adaptive visual exploration system named PaperPoles to support exploration of scientific publications in a context-aware environment. Searchers can express their information needs by intuitively formulating positive and negative queries. The search results are grouped and displayed in a cluster view, which shows aspects and relevance patterns of the results to support navigation and exploration. We conducted an experiment to compare PaperPoles with a list-based interface in performing two academic search tasks with different complexity. The results show that PaperPoles can improve the accuracy of searching for the simple and complex tasks. It can also reduce the completion time of searching and improve exploration effectiveness in the complex task. PaperPoles demonstrates a potentially effective workflow for adaptive visual search of complex information.
  6. Hellqvist, B.: Referencing in the humanities and its implications for citation analysis (2010) 0.01
    0.0073642298 = product of:
      0.044185378 = sum of:
        0.044185378 = weight(_text_:searching in 3329) [ClassicSimilarity], result of:
          0.044185378 = score(doc=3329,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.31286204 = fieldWeight in 3329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3329)
      0.16666667 = coord(1/6)
    
    Abstract
    This article studies citation practices in the arts and humanities from a theoretical and conceptual viewpoint, drawing on studies from fields like linguistics, history, library & information science, and the sociology of science. The use of references in the humanities is discussed in connection with the growing interest in the possibilities of applying citation analysis to humanistic disciplines. The study shows how the use of references within the humanities is connected to concepts of originality, to intellectual organization, and to searching and writing. Finally, it is acknowledged that the use of references is connected to stylistic, epistemological, and organizational differences, and these differences must be taken into account when applying citation analysis to humanistic disciplines.
  7. Dalen, H.P. van; Henkens, K.: Intended and unintended consequences of a publish-or-perish culture : a worldwide survey (2012) 0.01
    0.0056583136 = product of:
      0.03394988 = sum of:
        0.03394988 = product of:
          0.06789976 = sum of:
            0.06789976 = weight(_text_:etc in 2299) [ClassicSimilarity], result of:
              0.06789976 = score(doc=2299,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.35906604 = fieldWeight in 2299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2299)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
How does publication pressure in modern-day universities affect the intrinsic and extrinsic rewards in science? Using a worldwide survey among demographers in developed and developing countries, the authors show that the large majority perceive publication pressure as high, most strongly in Anglo-Saxon countries and to a lesser extent in Western Europe. However, scholars see both the pros (upward mobility) and cons (excessive publication and uncitedness, neglect of policy issues, etc.) of the so-called publish-or-perish culture. When behavior is measured in terms of reading and publishing, and rewards in terms of perceived extrinsic and stated intrinsic rewards of practicing science, publication pressure turns out to negatively affect demographers' orientation towards policy and knowledge sharing. There are no signs that the pressure affects reading and publishing outside the core discipline.
  8. Bouyssou, D.; Marchant, T.: Ranking scientists and departments in a consistent manner (2011) 0.01
    0.0056583136 = product of:
      0.03394988 = sum of:
        0.03394988 = product of:
          0.06789976 = sum of:
            0.06789976 = weight(_text_:etc in 4751) [ClassicSimilarity], result of:
              0.06789976 = score(doc=4751,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.35906604 = fieldWeight in 4751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4751)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
The standard data that we use when computing bibliometric rankings of scientists are their publication/citation records, i.e., so many papers with 0 citations, so many with 1 citation, so many with 2 citations, etc. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors gave in to temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results. Indeed, with some methods, it may happen that the "best" department contains the "worst" scientists, and only them. This problem will not occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this article, we explore the consequences of consistency and we characterize two families of consistent rankings.
  9. Xie, Z.; Ouyang, Z.; Li, J.; Dong, E.: Modelling transition phenomena of scientific coauthorship networks (2018) 0.01
    0.0056583136 = product of:
      0.03394988 = sum of:
        0.03394988 = product of:
          0.06789976 = sum of:
            0.06789976 = weight(_text_:etc in 4043) [ClassicSimilarity], result of:
              0.06789976 = score(doc=4043,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.35906604 = fieldWeight in 4043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4043)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
In a range of scientific coauthorship networks, transitions emerge in the degree distribution, in the correlation between degree and local clustering coefficient, etc. The existence of these transitions can be attributed to the diversity of collaboration behaviors across scientific fields. A growing geometric hypergraph built on a cluster of concentric circles is proposed to model two specific collaboration behaviors, namely those of research team leaders and those of the other team members. The model successfully predicts the transitions, as well as many common features of coauthorship networks. In particular, it realizes a process of deriving the complex "scale-free" property from simple "yes/no" decisions. Moreover, it provides a reasonable explanation for the emergence of transitions in terms of the difference in collaboration behavior between leaders and other members. The difference emerges in the evolution of research teams, which jointly reflects several specific factors that generate collaborations, namely communication between research teams, academic impact, and the homophily of authors.
  10. Tsay, M.-y.; Shu, Z.-y.: Journal bibliometric analysis : a case study on the Journal of Documentation (2011) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 294) [ClassicSimilarity], result of:
          0.031560984 = score(doc=294,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 294, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=294)
      0.16666667 = coord(1/6)
    
    Abstract
Purpose - This study aims to explore the journal bibliometric characteristics of the Journal of Documentation (JOD) and its subject relationship with other disciplines by citation analysis. Design/methodology/approach - The citation data were drawn from the references of each JOD article published between 1998 and 2008. Ulrich's Periodicals Directory and Library of Congress Subject Headings, retrieved from the WorldCat and LISA databases, were used to identify the main class, subclass, and subject of cited journals and books. Findings - The results revealed that journal articles are the most cited document type, followed by books and book chapters, electronic resources, and conference proceedings, respectively. The three main classes of journals cited in JOD papers are library science, science, and social sciences. The three subclasses of non-LIS journals highly cited in JOD papers are Science, "Mathematics. Computer science", and "Industries. Land use. Labor". The three highly cited subjects of library and information science journals encompass searching, information work, and online information retrieval. The most cited main class of books in JOD papers is library and information science, followed by social sciences, science, and "Philosophy. Psychology. Religion." The three highly cited subclasses of books in JOD papers are "Books (General). Writing. Paleography. Book industries and trade. Libraries. Bibliography," "Philology and linguistics," and Science, and the most cited subject of books is information storage and retrieval systems. Originality/value - The present research found that information science, as represented by JOD, is a developing discipline with an expanding literature relating to multiple subject areas.
  11. McCain, K.W.: Assessing obliteration by incorporation : issues and caveats (2012) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 485) [ClassicSimilarity], result of:
          0.031560984 = score(doc=485,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=485)
      0.16666667 = coord(1/6)
    
    Abstract
    Empirical studies of obliteration by incorporation (OBI) may be conducted at the level of the database record or the fulltext citation-in-context. To assess the difference between the two approaches, 1,040 articles with a variant of the phrase "evolutionarily stable strategies" (ESS) were identified by searching the Web of Science (Thomson Reuters, Philadelphia, PA) and discipline-level databases. The majority (72%) of all articles were published in life sciences journals. The ESS concept is associated with a small set of canonical publications by John Maynard Smith; OBI represents a decoupling of the use of the phrase and a citation to a John Maynard Smith publication. Across all articles at the record level, OBI is measured by the number of articles with the phrase in the database record but which lack a reference to a source article (implicit citations). At the citation-in-context level, articles that coupled a non-Maynard Smith citation with the ESS phrase (indirect citations) were counted along with those that cited relevant Maynard Smith publications (explicit citations) and OBI counted only based on those articles that lacked any citation coupled with the ESS text phrase. The degree of OBI observed depended on the level of analysis. Record-level OBI trended upward, peaking in 2002 (62%), with a secondary drop and rebound to 53% (2008). Citation-in-context OBI percentages were lower with no clear pattern. Several issues relating to the design of empirical OBI studies are discussed.
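The record-level measurement described above boils down to the share of phrase-bearing articles that no longer cite any source publication. A minimal sketch of that computation (function name and signature are my own, not the paper's):

```python
def obi_percentage(articles_with_phrase, articles_citing_source):
    """Record-level obliteration by incorporation (OBI), in percent.

    articles_with_phrase: articles whose records contain the phrase
    articles_citing_source: those that also cite a source publication
    (the remainder are the 'implicit citations' of the abstract).
    """
    assert 0 <= articles_citing_source <= articles_with_phrase
    uncited = articles_with_phrase - articles_citing_source
    return 100.0 * uncited / articles_with_phrase

# Illustrative only: if 38 of 100 phrase-bearing records cite a source
# article, record-level OBI is 62%, matching the 2002 peak reported above.
rate = obi_percentage(100, 38)
```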
  12. Kousha, K.; Thelwall, M.: ¬An automatic method for extracting citations from Google Books (2015) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 1658) [ClassicSimilarity], result of:
          0.031560984 = score(doc=1658,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 1658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1658)
      0.16666667 = coord(1/6)
    
    Abstract
Recent studies have shown that counting citations from books can help scholarly impact assessment and that Google Books (GB) is a useful source of such citation counts, despite its lack of a public citation index. Searching GB for citations produces approximate matches, however, and so its raw results need time-consuming human filtering. In response, this article introduces a method to automatically remove false and irrelevant matches from GB citation searches, in addition to refinements to a previous manual GB citation extraction method. The method was evaluated by manually checking sampled GB results and by comparing citations to about 14,500 monographs in the Thomson Reuters Book Citation Index (BKCI) against automatically extracted citations from GB across 24 subject areas. GB citations were 103% to 137% as numerous as BKCI citations in the humanities, except for tourism (72%) and linguistics (91%), 46% to 85% in the social sciences, but only 8% to 53% in the sciences. In all cases, however, GB had substantially more citing books than did BKCI, whose results came predominantly from journal articles. Moderate correlations between the GB and BKCI citation counts in the social sciences and humanities, with most BKCI results coming from journal articles rather than books, suggest that the two sources may measure different aspects of impact.
  13. Orduña-Malea, E.; Torres-Salinas, D.; López-Cózar, E.D.: Hyperlinks embedded in twitter as a proxy for total external in-links to international university websites (2015) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 2043) [ClassicSimilarity], result of:
          0.031560984 = score(doc=2043,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 2043, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2043)
      0.16666667 = coord(1/6)
    
    Abstract
Twitter as a potential alternative source of external links for use in webometric analysis is analyzed because of its capacity to embed hyperlinks in different tweets. Given the limitations on searching Twitter's public application programming interface (API), we used the Topsy search engine as a source for compiling tweets. To this end, we took a global sample of 200 universities and compiled all the tweets with hyperlinks to any of these institutions. Further link data was obtained from alternative sources (MajesticSEO and OpenSiteExplorer) in order to compare the results. Thereafter, various statistical tests were performed to determine the correlation between the indicators and the possibility of predicting external links from the collected tweets. The results indicate a high volume of tweets, although they are skewed by the performance of specific universities and countries. The data provided by Topsy correlated significantly with all link indicators, particularly with OpenSiteExplorer (r = 0.769). Finally, prediction models do not provide optimum results because of high error rates. We conclude that the use of Twitter (via Topsy) as a source of hyperlinks to universities produces promising results due to its high correlation with link indicators, though limited by policies and culture regarding use and presence in social networks.
  14. Schmidt, M.: ¬An analysis of the validity of retraction annotation in pubmed and the web of science (2018) 0.01
    0.005260164 = product of:
      0.031560984 = sum of:
        0.031560984 = weight(_text_:searching in 4044) [ClassicSimilarity], result of:
          0.031560984 = score(doc=4044,freq=2.0), product of:
            0.14122958 = queryWeight, product of:
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.03491209 = queryNorm
            0.22347288 = fieldWeight in 4044, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0452914 = idf(docFreq=2103, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4044)
      0.16666667 = coord(1/6)
    
    Abstract
Research on scientific misconduct relies increasingly on retractions of articles. An interdisciplinary line of research has been established that empirically assesses the phenomenon of scientific misconduct using information on retractions, and thus aims to shed light on aspects of misconduct that previously were hidden. However, comparability and interpretability of studies are to a certain extent impeded by an absence of standards in corpus delineation and by the fact that the validity of this empirical data basis has never been systematically scrutinized. This article assesses the conceptual and empirical delineation of retractions against related publication types through a comparative analysis of the coverage and consistency of retraction annotation in the databases PubMed and the Web of Science (WoS), which are both commonly used for empirical studies on retractions. The searching and linking approaches of the WoS were subsequently evaluated. The results indicate that a considerable number of PubMed retracted publications and retractions are not labeled as such in the WoS or are indistinguishable from corrections, which is highly relevant for corpus and sample strategies in the WoS.
  15. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.00
    0.0047301063 = product of:
      0.028380638 = sum of:
        0.028380638 = product of:
          0.056761276 = sum of:
            0.056761276 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.056761276 = score(doc=1239,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    18. 3.2014 19:13:22
  16. Vinkler, P.: Application of the distribution of citations among publications in scientometric evaluations (2011) 0.00
    
    Abstract
    The π-indicator (or πv-indicator) of a set of journal papers is equal to a hundredth of the total number of citations obtained by the elite set of publications. The number of publications in the elite set P(π) is calculated as the square root of total papers. For greater sets the following equation is used: P(πv) = (10 log P) - 10, where P is the total number of publications. For sets comprising one or several extremely frequently cited papers, the π-index may be distorted. Therefore, a new indicator based on the distribution of citations is suggested. Accordingly, the publications are classified into citation categories, of which lower limits are given as 0, and (2n + 1), whereas the upper limits as 2n (n = 0, 2, 3, etc.). The citations distribution score (CDS) index is defined as the sum of weighted numbers of publications in the individual categories. The CDS-index increases logarithmically with the increasing number of citations. The citation distribution rate indicator is introduced by relating the actual CDS-index to the possible maximum. Several size-dependent and size-independent indicators were calculated. It has been concluded that relevant, already accepted scientometric indicators may validate novel indices through resulting in similar conclusions ("converging validation of indicators").
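The elite-set size formulas quoted in the abstract above (P(π) = √P for smaller sets; P(πv) = (10 log P) - 10 for greater sets) can be sketched in a few lines. This is a minimal illustration only: the function names, the rounding to whole papers, and the explicit `large_set` switch between the two rules are my own assumptions, not specified by the paper.

```python
import math

def elite_set_size(total_papers: int, large_set: bool = False) -> int:
    """Number of papers in the elite set, per the two rules in the abstract."""
    if large_set:
        # P(pi_v) = (10 log P) - 10, used for greater sets
        return max(1, round(10 * math.log10(total_papers) - 10))
    # P(pi) = square root of the total number of papers
    return max(1, round(math.sqrt(total_papers)))

def pi_index(citations_per_paper: list, large_set: bool = False) -> float:
    """pi-index: one hundredth of the total citations of the elite set."""
    ranked = sorted(citations_per_paper, reverse=True)
    k = elite_set_size(len(ranked), large_set)
    return sum(ranked[:k]) / 100.0
```

For a set of 4 papers the elite set holds √4 = 2 papers, so `pi_index([100, 50, 25, 10])` sums the top two citation counts and divides by 100, giving 1.5.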
  17. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.00
    
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
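The integration idea above can be sketched as follows: percentile ranks are determined at the paper level against a reference set and then summed rather than averaged. Note the assumptions: the article works with six percentile rank classes (top-1%, top-5%, etc.), whereas this sketch uses a continuous percentile definition for brevity, and the function names are illustrative.

```python
import bisect

def percentile_ranks(citations, reference):
    """Percentile rank of each paper's citation count within a reference set."""
    ref = sorted(reference)
    return [100.0 * bisect.bisect_right(ref, c) / len(ref) for c in citations]

def i3(citations, reference):
    """I3 sketch: integrate (sum) paper-level percentile ranks instead of averaging."""
    return sum(percentile_ranks(citations, reference))
```

Because the result is a sum over papers, it is fully decomposable by journal or institution, which is the property the abstract emphasizes.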
  18. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.00
    
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
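Fractional counting, as applied to the IF numerators and Total Cites above, weights each incoming citation by the inverse of the number of references in the citing paper, so that papers citing long reference lists contribute less per citation. A minimal sketch, with the function name assumed:

```python
def fractional_citations(citing_ref_counts):
    """Fractionally counted citations: each citation is weighted by
    1 / (number of references in the citing paper)."""
    return sum(1.0 / n for n in citing_ref_counts if n > 0)
```

For example, two citations from papers with 10 and 20 references count as 0.1 + 0.05 = 0.15 instead of 2 under integer counting.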
  19. Baumgartner, S.E.; Leydesdorff, L.: Group-based trajectory modeling (GBTM) of citations in scholarly literature : dynamic qualities of "transient" and "sticky knowledge claims" (2014) 0.00
    
    Abstract
    Group-based trajectory modeling (GBTM) is applied to the citation curves of articles in six journals and to all citable items in a single field of science (virology, 24 journals) to distinguish among the developmental trajectories in subpopulations. Can citation patterns of highly-cited papers be distinguished in an early phase as "fast-breaking" papers? Can "late bloomers" or "sleeping beauties" be identified? Most interestingly, we find differences between "sticky knowledge claims" that continue to be cited more than 10 years after publication and "transient knowledge claims" that show a decay pattern after reaching a peak within a few years. Only papers following the trajectory of a "sticky knowledge claim" can be expected to have a sustained impact. These findings raise questions about indicators of "excellence" that use aggregated citation rates after 2 or 3 years (e.g., impact factors). Because aggregated citation curves can also be composites of the two patterns, fifth-order polynomials (with four bending points) are needed to capture citation curves precisely. For the journals under study, the most frequently cited groups were furthermore much smaller than 10%. Although GBTM has proved a useful method for investigating differences among citation trajectories, the methodology does not allow us to define a percentage of highly cited papers inductively across different fields and journals. Using multinomial logistic regression, we conclude that predictor variables such as journal names, number of authors, etc., do not affect the stickiness of knowledge claims in terms of citations but only the levels of aggregated citations (which are field-specific).
  20. Xu, F.; Liu, W.B.; Mingers, J.: New journal classification methods based on the global h-index (2015) 0.00
    
    Abstract
    In this work we develop new journal classification methods based on the h-index. The introduction of the h-index for research evaluation has attracted much attention in bibliometric study and research quality evaluation. The main purpose of using an h-index is to compare the index for different research units (e.g. researchers, journals, etc.) to differentiate their research performance. However, since the h-index is defined by comparing only the citation counts of one's own publications, it is doubtful that the h-index alone should be used for reliable comparisons among different research units, like researchers or journals. In this paper we propose a new global h-index (Gh-index), where the publications in the core are selected in comparison with all the publications of the units to be evaluated. Furthermore, we introduce some variants of the Gh-index to address the issue of discrimination power. We show that together with the original h-index, they can be used to evaluate and classify academic journals with some distinct advantages, in particular that they can produce an automatic classification into a number of categories without arbitrary cut-off points. We then carry out an empirical study for the classification of operations research and management science (OR/MS) journals using this index, and compare it with other well-known journal ranking results such as the Association of Business Schools (ABS) Journal Quality Guide and the Committee of Professors in OR (COPIOR) ranking lists.

Languages

  • e 68
  • d 4

Types

  • a 71
  • s 2
  • el 1
  • m 1