Search (59 results, page 3 of 3)

  • × language_ss:"e"
  • × theme_ss:"Informetrie"
  • × year_i:[2020 TO 2030}
  1. Lemke, S.; Mazarakis, A.; Peters, I.: Conjoint analysis of researchers' hidden preferences for bibliometrics, altmetrics, and usage metrics (2021) 0.00
    
    Abstract
    The number of annually published scholarly articles is growing steadily, as is the number of indicators through which the impact of publications is measured. Little is known about how the increasing variety of available metrics affects researchers' processes of selecting literature to read. We conducted ranking experiments embedded into an online survey with 247 participating researchers, most from the social sciences. Participants completed a series of tasks in which they were asked to rank fictitious publications by expected relevance, based on their scores on six prototypical metrics. By applying logistic regression, cluster analysis, and manual coding of survey answers, we obtained detailed data on how prominent metrics for research impact influence our participants in decisions about which scientific articles to read. Survey answers revealed a combination of qualitative and quantitative characteristics that researchers consult when selecting literature, while regression analysis showed that among quantitative metrics, citation counts tend to be of highest concern, followed by Journal Impact Factors. Our results suggest a comparatively favorable view of many researchers on bibliometrics and widespread skepticism toward altmetrics. The findings underline the importance of equipping researchers with solid knowledge about specific metrics' limitations, as these metrics seem to play significant roles in researchers' everyday relevance assessments.
    Type
    a
  2. Cabanac, G.; Labbé, C.: Prevalence of nonsensical algorithmically generated papers in the scientific literature (2021) 0.00
    
    Abstract
    In 2014, leading publishers withdrew more than 120 nonsensical publications automatically generated with the SCIgen program. Casual observations suggested that similar problematic papers are still published and sold, without follow-up retractions. No systematic screening has been performed, and the prevalence of such nonsensical publications in the scientific literature is unknown. Our contribution is twofold. First, we designed a detector that combs the scientific literature for grammar-based computer-generated papers. Applied to SCIgen, it has an 83.6% precision. Second, we performed a scientometric study of the 243 detected SCIgen papers from 19 publishers. We estimate the prevalence of SCIgen papers to be 75 per million papers in Information and Computing Sciences. Only 19% of the 243 problematic papers were dealt with: formal retraction (12) or silent removal (34). Publishers still serve and sometimes sell the remaining 197 papers without any caveat. We found evidence of citation manipulation via edited SCIgen bibliographies. This work reveals metric gaming up to the point of absurdity: fraudsters publish nonsensical algorithmically generated papers featuring genuine references. It stresses the need to screen papers for nonsense before peer review and to chase citation manipulation in published papers. Overall, this is yet another illustration of the harmful effects of the pressure to publish or perish.
    Type
    a
  3. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Do altmetric scores reflect article quality? : evidence from the UK Research Excellence Framework 2021 (2023) 0.00
    
    Abstract
    Altmetrics are web-based quantitative impact or attention indicators for academic articles that have been proposed to supplement citation counts. This article reports the first assessment of the extent to which mature altmetrics from Altmetric.com and Mendeley associate with individual article quality scores. It exploits expert norm-referenced peer review scores from the UK Research Excellence Framework 2021 for 67,030+ journal articles in all fields 2014-2017/2018, split into 34 broadly field-based Units of Assessment (UoAs). Altmetrics correlated more strongly with research quality than previously found, although less strongly than raw and field normalized Scopus citation counts. Surprisingly, field normalizing citation counts can reduce their strength as a quality indicator for articles in a single field. For most UoAs, Mendeley reader counts are the best altmetric (e.g., three Spearman correlations with quality scores above 0.5), tweet counts are also a moderate strength indicator in eight UoAs (Spearman correlations with quality scores above 0.3), ahead of news (eight correlations above 0.3, but generally weaker), blogs (five correlations above 0.3), and Facebook (three correlations above 0.3) citations, at least in the United Kingdom. In general, altmetrics are the strongest indicators of research quality in the health and physical sciences and weakest in the arts and humanities.
    Type
    a
  4. Jiao, H.; Qiu, Y.; Ma, X.; Yang, B.: Dissemination effect of data papers on scientific datasets (2024) 0.00
    
    Abstract
    Open data, as an integral part of the open science movement, enhances the openness and sharing of scientific datasets. Nevertheless, the normative utilization of data journals, data papers, scientific datasets, and data citations necessitates further research. This study aims to investigate the citation practices associated with data papers and to explore the role of data papers in disseminating scientific datasets. Dataset accession numbers from NCBI databases were employed to analyze the prevalence of data citations for data papers from PubMed Central, and a rule for identifying dataset citation practices was subsequently established. The findings indicate a consistent growth in the number of biomedical data journals published in recent years, with data papers gaining attention and recognition as both publications and data sources. Although the use of data papers as citation sources for data remains relatively rare, there has been a steady increase in data paper citations for data utilization through formal data citations. Furthermore, the increasing proportion of datasets reported in data papers that are employed for analytical purposes highlights the distinct value of data papers in facilitating the dissemination and reuse of datasets to support novel research.
    Type
    a
  5. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.00
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex Boolean searches.
    Type
    a
  6. Ma, L.: ¬The steering effects of citations and metrics (2021) 0.00
    
    Abstract
    Purpose: This paper aims to understand the nature of citations and metrics in the larger system of knowledge production involving universities, funding agencies, publishers, and indexing and data analytic services.
    Design/methodology/approach: First, the normative and social constructivist views of citations are reviewed to be understood as co-existing conditions. Second, metrics are examined through the processes of commensuration by tracing the meanings of metrics embedded in various kinds of documents and contexts. Third, the steering effects of citations and metrics on knowledge production are discussed. Finally, the conclusion addresses questions pertaining to the validity and legitimacy of citations as data and their implications for knowledge production and the conception of information.
    Findings: The normative view of citations is understood as an ideal speech situation; the social constructivist view of citation is recognised in the system of knowledge production where citing motivations are influenced by epistemic, social and political factors. When organisational performances are prioritised and generate system imperatives, motives of competition become dominant in shaping citing behaviour, which can deviate from the norms and values in the academic lifeworld. As a result, citations and metrics become a non-linguistic steering medium rather than evidence of research quality and impact.
    Originality/value: This paper contributes to the understanding of the nature of citations and metrics and their implications for the conception of information and knowledge production.
    Type
    a
  7. Leydesdorff, L.; Ivanova, I.: ¬The measurement of "interdisciplinarity" and "synergy" in scientific and extra-scientific collaborations (2021) 0.00
    
    Abstract
    Problem solving often requires crossing boundaries, such as those between disciplines. When policy-makers call for "interdisciplinarity," however, they often mean "synergy." Synergy is generated when the whole offers more possibilities than the sum of its parts. An increase in the number of options above the sum of the options in subsets can be measured as redundancy; that is, the number of not-yet-realized options. The number of options available to an innovation system for realization can be as decisive for the system's survival as the historically already-realized innovations. Unlike "interdisciplinarity," "synergy" can also be generated in sectorial or geographical collaborations. The measurement of "synergy," however, requires a methodology different from the measurement of "interdisciplinarity." In this study, we discuss recent advances in the operationalization and measurement of "interdisciplinarity," and propose a methodology for measuring "synergy" based on information theory. The sharing of meanings attributed to information from different perspectives can increase redundancy. Increasing redundancy reduces the relative uncertainty, for example, in niches. The operationalization of the two concepts-"interdisciplinarity" and "synergy"-as different and partly overlapping indicators allows for distinguishing between the effects and the effectiveness of science-policy interventions in research priorities.
    Type
    a
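    As an aside on the measurement idea in the abstract above: one common way to operationalize redundancy/synergy across three variables is ternary mutual information, which turns negative when the whole offers more options than the sum of its parts. The sketch below is an illustrative toy computation of that general quantity, not the authors' published operationalization; the function names and data are hypothetical.

    ```python
    from collections import Counter
    from math import log2

    def H(events):
        """Shannon entropy (bits) of a sequence of observed symbols."""
        n = len(events)
        return -sum((c / n) * log2(c / n) for c in Counter(events).values())

    def ternary_mutual_information(x, y, z):
        """T(x;y;z) = H(x)+H(y)+H(z) - H(x,y) - H(x,z) - H(y,z) + H(x,y,z).
        Negative values signal redundancy, read as synergy in this literature."""
        singles = H(x) + H(y) + H(z)
        pairs = H(list(zip(x, y))) + H(list(zip(x, z))) + H(list(zip(y, z)))
        triple = H(list(zip(x, y, z)))
        return singles - pairs + triple

    # XOR-like triple: each pair looks independent, yet jointly constrained,
    # so T is negative (redundancy is generated at the system level).
    print(ternary_mutual_information([0, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]))
    ```

    With an unconstrained third variable the quantity returns to zero, which is the sense in which the indicator distinguishes synergetic from merely co-occurring perspectives.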
  8. Zhou, H.; Guns, R.; Engels, T.C.E.: Are social sciences becoming more interdisciplinary? : evidence from publications 1960-2014 (2022) 0.00
    
    Abstract
    Interdisciplinary research is widely recognized as necessary to tackle some of the grand challenges facing humanity. It is generally believed that interdisciplinarity is becoming increasingly prevalent among Science, Technology, Engineering, and Mathematics (STEM) fields. However, little is known about the evolution of interdisciplinarity in the Social Sciences. Also, how interdisciplinarity and its various aspects evolve over time has seldom been closely quantified and delineated. This paper answers these questions by capturing the disciplinary diversity of the knowledge base of scientific publications in nine broad Social Sciences fields over 55 years. The analysis considers diversity as a whole and its three distinct aspects, namely variety, balance, and disparity. Ordinary least squares (OLS) regressions are also conducted to investigate whether such change, if any, can be found among research with similar characteristics. We find that learning widely and digging deeply have become norms among researchers in the Social Sciences. Fields acting as knowledge exporters or independent domains maintain a relatively stable homogeneity in their knowledge base, while the knowledge base of importer disciplines evolves towards greater heterogeneity. However, the increase in interdisciplinarity is substantially smaller when controlling for several author- and publication-related variables.
    Type
    a
  9. Delgado-Quirós, L.; Aguillo, I.F.; Martín-Martín, A.; López-Cózar, E.D.; Orduña-Malea, E.; Ortega, J.L.: Why are these publications missing? : uncovering the reasons behind the exclusion of documents in free-access scholarly databases (2024) 0.00
    
    Abstract
    This study analyses the coverage of seven free-access bibliographic databases (Crossref, Dimensions-non-subscription version, Google Scholar, Lens, Microsoft Academic, Scilit, and Semantic Scholar) to identify the potential reasons that might cause the exclusion of scholarly documents and how they could influence coverage. To do this, 116,000 randomly selected bibliographic records from Crossref were used as a baseline. API endpoints and web scraping were used to query each database. The results show that coverage differences are mainly caused by the way each service builds its database. While classic bibliographic databases ingest almost exactly the same content from Crossref (Lens and Scilit miss 0.1% and 0.2% of the records, respectively), academic search engines present lower coverage (Google Scholar does not find 9.8% of the records, Semantic Scholar 10%, and Microsoft Academic 12%). Coverage differences are mainly attributed to external factors, such as web accessibility and robot exclusion policies (39.2%-46%), and internal requirements that exclude secondary content (6.5%-11.6%). In the case of Dimensions, the classic bibliographic database with the lowest coverage (7.6% of records missing), internal selection criteria such as the indexation of full books instead of book chapters (65%) and the exclusion of secondary content (15%) are the main reasons for missing publications.
    Type
    a
  10. Thelwall, M.: Female citation impact superiority 1996-2018 in six out of seven English-speaking nations (2020) 0.00
    
    Abstract
    Efforts to combat continuing gender inequalities in academia need to be informed by evidence about where differences occur. Citations are relevant as potential evidence in appointment and promotion decisions, but it is unclear whether there have been historical gender differences in average citation impact that might explain the current shortfall of senior female academics. This study investigates the evolution of gender differences in citation impact 1996-2018 for six million articles from seven large English-speaking nations: Australia, Canada, Ireland, Jamaica, New Zealand, UK, and the USA. The results show that a small female citation advantage has been the norm over time for all these countries except the USA, where there has been no practical difference. The female citation advantage is largest, and statistically significant in most years, for Australia and the UK. This suggests that any academic bias against citing female-authored research cannot explain current employment inequalities. Nevertheless, comparisons using recent citation data, or avoiding it altogether, during appointments or promotion may disadvantage females in some countries by underestimating the likely greater impact of their work, especially in the long term.
    Type
    a
  11. Lu, C.; Zhang, Y.; Ahn, Y.-Y.; Ding, Y.; Zhang, C.; Ma, D.: Co-contributorship network and division of labor in individual scientific collaborations (2020) 0.00
    
    Abstract
    Collaborations are pervasive in current science and have been studied and encouraged in many disciplines. However, little is known about how a team actually functions in terms of the detailed division of labor within it. In this research, we investigate the patterns of scientific collaboration and division of labor within individual scholarly articles by analyzing their co-contributorship networks. Co-contributorship networks are constructed by performing the one-mode projection of the author-task bipartite networks obtained from 138,787 articles published in PLoS journals. Given an article, we define 3 types of contributors: Specialists, Team-players, and Versatiles. Specialists are those who contribute to all their tasks alone; team-players are those who contribute to every task with other collaborators; and versatiles are those who do both. We find that team-players are the majority and that they tend, as expected, to contribute to the 5 most common tasks, such as "data analysis" and "performing experiments." Specialists and versatiles are more prevalent than expected under our 2 designed null models. Versatiles tend to be senior authors associated with funding and supervision. Specialists are associated with 2 contrasting roles: the supervising role as team leaders, or marginal and specialized contributors.
    Type
    a
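    The construction described in the abstract above (one-mode projection of an author-task bipartite network, plus the Specialist/Team-player/Versatile typology) can be sketched on toy data. This is a hedged illustration of the stated definitions, not the authors' code; the task names and authors below are hypothetical, and the paper derives its data from PLoS contribution statements.

    ```python
    from itertools import combinations

    def co_contributorship_edges(tasks):
        """One-mode projection: link two authors iff they share at least one task.
        `tasks` maps a task name to the set of authors contributing to it."""
        edges = set()
        for authors in tasks.values():
            for pair in combinations(sorted(authors), 2):
                edges.add(pair)
        return edges

    def classify(tasks):
        """Specialist: performs all own tasks alone; Team-player: never alone;
        Versatile: does both (per the definitions in the abstract)."""
        roles = {}
        for author in set().union(*tasks.values()):
            mine = [auths for auths in tasks.values() if author in auths]
            alone = [auths for auths in mine if len(auths) == 1]
            if len(alone) == len(mine):
                roles[author] = "Specialist"
            elif not alone:
                roles[author] = "Team-player"
            else:
                roles[author] = "Versatile"
        return roles

    # Hypothetical single-article contribution record.
    tasks = {
        "data analysis": {"A", "B"},
        "experiments": {"B"},
        "writing": {"A", "C"},
    }
    print(co_contributorship_edges(tasks))  # edges of the projected network
    print(classify(tasks))                  # role per author
    ```

    In this toy record, B works alone on one task and jointly on another, so B comes out as a Versatile, while A and C are Team-players.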
  12. Radford, M.L.; Kitzie, V.; Mikitish, S.; Floegel, D.; Radford, G.P.; Connaway, L.S.: "People are reading your work," : scholarly identity and social networking sites (2020) 0.00
    
    Abstract
    Scholarly identity refers to endeavors by scholars to promote their reputation, work and networks using online platforms such as ResearchGate, Academia.edu and Twitter. This exploratory research investigates benefits and drawbacks of scholarly identity efforts and avenues for potential library support.
    Design/methodology/approach: Data from 30 semi-structured phone interviews with faculty, doctoral students and academic librarians were qualitatively analyzed using the constant comparisons method (Charmaz, 2014) and Goffman's (1959, 1967) theoretical concept of impression management.
    Findings: Results reveal that use of online platforms enables academics to connect with others and disseminate their research. Scholarly identity platforms have benefits and opportunities, and offer possibilities for developing academic library support. They are also fraught with drawbacks/concerns, especially related to confusion, for-profit models and reputational risk.
    Research limitations/implications: This exploratory study involves analysis of a small number of interviews (30) with self-selected social scientists from one discipline (communication) and librarians. It lacks gender, race/ethnicity and geographical diversity and focuses exclusively on individuals who use social networking sites for their scholarly identity practices.
    Social implications: Results highlight benefits and risks of scholarly identity work and the potential for adopting practices that consider ethical dilemmas inherent in maintaining an online social media presence. They suggest continuing to develop library support that provides strategic guidance and information on legal responsibilities regarding copyright.
    Originality/value: This research aims to understand the benefits and drawbacks of scholarly identity platforms and explore what support academic libraries might offer. It is among the first to investigate these topics by comparing perspectives of faculty, doctoral students and librarians.
    Type
    a
  13. Fang, Z.; Costas, R.; Tian, W.; Wang, X.; Wouters, P.: How is science clicked on Twitter? : click metrics for Bitly short links to scientific publications (2021) 0.00
    
    Abstract
    To provide some context for the potential engagement behavior of Twitter users around science, this article investigates how Bitly short links to scientific publications embedded in scholarly Twitter mentions are clicked on Twitter. Based on the click metrics of over 1.1 million Bitly short links referring to Web of Science (WoS) publications, our results show that around 49.5% of them were not clicked by Twitter users. For those Bitly short links with clicks from Twitter, the majority of their Twitter clicks accumulated within a short period of time after they were first tweeted. Bitly short links to the publications in the field of Social Sciences and Humanities tend to attract more clicks from Twitter over other subject fields. This article also assesses the extent to which Twitter clicks are correlated with some other impact indicators. Twitter clicks are weakly correlated with scholarly impact indicators (WoS citations and Mendeley readers), but moderately correlated to other Twitter engagement indicators (total retweets and total likes). In light of these results, we highlight the importance of paying more attention to the click metrics of URLs in scholarly Twitter mentions, to improve our understanding about the more effective dissemination and reception of science information on Twitter.
    Type
    a
  14. Järvelin, K.; Vakkari, P.: LIS research across 50 years: content analysis of journal articles (2022) 0.00
    
    Abstract
    Purpose: This paper analyses the research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the evolution of LIS research longitudinally from 1965 to 2015.
    Design/methodology/approach: The study employs a quantitative intellectual content analysis of articles published in 30+ scholarly LIS journals, following the design by Tuomaala et al. (2014). In the content analysis, we classify articles along eight dimensions covering topical content and methodology.
    Findings: The topical findings indicate that the earlier strong LIS emphasis on L&I services has declined notably, while scientific and professional communication has become the most popular topic. Information storage and retrieval has given up its earlier strong position towards the end of the years analyzed. Individuals are increasingly the units of observation. End-users' and developers' viewpoints have strengthened at the cost of intermediaries' viewpoint. LIS research is methodologically increasingly scattered since survey, scientometric methods, experiment, case studies and qualitative studies have all gained in popularity. Consequently, LIS may have become more versatile in the analysis of its research objects during the years analyzed.
    Originality/value: Among quantitative intellectual content analyses of LIS research, the study is unique in its scope: length of analysis period (50 years), width (8 dimensions covering topical content and methodology) and depth (the annual batch of 30+ scholarly journals).
    Type
    a
  15. Thelwall, M.; Kousha, K.; Stuart, E.; Makita, M.; Abdoli, M.; Wilson, P.; Levitt, J.: In which fields are citations indicators of research quality? (2023) 0.00
    0.0011959607 = product of:
      0.0023919214 = sum of:
        0.0023919214 = product of:
          0.0047838427 = sum of:
            0.0047838427 = weight(_text_:a in 1033) [ClassicSimilarity], result of:
              0.0047838427 = score(doc=1033,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.090081796 = fieldWeight in 1033, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1033)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
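The nested breakdowns in this listing follow Lucene's classic TF-IDF scoring. A minimal sketch, assuming standard ClassicSimilarity semantics, that reproduces the figures for entry 15 (doc 1033) from the values shown above:

```python
import math

# Values copied from the explain tree for doc 1033
idf = 1.153047           # idf(docFreq=37942, maxDocs=44218), i.e. 1 + ln(maxDocs / (docFreq + 1))
query_norm = 0.046056706
field_norm = 0.0390625
freq = 4.0               # termFreq

tf = math.sqrt(freq)                      # 2.0 = tf(freq=4.0)
query_weight = idf * query_norm           # 0.053105544 = queryWeight
field_weight = tf * idf * field_norm      # 0.090081796 = fieldWeight
raw_score = query_weight * field_weight   # 0.0047838427 = score
final_score = raw_score * 0.5 * 0.5       # two coord(1/2) factors -> 0.0011959607

print(field_weight, final_score)
```

The product of the two `coord(1/2)` factors explains why the headline score of each entry is a quarter of the innermost term weight.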
    
    Abstract
    Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top cited papers, researchers, and institutions. Nevertheless, the relationship between citations and research quality is poorly evidenced. We report the first large-scale science-wide academic evaluation of the relationship between research quality and citations (field normalized citation counts), correlating them for 87,739 journal articles in 34 field-based UK Units of Assessment (UoA). The two correlate positively in all academic fields, from very weak (0.1) to strong (0.5), reflecting broadly linear relationships in all fields. We give the first evidence that the correlations are positive even across the arts and humanities. The patterns are similar for the field classification schemes of Scopus and Dimensions.ai, although varying for some individual subjects and therefore more uncertain for these. We also show for the first time that no field has a citation threshold beyond which all articles are excellent quality, so lists of top cited articles are not pure collections of excellence, and neither is any top citation percentile indicator. Thus, while appropriately field normalized citations associate positively with research quality in all fields, they never perfectly reflect it, even at high values.
    Type
    a
  16. Zhou, H.; Dong, K.; Xia, Y.: Knowledge inheritance in disciplines : quantifying the successive and distant reuse of references (2023) 0.00
    0.0011959607 = product of:
      0.0023919214 = sum of:
        0.0023919214 = product of:
          0.0047838427 = sum of:
            0.0047838427 = weight(_text_:a in 1192) [ClassicSimilarity], result of:
              0.0047838427 = score(doc=1192,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.090081796 = fieldWeight in 1192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1192)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Article in: JASIST special issue on 'Who tweets scientific publications? A large-scale study of tweeting audiences in all areas of research'. See: https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.24833.
    Type
    a
  17. Zhao, D.; Strotmann, A.: Mapping knowledge domains on Wikipedia : an author bibliographic coupling analysis of traditional Chinese medicine (2022) 0.00
    0.0011717974 = product of:
      0.0023435948 = sum of:
        0.0023435948 = product of:
          0.0046871896 = sum of:
            0.0046871896 = weight(_text_:a in 608) [ClassicSimilarity], result of:
              0.0046871896 = score(doc=608,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.088261776 = fieldWeight in 608, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=608)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose Wikipedia has the lofty goal of compiling all human knowledge. The purpose of the present study is to map the structure of the Traditional Chinese Medicine (TCM) knowledge domain on Wikipedia, to identify patterns of knowledge representation on Wikipedia and to test the applicability of author bibliographic coupling analysis, an effective method for mapping knowledge domains represented in published scholarly documents, for Wikipedia data. Design/methodology/approach We adapted and followed the well-established procedures and techniques for author bibliographic coupling analysis (ABCA). Instead of bibliographic data from a citation database, we used all articles on TCM downloaded from the English version of Wikipedia as our dataset. An author bibliographic coupling network was calculated and then factor analyzed using SPSS. Factor analysis results were visualized. Factors were labeled upon manual examination of articles that authors who load primarily in each factor have significantly contributed references to. Clear factors were interpreted as topics. Findings Seven TCM topic areas are represented on Wikipedia, among which Acupuncture-related practices, Falun Gong and Herbal Medicine attracted the most significant contributors to TCM. Acupuncture and Qi Gong have the most connections to the TCM knowledge domain and also serve as bridges for other topics to connect to the domain. Herbal medicine is weakly linked to and non-herbal medicine is isolated from the rest of the TCM knowledge domain. It appears that specific topics are represented well on Wikipedia but their conceptual connections are not. ABCA is effective for mapping knowledge domains on Wikipedia but document-based bibliographic coupling analysis is not. Originality/value Given the prominent position of Wikipedia for both information users and for researchers on knowledge organization and information retrieval, it is important to study how well knowledge is represented and structured on Wikipedia. Such studies appear largely missing although studies from different perspectives both about Wikipedia and using Wikipedia as data are abundant. Author bibliographic coupling analysis is effective for mapping knowledge domains represented in published scholarly documents but has never been applied to mapping knowledge domains represented on Wikipedia.
    Type
    a
  18. Thelwall, M.; Foster, D.: Male or female gender-polarized YouTube videos are less viewed (2021) 0.00
    0.0010148063 = product of:
      0.0020296127 = sum of:
        0.0020296127 = product of:
          0.0040592253 = sum of:
            0.0040592253 = weight(_text_:a in 414) [ClassicSimilarity], result of:
              0.0040592253 = score(doc=414,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.07643694 = fieldWeight in 414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=414)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  19. Thelwall, M.; Maflahi, N.: Academic collaboration rates and citation associations vary substantially between countries and fields (2020) 0.00
    8.4567186E-4 = product of:
      0.0016913437 = sum of:
        0.0016913437 = product of:
          0.0033826875 = sum of:
            0.0033826875 = weight(_text_:a in 5952) [ClassicSimilarity], result of:
              0.0033826875 = score(doc=5952,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.06369744 = fieldWeight in 5952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5952)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a