Search (1373 results, page 2 of 69)

  • theme_ss:"Informetrie"
  1. Rotolo, D.; Rafols, I.; Hopkins, M.M.; Leydesdorff, L.: Strategic intelligence on emerging technologies : scientometric overlay mapping (2017) 0.03
    0.034454845 = product of:
      0.08613711 = sum of:
        0.006332749 = weight(_text_:a in 3322) [ClassicSimilarity], result of:
          0.006332749 = score(doc=3322,freq=6.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.13239266 = fieldWeight in 3322, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3322)
        0.07980436 = weight(_text_:68 in 3322) [ClassicSimilarity], result of:
          0.07980436 = score(doc=3322,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.35710898 = fieldWeight in 3322, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3322)
      0.4 = coord(2/5)
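The indented tree above is Lucene's ClassicSimilarity "explain" output for this hit: each term score is queryWeight × fieldWeight, and the document score is the coordination factor times the sum over matching terms. A minimal sketch reproducing the reported numbers (Python; the helper names are mine, but the formulas are the standard ClassicSimilarity ones):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                       # tf = sqrt(termFreq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * term_idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Constants taken from the explanation tree for doc 3322 above.
s_a  = term_score(6.0, 37942, 44218, 0.04148407, 0.046875)  # term "a"
s_68 = term_score(2.0,   549, 44218, 0.04148407, 0.046875)  # term "68"
total = 0.4 * (s_a + s_68)  # coord(2/5): 2 of 5 query terms matched
```

The result matches the headline score 0.034454845 = 0.4 × 0.08613711 shown for record 1; the same arithmetic underlies every explanation tree on this page.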
    
    Abstract
    This paper examines the use of scientometric overlay mapping as a tool of "strategic intelligence" to aid the governing of emerging technologies. We develop an integrative synthesis of different overlay mapping techniques and associated perspectives on technological emergence across geographical, social, and cognitive spaces. To do so, we longitudinally analyze (with publication and patent data) three case studies of emerging technologies in the medical domain. These are RNA interference (RNAi), human papillomavirus (HPV) testing technologies for cervical cancer, and thiopurine methyltransferase (TPMT) genetic testing. Given the flexibility (i.e., adaptability to different sources of data) and granularity (i.e., applicability across multiple levels of data aggregation) of overlay mapping techniques, we argue that these techniques can favor the integration and comparison of results from different contexts and cases, thus potentially functioning as a platform for "distributed" strategic intelligence for analysts and decision makers.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.214-233
    Type
    a
  2. Jaffe, A.B.; Rassenfosse, G. de: Patent citation data in social science research : overview and best practices (2017) 0.03
    0.034454845 = product of:
      0.08613711 = sum of:
        0.006332749 = weight(_text_:a in 3646) [ClassicSimilarity], result of:
          0.006332749 = score(doc=3646,freq=6.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.13239266 = fieldWeight in 3646, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3646)
        0.07980436 = weight(_text_:68 in 3646) [ClassicSimilarity], result of:
          0.07980436 = score(doc=3646,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.35710898 = fieldWeight in 3646, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3646)
      0.4 = coord(2/5)
    
    Abstract
    The last 2 decades have witnessed a dramatic increase in the use of patent citation data in social science research. Facilitated by the digitization of patent data and increasing computing power, a community of practice has grown up that has developed methods for using these data to measure attributes of innovations such as impact and originality, to trace flows of knowledge across individuals, institutions, and regions, and to map innovation networks. The objective of this article is threefold. First, it takes stock of these main uses. Second, it discusses four pitfalls associated with patent citation data, related to office, time and technology, examiner, and strategic effects. Third, it highlights gaps in our understanding and offers directions for future research.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.6, S.1360-1374
    Type
    a
  3. Sugimoto, C.R.; Work, S.; Larivière, V.; Haustein, S.: Scholarly use of social media and altmetrics : a review of the literature (2017) 0.03
    0.034454845 = product of:
      0.08613711 = sum of:
        0.006332749 = weight(_text_:a in 3781) [ClassicSimilarity], result of:
          0.006332749 = score(doc=3781,freq=6.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.13239266 = fieldWeight in 3781, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3781)
        0.07980436 = weight(_text_:68 in 3781) [ClassicSimilarity], result of:
          0.07980436 = score(doc=3781,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.35710898 = fieldWeight in 3781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3781)
      0.4 = coord(2/5)
    
    Abstract
    Social media has become integrated into the fabric of the scholarly communication system in fundamental ways, principally through scholarly use of social media platforms and the promotion of new indicators on the basis of interactions with these platforms. Research and scholarship in this area have accelerated since the coining of, and subsequent advocacy for, altmetrics, that is, research indicators based on social media activity. This review provides an extensive account of the state of the art in both scholarly use of social media and altmetrics. The review consists of 2 main parts: the first examines the use of social media in academia, reviewing the various functions these platforms have in the scholarly communication process and the factors that affect this use. The second part reviews empirical studies of altmetrics, discussing the various interpretations of altmetrics, data collection and methodological limitations, and differences according to platform. The review ends with a critical discussion of the implications of this transformation in the scholarly communication system.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, S.2037-2062
    Type
    a
  4. Subelj, L.; Fiala, D.: Publication boost in web of science journals and its effect on citation distributions (2017) 0.03
    0.03399001 = product of:
      0.08497503 = sum of:
        0.0051706675 = weight(_text_:a in 3537) [ClassicSimilarity], result of:
          0.0051706675 = score(doc=3537,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.10809815 = fieldWeight in 3537, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3537)
        0.07980436 = weight(_text_:68 in 3537) [ClassicSimilarity], result of:
          0.07980436 = score(doc=3537,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.35710898 = fieldWeight in 3537, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3537)
      0.4 = coord(2/5)
    
    Abstract
    In this article, we show that the dramatic increase in the number of research articles indexed in the Web of Science database impacts the commonly observed distributions of citations within these articles. First, we document that the growing number of physics articles in recent years is attributable to existing journals publishing more and more articles, rather than to more new journals coming into being as happens in computer science. Second, even though the references from the more recent articles generally cover a longer time span, the newer articles are cited more frequently than the older ones if the uneven article growth is not corrected for. Nevertheless, despite this change in the distribution of citations, the citation behavior of scientists does not seem to have changed.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.1018-1023
    Type
    a
  5. Orduna-Malea, E.; Thelwall, M.; Kousha, K.: Web citations in patents : evidence of technological impact? (2017) 0.03
    0.03399001 = product of:
      0.08497503 = sum of:
        0.0051706675 = weight(_text_:a in 3764) [ClassicSimilarity], result of:
          0.0051706675 = score(doc=3764,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.10809815 = fieldWeight in 3764, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3764)
        0.07980436 = weight(_text_:68 in 3764) [ClassicSimilarity], result of:
          0.07980436 = score(doc=3764,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.35710898 = fieldWeight in 3764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3764)
      0.4 = coord(2/5)
    
    Abstract
    Patents sometimes cite webpages either as general background to the problem being addressed or to identify prior publications that limit the scope of the patent granted. Counts of the number of patents citing an organization's website may therefore provide an indicator of its technological capacity or relevance. This article introduces methods to extract URL citations from patents and evaluates the usefulness of counts of patent web citations as a technology indicator. An analysis of patents citing 200 US universities or 177 UK universities found computer science and engineering departments to be frequently cited, as well as research-related webpages, such as Wikipedia, YouTube, or the Internet Archive. Overall, however, patent URL citations seem to be frequent enough to be useful for ranking major US and the top few UK universities if popular hosted subdomains are filtered out, but the hit count estimates on the first search engine results page should not be relied upon for accuracy.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.1967-1974
    Type
    a
  6. Abramo, G.; D'Angelo, C.A.; Costa, F. Di: Identifying interdisciplinarity through the disciplinary classification of coauthors of scientific publications (2012) 0.03
    0.03338423 = product of:
      0.08346058 = sum of:
        0.003656214 = weight(_text_:a in 491) [ClassicSimilarity], result of:
          0.003656214 = score(doc=491,freq=2.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.07643694 = fieldWeight in 491, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=491)
        0.07980436 = weight(_text_:68 in 491) [ClassicSimilarity], result of:
          0.07980436 = score(doc=491,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.35710898 = fieldWeight in 491, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=491)
      0.4 = coord(2/5)
    
    Abstract
    The growing complexity of challenges involved in scientific progress demands ever more frequent application of competencies and knowledge from different scientific fields. The present work analyzes the degree of collaboration among scientists from different disciplines to identify the most frequent "combinations of knowledge" in research activity. The methodology adopts an innovative bibliometric approach based on the disciplinary affiliation of publication coauthors. The field of observation includes all publications (167,179) indexed in the Science Citation Index Expanded for the years 2004-2008, authored by all scientists in the hard sciences (43,223) at Italian universities (68). The analysis examines 205 research fields grouped in 9 disciplines. Identifying the fields with the highest potential of interdisciplinary collaboration is useful to inform research policies at the national and regional levels, as well as management strategies at the institutional level.
    Type
    a
  7. Comins, J.A.; Leydesdorff, L.: Identification of long-term concept-symbols among citations : do common intellectual histories structure citation behavior? (2017) 0.03
    0.03338423 = product of:
      0.08346058 = sum of:
        0.003656214 = weight(_text_:a in 3599) [ClassicSimilarity], result of:
          0.003656214 = score(doc=3599,freq=2.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.07643694 = fieldWeight in 3599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3599)
        0.07980436 = weight(_text_:68 in 3599) [ClassicSimilarity], result of:
          0.07980436 = score(doc=3599,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.35710898 = fieldWeight in 3599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3599)
      0.4 = coord(2/5)
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1224-1233
    Type
    a
  8. Zuccala, A.; Someren, M. van; Bellen, M. van: ¬A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014) 0.03
    0.031476405 = product of:
      0.078691006 = sum of:
        0.012187379 = weight(_text_:a in 1530) [ClassicSimilarity], result of:
          0.012187379 = score(doc=1530,freq=32.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.25478977 = fieldWeight in 1530, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1530)
        0.06650363 = weight(_text_:68 in 1530) [ClassicSimilarity], result of:
          0.06650363 = score(doc=1530,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.29759082 = fieldWeight in 1530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1530)
      0.4 = coord(2/5)
    
    Abstract
    A theory of "megacitation" is introduced and used in an experiment to demonstrate how a qualitative scholarly book review can be converted into a weighted bibliometric indicator. We employ a manual human-coding approach to classify book reviews in the field of history based on reviewers' assessments of a book author's scholarly credibility (SC) and writing style (WS). In total, 100 book reviews were selected from the American Historical Review and coded for their positive/negative valence on these two dimensions. Most were coded as positive (68% for SC and 47% for WS), and there was also a small positive correlation between SC and WS (r = 0.2). We then constructed a classifier, combining both manual design and machine learning, to categorize sentiment-based sentences in history book reviews. The machine classifier produced a matched accuracy (matched to the human coding) of approximately 75% for SC and 64% for WS. WS was found to be more difficult to classify by machine than SC because of the reviewers' use of more subtle language. With further training data, a machine-learning approach could be useful for automatically classifying a large number of history book reviews at once. Weighted megacitations can be especially valuable if they are used in conjunction with regular book/journal citations, and "libcitations" (i.e., library holding counts) for a comprehensive assessment of a book/monograph's scholarly impact.
    Type
    a
  9. Song, M.; Kim, S.Y.; Lee, K.: Ensemble analysis of topical journal ranking in bioinformatics (2017) 0.03
    0.030823285 = product of:
      0.07705821 = sum of:
        0.010554582 = weight(_text_:a in 3650) [ClassicSimilarity], result of:
          0.010554582 = score(doc=3650,freq=24.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.22065444 = fieldWeight in 3650, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3650)
        0.06650363 = weight(_text_:68 in 3650) [ClassicSimilarity], result of:
          0.06650363 = score(doc=3650,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.29759082 = fieldWeight in 3650, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3650)
      0.4 = coord(2/5)
    
    Abstract
    Journal rankings, frequently determined by the journal impact factor or similar indices, are quantitative measures for evaluating a journal's performance in its discipline, which is presently a major research thrust in the bibliometrics field. Recently, text mining was adopted to augment journal ranking-based evaluation with the content analysis of a discipline, taking a time-variant factor into consideration. However, previous studies focused mainly on a silo analysis of a discipline using either citation- or content-oriented approaches, and no attempt was made to analyze topical journal ranking and its change over time in a seamless and integrated manner. To address this issue, we propose a journal-time-topic model, an extension of Dirichlet multinomial regression, which we applied to the field of bioinformatics to understand journal contribution to topics in a field and the shift of topic trends. The journal-time-topic model allows us to identify which journals are the major leaders in what topics and the manner in which their topical focus shifts over time. It also helps reveal an interesting distinct pattern in the journal impact factor of high- and low-ranked journals. The study results shed new light on understanding topic-specific journal rankings and shifts in journals' concentration on a subject.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.6, S.1564-1583
    Type
    a
  10. Zhang, Y.; Zhang, G.; Zhu, D.; Lu, J.: Scientific evolutionary pathways : identifying and visualizing relationships for scientific topics (2017) 0.03
    0.030823285 = product of:
      0.07705821 = sum of:
        0.010554582 = weight(_text_:a in 3758) [ClassicSimilarity], result of:
          0.010554582 = score(doc=3758,freq=24.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.22065444 = fieldWeight in 3758, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3758)
        0.06650363 = weight(_text_:68 in 3758) [ClassicSimilarity], result of:
          0.06650363 = score(doc=3758,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.29759082 = fieldWeight in 3758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3758)
      0.4 = coord(2/5)
    
    Abstract
    Whereas traditional science maps emphasize citation statistics and static relationships, this paper presents a term-based method to identify and visualize the evolutionary pathways of scientific topics in a series of time slices. First, we create a data preprocessing model for accurate term cleaning, consolidating, and clustering. Then we construct a simulated data streaming function and introduce a learning process to train a relationship identification function to adapt to changing environments in real time, where relationships of topic evolution, fusion, death, and novelty are identified. The main result of the method is a map of scientific evolutionary pathways. The visual routines provide a way to indicate the interactions among scientific subjects, and a version in a series of time slices helps further illustrate such evolutionary pathways in detail. The detailed outline offers sufficient statistical information to delve into scientific topics and routines, and helps derive meaningful insights with the assistance of expert knowledge. This empirical study focuses on scientific proposals granted by the United States National Science Foundation and demonstrates the method's feasibility and reliability. Our method could be widely applied to a range of science, technology, and innovation policy research, and offer insight into the evolutionary pathways of scientific activities.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.1925-1939
    Type
    a
  11. Thelwall, M.; Kousha, K.: SlideShare presentations, citations, users, and trends : a professional site with academic and educational uses (2017) 0.03
    0.03045544 = product of:
      0.0761386 = sum of:
        0.00963497 = weight(_text_:a in 3766) [ClassicSimilarity], result of:
          0.00963497 = score(doc=3766,freq=20.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.20142901 = fieldWeight in 3766, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3766)
        0.06650363 = weight(_text_:68 in 3766) [ClassicSimilarity], result of:
          0.06650363 = score(doc=3766,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.29759082 = fieldWeight in 3766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3766)
      0.4 = coord(2/5)
    
    Abstract
    SlideShare is a free social website that aims to help users distribute and find presentations. Owned by LinkedIn since 2012, it targets a professional audience but may give value to scholarship through creating a long-term record of the content of talks. This article tests this hypothesis by analyzing sets of general and scholarly related SlideShare documents using content and citation analysis and popularity statistics reported on the site. The results suggest that academics, students, and teachers are a minority of SlideShare uploaders, especially since 2010, with most documents not being directly related to scholarship or teaching. About two thirds of uploaded SlideShare documents are presentation slides, with the remainder often being files associated with presentations or video recordings of talks. SlideShare is therefore a presentation-centered site with a predominantly professional user base. Although a minority of the uploaded SlideShare documents are cited by, or cite, academic publications, probably too few articles are cited by SlideShare to consider extracting SlideShare citations for research evaluation. Nevertheless, scholars should consider SlideShare to be a potential source of academic and nonacademic information, particularly in library and information science, education, and business.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.1989-2003
    Type
    a
  12. Ping, Q.; He, J.; Chen, C.: How many ways to use CiteSpace? : a study of user interactive events over 14 months (2017) 0.03
    0.030257666 = product of:
      0.075644165 = sum of:
        0.009140535 = weight(_text_:a in 3602) [ClassicSimilarity], result of:
          0.009140535 = score(doc=3602,freq=18.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.19109234 = fieldWeight in 3602, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3602)
        0.06650363 = weight(_text_:68 in 3602) [ClassicSimilarity], result of:
          0.06650363 = score(doc=3602,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.29759082 = fieldWeight in 3602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3602)
      0.4 = coord(2/5)
    
    Abstract
    Using visual analytic systems effectively may incur a steep learning curve for users, especially for those who have little prior knowledge of either using the tool or accomplishing analytic tasks. How do users deal with a steep learning curve over time? Are there particularly problematic aspects of an analytic process? In this article we investigate these questions through an integrative study of the use of CiteSpace, a visual analytic tool for finding trends and patterns in scientific literature. In particular, we analyze millions of interactive events in logs generated by users worldwide over a 14-month period. The key findings are: (i) three levels of proficiency are identified, namely, level 1: low proficiency, level 2: intermediate proficiency, and level 3: high proficiency, and (ii) behavioral patterns at level 3 result from a more engaging interaction with the system, involving a wider variety of events and being characterized by longer state transition paths, whereas behavioral patterns at levels 1 and 2 seem to focus on learning how to use the tool. This study contributes to the development and evaluation of visual analytic systems in realistic settings and provides a valuable addition to the study of interactive visual analytic processes.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1234-1256
    Type
    a
  13. Farys, R.; Wolbring, T.: Matched control groups for modeling events in citation data : an illustration of nobel prize effects in citation networks (2017) 0.03
    0.030257666 = product of:
      0.075644165 = sum of:
        0.009140535 = weight(_text_:a in 3796) [ClassicSimilarity], result of:
          0.009140535 = score(doc=3796,freq=18.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.19109234 = fieldWeight in 3796, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3796)
        0.06650363 = weight(_text_:68 in 3796) [ClassicSimilarity], result of:
          0.06650363 = score(doc=3796,freq=2.0), product of:
            0.2234734 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.04148407 = queryNorm
            0.29759082 = fieldWeight in 3796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3796)
      0.4 = coord(2/5)
    
    Abstract
    Bibliometric data are frequently used to study the effects of events, such as the honoring of a scholar with an award, and to investigate changes of citation impact over time. However, the number of yearly citations depends upon time for multiple reasons: a) general time trends in citation data, b) changing coverage of databases, c) individual citation life-cycles, and d) selection on citation impact. Hence, it is often ill-advised to simply compare the average number of citations before and after an event to estimate its causal effect. Using a recent publication in this journal on the potential citation chain reaction of a Nobel Prize, we demonstrate that a simple pre-post comparison can lead to biased and misleading results. We propose using matched control groups to improve causal inference and illustrate that the inclusion of a tailor-made synthetic control group in the statistical analysis helps to avoid methodological artifacts. Our results suggest that there is neither a Nobel Prize effect as regards citation impact of the Nobel laureate under investigation nor a related chain reaction in the citation network, as suggested in the original study. Finally, we explain that these methodological recommendations extend far beyond the study of Nobel Prize effects in citation data.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, S.2201-2210
    Type
    a
  14. Abad-García, M.-F.; González-Teruel, A.; González-Llinares, J.: Effectiveness of OpenAIRE, BASE, Recolecta, and Google Scholar at finding Spanish articles in repositories (2018)
    Abstract
    This paper explores the usefulness of OpenAIRE, BASE, Recolecta, and Google Scholar (GS) for evaluating open access (OA) policies that demand a deposit in a repository. A case study was designed focusing on 762 articles financed by an FIS-2012 project of the Instituto de Salud Carlos III, the main management body for health research of the Spanish national health service. Their financing is therefore subject to the Spanish Government's OA mandate. A search was carried out for full-text OA copies of the 762 articles using the four tools under evaluation, with identification of the repository housing each item. Of the 762 articles concerned, 510 OA copies were found of 353 unique articles (46.3%) in 68 repositories. OA copies were found of 81.9% of the articles in PubMed Central and of 49.5% of the articles in an institutional repository (IR). BASE and GS identified 93.5% of the articles and OpenAIRE 86.7%. Recolecta identified just 62.2% of the articles deposited in a Spanish IR. BASE achieved the greatest success at locating copies deposited in IRs, while GS found those deposited in disciplinary repositories. None of the tools identified copies of all the articles, so they need to be used in a complementary way when evaluating OA policies.
    Type
    a
  15. Klavans, R.; Boyack, K.W.: Which type of citation analysis generates the most accurate taxonomy of scientific and technical knowledge? (2017)
    Abstract
    In 1965, Price foresaw the day when a citation-based taxonomy of science and technology would be delineated and correspondingly used for science policy. A taxonomy needs to be comprehensive and accurate if it is to be useful for policy making, especially now that policy makers are utilizing citation-based indicators to evaluate people, institutions and laboratories. Determining the accuracy of a taxonomy, however, remains a challenge. Previous work on the accuracy of partition solutions is sparse, and the results of those studies, although useful, have not been definitive. In this study we compare the accuracies of topic-level taxonomies based on the clustering of documents using direct citation, bibliographic coupling, and co-citation. Using a set of new gold standards (articles with at least 100 references), we find that direct citation is better at concentrating references than either bibliographic coupling or co-citation. Using the assumption that higher concentrations of references denote more accurate clusters, direct citation thus provides a more accurate representation of the taxonomy of scientific and technical knowledge than either bibliographic coupling or co-citation. We also find that discipline-level taxonomies based on journal schema are highly inaccurate compared to topic-level taxonomies, and recommend against their use.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.984-998
    Type
    a
  16. Lee, K.; Kim, S.Y.; Kim, E.H.-J.; Song, M.: Comparative evaluation of bibliometric content networks by tomographic content analysis : an application to Parkinson's disease (2017)
    Abstract
    To understand the current state of a discipline and to discover new knowledge of a certain theme, one builds bibliometric content networks based on the present knowledge entities. However, such networks can vary according to the collection of data sets relevant to the theme by querying knowledge entities. In this study we classify three different bibliometric content networks. The primary bibliometric network is based on knowledge entities relevant to a keyword of the theme, the secondary network is based on entities associated with the lower concepts of the keyword, and the tertiary network is based on entities influenced by the theme. To explore the content and properties of these networks, we propose a tomographic content analysis that takes a slice-and-dice approach to analyzing the networks. Our findings indicate that the primary network is best suited to understanding the current knowledge on a certain topic, whereas the secondary network is good at discovering new knowledge across fields associated with the topic, and the tertiary network is appropriate for outlining the current knowledge of the topic and relevant studies.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1295-1307
    Type
    a
  17. Schneider, J.W.; Costas, R.: Identifying potential "breakthrough" publications using refined citation analyses : three related explorative approaches (2017)
    Abstract
    The article presents three advanced citation-based methods used to detect potential breakthrough articles among very highly cited articles. We approach the detection of such articles from three different perspectives in order to provide different typologies of breakthrough articles. In all three cases we use the hierarchical classification of scientific publications developed at CWTS based on direct citation relationships. We assume that such contextualized articles focus on similar research interests. We utilize the characteristics scores and scales (CSS) approach to partition citation distributions and implement a specific filtering algorithm to sort out potential highly-cited "followers," articles not considered breakthroughs. After invoking thresholds and filtering, three methods are explored: a very exclusive one where only the highest cited article in a micro-cluster is considered as a potential breakthrough article (M1); as well as two conceptually different methods, one that detects potential breakthrough articles among the 2% highest cited articles according to CSS (M2a), and finally a more restrictive version where, in addition to the CSS 2% filter, knowledge diffusion is also considered (M2b). The advanced citation-based methods are explored and evaluated using validated publication sets linked to different Danish funding instruments including centers of excellence.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.709-723
    Type
    a
  18. Kousha, K.; Thelwall, M.: Are Wikipedia citations important evidence of the impact of scholarly articles and books? (2017)
    Abstract
    Individual academics and research evaluators often need to assess the value of published research. Although citation counts are a recognized indicator of scholarly impact, alternative data is needed to provide evidence of other types of impact, including within education and wider society. Wikipedia is a logical choice for both of these because the role of a general encyclopaedia is to be an understandable repository of facts about a diverse array of topics and hence it may cite research to support its claims. To test whether Wikipedia could provide new evidence about the impact of scholarly research, this article counted citations to 302,328 articles and 18,735 monographs in English indexed by Scopus in the period 2005 to 2012. The results show that citations from Wikipedia to articles are too rare for most research evaluation purposes, with only 5% of articles being cited in all fields. In contrast, a third of monographs have at least one citation from Wikipedia, with the most in the arts and humanities. Hence, Wikipedia citations can provide extra impact evidence for academic monographs. Nevertheless, the results may be relatively easily manipulated and so Wikipedia is not recommended for evaluations affecting stakeholder interests.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.762-779
    Type
    a
  19. Leydesdorff, L.; Nerghes, A.: Co-word maps and topic modeling : a comparison using small and medium-sized corpora (N < 1,000) (2017)
    Abstract
    Induced by "big data," "topic modeling" has become an attractive alternative to mapping co-words in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument ("The Leiden Manifesto") and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.1024-1035
    Type
    a
  20. Ninkov, A.; Vaughan, L.: A webometric analysis of the online vaccination debate (2017)
    Abstract
    Webometrics research methods can be effectively used to measure and analyze information on the web. One topic discussed vehemently online that could benefit from this type of analysis is vaccines. We carried out a study analyzing the web presence of both sides of this debate. We collected a variety of webometric data and analyzed the data both quantitatively and qualitatively. The study found far more anti- than pro-vaccine web domains. The anti and pro sides had similar web visibility as measured by the number of links coming from general websites and Tweets. However, the links to the pro domains were of higher quality measured by PageRank scores. The result from the qualitative content analysis confirmed this finding. The analysis of site ages revealed that the battle between the two sides had a long history and is still ongoing. The web scene was polarized with either pro or anti views and little neutral ground. The study suggests ways that professional information can be promoted more effectively on the web. The study demonstrates that webometrics analysis is effective in studying online information dissemination. This kind of analysis can be used to study not only health information but other information as well.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1285-1294
    Type
    a

Types

  • a 1350
  • el 21
  • m 12
  • s 8
  • r 2
  • b 1