Search (412 results, page 2 of 21)

  • theme_ss:"Informetrie"
  1. Thelwall, M.; Li, X.; Barjak, F.; Robinson, S.: Assessing the international web connectivity of research groups (2008) 0.04
    0.03553481 = product of:
      0.10660443 = sum of:
        0.04816959 = weight(_text_:wide in 1401) [ClassicSimilarity], result of:
          0.04816959 = score(doc=1401,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 1401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1401)
        0.05843484 = weight(_text_:web in 1401) [ClassicSimilarity], result of:
          0.05843484 = score(doc=1401,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.40312994 = fieldWeight in 1401, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1401)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose - The purpose of this paper is to claim that it is useful to assess the web connectivity of research groups, describe hyperlink-based techniques to achieve this and present brief details of European life sciences research groups as a case study. Design/methodology/approach - A commercial search engine was harnessed to deliver hyperlink data via its automatic query submission interface. A special purpose link analysis tool, LexiURL, then summarised and graphed the link data in appropriate ways. Findings - Webometrics can provide a wide range of descriptive information about the international connectivity of research groups. Research limitations/implications - Only one field was analysed, data was taken from only one search engine, and the results were not validated. Practical implications - Web connectivity seems to be particularly important for attracting overseas job applicants and to promote research achievements and capabilities, and hence we contend that it can be useful for national and international governments to use webometrics to ensure that the web is being used effectively by research groups. Originality/value - This is the first paper to make a case for the value of using a range of webometric techniques to evaluate the web presences of research groups within a field, and possibly the first "applied" webometrics study produced for an external contract.
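    The relevance figures shown with each entry follow Lucene's ClassicSimilarity explain format: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm; the per-term contributions are summed and scaled by the coordination factor. A minimal Python sketch reproducing the numbers displayed for entry 1 (doc 1401); the function and variable names are illustrative, the constants are copied from the explain tree above:

      from math import sqrt

      def term_weight(freq, idf, query_norm, field_norm):
          # one term's contribution: queryWeight * fieldWeight
          query_weight = idf * query_norm                  # queryWeight = idf * queryNorm
          field_weight = sqrt(freq) * idf * field_norm     # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      query_norm = 0.044416238                             # queryNorm shown in every explain tree
      wide = term_weight(2.0, 4.4307585, query_norm, 0.0390625)    # _text_:wide in doc 1401
      web = term_weight(10.0, 3.2635105, query_norm, 0.0390625)    # _text_:web in doc 1401
      score = (wide + web) * (2 / 6)                       # coord(2/6): 2 of 6 query clauses matched
      print(wide, web, score)                              # ~0.04816959, ~0.05843484, ~0.03553481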
  2. Almind, T.C.; Ingwersen, P.: Informetric analyses on the World Wide Web : methodological approaches to 'Webometrics' (1997) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 4711) [ClassicSimilarity], result of:
          0.067437425 = score(doc=4711,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 4711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4711)
        0.036585998 = weight(_text_:web in 4711) [ClassicSimilarity], result of:
          0.036585998 = score(doc=4711,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 4711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4711)
      0.33333334 = coord(2/6)
    
  3. Lawrence, S.: Online or Invisible? (2001) 0.03
    0.034547184 = product of:
      0.10364155 = sum of:
        0.051209725 = weight(_text_:web in 1063) [ClassicSimilarity], result of:
          0.051209725 = score(doc=1063,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35328537 = fieldWeight in 1063, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1063)
        0.05243182 = weight(_text_:computer in 1063) [ClassicSimilarity], result of:
          0.05243182 = score(doc=1063,freq=8.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 1063, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=1063)
      0.33333334 = coord(2/6)
    
    Content
    The volume of scientific literature typically far exceeds the ability of scientists to identify and utilize all relevant information in their research. Improvements to the accessibility of scientific literature, allowing scientists to locate more relevant research within a given time, have the potential to dramatically improve communication and progress in science. With the web, scientists now have very convenient access to an increasing amount of literature that previously required trips to the library, inter-library loan delays, or substantial effort in locating the source. Evidence shows that usage increases when access is more convenient, and maximizing the usage of the scientific record benefits all of society. Although availability varies greatly by discipline, over a million research articles are freely available on the web. Some journals and conferences provide free access online, others allow authors to post articles on the web, and others allow authors to purchase the right to post their articles on the web. In this article we investigate the impact of free online availability by analyzing citation rates. We do not discuss methods of creating free online availability, such as time-delayed release or publication/membership/conference charges. Online availability of an article may not be expected to greatly improve access and impact by itself. For example, efficient means of locating articles via web search engines or specialized search services is required, and a substantial percentage of the literature needs to be indexed by these search services before it is worthwhile for many scientists to use them. Computer science is a forerunner in web availability -- a substantial percentage of the literature is online and available through search engines such as Google (google.com), or specialized services such as ResearchIndex (researchindex.org). Even so, the greatest impact of the online availability of computer science literature is likely yet to come, because comprehensive search services and more powerful search methods have only become available recently. We analyzed 119,924 conference articles in computer science and related disciplines, obtained from DBLP (dblp.uni-trier.de). In computer science, conference articles are typically formal publications and are often more prestigious than journal articles, with acceptance rates at some conferences below 10%. Citation counts and online availability were estimated using ResearchIndex. The analysis excludes self-citations, where a citation is considered to be a self-citation if one or more of the citing and cited authors match.
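    The self-citation rule stated above (a citation counts as a self-citation if one or more of the citing and cited authors match) translates directly into code. A small sketch, assuming author names are already normalized to comparable strings; the data layout and example names are hypothetical:

      def is_self_citation(citing_authors, cited_authors):
          # rule from the article: any shared author between citing and cited paper
          return bool(set(citing_authors) & set(cited_authors))

      def citation_count(cited_authors, citing_papers):
          # count citations to a paper, excluding self-citations; each citing paper
          # is represented simply by its list of author names
          return sum(1 for authors in citing_papers
                     if not is_self_citation(authors, cited_authors))

      print(citation_count(["S. Lawrence"],
                           [["C. L. Giles", "K. Bollacker"],
                            ["S. Lawrence", "C. L. Giles"]]))   # -> 1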
  4. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of google unique Web citations (2007) 0.03
    0.03256127 = product of:
      0.0976838 = sum of:
        0.08263934 = weight(_text_:web in 586) [ClassicSimilarity], result of:
          0.08263934 = score(doc=586,freq=20.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5701118 = fieldWeight in 586, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.030088935 = score(doc=586,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
  5. Couto, T.; Cristo, M.; Gonçalves, M.A.; Calado, P.; Ziviani, N.; Moura, E.; Ribeiro-Neto, B.: A comparative study of citations and links in document classification (2006) 0.03
    0.032260682 = product of:
      0.09678204 = sum of:
        0.064012155 = weight(_text_:web in 2531) [ClassicSimilarity], result of:
          0.064012155 = score(doc=2531,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4416067 = fieldWeight in 2531, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2531)
        0.03276989 = weight(_text_:computer in 2531) [ClassicSimilarity], result of:
          0.03276989 = score(doc=2531,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 2531, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2531)
      0.33333334 = coord(2/6)
    
    Abstract
    It is well known that links are an important source of information when dealing with Web collections. However, the question remains on whether the same techniques that are used on the Web can be applied to collections of documents containing citations between scientific papers. In this work we present a comparative study of digital library citations and Web links, in the context of automatic text classification. We show that there are in fact differences between citations and links in this context. For the comparison, we run a series of experiments using a digital library of computer science papers and a Web directory. In our reference collections, measures based on co-citation tend to perform better for pages in the Web directory, with gains up to 37% over text based classifiers, while measures based on bibliographic coupling perform better in a digital library. We also propose a simple and effective way of combining a traditional text based classifier with a citation-link based classifier. This combination is based on the notion of classifier reliability and presented gains of up to 14% in micro-averaged F1 in the Web collection. However, no significant gain was obtained in the digital library. Finally, a user study was performed to further investigate the causes for these results. We discovered that misclassifications by the citation-link based classifiers are in fact difficult cases, hard to classify even for humans.
  6. Gu, Y.: ¬An exploratory study of Malaysian publication productivity in computer science and information technology (2002) 0.03
    0.031504404 = product of:
      0.09451321 = sum of:
        0.04816959 = weight(_text_:wide in 976) [ClassicSimilarity], result of:
          0.04816959 = score(doc=976,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=976)
        0.04634362 = weight(_text_:computer in 976) [ClassicSimilarity], result of:
          0.04634362 = score(doc=976,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28550854 = fieldWeight in 976, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=976)
      0.33333334 = coord(2/6)
    
    Abstract
    Explores Malaysian computer science and information technology publication productivity. A total of 547 unique Malaysian authors, affiliated to 52 organizations in Malaysia, contributed 461 publications between 1990 and 1999, as indicated by data collected from three Web-based databases. The majority (378, or 69.1%) of authors wrote one publication. The productive authors and the number of their papers, as well as the position of their names in the articles, are listed to indicate their productivity and degree of involvement in their research publications. Researchers from the universities contributed about 428 (92.8%) publications. The three most productive institutions together account for a total of 258 (56.0%) publications. The composition of the publications is 197 (42.7%) journal articles, 263 (57.1%) conference papers, and 1 (0.2%) monograph chapter. The results indicate that the scholars published in a few core proceedings but contributed to a wide variety of journals. Thirty-nine fields of research undertaken by the scholars are also revealed. The possible reasons for the amount and pattern of contributions are related to the size of the researcher population in the country, the availability of refereed scholarly journals, and the total expenditure allocated to information, computers, and communication technology (ICCT) research in Malaysia.
  7. Subelj, L.; Fiala, D.: Publication boost in web of science journals and its effect on citation distributions (2017) 0.03
    0.031213328 = product of:
      0.093639985 = sum of:
        0.054316122 = weight(_text_:web in 3537) [ClassicSimilarity], result of:
          0.054316122 = score(doc=3537,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 3537, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3537)
        0.039323866 = weight(_text_:computer in 3537) [ClassicSimilarity], result of:
          0.039323866 = score(doc=3537,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 3537, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3537)
      0.33333334 = coord(2/6)
    
    Abstract
    In this article, we show that the dramatic increase in the number of research articles indexed in the Web of Science database impacts the commonly observed distributions of citations within these articles. First, we document that the growing number of physics articles in recent years is attributed to existing journals publishing more and more articles rather than more new journals coming into being as it happens in computer science. Second, even though the references from the more recent articles generally cover a longer time span, the newer articles are cited more frequently than the older ones if the uneven article growth is not corrected for. Nevertheless, despite this change in the distribution of citations, the citation behavior of scientists does not seem to have changed.
    Object
    Web of science
  8. Zhao, D.; Strotmann, A.: Can citation analysis of Web publications better detect research fronts? (2007) 0.03
    0.030401578 = product of:
      0.09120473 = sum of:
        0.05843484 = weight(_text_:web in 471) [ClassicSimilarity], result of:
          0.05843484 = score(doc=471,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.40312994 = fieldWeight in 471, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
        0.03276989 = weight(_text_:computer in 471) [ClassicSimilarity], result of:
          0.03276989 = score(doc=471,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
      0.33333334 = coord(2/6)
    
    Abstract
    We present evidence that in some research fields, research published in journals and reported on the Web may collectively represent different evolutionary stages of the field, with journals lagging a few years behind the Web on average, and that a "two-tier" scholarly communication system may therefore be evolving. We conclude that in such fields, (a) for detecting current research fronts, author co-citation analyses (ACA) using articles published on the Web as a data source can outperform traditional ACAs using articles published in journals as data, and that (b) as a result, it is important to use multiple data sources in citation analysis studies of scholarly communication for a complete picture of communication patterns. Our evidence stems from comparing the respective intellectual structures of the XML research field, a subfield of computer science, as revealed from three sets of ACA covering two time periods: (a) from the field's beginnings in 1996 to 2001, and (b) from 2001 to 2006. For the first time period, we analyze research articles both from journals as indexed by the Science Citation Index (SCI) and from the Web as indexed by CiteSeer. We follow up by an ACA of SCI data for the second time period. We find that most trends in the evolution of this field from the first to the second time period that we find when comparing ACA results from the SCI between the two time periods already were apparent in the ACA results from CiteSeer during the first time period.
  9. Bar-Ilan, J.; Peritz, B.C.: Informetric theories and methods for exploring the Internet : an analytical survey of recent research literature (2002) 0.03
    0.029720977 = product of:
      0.08916293 = sum of:
        0.057803504 = weight(_text_:wide in 813) [ClassicSimilarity], result of:
          0.057803504 = score(doc=813,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=813)
        0.031359423 = weight(_text_:web in 813) [ClassicSimilarity], result of:
          0.031359423 = score(doc=813,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=813)
      0.33333334 = coord(2/6)
    
    Abstract
    The Internet, and more specifically the World Wide Web, is quickly becoming one of our main information sources. Systematic evaluation and analysis can help us understand how this medium works, grows, and changes, and how it influences our lives and research. New approaches in informetrics can provide an appropriate means towards achieving the above goals, and towards establishing a sound theory. This paper presents a selective review of research based on the Internet, using bibliometric and informetric methods and tools. Some of these studies clearly show the applicability of bibliometric laws to the Internet, while others establish new definitions and methods based on the respective definitions for printed sources. Both informetrics and Internet research can gain from these additional methods.
  10. Thelwall, M.; Klitkou, A.; Verbeek, A.; Stuart, D.; Vincent, C.: Policy-relevant Webometrics for individual scientific fields (2010) 0.03
    0.029720977 = product of:
      0.08916293 = sum of:
        0.057803504 = weight(_text_:wide in 3574) [ClassicSimilarity], result of:
          0.057803504 = score(doc=3574,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 3574, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3574)
        0.031359423 = weight(_text_:web in 3574) [ClassicSimilarity], result of:
          0.031359423 = score(doc=3574,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 3574, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3574)
      0.33333334 = coord(2/6)
    
    Abstract
    Despite over 10 years of research, there is no agreement on the most suitable roles for Webometric indicators in support of research policy and almost no field-based Webometrics. This article partly fills these gaps by analyzing the potential of policy-relevant Webometrics for individual scientific fields with the help of 4 case studies. Although Webometrics cannot provide robust indicators of knowledge flows or research impact, it can provide some evidence of networking and mutual awareness. The scope of Webometrics is also relatively wide, including not only research organizations and firms but also intermediary groups like professional associations, Web portals, and government agencies. Webometrics can, therefore, provide evidence about the research process to complement peer review, bibliometric, and patent indicators: tracking the early, mainly prepublication development of new fields and research funding initiatives, assessing the role and impact of intermediary organizations and the need for new ones, and monitoring the extent of mutual awareness in particular research areas.
  11. Fiala, D.: Bibliometric analysis of CiteSeer data for countries (2012) 0.03
    0.028990587 = product of:
      0.08697176 = sum of:
        0.031359423 = weight(_text_:web in 2742) [ClassicSimilarity], result of:
          0.031359423 = score(doc=2742,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 2742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.05561234 = weight(_text_:computer in 2742) [ClassicSimilarity], result of:
          0.05561234 = score(doc=2742,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.34261024 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
      0.33333334 = coord(2/6)
    
    Abstract
    This article describes the results of our analysis of the data from the CiteSeer digital library. First, we examined the data from the point of view of source top-level Internet domains from which the data were collected. Second, we measured country shares in publications indexed by CiteSeer and compared them to those based on mainstream bibliographic data from the Web of Science and Scopus. And third, we concentrated on analyzing publications and their citations aggregated by countries. This way, we generated rankings of the most influential countries in computer science using several non-recursive as well as recursive methods such as citation counts or PageRank. We conclude that even if East Asian countries are underrepresented in CiteSeer, its data may well be used along with other conventional bibliographic databases for comparing the computer science research productivity and performance of countries.
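    The country rankings mentioned above combine non-recursive measures (citation counts) with recursive methods such as PageRank. A generic power-iteration PageRank over a weighted country-to-country citation graph is sketched below; this is the textbook formulation, not necessarily the exact variant used in the article, and the citation flows are made up for illustration:

      def pagerank(links, damping=0.85, iterations=100):
          # links: {source: {target: weight}}; plain power iteration
          nodes = set(links) | {t for ts in links.values() for t in ts}
          rank = {n: 1.0 / len(nodes) for n in nodes}
          for _ in range(iterations):
              new = {n: (1.0 - damping) / len(nodes) for n in nodes}
              for src, targets in links.items():
                  out = sum(targets.values())
                  if not out:
                      continue                       # dangling node: no outgoing citations
                  for dst, w in targets.items():
                      new[dst] += damping * rank[src] * w / out
              rank = new
          return rank

      toy = {"US": {"UK": 30, "DE": 20},             # made-up citation counts from row to column
             "UK": {"US": 25, "DE": 5},
             "DE": {"US": 15, "UK": 10}}
      print(sorted(pagerank(toy).items(), key=lambda kv: -kv[1]))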
  12. Sanderson, M.: Revisiting h measured on UK LIS and IR academics (2008) 0.03
    0.027890932 = product of:
      0.08367279 = sum of:
        0.04434892 = weight(_text_:web in 1867) [ClassicSimilarity], result of:
          0.04434892 = score(doc=1867,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 1867, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1867)
        0.039323866 = weight(_text_:computer in 1867) [ClassicSimilarity], result of:
          0.039323866 = score(doc=1867,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 1867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=1867)
      0.33333334 = coord(2/6)
    
    Abstract
    A brief communication appearing in this journal ranked UK-based LIS and (some) IR academics by their h-index using data derived from the Thomson ISI Web of Science(TM) (WoS). In this brief communication, the same academics were re-ranked, using other popular citation databases. It was found that for academics who publish more in computer science forums, their h was significantly different due to highly cited papers missed by WoS; consequently, their rank changed substantially. The study was widened to a broader set of UK-based LIS and IR academics in which results showed similar statistically significant differences. A variant of h, hmx, was introduced that allowed a ranking of the academics using all citation databases together.
    Object
    Web of Science
  13. Orduna-Malea, E.; Thelwall, M.; Kousha, K.: Web citations in patents : evidence of technological impact? (2017) 0.03
    0.027890932 = product of:
      0.08367279 = sum of:
        0.04434892 = weight(_text_:web in 3764) [ClassicSimilarity], result of:
          0.04434892 = score(doc=3764,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 3764, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3764)
        0.039323866 = weight(_text_:computer in 3764) [ClassicSimilarity], result of:
          0.039323866 = score(doc=3764,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 3764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3764)
      0.33333334 = coord(2/6)
    
    Abstract
    Patents sometimes cite webpages either as general background to the problem being addressed or to identify prior publications that limit the scope of the patent granted. Counts of the number of patents citing an organization's website may therefore provide an indicator of its technological capacity or relevance. This article introduces methods to extract URL citations from patents and evaluates the usefulness of counts of patent web citations as a technology indicator. An analysis of patents citing 200 US universities or 177 UK universities found computer science and engineering departments to be frequently cited, as well as research-related webpages, such as Wikipedia, YouTube, or the Internet Archive. Overall, however, patent URL citations seem to be frequent enough to be useful for ranking major US and the top few UK universities if popular hosted subdomains are filtered out, but the hit count estimates on the first search engine results page should not be relied upon for accuracy.
  14. Bornmann, L.; Leydesdorff, L.: Which cities produce more excellent papers than can be expected? : a new mapping approach, using Google Maps, based on statistical significance testing (2011) 0.03
    0.026944324 = product of:
      0.08083297 = sum of:
        0.031359423 = weight(_text_:web in 4767) [ClassicSimilarity], result of:
          0.031359423 = score(doc=4767,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 4767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4767)
        0.049473554 = product of:
          0.09894711 = sum of:
            0.09894711 = weight(_text_:programs in 4767) [ClassicSimilarity], result of:
              0.09894711 = score(doc=4767,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.38428974 = fieldWeight in 4767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4767)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The methods presented in this paper allow for a statistical analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data (a fee-based database), field-specific excellence can be identified in cities where highly cited papers were published more frequently than can be expected. Compared to the mapping approaches published hitherto, our approach is more analytically oriented by allowing the assessment of an observed number of excellent papers for a city against the expected number. Top performers in output are cities in which authors are located who publish a statistically significant higher number of highly cited papers than can be expected for these cities. As sample data for physics, chemistry, and psychology show, these cities do not necessarily have a high output of highly cited papers.
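    One plausible way to test whether a city publishes more highly cited ("top 10%") papers than expected is an exact binomial tail test against the 10% baseline. This is a sketch of the general idea only, not necessarily the authors' exact procedure, and the city figures are hypothetical:

      from math import comb

      def binomial_tail(k, n, p):
          # P(X >= k) for X ~ Binomial(n, p), exact tail sum
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      papers, top_observed, baseline = 400, 60, 0.10   # hypothetical city output
      expected = baseline * papers                     # 40 papers expected in the top 10%
      p_value = binomial_tail(top_observed, papers, baseline)
      print(expected, top_observed, p_value)           # small p -> more excellence than expected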
  15. Asubiaro, T.V.; Onaolapo, S.: A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.03
    0.026352208 = product of:
      0.07905662 = sum of:
        0.064012155 = weight(_text_:web in 992) [ClassicSimilarity], result of:
          0.064012155 = score(doc=992,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4416067 = fieldWeight in 992, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=992)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 992) [ClassicSimilarity], result of:
              0.030088935 = score(doc=992,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=992)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This is the first study to evaluate the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and the African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences, and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
  16. Tonta, Y.: Scholarly communication and the use of networked information sources (1996) 0.03
    0.025285622 = product of:
      0.075856864 = sum of:
        0.057803504 = weight(_text_:wide in 6389) [ClassicSimilarity], result of:
          0.057803504 = score(doc=6389,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 6389, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=6389)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 6389) [ClassicSimilarity], result of:
              0.03610672 = score(doc=6389,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 6389, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6389)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Examines the use of networked information sources in scholarly communication. Networked information sources are defined broadly to cover: documents and images stored on electronic network hosts; data files; newsgroups; listservs; online information services and electronic periodicals. Reports results of a survey to determine how heavily, if at all, networked information sources are cited in scholarly printed periodicals published in 1993 and 1994. 27 printed periodicals, representing a wide range of subjects and the most influential periodicals in their fields, were identified through the Science Citation Index and Social Science Citation Index Journal Citation Reports. 97 articles were selected for further review and references, footnotes and bibliographies were checked for references to networked information sources. Only 2 articles were found to contain such references. Concludes that, although networked information sources facilitate scholars' work to a great extent during the research process, scholars have yet to incorporate such sources in the bibliographies of their published articles
    Source
    IFLA journal. 22(1996) no.3, S.240-245
  17. McCain, K.W.: Eponymy and obliteration by incorporation : The case of the "Nash Equilibrium" (2011) 0.02
    0.02476748 = product of:
      0.07430244 = sum of:
        0.04816959 = weight(_text_:wide in 4479) [ClassicSimilarity], result of:
          0.04816959 = score(doc=4479,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 4479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4479)
        0.026132854 = weight(_text_:web in 4479) [ClassicSimilarity], result of:
          0.026132854 = score(doc=4479,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 4479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4479)
      0.33333334 = coord(2/6)
    
    Abstract
    In order to examine the phenomena of eponymy and Obliteration by Incorporation at both the aggregate and individual subject level, the literature relating to the game-theoretic concept of the Nash Equilibrium was studied over the period 1950-2008. Almost 5,300 bibliographic database records for publications explicitly citing at least one of two papers by John Nash and/or using the phrase "Nash Equilibrium/Nash Equilibria" were retrieved from the Web of Science and various subject-related databases. Breadth of influence is demonstrated by the wide variety of subject areas in which Nash Equilibrium-related publications occur, including in the natural and social sciences, humanities, law, and medicine. Fifty percent of all items have been published since 2002, suggesting that Nash's papers have experienced "delayed recognition." A degree of Obliteration by Incorporation is observed in that implicit citations (use of the phrase without citation) increased over the time period studied, although the proportion of all citations that are implicit has remained relatively stable during the most recent decade with an annual rate of between 60% and 70%; subject areas vary in their level of obliteration.
  18. White, H.D.: Pathfinder networks and author cocitation analysis : a remapping of paradigmatic information scientists (2003) 0.02
    0.024158826 = product of:
      0.07247648 = sum of:
        0.026132854 = weight(_text_:web in 1459) [ClassicSimilarity], result of:
          0.026132854 = score(doc=1459,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 1459, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1459)
        0.04634362 = weight(_text_:computer in 1459) [ClassicSimilarity], result of:
          0.04634362 = score(doc=1459,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28550854 = fieldWeight in 1459, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1459)
      0.33333334 = coord(2/6)
    
    Abstract
    In their 1998 article "Visualizing a discipline: An author cocitation analysis of information science, 1972-1995," White and McCain used multidimensional scaling, hierarchical clustering, and factor analysis to display the specialty groupings of 120 highly-cited ("paradigmatic") information scientists. These statistical techniques are traditional in author cocitation analysis (ACA). It is shown here that a newer technique, Pathfinder Networks (PFNETs), has considerable advantages for ACA. In PFNETs, nodes represent authors, and explicit links represent weighted paths between nodes, the weights in this case being cocitation counts. The links can be drawn to exclude all but the single highest counts for author pairs, which reduces a network of authors to only the most salient relationships. When these are mapped, dominant authors can be defined as those with relatively many links to other authors (i.e., high degree centrality). Links between authors and dominant authors define specialties, and links between dominant authors connect specialties into a discipline. Maps are made with one rather than several computer routines and in one rather than many computer passes. Also, PFNETs can, and should, be generated from matrices of raw counts rather than Pearson correlations, which removes a computational step associated with traditional ACA. White and McCain's raw data from 1998 are remapped as a PFNET. It is shown that the specialty groupings correspond closely to those seen in the factor analysis of the 1998 article. Because PFNETs are fast to compute, they are used in AuthorLink, a new Web-based system that creates live interfaces for cocited author retrieval on the fly.
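    The link reduction described above (keeping only the most salient cocitation links and ranking authors by how many links they retain) can be approximated with a much simpler rule than a full Pathfinder Network. The sketch below keeps each author's single strongest link and then counts retained links per author; the cocitation counts are made up and the reduction is a deliberate simplification, not the PFNET algorithm itself:

      def strongest_links(cocitation):
          # keep, for each author, only the link with the highest cocitation count
          edges = set()
          for author, partners in cocitation.items():
              if partners:
                  best = max(partners, key=partners.get)
                  edges.add(frozenset((author, best)))
          return edges

      def degree(edges):
          counts = {}
          for edge in edges:
              for author in edge:
                  counts[author] = counts.get(author, 0) + 1
          return counts

      toy = {"A": {"B": 12, "C": 4}, "B": {"A": 12, "D": 7},   # made-up cocitation counts
             "C": {"A": 4, "D": 3}, "D": {"B": 7, "C": 3}}
      print(degree(strongest_links(toy)))   # A and B keep 2 links each, C and D keep 1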
  19. H-Index auch im Web of Science (2008) 0.02
    0.024123162 = product of:
      0.072369486 = sum of:
        0.054316122 = weight(_text_:web in 590) [ClassicSimilarity], result of:
          0.054316122 = score(doc=590,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 590, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=590)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 590) [ClassicSimilarity], result of:
              0.03610672 = score(doc=590,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 590, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=590)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    "Regarding the short notice "Latest enhancements in Scopus: ... h-Index incorporated in Scopus" in the most recent Online-Mitteilungen (Online-Mitteilungen 92, p. 31), it must be corrected that the h-index is in fact already included in the Web of Science. This information, however, is found not in the "cited ref search" but next to the result list of a Quick Search, a General Search, or a search via the Author Finder, in the right-hand navigation bar under the heading "Citation Report". For the works shown in the respective result list, the "Citation Report" provides: - the total number of citations to all works in the result list - the mean citation frequency of these works - the number of citations to each individual work, broken down by the publication year of the citing works - the mean citation frequency of these works per year - the h-index (an h-index of x indicates that x works in the result list were cited more than x times; it is less affected by very high citation counts of individual works than the mean citation frequency)."
    Date
    6. 4.2008 19:04:22
    Object
    Web of Science
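    The note above describes the h-index informally. The conventional computation (the largest h such that h works have at least h citations each) can be sketched as follows, assuming a plain list of per-paper citation counts:

      def h_index(citation_counts):
          # largest h such that h papers have at least h citations each
          h = 0
          for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      print(h_index([25, 8, 5, 4, 3, 1]))   # -> 4 (four papers have at least 4 citations)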
  20. Meho, L.I.; Spurgin, K.M.: Ranking the research productivity of library and information science faculty and schools : an evaluation of data sources and research methods (2005) 0.02
    0.023839345 = product of:
      0.071518034 = sum of:
        0.03853567 = weight(_text_:wide in 4343) [ClassicSimilarity], result of:
          0.03853567 = score(doc=4343,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 4343, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4343)
        0.032982368 = product of:
          0.065964736 = sum of:
            0.065964736 = weight(_text_:programs in 4343) [ClassicSimilarity], result of:
              0.065964736 = score(doc=4343,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.25619316 = fieldWeight in 4343, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4343)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This study evaluates the data sources and research methods used in earlier studies to rank the research productivity of Library and Information Science (LIS) faculty and schools. In doing so, the study identifies both tools and methods that generate more accurate publication count rankings as well as databases that should be taken into consideration when conducting comprehensive searches in the literature for research and curricular needs. With a list of 2,625 items published between 1982 and 2002 by 68 faculty members of 18 American Library Association- (ALA-) accredited LIS schools, hundreds of databases were searched. Results show that there are only 10 databases that provide significant coverage of the LIS indexed literature. Results also show that restricting the data sources to one, two, or even three databases leads to inaccurate rankings and erroneous conclusions. Because no database provides comprehensive coverage of the LIS literature, researchers must rely on a wide range of disciplinary and multidisciplinary databases for ranking and other research purposes. The study answers such questions as the following: Is the Association of Library and Information Science Education's (ALISE's) directory of members a reliable tool to identify a complete list of faculty members at LIS schools? How many and which databases are needed in a multifile search to arrive at accurate publication count rankings? What coverage will be achieved using a certain number of databases? Which research areas are well covered by which databases? What alternative methods and tools are available to supplement gaps among databases? Did coverage performance of databases change over time? What counting method should be used when determining what and how many items each LIS faculty and school has published? The authors recommend advanced analysis of research productivity to provide a more detailed assessment of research productivity of authors and programs.

Languages

  • e 389
  • d 20
  • ro 1

Types

  • a 401
  • el 8
  • m 7
  • s 3
  • r 1