Search (4 results, page 1 of 1)

  • theme_ss:"Informetrie"
  • theme_ss:"Literaturübersicht"
  • year_i:[2000 TO 2010}
  1. Nicolaisen, J.: Citation analysis (2007) 0.06
    0.05533268 = product of:
      0.11066536 = sum of:
        0.11066536 = sum of:
          0.0108246 = weight(_text_:a in 6091) [ClassicSimilarity], result of:
            0.0108246 = score(doc=6091,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.20383182 = fieldWeight in 6091, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=6091)
          0.09984076 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
            0.09984076 = score(doc=6091,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.61904186 = fieldWeight in 6091, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=6091)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
    Type
    a
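The explain tree above follows Lucene's ClassicSimilarity (TF-IDF). As a rough sketch, the leaf weight for `_text_:a` in doc 6091 can be reproduced from the values shown; the function names below are illustrative, not the Lucene API, and the queryNorm and fieldNorm are taken as given from the explanation:

```python
import math

# Sketch of the ClassicSimilarity arithmetic shown in the explain tree.
def tf(freq):
    return math.sqrt(freq)                      # 1.4142135 for freq=2.0

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

query_norm = 0.046056706                        # from the explain output
field_norm = 0.125                              # fieldNorm(doc=6091)

idf_a = idf(37942, 44218)                       # ~ 1.153047
query_weight = idf_a * query_norm               # ~ 0.053105544
field_weight = tf(2.0) * idf_a * field_norm     # ~ 0.20383182
score = query_weight * field_weight             # ~ 0.0108246
```

The same pattern, with idf(3622, 44218) and the resulting higher weight, yields the 0.09984076 contribution of `_text_:22`; rarer terms score higher because idf dominates.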
  2. Bensman, S.J.: Garfield and the impact factors (2007) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 4680) [ClassicSimilarity], result of:
              0.0108246 = score(doc=4680,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 4680, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=4680)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
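The two nested coord(1/2) factors in the tree above scale the leaf weight by the fraction of query clauses the document matched (here 1 of 2, at two levels of the Boolean query). A minimal sketch of that final product, using the leaf weight from the explanation:

```python
# Sketch: coord() in ClassicSimilarity is overlap / maxOverlap.
def coord(overlap, max_overlap):
    return overlap / max_overlap

weight_a = 0.0108246                  # weight(_text_:a in 4680) from the tree
score = weight_a * coord(1, 2) * coord(1, 2)   # ~ 0.00270615
```

This is why result 2 scores far below result 1: it matches only the common term "a", and each unmatched clause level halves the score again.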
  3. Borgman, C.L.; Furner, J.: Scholarly communication and bibliometrics (2002) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 4291) [ClassicSimilarity], result of:
              0.008118451 = score(doc=4291,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 4291, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4291)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Why devote an ARIST chapter to scholarly communication and bibliometrics, and why now? Bibliometrics already is a frequently covered ARIST topic, with chapters such as that by White and McCain (1989) on bibliometrics generally, White and McCain (1997) on visualization of literatures, Wilson and Hood (2001) on informetric laws, and Tabah (2001) on literature dynamics. Similarly, scholarly communication has been addressed in other ARIST chapters such as Bishop and Star (1996) on social informatics and digital libraries, Schamber (1994) on relevance and information behavior, and many earlier chapters on information needs and uses. More than a decade ago, the first author addressed the intersection of scholarly communication and bibliometrics with a journal special issue and an edited book (Borgman, 1990; Borgman & Paisley, 1989), and she recently examined interim developments (Borgman, 2000a, 2000c). This review covers the decade (1990-2000) since the comprehensive 1990 volume, citing earlier works only when necessary to explain the foundation for recent developments.
    Type
    a
  4. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 4279) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=4279,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 4279, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4279)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
    Type
    a