Search (3 results, page 1 of 1)

  • author_ss:"Björneborn, L."
  • theme_ss:"Internet"
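
The two filters above are Solr-style field queries, and the per-result score breakdowns below are Lucene "explain" output. As a minimal sketch — assuming a standard Solr endpoint (URL and path are hypothetical) and taking the query term "science" from the weight(_text_:science ...) lines in the explain data — a request like the following could reproduce this page:

    import urllib.parse
    import urllib.request

    # Hypothetical Solr endpoint; field names come from the filters above.
    params = urllib.parse.urlencode([
        ("q", "science"),                      # query term seen in the explain output
        ("fq", 'author_ss:"Björneborn, L."'),  # active filter 1
        ("fq", 'theme_ss:"Internet"'),         # active filter 2
        ("debugQuery", "true"),                # asks Solr for score breakdowns
    ])
    url = "http://localhost:8983/solr/select?" + params
    with urllib.request.urlopen(url) as resp:
        print(resp.read(500))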
  1. Björneborn, L.; Ingwersen, P.: Toward a basic framework for Webometrics (2004) 0.01
    Relevance score 0.009934675 (term "science", freq=4 in record 3088; a sketch reproducing this computation follows the result list):
      tf = sqrt(4.0) = 2.0
      idf = 2.6341193 (docFreq=8627, maxDocs=44218)
      fieldWeight = tf × idf × fieldNorm(0.0546875) = 0.2881068
      queryWeight = idf × queryNorm(0.052363027) = 0.13793045
      score = queryWeight × fieldWeight × coord(1/2) × coord(1/2) = 0.009934675
    
    Abstract
    In this article, we define webometrics within the framework of informetric studies and bibliometrics, as belonging to library and information science, and as associated with cybermetrics as a generic subfield. We develop a consistent and detailed link typology and terminology and make explicit the distinction among different Web node levels when using the proposed conceptual framework. As a consequence, we propose a novel diagram notation to fully appreciate and investigate link structures between Web nodes in webometric analyses. We warn against taking the analogy between citation analyses and link analyses too far.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.14, pp.1216-1227
  2. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.01
    Relevance score 0.008691031 (term "science", freq=6 in record 4279):
      tf = sqrt(6.0) = 2.4494898
      idf = 2.6341193 (docFreq=8627, maxDocs=44218)
      fieldWeight = tf × idf × fieldNorm(0.0390625) = 0.25204095
      queryWeight = idf × queryNorm(0.052363027) = 0.13793045
      score = queryWeight × fieldWeight × coord(1/2) × coord(1/2) = 0.008691031
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
    Source
    Annual review of information science and technology. 39(2005), pp.81-138
  3. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.01
    Relevance score 0.0050177686 (term "science", freq=2 in record 3091):
      tf = sqrt(2.0) = 1.4142135
      idf = 2.6341193 (docFreq=8627, maxDocs=44218)
      fieldWeight = tf × idf × fieldNorm(0.0390625) = 0.1455159
      queryWeight = idf × queryNorm(0.052363027) = 0.13793045
      score = queryWeight × fieldWeight × coord(1/2) × coord(1/2) = 0.0050177686
    
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.14, pp.1239-1249
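
The score breakdowns shown with each result follow Lucene's ClassicSimilarity (tf-idf) scoring. As a minimal sketch, the following Python function — not part of the search system itself — recomputes the displayed scores from the explain values (freq, idf, fieldNorm, queryNorm, coord), which are taken as given here because they depend on index-wide statistics and query structure:

    import math

    def classic_similarity_score(freq, idf, field_norm, query_norm, coord):
        """Recompute a Lucene ClassicSimilarity score from explain values."""
        tf = math.sqrt(freq)                  # tf = sqrt(term frequency)
        field_weight = tf * idf * field_norm  # document-side weight
        query_weight = idf * query_norm       # query-side weight
        return query_weight * field_weight * coord

    # Result 1: "science" occurs 4 times in record 3088;
    # the two coord(1/2) factors combine to 0.25.
    print(classic_similarity_score(4.0, 2.6341193, 0.0546875, 0.052363027, 0.25))
    # ≈ 0.009934675, i.e. the 0.01 shown next to the result

The idf value itself is consistent with ClassicSimilarity's formula 1 + ln(maxDocs / (docFreq + 1)): 1 + ln(44218 / 8628) ≈ 2.6341193.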