Search (206 results, page 1 of 11)

  • year_i:[2000 TO 2010}
  • theme_ss:"Informetrie"
  1. González-Alcaide, G.; Castelló-Cogollos, L.; Navarro-Molina, C.; Aleixandre-Benavent, R.; Valderrama-Zurián, J.C.: Library and information science research areas : analysis of journal articles in LISA (2008) 0.07
    0.0730405 = product of:
      0.146081 = sum of:
        0.07803083 = weight(_text_:wide in 1347) [ClassicSimilarity], result of:
          0.07803083 = score(doc=1347,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.4153836 = fieldWeight in 1347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1347)
        0.042333104 = weight(_text_:web in 1347) [ClassicSimilarity], result of:
          0.042333104 = score(doc=1347,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3059541 = fieldWeight in 1347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1347)
        0.025717068 = weight(_text_:retrieval in 1347) [ClassicSimilarity], result of:
          0.025717068 = score(doc=1347,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.20052543 = fieldWeight in 1347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1347)
      0.5 = coord(3/6)
    
    Abstract
The main fields of research in Library Science and Documentation are identified by quantifying the frequency of appearance and analysing the co-occurrence of the descriptors assigned to 11,273 works indexed in the Library and Information Science Abstracts (LISA) database for the 2004-2005 period. The analysis identified three major core research areas: World Wide Web, Libraries and Education. There are a further 12 areas of research with specific development, one connected with the library sphere and another 11 connected with the World Wide Web and Internet: Networks, Computer Security, Information technologies, Electronic Resources, Electronic Publications, Bibliometrics, Electronic Commerce, Computer applications, Medicine, Searches and Online Information retrieval.
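
The score breakdown shown with each result follows Lucene's ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the sum is multiplied by a coordination factor for the fraction of query clauses matched. A minimal sketch reproducing the 0.0730405 total for result 1, using only the statistics copied from its explain tree above:

```python
import math

# Minimal sketch of Lucene's ClassicSimilarity (TF-IDF) scoring, using the
# statistics shown in the explain tree for result 1 (doc 1347). queryNorm
# and fieldNorm are copied from the output rather than derived here.
def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

MAX_DOCS = 44218
QUERY_NORM = 0.042397358
FIELD_NORM = 0.046875

terms = {                       # term: (docFreq, freq in doc 1347)
    "wide": (1430, 4.0),
    "web": (4597, 4.0),
    "retrieval": (5836, 2.0),
}

total = 0.0
for doc_freq, freq in terms.values():
    query_weight = idf(doc_freq, MAX_DOCS) * QUERY_NORM
    field_weight = tf(freq) * idf(doc_freq, MAX_DOCS) * FIELD_NORM
    total += query_weight * field_weight

coord = 3 / 6                   # 3 of 6 query clauses matched
print(round(total * coord, 7))  # 0.0730405
```
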
  2. Zhao, D.; Strotmann, A.: Information science during the first decade of the web : an enriched author cocitation analysis (2008) 0.07
    0.06637022 = product of:
      0.13274044 = sum of:
        0.055176124 = weight(_text_:wide in 1720) [ClassicSimilarity], result of:
          0.055176124 = score(doc=1720,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 1720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1720)
        0.051847253 = weight(_text_:web in 1720) [ClassicSimilarity], result of:
          0.051847253 = score(doc=1720,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.37471575 = fieldWeight in 1720, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1720)
        0.025717068 = weight(_text_:retrieval in 1720) [ClassicSimilarity], result of:
          0.025717068 = score(doc=1720,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.20052543 = fieldWeight in 1720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1720)
      0.5 = coord(3/6)
    
    Abstract
    Using an enriched author cocitation analysis (ACA), we map information science (IS) for 1996-2005, a decade of explosive development of the World Wide Web, to examine its development since the landmark study by White and McCain (1998). The Web, we find, has had a profound impact on IS, driving the creation of new disciplines and revitalization or obsolescence of old, and most importantly, bridging the chasm between the literatures and retrieval IS camps. Simultaneously, the development of IS towards cognitive aspects has intensified. Our study enriches classic ACA in that it employs both orthogonal and oblique rotations in the factor analysis (FA), and reports both pattern and structure matrices for the latter, thus enabling a comparison between these several FA methods in ACA. Each method provides interesting information not available from the others, we find, especially when results are also visualized in the novel manner we introduce here.
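
Classic ACA, as enriched in this paper, starts from a matrix of author cocitation counts. A toy illustration of that first step (the reference lists and author names below are hypothetical; the paper's factor-analysis enrichments come after this stage):

```python
from collections import Counter
from itertools import combinations

# Toy first step of author cocitation analysis (ACA): count how often pairs
# of authors are cited together in the same reference list.
reference_lists = [
    ["White", "McCain", "Small"],
    ["White", "Small"],
    ["McCain", "White"],
]

cocite = Counter()
for authors in reference_lists:
    for pair in combinations(sorted(set(authors)), 2):
        cocite[pair] += 1

print(cocite)
# Counter({('McCain', 'White'): 2, ('Small', 'White'): 2, ('McCain', 'Small'): 1})
```
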
  3. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.06
    0.058253925 = product of:
      0.11650785 = sum of:
        0.07919799 = weight(_text_:web in 3090) [ClassicSimilarity], result of:
          0.07919799 = score(doc=3090,freq=14.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.57238775 = fieldWeight in 3090, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.025717068 = weight(_text_:retrieval in 3090) [ClassicSimilarity], result of:
          0.025717068 = score(doc=3090,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.20052543 = fieldWeight in 3090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.011592798 = product of:
          0.034778394 = sum of:
            0.034778394 = weight(_text_:29 in 3090) [ClassicSimilarity], result of:
              0.034778394 = score(doc=3090,freq=2.0), product of:
                0.14914064 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042397358 = queryNorm
                0.23319192 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    Date
    9. 1.2005 19:20:29
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
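
The link-content conjecture can be tested by comparing the textual similarity of linked page pairs against unrelated pairs. A toy sketch using bag-of-words cosine similarity (the page texts are invented; Menczer's actual evaluation uses large Web crawls and richer similarity measures):

```python
import math
from collections import Counter

# Toy test of the link-content conjecture: a page should be textually more
# similar to a page that links to it than to an unrelated page.
def cosine(a, b):
    wa, wb = Counter(a.split()), Counter(b.split())
    dot = sum(wa[t] * wb[t] for t in wa)
    norm = (math.sqrt(sum(v * v for v in wa.values()))
            * math.sqrt(sum(v * v for v in wb.values())))
    return dot / norm

page = "web mining and link analysis"
linking_page = "link analysis for web search"
random_page = "growing tomatoes in clay soil"
print(cosine(page, linking_page))  # 0.6
print(cosine(page, random_page))   # 0.0
```
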
  4. Cothey, V.: Web-crawling reliability (2004) 0.05
    0.052256607 = product of:
      0.15676981 = sum of:
        0.06437215 = weight(_text_:wide in 3089) [ClassicSimilarity], result of:
          0.06437215 = score(doc=3089,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.342674 = fieldWeight in 3089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3089)
        0.09239765 = weight(_text_:web in 3089) [ClassicSimilarity], result of:
          0.09239765 = score(doc=3089,freq=14.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.6677857 = fieldWeight in 3089, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3089)
      0.33333334 = coord(2/6)
    
    Abstract
In this article, I investigate the reliability, in the social science sense, of collecting informetric data about the World Wide Web by Web crawling. The investigation includes a critical examination of the practice of Web crawling and contrasts the results of content crawling with the results of link crawling. It is shown that Web crawling by search engines is intentionally biased and selective. I also report the results of a large-scale experimental simulation of Web crawling that illustrates the effects of different crawling policies on data collection. It is concluded that the reliability of Web crawling as a data collection technique is improved by fuller reporting of relevant crawling policies.
  5. Thelwall, M.: Webometrics (2009) 0.04
    0.04479137 = product of:
      0.13437411 = sum of:
        0.055176124 = weight(_text_:wide in 3906) [ClassicSimilarity], result of:
          0.055176124 = score(doc=3906,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 3906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3906)
        0.07919799 = weight(_text_:web in 3906) [ClassicSimilarity], result of:
          0.07919799 = score(doc=3906,freq=14.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.57238775 = fieldWeight in 3906, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3906)
      0.33333334 = coord(2/6)
    
    Abstract
    Webometrics is an information science field concerned with measuring aspects of the World Wide Web (WWW) for a variety of information science research goals. It came into existence about five years after the Web was formed and has since grown to become a significant aspect of information science, at least in terms of published research. Although some webometrics research has focused on the structure or evolution of the Web itself or the performance of commercial search engines, most has used data from the Web to shed light on information provision or online communication in various contexts. Most prominently, techniques have been developed to track, map, and assess Web-based informal scholarly communication, for example, in terms of the hyperlinks between academic Web sites or the online impact of digital repositories. In addition, a range of nonacademic issues and groups of Web users have also been analyzed.
  6. fwt: Webseiten liegen im Schnitt nur 19 Klicks auseinander (2001) 0.04
    0.043292698 = product of:
      0.12987809 = sum of:
        0.07803083 = weight(_text_:wide in 5962) [ClassicSimilarity], result of:
          0.07803083 = score(doc=5962,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.4153836 = fieldWeight in 5962, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5962)
        0.051847253 = weight(_text_:web in 5962) [ClassicSimilarity], result of:
          0.051847253 = score(doc=5962,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.37471575 = fieldWeight in 5962, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5962)
      0.33333334 = coord(2/6)
    
    Abstract
    "Dokumente im World Wide Web liegen durchschnittlich 19 Mausklicks voneinander entfernt - angesichts von schätzungsweise mehr als einer Milliarde Seiten erstaunlich nahe. Albert-Lazlo Barabai vom Institut für Physik der University von Notre Dame (US-Staat Indiana) stellt seine Studie in der britischen Fachzeitschrift Physics World (Juli 2001, S. 33) vor. Der Statistiker konstruierte im Rechner zunächst Modelle von großen Computernetzwerken. Grundlage für diese Abbilder war die Analyse eines kleinen Teils der Verbindungen im Web, die der Wissenschaftler automatisch von einem Programm hatte prüfen lassen. Um seine Ergebnisse zu erklären, vergleicht Barabai das World Wide Web mit den Verbindungen internationaler Fluglinien. Dort gebe es zahlreiche Flughäfen, die meist nur mit anderen Flugplätzen in ihrer näheren Umgebung in Verbindung stünden. Diese kleineren Verteiler stehen ihrerseits mit einigen wenigen großen Airports wie Frankfurt, New York oder Hongkong in Verbindung. Ähnlich sei es im Netz, wo wenige große Server die Verteilung großer Datenmengen übernähmen und weite Entfernungen überbrückten. Damit seien die Online-Wege vergleichsweise kurz. Die Untersuchung spiegelt allerdings die Situation des Jahres 1999 wider. Seinerzeit gab es vermutlich 800 Millionen Knoten."
  7. White, H.D.: Pathfinder networks and author cocitation analysis : a remapping of paradigmatic information scientists (2003) 0.04
    0.040593065 = product of:
      0.08118613 = sum of:
        0.02494502 = weight(_text_:web in 1459) [ClassicSimilarity], result of:
          0.02494502 = score(doc=1459,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.18028519 = fieldWeight in 1459, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1459)
        0.02143089 = weight(_text_:retrieval in 1459) [ClassicSimilarity], result of:
          0.02143089 = score(doc=1459,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.16710453 = fieldWeight in 1459, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1459)
        0.034810223 = product of:
          0.05221533 = sum of:
            0.023233337 = weight(_text_:system in 1459) [ClassicSimilarity], result of:
              0.023233337 = score(doc=1459,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.17398985 = fieldWeight in 1459, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1459)
            0.028981995 = weight(_text_:29 in 1459) [ClassicSimilarity], result of:
              0.028981995 = score(doc=1459,freq=2.0), product of:
                0.14914064 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19432661 = fieldWeight in 1459, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1459)
          0.6666667 = coord(2/3)
      0.5 = coord(3/6)
    
    Abstract
In their 1998 article "Visualizing a discipline: An author cocitation analysis of information science, 1972-1995," White and McCain used multidimensional scaling, hierarchical clustering, and factor analysis to display the specialty groupings of 120 highly-cited ("paradigmatic") information scientists. These statistical techniques are traditional in author cocitation analysis (ACA). It is shown here that a newer technique, Pathfinder Networks (PFNETs), has considerable advantages for ACA. In PFNETs, nodes represent authors, and explicit links represent weighted paths between nodes, the weights in this case being cocitation counts. The links can be drawn to exclude all but the single highest counts for author pairs, which reduces a network of authors to only the most salient relationships. When these are mapped, dominant authors can be defined as those with relatively many links to other authors (i.e., high degree centrality). Links between authors and dominant authors define specialties, and links between dominant authors connect specialties into a discipline. Maps are made with one rather than several computer routines and in one rather than many computer passes. Also, PFNETs can, and should, be generated from matrices of raw counts rather than Pearson correlations, which removes a computational step associated with traditional ACA. White and McCain's raw data from 1998 are remapped as a PFNET. It is shown that the specialty groupings correspond closely to those seen in the factor analysis of the 1998 article. Because PFNETs are fast to compute, they are used in AuthorLink, a new Web-based system that creates live interfaces for cocited author retrieval on the fly.
    Date
    29. 3.2003 19:55:24
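
The link reduction White describes can be sketched on toy data: for each author, keep only the cocitation link(s) carrying that author's highest count. This illustrates the "single highest counts" idea only, not the full Pathfinder algorithm (which also enforces a triangle inequality over paths); all names and counts below are hypothetical:

```python
# Simplified sketch of the PFNET-style link pruning: for each author, keep
# only the cocitation link(s) with that author's highest count.
cocitations = {
    ("Salton", "van Rijsbergen"): 12,
    ("Salton", "Garfield"): 3,
    ("Garfield", "Small"): 9,
}

best = {}
for (a, b), count in cocitations.items():
    for author in (a, b):
        best[author] = max(best.get(author, 0), count)

pruned = {pair: c for pair, c in cocitations.items()
          if c in (best[pair[0]], best[pair[1]])}
print(pruned)  # keeps Salton-van Rijsbergen (12) and Garfield-Small (9)
```
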
  8. Thelwall, M.; Harries, G.: Do the Web Sites of Higher Rated Scholars Have Significantly More Online Impact? (2004) 0.04
    0.04027172 = product of:
      0.12081516 = sum of:
        0.045980107 = weight(_text_:wide in 2123) [ClassicSimilarity], result of:
          0.045980107 = score(doc=2123,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.24476713 = fieldWeight in 2123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2123)
        0.074835055 = weight(_text_:web in 2123) [ClassicSimilarity], result of:
          0.074835055 = score(doc=2123,freq=18.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5408555 = fieldWeight in 2123, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2123)
      0.33333334 = coord(2/6)
    
    Abstract
    The quality and impact of academic Web sites is of interest to many audiences, including the scholars who use them and Web educators who need to identify best practice. Several large-scale European Union research projects have been funded to build new indicators for online scientific activity, reflecting recognition of the importance of the Web for scholarly communication. In this paper we address the key question of whether higher rated scholars produce higher impact Web sites, using the United Kingdom as a case study and measuring scholars' quality in terms of university-wide average research ratings. Methodological issues concerning the measurement of the online impact are discussed, leading to the adoption of counts of links to a university's constituent single domain Web sites from an aggregated counting metric. The findings suggest that universities with higher rated scholars produce significantly more Web content but with a similar average online impact. Higher rated scholars therefore attract more total links from their peers, but only by being more prolific, refuting earlier suggestions. It can be surmised that general Web publications are very different from scholarly journal articles and conference papers, for which scholarly quality does associate with citation impact. This has important implications for the construction of new Web indicators, for example that online impact should not be used to assess the quality of small groups of scholars, even within a single discipline.
  9. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.04
    0.039769344 = product of:
      0.07953869 = sum of:
        0.042333104 = weight(_text_:web in 2742) [ClassicSimilarity], result of:
          0.042333104 = score(doc=2742,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3059541 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.025717068 = weight(_text_:retrieval in 2742) [ClassicSimilarity], result of:
          0.025717068 = score(doc=2742,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.20052543 = fieldWeight in 2742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.011488512 = product of:
          0.034465536 = sum of:
            0.034465536 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.034465536 = score(doc=2742,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction with search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  10. Bar-Ilan, J.; Peritz, B.C.: ¬A method for measuring the evolution of a topic on the Web : the case of "informetrics" (2009) 0.04
    0.038845092 = product of:
      0.116535276 = sum of:
        0.045980107 = weight(_text_:wide in 3089) [ClassicSimilarity], result of:
          0.045980107 = score(doc=3089,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.24476713 = fieldWeight in 3089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3089)
        0.07055517 = weight(_text_:web in 3089) [ClassicSimilarity], result of:
          0.07055517 = score(doc=3089,freq=16.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5099235 = fieldWeight in 3089, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3089)
      0.33333334 = coord(2/6)
    
    Abstract
The universe of information has been enriched by the creation of the World Wide Web, which has become an indispensable source for research. Since this source is growing at an enormous speed, an in-depth look at its performance to create a method for its evaluation has become necessary; however, growth is not the only process that influences the evolution of the Web. During their lifetime, Web pages may change their content and links to/from other Web pages, be duplicated or moved to a different URL, be removed from the Web either temporarily or permanently, and be temporarily inaccessible due to server and/or communication failures. To obtain a better understanding of these processes, we developed a method for tracking topics on the Web for long periods of time, without the need to employ a crawler and relying only on publicly available resources. The multiple data-collection methods used allow us to discover new pages related to the topic, to identify changes to existing pages, and to detect previously existing pages that have been removed or whose content is no longer relevant to the specified topic. The method is demonstrated through monitoring Web pages that contain the term informetrics for a period of 8 years. The data-collection method also allowed us to analyze the dynamic changes in search engine coverage, illustrated here on Google - the search engine used for the longest period of time for data collection in this project.
  11. Thelwall, M.; Vaughan, L.: Webometrics : an introduction to the special issue (2004) 0.04
    0.037826736 = product of:
      0.1134802 = sum of:
        0.07356817 = weight(_text_:wide in 2908) [ClassicSimilarity], result of:
          0.07356817 = score(doc=2908,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.3916274 = fieldWeight in 2908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=2908)
        0.03991203 = weight(_text_:web in 2908) [ClassicSimilarity], result of:
          0.03991203 = score(doc=2908,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.2884563 = fieldWeight in 2908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2908)
      0.33333334 = coord(2/6)
    
    Abstract
Webometrics, the quantitative study of Web phenomena, is a field encompassing contributions from information science, computer science, and statistical physics. Its methodology draws especially from bibliometrics. This special issue presents contributions that both push forward the field and illustrate a wide range of webometric approaches.
  12. Wolfram, D.: Applied informetrics for information retrieval research (2003) 0.04
    0.03589107 = product of:
      0.10767321 = sum of:
        0.08908654 = weight(_text_:retrieval in 4589) [ClassicSimilarity], result of:
          0.08908654 = score(doc=4589,freq=6.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.6946405 = fieldWeight in 4589, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4589)
        0.018586671 = product of:
          0.05576001 = sum of:
            0.05576001 = weight(_text_:system in 4589) [ClassicSimilarity], result of:
              0.05576001 = score(doc=4589,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.41757566 = fieldWeight in 4589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4589)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The author demonstrates how informetric analysis of information retrieval system content and use provides valuable insights that have applications for the modelling, design, and evaluation of information retrieval systems.
  13. Thelwall, M.; Wilkinson, D.: Finding similar academic Web sites with links, bibliometric couplings and colinks (2004) 0.03
    0.034434646 = product of:
      0.10330394 = sum of:
        0.06693452 = weight(_text_:web in 2571) [ClassicSimilarity], result of:
          0.06693452 = score(doc=2571,freq=10.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.48375595 = fieldWeight in 2571, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2571)
        0.036369424 = weight(_text_:retrieval in 2571) [ClassicSimilarity], result of:
          0.036369424 = score(doc=2571,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.2835858 = fieldWeight in 2571, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2571)
      0.33333334 = coord(2/6)
    
    Abstract
A common task in both Webmetrics and Web information retrieval is to identify a set of Web pages or sites that are similar in content. In this paper we assess the extent to which links, colinks and couplings can be used to identify similar Web sites. As an experiment, a random sample of 500 pairs of domains from the UK academic Web was taken, and human assessments of site similarity, based upon content type, were compared against ratings for the three concepts. The results show that using a combination of all three gives the highest probability of identifying similar sites, but surprisingly this was only a marginal improvement over using links alone. Another unexpected result was that high values for either colink counts or couplings were associated with only a small increased likelihood of similarity. The principal advantage of using couplings and colinks was found to be greater coverage, in terms of a much larger number of pairs of sites being connected by these measures, rather than increased probability of similarity. In information retrieval terminology, this is improved recall rather than improved precision.
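
The three similarity signals compared in the paper can be illustrated on a toy directed site-level link graph (all sites and links below are invented): a direct link between two sites, couplings (shared outlink targets), and colinks (shared sources that link to both):

```python
# Toy site-level link graph: each site maps to the set of sites it links to.
outlinks = {
    "A": {"B", "X", "Y"},
    "B": {"X", "Z"},
    "C": {"A", "B"},
}

def signals(a, b, outlinks):
    direct = b in outlinks.get(a, set()) or a in outlinks.get(b, set())
    couplings = outlinks.get(a, set()) & outlinks.get(b, set())
    colinks = {s for s, targets in outlinks.items()
               if a in targets and b in targets}
    return direct, len(couplings), len(colinks)

print(signals("A", "B", outlinks))
# (True, 1, 1): direct link, one shared target (X), one colink source (C)
```
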
  14. Thelwall, M.; Li, X.; Barjak, F.; Robinson, S.: Assessing the international web connectivity of research groups (2008) 0.03
    0.033919625 = product of:
      0.10175887 = sum of:
        0.045980107 = weight(_text_:wide in 1401) [ClassicSimilarity], result of:
          0.045980107 = score(doc=1401,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.24476713 = fieldWeight in 1401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1401)
        0.05577876 = weight(_text_:web in 1401) [ClassicSimilarity], result of:
          0.05577876 = score(doc=1401,freq=10.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.40312994 = fieldWeight in 1401, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1401)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose - The purpose of this paper is to claim that it is useful to assess the web connectivity of research groups, describe hyperlink-based techniques to achieve this and present brief details of European life sciences research groups as a case study. Design/methodology/approach - A commercial search engine was harnessed to deliver hyperlink data via its automatic query submission interface. A special purpose link analysis tool, LexiURL, then summarised and graphed the link data in appropriate ways. Findings - Webometrics can provide a wide range of descriptive information about the international connectivity of research groups. Research limitations/implications - Only one field was analysed, data was taken from only one search engine, and the results were not validated. Practical implications - Web connectivity seems to be particularly important for attracting overseas job applicants and to promote research achievements and capabilities, and hence we contend that it can be useful for national and international governments to use webometrics to ensure that the web is being used effectively by research groups. Originality/value - This is the first paper to make a case for the value of using a range of webometric techniques to evaluate the web presences of research groups within a field, and possibly the first "applied" webometrics study produced for an external contract.
  15. Meho, L.I.; Rogers, Y.: Citation counting, citation ranking, and h-index of human-computer interaction researchers : a comparison of Scopus and Web of Science (2008) 0.03
    0.033544913 = product of:
      0.10063474 = sum of:
        0.06599832 = weight(_text_:web in 2352) [ClassicSimilarity], result of:
          0.06599832 = score(doc=2352,freq=14.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.47698978 = fieldWeight in 2352, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2352)
        0.034636416 = product of:
          0.05195462 = sum of:
            0.023233337 = weight(_text_:system in 2352) [ClassicSimilarity], result of:
              0.023233337 = score(doc=2352,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.17398985 = fieldWeight in 2352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2352)
            0.028721282 = weight(_text_:22 in 2352) [ClassicSimilarity], result of:
              0.028721282 = score(doc=2352,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19345059 = fieldWeight in 2352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2352)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR - a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if citations in journals only are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Google Scholar with those based on the union of Scopus and Web of Science. The study concludes that Scopus can be used as a sole data source for citation-based research and evaluation in HCI, especially when citations in conference proceedings are sought, and that researchers should manually calculate h scores instead of relying on system calculations.
    Object
    Web of Science
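
The h-index the study compares across databases is simple to compute by hand from per-paper citation counts, as the authors recommend instead of relying on system calculations. A minimal sketch (the citation counts are invented):

```python
# Minimal h-index computation from a list of per-paper citation counts.
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```
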
  16. Stock, W.G.; Weber, S.: Facets of informetrics : Preface (2006) 0.03
    0.032781616 = product of:
      0.06556323 = sum of:
        0.034564834 = weight(_text_:web in 76) [ClassicSimilarity], result of:
          0.034564834 = score(doc=76,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.24981049 = fieldWeight in 76, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=76)
        0.017144712 = weight(_text_:retrieval in 76) [ClassicSimilarity], result of:
          0.017144712 = score(doc=76,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.13368362 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=76)
        0.013853687 = product of:
          0.04156106 = sum of:
            0.04156106 = weight(_text_:system in 76) [ClassicSimilarity], result of:
              0.04156106 = score(doc=76,freq=10.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.31124252 = fieldWeight in 76, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
According to Jean M. Tague-Sutcliffe, "informetrics" is "the study of the quantitative aspects of information in any form, not just records or bibliographies, and in any social group, not just scientists" (Tague-Sutcliffe, 1992, 1). Leo Egghe also defines "informetrics" in a very broad sense: "(W)e will use the term 'informetrics' as the broad term comprising all-metrics studies related to information science, including bibliometrics (bibliographies, libraries,...), scientometrics (science policy, citation analysis, research evaluation,...), webometrics (metrics of the web, the Internet or other social networks such as citation or collaboration networks), ..." (Egghe, 2005b, 1311). According to Concepcion S. Wilson, "informetrics" is "the quantitative study of collections of moderate-sized units of potentially informative text, directed to the scientific understanding of information processes at the social level" (Wilson, 1999, 211). We should add to Wilson's units of text also digital collections of images, videos, spoken documents and music. Dietmar Wolfram divides "informetrics" into two aspects, "system-based characteristics that arise from the documentary content of IR systems and how they are indexed, and usage-based characteristics that arise from how users interact with system content and the system interfaces that provide access to the content" (Wolfram, 2003, 6). We would like to follow Tague-Sutcliffe, Egghe, Wilson and Wolfram (and others, for example Björneborn & Ingwersen, 2004) and call this broad research of empirical information science "informetrics". Informetrics therefore includes all quantitative studies in information science. If a scientist performs scientific investigations empirically, e.g. on information users' behavior, on the scientific impact of academic journals, on the development of a company's patent application activity, on links between Web pages, on the temporal distribution of blog postings discussing a given topic, on the availability, recall and precision of retrieval systems, on the usability of Web sites, and so on, he or she contributes to informetrics. We see three subject areas in information science in which such quantitative research takes place: information users and information usage, evaluation of information systems, and information itself. Following Wolfram's article, we divide his system-based characteristics between the "information itself" category and the "information system" category. Figure 1 is a simplistic graph of the subjects and research areas of informetrics as an empirical information science.
  17. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.03
    0.03066202 = product of:
      0.09198606 = sum of:
        0.07055517 = weight(_text_:web in 3091) [ClassicSimilarity], result of:
          0.07055517 = score(doc=3091,freq=16.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5099235 = fieldWeight in 3091, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
        0.02143089 = weight(_text_:retrieval in 3091) [ClassicSimilarity], result of:
          0.02143089 = score(doc=3091,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.16710453 = fieldWeight in 3091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
      0.33333334 = coord(2/6)
    
    Abstract
Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for the retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis, aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that were searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
  18. Thelwall, M.: ¬A layered approach for investigating the topological structure of communities in the Web (2003) 0.03
    0.03047014 = product of:
      0.09141042 = sum of:
        0.06110257 = weight(_text_:web in 4450) [ClassicSimilarity], result of:
          0.06110257 = score(doc=4450,freq=12.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.4416067 = fieldWeight in 4450, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4450)
        0.030307854 = weight(_text_:retrieval in 4450) [ClassicSimilarity], result of:
          0.030307854 = score(doc=4450,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.23632148 = fieldWeight in 4450, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4450)
      0.33333334 = coord(2/6)
    
    Abstract
A layered approach for identifying communities in the Web is presented and explored by applying the Flake exact community identification algorithm to the UK academic Web. Although community or topic identification is a common task in information retrieval, a new perspective is developed by: the application of alternative document models, shifting the focus from individual pages to aggregated collections based upon Web directories, domains and entire sites; the removal of internal site links; and the adaptation of a new fast algorithm to allow fully-automated community identification using all possible single starting points. The overall topology of the graphs in the three least-aggregated layers was first investigated and found to include a large number of isolated points but, surprisingly, with most of the remainder being in one huge connected component, exact proportions varying by layer. The community identification process then found that the number of communities far exceeded the number of topological components, indicating that community identification is a potentially useful technique, even with random starting points. Both the number and size of communities identified were dependent on the parameter of the algorithm, with very different results being obtained in each case. In conclusion, the UK academic Web is embedded with layers of non-trivial communities and, if it is not unique in this, then there is the promise of improved results for information retrieval algorithms that can exploit this additional structure, and the application of the technique directly to partially automate Web metrics tasks such as that of finding all pages related to a given subject hosted by a single country's universities.
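
The topology check described here amounts to dropping internal (same-site) links and splitting the remaining graph into connected components, separating isolated sites from the giant component. A toy sketch (the site names and links are invented; the study itself applies the Flake algorithm to much larger graphs):

```python
from collections import defaultdict

# Toy sketch: remove internal site links, then find connected components.
links = [("a.ac.uk", "b.ac.uk"), ("b.ac.uk", "c.ac.uk"),
         ("d.ac.uk", "d.ac.uk"), ("e.ac.uk", "e.ac.uk")]

graph, nodes = defaultdict(set), set()
for src, dst in links:
    nodes.update((src, dst))
    if src != dst:              # identical source and target site = internal link
        graph[src].add(dst)
        graph[dst].add(src)     # treat links as undirected for components

seen, components = set(), []
for start in nodes:
    if start in seen:
        continue
    stack, comp = [start], set()
    while stack:                # depth-first traversal of one component
        n = stack.pop()
        if n not in comp:
            comp.add(n)
            stack.extend(graph[n] - comp)
    seen |= comp
    components.append(comp)

print(sorted(components, key=len, reverse=True))
# one 3-site component, then isolated d.ac.uk and e.ac.uk
```
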
  19. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of google unique Web citations (2007) 0.03
    0.029485613 = product of:
      0.08845684 = sum of:
        0.07888308 = weight(_text_:web in 586) [ClassicSimilarity], result of:
          0.07888308 = score(doc=586,freq=20.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5701118 = fieldWeight in 586, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.009573761 = product of:
          0.028721282 = sum of:
            0.028721282 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.028721282 = score(doc=586,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
  20. Bar-Ilan, J.; Peritz, B.C.: Informetric theories and methods for exploring the Internet : an analytical survey of recent research literature (2002) 0.03
    0.028370049 = product of:
      0.08511014 = sum of:
        0.055176124 = weight(_text_:wide in 813) [ClassicSimilarity], result of:
          0.055176124 = score(doc=813,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=813)
        0.029934023 = weight(_text_:web in 813) [ClassicSimilarity], result of:
          0.029934023 = score(doc=813,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.21634221 = fieldWeight in 813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=813)
      0.33333334 = coord(2/6)
    
    Abstract
    The Internet, and more specifically the World Wide Web, is quickly becoming one of our main information sources. Systematic evaluation and analysis can help us understand how this medium works, grows, and changes, and how it influences our lives and research. New approaches in informetrics can provide an appropriate means towards achieving the above goals, and towards establishing a sound theory. This paper presents a selective review of research based on the Internet, using bibliometric and informetric methods and tools. Some of these studies clearly show the applicability of bibliometric laws to the Internet, while others establish new definitions and methods based on the respective definitions for printed sources. Both informetrics and Internet research can gain from these additional methods.

Languages

  • e 193
  • d 13

Types

  • a 200
  • m 4
  • el 2
  • r 2
  • s 1