Search (40 results, page 1 of 2)

  • × theme_ss:"Informetrie"
  • × theme_ss:"Internet"
  • × year_i:[2000 TO 2010}
  1. Thelwall, M.; Ruschenburg, T.: Grundlagen und Forschungsfelder der Webometrie (2006) 0.03
    0.03329213 = product of:
      0.06658426 = sum of:
        0.010096614 = weight(_text_:in in 77) [ClassicSimilarity], result of:
          0.010096614 = score(doc=77,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 77, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=77)
        0.032829512 = weight(_text_:und in 77) [ClassicSimilarity], result of:
          0.032829512 = score(doc=77,freq=6.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.33931053 = fieldWeight in 77, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=77)
        0.02365813 = product of:
          0.04731626 = sum of:
            0.04731626 = weight(_text_:22 in 77) [ClassicSimilarity], result of:
              0.04731626 = score(doc=77,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.30952093 = fieldWeight in 77, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=77)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
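The score explanation above follows Lucene's ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm, and the sum over terms is scaled by a coordination factor (here 3 of 6 query terms matched, so 0.5). A minimal sketch reproducing the first term weight (function names are illustrative, not the Lucene API):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * i * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Term "in" in doc 77: freq=4, docFreq=30841, maxDocs=44218
w = term_weight(4.0, 30841, 44218, 0.043654136, 0.0625)
print(w)  # ~0.010096614, matching the weight(_text_:in ...) line above
```

The same function reproduces the other term weights in the tree; the document score is their sum times the coord factor.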
    
    Abstract
    Webometrics is a subfield of information science that currently concentrates on the analysis of link structures. It is strongly shaped by citation analysis, as its empirical focus on the study of science shows. In this article we discuss the use of link-based measures in a broader informetric context and evaluate various methods, also with regard to their general potential for the social sciences. A general framework for link analyses, including the required working steps, is presented as well. Finally, promising future fields of application for webometrics are identified, with particular attention to the analysis of blogs.
    Date
    4.12.2006 12:12:22
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.8, S.401-406
  2. Hassler, M.: Web analytics : Metriken auswerten, Besucherverhalten verstehen, Website optimieren ; [Metriken analysieren und interpretieren ; Besucherverhalten verstehen und auswerten ; Website-Ziele definieren, Webauftritt optimieren und den Erfolg steigern] (2009) 0.02
    0.016691018 = product of:
      0.05007305 = sum of:
        0.006984316 = weight(_text_:in in 3586) [ClassicSimilarity], result of:
          0.006984316 = score(doc=3586,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.11761922 = fieldWeight in 3586, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3586)
        0.043088734 = weight(_text_:und in 3586) [ClassicSimilarity], result of:
          0.043088734 = score(doc=3586,freq=54.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.44534507 = fieldWeight in 3586, product of:
              7.3484693 = tf(freq=54.0), with freq of:
                54.0 = termFreq=54.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3586)
      0.33333334 = coord(2/6)
    
    Abstract
    Web analytics refers to the collection, analysis, and evaluation of website usage data, with the goal of using this information to better understand visitor behavior and to optimize the website. Depending on the goal of your own website - e.g. communicating a brand value or increasing contact requests, orders, or newsletter subscriptions - web analytics lets you find out where the weak points of your website lie and how you can better reach your goals through appropriate optimizations. Web analytics is not only of interest to website operators and IT departments; it is increasingly useful for marketing and management as well. With this book you will learn how to analyze the usage of your website. You can, for example, examine which traffic source generates the most revenue or which areas of the website are used most frequently, and much more. In this way you will get to know your visitors, their behavior, and their motivation better, tune your website accordingly, and thus increase your success. To draw real added value from web analytics you need solid knowledge. In his book, Marco Hassler gives you a comprehensive introduction to web analytics. He shows in detail how visitor behavior is analyzed and which metrics you can usefully apply and when. In the last part of the book the author shows how to use your results to optimize the website toward its goals via conversion measurements. The aim of this book is to convey concrete web analytics knowledge and to give valuable, practice-oriented tips. To this end, the book builds bridges to adjacent topics such as usability, user-centered design, online branding, online marketing, and search engine optimization. Marco Hassler gives you clear guidance and instructions on how to reach your goals.
    BK
    85.20 / Betriebliche Information und Kommunikation
    Classification
    85.20 / Betriebliche Information und Kommunikation
    Footnote
    Rez. in Mitt. VÖB 63(2010) H.1/2, S.147-148 (M. Buzinkay): "Website design and website analysis go hand in hand. Unfortunately the latter is rarely, if ever, taken into account. Wrongly so, because analyzing one's own measures is essential for correction and optimization. Even when people accept that analyzing websites is important, the road to actually doing it is often long. Why? Analysis means continuous effort, and many are not willing, or do not have the time, to invest it. Once you have reached the conviction that you should nevertheless optimize your web activities, or at least question them occasionally, it is worth picking up Marco Hassler's "Web Analytics". It is definitely not a book for a single evening's reading, but a volume you have to work with. Here too: web analysis means work and intensive engagement (a circumstance many do not want to understand or accept). The book is very dense and yet remains clearly organized. The arrangement of topics - from the basics of data collection, through the definition of metrics, to the optimization of pages and finally to working with web analysis tools - provides a common thread in which each topic builds nicely on the previous one. This also makes it easy to build up a project of your own step by step alongside reading the book. Numerous screenshots and illustrations further ease the understanding of the relationships and explanations in the text. The book also convinces through its depth (apart from the chapter on assembling personas) and its pleasant writing style. From me comes an urgent recommendation to everyone concerned with online marketing in general and with measuring the success of websites and web activities in particular."
  3. Pernik, V.; Schlögl, C.: Möglichkeiten und Grenzen von Web Structure Mining am Beispiel von informationswissenschaftlichen Hochschulinstituten im deutschsprachigen Raum (2006) 0.02
    0.016001623 = product of:
      0.048004866 = sum of:
        0.010096614 = weight(_text_:in in 78) [ClassicSimilarity], result of:
          0.010096614 = score(doc=78,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 78, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=78)
        0.037908252 = weight(_text_:und in 78) [ClassicSimilarity], result of:
          0.037908252 = score(doc=78,freq=8.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.39180204 = fieldWeight in 78, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=78)
      0.33333334 = coord(2/6)
    
    Abstract
    This article presents a webometric study of information science university departments in the German-speaking countries. The aim of the study was, on the one hand, to analyze the link relationships between the departments and, on the other, to identify similarities (for example due to subject, location, or institutional circumstances). Not only the procedure for such analyses and the resulting findings are presented; in particular, the problem areas and limitations associated with analyzing link structures on the Web are addressed.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.8, S.407-414
  4. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.01
    0.008438686 = product of:
      0.025316058 = sum of:
        0.0075724614 = weight(_text_:in in 2742) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=2742,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.035487194 = score(doc=2742,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings for improving the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  5. fwt: Webseiten liegen im Schnitt nur 19 Klicks auseinander (2001) 0.01
    0.008308224 = product of:
      0.024924671 = sum of:
        0.010709076 = weight(_text_:in in 5962) [ClassicSimilarity], result of:
          0.010709076 = score(doc=5962,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 5962, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5962)
        0.014215595 = weight(_text_:und in 5962) [ClassicSimilarity], result of:
          0.014215595 = score(doc=5962,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 5962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=5962)
      0.33333334 = coord(2/6)
    
    Abstract
    "Documents on the World Wide Web are on average 19 mouse clicks apart - remarkably close, given an estimated more than one billion pages. Albert-László Barabási of the Department of Physics at the University of Notre Dame (Indiana, USA) presents his study in the British journal Physics World (July 2001, p. 33). The statistician first constructed computer models of large networks. These models were based on an analysis of a small portion of the links in the Web, which the scientist had a program check automatically. To explain his results, Barabási compares the World Wide Web with the routes of international airlines. There are numerous airports that mostly connect only to other airfields in their vicinity. These smaller hubs in turn connect to a few large airports such as Frankfurt, New York, or Hong Kong. The situation on the Net is similar: a few large servers handle the distribution of large amounts of data and bridge long distances. This keeps online paths comparatively short. The study, however, reflects the situation of 1999; at that time there were presumably 800 million nodes."
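The "19 clicks" figure comes from the scaling law reported in the underlying study (Albert, Jeong & Barabási, Nature 1999): the average shortest click path grows only logarithmically with the number of documents, d ≈ 0.35 + 2.06·log₁₀(N). A quick check against the roughly 800 million nodes mentioned above:

```python
import math

# Empirical scaling from Albert, Jeong & Barabási (1999):
# average shortest click path d ~ 0.35 + 2.06 * log10(N) for N documents.
def avg_click_distance(n_documents):
    return 0.35 + 2.06 * math.log10(n_documents)

print(round(avg_click_distance(8e8)))  # ~19 clicks for the ~800 million pages of 1999
```

Because the growth is logarithmic, even a tenfold larger Web adds only about two clicks to the average distance.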
  6. Maharana, B.; Nayak, K.; Sahu, N.K.: Scholarly use of web resources in LIS research : a citation analysis (2006) 0.00
    0.0023517415 = product of:
      0.014110449 = sum of:
        0.014110449 = weight(_text_:in in 53) [ClassicSimilarity], result of:
          0.014110449 = score(doc=53,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2376267 = fieldWeight in 53, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=53)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The essential purpose of this paper is to measure the amount of web resources used for scholarly contributions in the area of library and information science (LIS) in India. It further aims to make an analysis of the nature and type of web resources and studies the various standards for web citations. Design/methodology/approach - In this study, the result of analysis of 292 web citations spread over 95 scholarly papers published in the proceedings of the National Conference of the Society for Information Science, India (SIS-2005) has been reported. All the 292 web citations were scanned and data relating to types of web domains, file formats, styles of citations, etc., were collected through a structured check list. The data thus obtained were systematically analyzed, figurative representations were made and appropriate interpretations were drawn. Findings - The study revealed that 292 (34.88 per cent) out of 837 citations were web citations, indicating a significant correlation between the use of Internet resources and the research productivity of LIS professionals in India. The highest number of web citations (35.6 per cent) was from .edu/.ac type domains. Most of the web resources (46.9 per cent) cited in the study were hypertext markup language (HTML) files. Originality/value - The paper is the result of an original analysis of web citations undertaken in order to study the dependence of LIS professionals in India on web sources for their scholarly contributions. This carries research value for web content providers, authors and researchers in LIS.
  7. Koehler, W.: Web page change and persistence : a four-year longitudinal study (2002) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 203) [ClassicSimilarity], result of:
          0.013115887 = score(doc=203,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 203, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=203)
      0.16666667 = coord(1/6)
    
    Abstract
    Changes in the topography of the Web can be expressed in at least four ways: (1) more sites on more servers in more places, (2) more pages and objects added to existing sites and pages, (3) changes in traffic, and (4) modifications to existing text, graphic, and other Web objects. This article does not address the first three factors (more sites, more pages, more traffic) in the growth of the Web. It focuses instead on changes to an existing set of Web documents. The article documents changes to an aging set of Web pages, first identified and "collected" in December 1996 and followed weekly thereafter. Results are reported through February 2001. The article addresses two related phenomena: (1) the life cycle of Web objects, and (2) changes to Web objects. These data reaffirm that the half-life of a Web page is approximately 2 years. There is variation among Web pages by top-level domain and by page type (navigation, content). Web page content appears to stabilize over time; aging pages change less often than they once did.
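The reported two-year half-life implies, under a constant-rate decay model (an assumption for illustration; the article itself reports variation by domain and page type, and stabilization with age), a simple survival estimate:

```python
def surviving_fraction(age_years, half_life_years=2.0):
    # Constant-rate decay: fraction of pages still resolving after age_years,
    # given the observed ~2-year half-life of a Web page.
    return 0.5 ** (age_years / half_life_years)

# Over the study's roughly four-year window, such a model predicts about
# a quarter of the original December 1996 pages still surviving.
print(surviving_fraction(4.0))  # 0.25
```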
  8. Wouters, P.; Vries, R. de: Formally citing the Web (2004) 0.00
    0.002145118 = product of:
      0.0128707085 = sum of:
        0.0128707085 = weight(_text_:in in 3093) [ClassicSimilarity], result of:
          0.0128707085 = score(doc=3093,freq=26.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2167489 = fieldWeight in 3093, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3093)
      0.16666667 = coord(1/6)
    
    Abstract
    How do authors refer to Web-based information sources in their formal scientific publications? It is not yet well known how scientists and scholars actually include new types of information sources, available through the new media, in their published work. This article reports on a comparative study of the lists of references in 38 scientific journals in five different scientific and social scientific fields. The fields are sociology, library and information science, biochemistry and biotechnology, neuroscience, and the mathematics of computing. As is well known, references, citations, and hyperlinks play different roles in academic publishing and communication. Our study focuses on hyperlinks as attributes of references in formal scholarly publications. The study developed and applied a method to analyze the differential roles of publishing media in the analysis of scientific and scholarly literature references. The present secondary databases that include reference and citation data (the Web of Science) cannot be used for this type of research. By the automated processing and analysis of the full text of scientific and scholarly articles, we were able to extract the references and hyperlinks contained in these references in relation to other features of the scientific and scholarly literature. Our findings show that hyperlinking references are indeed, as expected, abundantly present in the formal literature. They also tend to cite more recent literature than the average reference. The large majority of the references are to Web instances of traditional scientific journals. Other types of Web-based information sources are less well represented in the lists of references, except in the case of pure e-journals. We conclude that this can be explained by taking the role of the publisher into account. Indeed, it seems that the shift from print-based to electronic publishing has created new roles for the publisher. By shaping the way scientific references are hyperlinked to other information sources, the publisher may have a large impact on the availability of scientific and scholarly information.
    Footnote
    Contribution in a special issue on webometrics
  9. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 3091) [ClassicSimilarity], result of:
          0.012620768 = score(doc=3091,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 3091, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
      0.16666667 = coord(1/6)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
    Footnote
    Contribution in a special issue on webometrics
  10. Vaughan, L.; Shaw, D.: Web citation data for impact assessment : a comparison of four science disciplines (2005) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 3880) [ClassicSimilarity], result of:
          0.012620768 = score(doc=3880,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 3880, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3880)
      0.16666667 = coord(1/6)
    
    Abstract
    The number and type of Web citations to journal articles in four areas of science are examined: biology, genetics, medicine, and multidisciplinary sciences. For a sample of 5,972 articles published in 114 journals, the median Web citation counts per journal article range from 6.2 in medicine to 10.4 in genetics. About 30% of Web citations in each area indicate intellectual impact (citations from articles or class readings, in contrast to citations from bibliographic services or the author's or journal's home page). Journals receiving more Web citations also have higher percentages of citations indicating intellectual impact. There is significant correlation between the number of citations reported in the databases from the Institute for Scientific Information (ISI, now Thomson Scientific) and the number of citations retrieved using the Google search engine (Web citations). The correlation is much weaker for journals published outside the United Kingdom or United States and for multidisciplinary journals. Web citation numbers are higher than ISI citation counts, suggesting that Web searches might be conducted for an earlier or a more fine-grained assessment of an article's impact. The Web-evident impact of non-UK/USA publications might provide a balance to the geographic or cultural biases observed in ISI's data, although the stability of Web citation counts is debatable.
  11. Vaughan, L.; Shaw, D.: Bibliographic and Web citations : what is the difference? (2003) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 5176) [ClassicSimilarity], result of:
          0.012620768 = score(doc=5176,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 5176, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5176)
      0.16666667 = coord(1/6)
    
    Abstract
    Vaughan and Shaw look at the relationship between traditional citation and Web citation (not hyperlinks but rather textual mentions of published papers). Using English language research journals in ISI's 2000 Journal Citation Report - Information and Library Science category - 1209 full length papers published in 1997 in 46 journals were identified. Each was searched in Social Science Citation Index and on the Web using Google phrase search by entering the title in quotation marks, followed where necessary for disambiguation by sub-titles, authors' names, and journal title words. After removing obvious false drops, the number of web sites was recorded for comparison with the SSCI counts. A second sample from 1992 was also collected for examination. There were a total of 16,371 web citations to the selected papers. The top and bottom four ranked journals were then examined, and every third citation to every third paper was selected and classified as to source type, domain, and country of origin. Web counts are much higher than ISI citation counts. Of the 46 journals from 1997, 26 demonstrated a significant correlation between Web and traditional citation counts, and 11 of the 15 in the 1992 sample also showed significant correlation. Journal impact factor in 1998 and 1999 correlated significantly with average Web citations per journal in the 1997 data, but at a low level. Thirty percent of web citations come from other papers posted on the web, and 30 percent from listings of web-based bibliographic services, while twelve percent come from class reading lists. High web citation journals often have web-accessible tables of content.
  12. Davis, P.M.; Cohen, S.A.: ¬The effect of the Web on undergraduate citation behavior 1996-1999 (2001) 0.00
    0.0019955188 = product of:
      0.011973113 = sum of:
        0.011973113 = weight(_text_:in in 5768) [ClassicSimilarity], result of:
          0.011973113 = score(doc=5768,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 5768, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5768)
      0.16666667 = coord(1/6)
    
    Abstract
    A citation analysis of undergraduate term papers in microeconomics revealed a significant decrease in the frequency of scholarly resources cited between 1996 and 1999. Book citations decreased from 30% to 19%, newspaper citations increased from 7% to 19%, and Web citations increased from 9% to 21%. Web citations checked in 2000 revealed that only 18% of URLs cited in 1996 led to the correct Internet document. For 1999 bibliographies, only 55% of URLs led to the correct document. The authors recommend (1) setting stricter guidelines for acceptable citations in course assignments; (2) creating and maintaining scholarly portals for authoritative Web sites with a commitment to long-term access; and (3) continuing to instruct students how to critically evaluate resources
  13. Thelwall, M.; Wilkinson, D.: Finding similar academic Web sites with links, bibliometric couplings and colinks (2004)
    Abstract
    A common task in both Webmetrics and Web information retrieval is to identify a set of Web pages or sites that are similar in content. In this paper we assess the extent to which links, colinks and couplings can be used to identify similar Web sites. As an experiment, a random sample of 500 pairs of domains from the UK academic Web was taken, and human assessments of site similarity, based upon content type, were compared against ratings for the three concepts. The results show that using a combination of all three gives the highest probability of identifying similar sites, but surprisingly this was only a marginal improvement over using links alone. Another unexpected result was that high values for either colink counts or couplings were associated with only a small increased likelihood of similarity. The principal advantage of using couplings and colinks was found to be greater coverage, in terms of a much larger number of pairs of sites being connected by these measures, rather than increased probability of similarity. In information retrieval terminology, this is improved recall rather than improved precision.
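The three measures compared in this abstract (direct links, couplings, and co-links) can be illustrated with a small sketch. The link graph and site names below are invented for demonstration and are not the paper's data:

```python
# Toy illustration (not the authors' code or data): the three similarity
# measures compared in the abstract, computed over an invented link graph.
links = {            # site -> set of sites it links out to
    "A": {"B", "C", "D"},
    "B": {"C", "E"},
    "C": {"A"},
    "D": {"A", "B"},
    "E": {"B"},
}

def direct_links(x, y):
    """Direct links between x and y, counted in either direction."""
    return int(y in links.get(x, set())) + int(x in links.get(y, set()))

def coupling(x, y):
    """Coupling: number of targets that both x and y link to."""
    return len(links.get(x, set()) & links.get(y, set()))

def colinks(x, y):
    """Co-links: number of sites that link to both x and y."""
    return sum(1 for targets in links.values() if x in targets and y in targets)

print(direct_links("A", "B"), coupling("A", "B"), colinks("A", "B"))  # → 1 1 1
```

Coupling counts shared outlink targets and co-links count common inlink sources, mirroring bibliographic coupling and co-citation respectively in the citation-analysis analogy the paper draws on.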
  14. Kuperman, V.: Productivity in the Internet mailing lists : a bibliometric analysis (2006)
    Abstract
    The author examines patterns of productivity in Internet mailing lists, also known as discussion lists or discussion groups. Datasets have been collected from the electronic archives of two Internet mailing lists, the LINGUIST and the History of the English Language. Theoretical models widely used in informetric research have been applied to fit the distribution of posted messages over the population of authors. The Generalized Inverse Poisson-Gaussian and Poisson-lognormal distributions show excellent results for both datasets, while the Lotka and Yule-Simon distributions demonstrate poor-to-mediocre fits. In the mailing list where moderation and quality control are enforced to a higher degree, i.e., the LINGUIST, the Lotka and Yule-Simon distributions perform better. The findings can be plausibly explained by the lesser applicability of the success-breeds-success model to information production in electronic communication media, such as Internet mailing lists, where selectivity of publications is marginal or nonexistent. The hypothesis is preliminary and needs to be validated against a larger variety of datasets. Characteristics of quality control, competitiveness, and the reward structure in Internet mailing lists as compared to professional scholarly journals are discussed.
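Fitting a Lotka-type distribution to such productivity data can be sketched as follows. The message counts are invented, and the simple log-log least-squares fit is only one of several estimation methods, not the author's procedure:

```python
import math
from collections import Counter

# Invented message counts per author, for illustration only (not the
# LINGUIST or HEL data): 50 authors posted once, 12 posted twice, etc.
messages = [1] * 50 + [2] * 12 + [3] * 6 + [4] * 3 + [5] * 2 + [10] * 1

# f(n) = number of authors who posted exactly n messages
freq = Counter(messages)

# Lotka's law posits f(n) ~ C * n**(-a); estimate a by least squares
# on the log-log form log f(n) = log C - a * log n.
xs = [math.log(n) for n in freq]
ys = [math.log(f) for f in freq.values()]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"estimated Lotka exponent a = {a:.2f}")
```

With this invented sample the fitted exponent comes out near 1.75; Lotka's classical authorship data gave an exponent near 2, and the abstract's point is that low-selectivity media such as mailing-list archives can deviate from such models.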
  15. Amitay, E.; Carmel, D.; Herscovici, M.; Lempel, R.; Soffer, A.: Trend detection through temporal link analysis (2004)
    Abstract
    Although time has been recognized as an important dimension in the co-citation literature, to date it has not been incorporated into the analogous process of link analysis on the Web. In this paper, we discuss several aspects and uses of the time dimension in the context of Web information retrieval. We describe the ideal case where search engines track and store temporal data for each of the pages in their repository, assigning timestamps to the hyperlinks embedded within the pages. We introduce several applications which benefit from the availability of such timestamps. To demonstrate our claims, we use a somewhat simplistic approach, which dates links by approximating the age of the page's content. We show that by using this crude measure alone it is possible to detect and expose significant events and trends. We predict that by using more robust methods for tracking modifications in the content of pages, search engines will be able to provide results that are more timely and better reflect current real-life trends than those they provide today.
    Footnote
    Contribution in a special issue on webometrics
  16. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002)
    Abstract
    This paper addresses a bibliometric methodology to discover the structure of the scientific 'landscape' in order to gain detailed insight into the development of R&D fields, their interaction, and the transfer of knowledge between them. This methodology is appropriate to visualize the position of R&D activities in relation to interdisciplinary R&D developments, and particularly in relation to socio-economic problems. Furthermore, it allows the identification of the major actors. It even provides the possibility of foresight. We describe a first approach to applying bibliometric mapping as an instrument to investigate characteristics of knowledge transfer. In this paper we discuss the creation of 'maps of science' with the help of advanced bibliometric methods. This 'bibliometric cartography' can be seen as a specific type of data mining, applied to large amounts of scientific publications. As an example we describe the mapping of the field of neuroscience, one of the largest and fastest growing fields in the life sciences. The number of publications covered by this database is about 80,000 per year, and the period covered is 1995-1998. Current research is going on to update the mapping for the years 1999-2002. This paper addresses the main lines of the methodology and its application in the study of knowledge transfer.
  17. Björneborn, L.; Ingwersen, P.: Toward a basic framework for Webometrics (2004)
    Abstract
    In this article, we define webometrics within the framework of informetric studies and bibliometrics, as belonging to library and information science, and as associated with cybermetrics as a generic subfield. We develop a consistent and detailed link typology and terminology and make explicit the distinction among different Web node levels when using the proposed conceptual framework. As a consequence, we propose a novel diagram notation to fully appreciate and investigate link structures between Web nodes in webometric analyses. We warn against taking the analogy between citation analyses and link analyses too far.
    Footnote
    Contribution in a special issue on webometrics
  18. Cothey, V.: Web-crawling reliability (2004)
    Abstract
    In this article, I investigate the reliability, in the social science sense, of collecting informetric data about the World Wide Web by Web crawling. The investigation includes a critical examination of the practice of Web crawling and contrasts the results of content crawling with the results of link crawling. It is shown that Web crawling by search engines is intentionally biased and selective. I also report the results of a large-scale experimental simulation of Web crawling that illustrates the effects of different crawling policies on data collection. It is concluded that the reliability of Web crawling as a data collection technique is improved by fuller reporting of relevant crawling policies.
    Footnote
    Contribution in a special issue on webometrics
  19. Faba-Pérez, C.; Zapico-Alonso, F.; Guerrero-Bote, V.P.; Moya-Anegón, F. de: Comparative analysis of webometric measurements in thematic environments (2005)
    Abstract
    There have been many attempts to evaluate Web spaces on the basis of the information that they provide, their form or functionality, or even the importance given to each of them by the Web itself. The indicators that have been developed for this purpose fall into two groups: those based on the study of a Web space's formal characteristics, and those related to its link structure. In this study we examine most of the webometric indicators that have been proposed in the literature, together with others of our own design, by applying them to a set of thematically related Web spaces and analyzing the relationships between the different indicators.
  20. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000)
    Abstract
    This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of the most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. In most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden on the Web, waiting to be extracted from the millions of Web pages.