Search (55 results, page 1 of 3)

  • theme_ss:"Informetrie"
  • theme_ss:"Internet"
  1. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    0.02308332 = product of:
      0.115416594 = sum of:
        0.050501734 = weight(_text_:web in 4279) [ClassicSimilarity], result of:
          0.050501734 = score(doc=4279,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5408555 = fieldWeight in 4279, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
        0.06491486 = weight(_text_:log in 4279) [ClassicSimilarity], result of:
          0.06491486 = score(doc=4279,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.2 = coord(2/10)
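
    The relevance score above follows Lucene's ClassicSimilarity explain format: each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and the sum over matching terms is scaled by the coordination factor. A minimal Python sketch that recomputes the 0.0230833 score for this record from the values shown in the tree; the helper names are illustrative, not part of any search engine API:

      import math

      # Values taken from the explain tree above (doc 4279, terms "web" and "log").
      QUERY_NORM = 0.028611459
      FIELD_NORM = 0.0390625          # length norm stored for the matched field
      MAX_DOCS = 44218

      def idf(doc_freq, max_docs=MAX_DOCS):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq):
          tf = math.sqrt(freq)                        # tf(freq) = sqrt(freq)
          query_weight = idf(doc_freq) * QUERY_NORM   # queryWeight = idf * queryNorm
          field_weight = tf * idf(doc_freq) * FIELD_NORM
          return query_weight * field_weight

      web_term = term_score(freq=18.0, doc_freq=4597)   # ~0.0505
      log_term = term_score(freq=2.0, doc_freq=197)     # ~0.0649
      coord = 2 / 10                                    # 2 of 10 query clauses matched
      print(round(coord * (web_term + log_term), 5))    # ~0.02308, as reported above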
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  2. Marchionini, G.: Co-evolution of user and organizational interfaces : a longitudinal case study of WWW dissemination of national statistics (2002) 0.02
    0.022889657 = product of:
      0.11444829 = sum of:
        0.023567477 = weight(_text_:web in 1252) [ClassicSimilarity], result of:
          0.023567477 = score(doc=1252,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 1252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1252)
        0.09088081 = weight(_text_:log in 1252) [ClassicSimilarity], result of:
          0.09088081 = score(doc=1252,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.49564147 = fieldWeight in 1252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1252)
      0.2 = coord(2/10)
    
    Abstract
    The data systems, policies and procedures, corporate culture, and public face of an agency or institution make up its organizational interface. This case study describes how user interfaces for the Bureau of Labor Statistics web site evolved over a 5-year period along with the larger organizational interface and how this co-evolution has influenced the institution itself. Interviews with BLS staff and transaction log analysis are the foci of this analysis, which also included user information-seeking studies and user interface prototyping and testing. The results are organized into a model of organizational interface change and related to the information life cycle.
  3. Huang, X.; Peng, F.; An, A.; Schuurmans, D.: Dynamic Web log session identification with statistical language models (2004) 0.02
    0.019619705 = product of:
      0.098098524 = sum of:
        0.020200694 = weight(_text_:web in 3096) [ClassicSimilarity], result of:
          0.020200694 = score(doc=3096,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 3096, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3096)
        0.07789783 = weight(_text_:log in 3096) [ClassicSimilarity], result of:
          0.07789783 = score(doc=3096,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 3096, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=3096)
      0.2 = coord(2/10)
    
  4. Hassler, M.: Web analytics : Metriken auswerten, Besucherverhalten verstehen, Website optimieren ; [Metriken analysieren und interpretieren ; Besucherverhalten verstehen und auswerten ; Website-Ziele definieren, Webauftritt optimieren und den Erfolg steigern] (2009) 0.02
    0.017086184 = product of:
      0.08543092 = sum of:
        0.041340213 = weight(_text_:kommunikation in 3586) [ClassicSimilarity], result of:
          0.041340213 = score(doc=3586,freq=4.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.2810997 = fieldWeight in 3586, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3586)
        0.04409071 = weight(_text_:web in 3586) [ClassicSimilarity], result of:
          0.04409071 = score(doc=3586,freq=28.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.47219574 = fieldWeight in 3586, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3586)
      0.2 = coord(2/10)
    
    Abstract
    Web analytics refers to the collection, analysis, and evaluation of website usage data, with the aim of using this information to better understand visitor behavior and to optimize the website. Depending on the goal of your own website - for example, conveying a brand value or increasing contact requests, orders, or newsletter subscriptions - web analytics lets you find out where the weak points of your website lie and how you can better reach your goals through appropriate optimization. Web analytics is of interest not only to website operators and IT departments; it is increasingly becoming useful for marketing and management as well. With this book you will learn how to analyze the usage of your website. You can, for example, examine which traffic source generates the most revenue or which areas of the website are used particularly often, and much more. In this way you will get to know your visitors, their behavior, and their motivation better, tailor your website to them, and thereby increase your success. To draw real added value from web analytics you need sound knowledge. In his book, Marco Hassler provides a comprehensive introduction to web analytics. He shows in detail how visitor behavior is analyzed and which metrics can sensibly be applied when. In the last part of the book the author shows how to use the evaluation results to optimize the website toward its goals via conversion measurements. The aim of this book is to convey concrete web analytics knowledge and to give valuable, practice-oriented tips. To that end it builds bridges to adjacent topic areas such as usability, user-centered design, online branding, online marketing, and search engine optimization. Marco Hassler gives clear guidance and instructions on how to reach your goals.
    BK
    85.20 / Betriebliche Information und Kommunikation
    Classification
    85.20 / Betriebliche Information und Kommunikation
    Footnote
    Review in: Mitt. VÖB 63(2010) H.1/2, S.147-148 (M. Buzinkay): "Website design and website analysis go hand in hand. Unfortunately, the latter is rarely, if ever, taken into account - wrongly so, because analyzing one's own measures is decisive for correction and optimization. Even once the insight has taken hold that analyzing websites would be important, it is often a long way to actually doing it. Why? Analysis means continuous effort, and many are not willing, or do not have the time, to invest it. Once you have reached the conclusion that you should nevertheless optimize your web activities, or at least question them from time to time, it is worth picking up Marco Hassler's 'Web Analytics'. It is definitely not a book for a single evening of reading, but a volume that has to be worked with. Here, too, that means: web analysis involves work and intensive engagement (a circumstance many do not want to understand or accept). The book is very dense and nevertheless remains clearly organized. The arrangement of the topics - from the basics of data collection, through the definition of metrics, to the optimization of pages, and finally to working with web analytics tools - provides a common thread that builds nicely from one topic to the next. This also makes it easy to develop one's own project step by step while reading the book. Numerous screenshots and illustrations additionally make it easier to understand the relationships and explanations in the text. The book also convinces through its depth (apart from the chapter on assembling personas) and its pleasant writing style. From me comes an urgent recommendation to everyone who deals with online marketing in general and with measuring the success of websites and web activities in particular."
    RSWK
    Electronic Commerce / Web Site / Verbesserung / Kennzahl
    Subject
    Electronic Commerce / Web Site / Verbesserung / Kennzahl
  5. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.01
    0.012253862 = product of:
      0.06126931 = sum of:
        0.053446017 = weight(_text_:web in 3090) [ClassicSimilarity], result of:
          0.053446017 = score(doc=3090,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.57238775 = fieldWeight in 3090, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 3090) [ClassicSimilarity], result of:
              0.023469873 = score(doc=3090,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture states that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small world topology of the Web, with encouraging implications for the design of better crawling algorithms.
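    The link-content conjecture can be tested by comparing a page's term vector with those of the pages linking to it. A minimal sketch under that reading, using plain bag-of-words cosine similarity; the function names and toy term counts are invented for illustration:

      import math
      from collections import Counter

      def cosine(a, b):
          # Cosine similarity between two bag-of-words term vectors.
          dot = sum(a[t] * b[t] for t in set(a) & set(b))
          norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      def link_content_similarity(page, inlink_pages):
          # Mean lexical similarity between a page and the pages linking to it; the
          # conjecture predicts this exceeds the similarity to randomly chosen pages.
          sims = [cosine(page, other) for other in inlink_pages]
          return sum(sims) / len(sims) if sims else 0.0

      # Toy term counts (hypothetical data).
      target = Counter({"web": 3, "link": 2, "mining": 1})
      inlinks = [Counter({"web": 2, "crawl": 1, "link": 1}), Counter({"web": 1, "graph": 2})]
      print(link_content_similarity(target, inlinks))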
    Date
    9. 1.2005 19:20:29
  6. Neth, M.: Citation analysis and the Web (1998) 0.01
    0.011982392 = product of:
      0.05991196 = sum of:
        0.023567477 = weight(_text_:web in 108) [ClassicSimilarity], result of:
          0.023567477 = score(doc=108,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=108)
        0.036344483 = product of:
          0.054516725 = sum of:
            0.027381519 = weight(_text_:29 in 108) [ClassicSimilarity], result of:
              0.027381519 = score(doc=108,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=108)
            0.027135205 = weight(_text_:22 in 108) [ClassicSimilarity], result of:
              0.027135205 = score(doc=108,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=108)
          0.6666667 = coord(2/3)
      0.2 = coord(2/10)
    
    Date
    10. 1.1999 16:22:37
    Source
    Art documentation. 17(1998) no.1, S.29-33
  7. Vaughan, L.; Thelwall, M.: Scholarly use of the Web : what are the key inducers of links to journal Web sites? (2003) 0.01
    0.011404229 = product of:
      0.057021145 = sum of:
        0.050501734 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
          0.050501734 = score(doc=1236,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5408555 = fieldWeight in 1236, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1236)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 1236) [ClassicSimilarity], result of:
              0.019558229 = score(doc=1236,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1236)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Web links have been studied by information scientists for at least six years but it is only in the past two that clear evidence has emerged to show that counts of links to scholarly Web spaces (universities and departments) can correlate significantly with research measures, giving some credence to their use for the investigation of scholarly communication. This paper reports on a study to investigate the factors that influence the creation of links to journal Web sites. An empirical approach is used: collecting data and testing for significant patterns. The specific questions addressed are whether site age and site content are inducers of links to a journal's Web site as measured by the ratio of link counts to Journal Impact Factors, two variables previously discovered to be related. A new methodology for data collection is also introduced that uses the Internet Archive to obtain an earliest known creation date for Web sites. The results show that both site age and site content are significant factors for the disciplines studied: library and information science, and law. Comparisons between the two fields also show disciplinary differences in Web site characteristics. Scholars and publishers should be particularly aware that richer content on a journal's Web site tends to generate links and thus traffic to the site.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.1, S.29-38
  8. Kaminer, N.; Braunstein, Y.M.: Bibliometric analysis of the impact of Internet use on scholarly productivity (1998) 0.01
    0.0103863785 = product of:
      0.10386378 = sum of:
        0.10386378 = weight(_text_:log in 1151) [ClassicSimilarity], result of:
          0.10386378 = score(doc=1151,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.5664474 = fieldWeight in 1151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0625 = fieldNorm(doc=1151)
      0.1 = coord(1/10)
    
    Abstract
    Variables measuring the nature and level of Internet usage by natural scientists improve the explanatory power of a traditional bibliographic model of scholarly productivity. The data used to construct these variables come from log files generated by the internal accounting modules of the UNIX operating system. The effects of Internet usage on productivity are quantifiable, and it is possible to calculate tradeoffs between Internet usage and the more traditional inputs.
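    The abstract describes adding Internet-usage variables to a regression model of scholarly productivity and comparing explanatory power. A minimal sketch of such a comparison with ordinary least squares; the variables and toy data below are hypothetical, not the authors' data:

      import numpy as np

      def r_squared(X, y):
          # Fit y = X*b by ordinary least squares (with intercept) and return R^2.
          X = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          return 1.0 - resid.var() / y.var()

      rng = np.random.default_rng(0)
      n = 60
      experience = rng.normal(10, 3, n)         # traditional predictor (e.g. years active)
      internet_hours = rng.normal(5, 2, n)      # usage measure, e.g. from accounting logs
      papers = 0.4 * experience + 0.6 * internet_hours + rng.normal(0, 1, n)

      base = r_squared(experience.reshape(-1, 1), papers)
      extended = r_squared(np.column_stack([experience, internet_hours]), papers)
      print(f"R^2 traditional model: {base:.2f}, with Internet-usage variable: {extended:.2f}")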
  9. Pernik, V.; Schlögl, C.: Möglichkeiten und Grenzen von Web Structure Mining am Beispiel von informationswissenschaftlichen Hochschulinstituten im deutschsprachigen Raum (2006) 0.01
    0.00970437 = product of:
      0.04852185 = sum of:
        0.038090795 = weight(_text_:web in 78) [ClassicSimilarity], result of:
          0.038090795 = score(doc=78,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4079388 = fieldWeight in 78, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=78)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 78) [ClassicSimilarity], result of:
              0.031293165 = score(doc=78,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 78, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=78)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper presents a webometric study of library and information science university institutes in the German-speaking countries. The aim of the study was, on the one hand, to analyze the link relationships between the institutes and, on the other, to identify similarities (for example, arising from subject, location, or institutional circumstances). The paper describes not only the procedure for such analyses and the resulting findings; in particular, it addresses the problem areas and limitations associated with the analysis of link structures on the Web.
    Date
    4.12.2006 12:14:29
  10. Davis, P.M.; Cohen, S.A.: ¬The effect of the Web on undergraduate citation behavior 1996-1999 (2001) 0.01
    0.009644936 = product of:
      0.04822468 = sum of:
        0.040401388 = weight(_text_:web in 5768) [ClassicSimilarity], result of:
          0.040401388 = score(doc=5768,freq=8.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43268442 = fieldWeight in 5768, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5768)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 5768) [ClassicSimilarity], result of:
              0.023469873 = score(doc=5768,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 5768, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5768)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    A citation analysis of undergraduate term papers in microeconomics revealed a significant decrease in the frequency of scholarly resources cited between 1996 and 1999. Book citations decreased from 30% to 19%, newspaper citations increased from 7% to 19%, and Web citations increased from 9% to 21%. Web citations checked in 2000 revealed that only 18% of URLs cited in 1996 led to the correct Internet document. For 1999 bibliographies, only 55% of URLs led to the correct document. The authors recommend (1) setting stricter guidelines for acceptable citations in course assignments; (2) creating and maintaining scholarly portals for authoritative Web sites with a commitment to long-term access; and (3) continuing to instruct students how to critically evaluate resources
    Date
    29. 9.2001 14:01:09
  11. Stuart, D.: Web metrics for library and information professionals (2014) 0.01
    0.007545275 = product of:
      0.07545275 = sum of:
        0.07545275 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.07545275 = score(doc=2274,freq=82.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
      0.1 = coord(1/10)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
  12. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.01
    0.007264202 = product of:
      0.03632101 = sum of:
        0.028568096 = weight(_text_:web in 2742) [ClassicSimilarity], result of:
          0.028568096 = score(doc=2742,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3059541 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.0077529154 = product of:
          0.023258746 = sum of:
            0.023258746 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.023258746 = score(doc=2742,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
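    A minimal sketch of the kind of feedforward network the abstract describes, predicting a binary clickthrough outcome from searching characteristics (query reformulations, session duration, query length, rank of prior clicks). The features, synthetic data, and network size are assumptions for illustration, not the study's actual setup:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical features: [reformulations, session duration, query length, prior click rank]
      X = rng.normal(size=(200, 4))
      # Synthetic outcome loosely following the reported pattern (more reformulation,
      # longer sessions/queries, better prior ranks -> more future clickthrough).
      y = (X @ np.array([0.8, 0.6, 0.5, -0.7]) + rng.normal(0, 0.5, 200) > 0).astype(float)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # One hidden layer with 8 units, trained by plain gradient descent on log-loss.
      W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
      W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
      lr = 0.1
      for _ in range(2000):
          H = np.tanh(X @ W1 + b1)                  # hidden activations
          p = sigmoid(H @ W2 + b2).ravel()          # predicted clickthrough probability
          grad_out = (p - y)[:, None] / len(y)      # d(log-loss)/d(output logit)
          dW2 = H.T @ grad_out; db2 = grad_out.sum(0)
          dH = grad_out @ W2.T * (1 - H ** 2)       # backpropagate through tanh
          dW1 = X.T @ dH; db1 = dH.sum(0)
          W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

      p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
      print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")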
    Date
    22. 3.2009 17:49:11
  13. Ingwersen, P.: ¬The calculation of Web impact factors (1998) 0.01
    0.007070243 = product of:
      0.070702426 = sum of:
        0.070702426 = weight(_text_:web in 1071) [ClassicSimilarity], result of:
          0.070702426 = score(doc=1071,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.75719774 = fieldWeight in 1071, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1071)
      0.1 = coord(1/10)
    
    Abstract
    Reports investigations into the feasibility and reliability of calculating impact factors for web sites, called Web Impact Factors (Web-IF). Analyzes a selection of 7 small and medium-scale national and 4 large web domains as well as 6 institutional web sites over a series of snapshots taken of the web during a month. Describes the data isolation and calculation methods and discusses the tests. The results thus far demonstrate that Web-IFs are calculable with high confidence for national and sector domains, whilst institutional Web-IFs should be approached with caution.
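    In the webometrics literature a Web Impact Factor is typically computed as the number of pages linking to a site or domain divided by the number of pages the site itself contains. A minimal sketch under that assumption; the counts are invented and would in practice come from search engine queries:

      def web_impact_factor(inlink_pages, site_pages):
          # Web-IF ~ number of pages linking to the site / number of pages in the site
          # (the ratio form commonly used in the webometrics literature).
          if site_pages == 0:
              raise ValueError("site has no indexed pages")
          return inlink_pages / site_pages

      # Hypothetical counts, in practice obtained from search engine link and site queries.
      print(web_impact_factor(inlink_pages=12500, site_pages=4300))   # ~2.91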
  14. Yang, S.; Han, R.; Ding, J.; Song, Y.: ¬The distribution of Web citations (2012) 0.01
    0.0069977264 = product of:
      0.06997726 = sum of:
        0.06997726 = weight(_text_:web in 2735) [ClassicSimilarity], result of:
          0.06997726 = score(doc=2735,freq=24.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.7494315 = fieldWeight in 2735, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
      0.1 = coord(1/10)
    
    Abstract
    A substantial amount of research has focused on the persistence or availability of Web citations. The present study analyzes Web citation distributions. Web citations are defined as the mentions of the URLs of Web pages (Web resources) as references in academic papers. The present paper primarily focuses on the analysis of the URLs of Web citations and uses three sets of data, namely, Set 1 from the Humanities and Social Science Index in China (CSSCI, 1998-2009), Set 2 from the publications of two international computer science societies, Communications of the ACM and IEEE Computer (1995-1999), and Set 3 from the medical science database, MEDLINE, of the National Library of Medicine (1994-2006). Web citation distributions are investigated based on Web site types, Web page types, URL frequencies, URL depths, URL lengths, and year of article publication. Results show significant differences in the Web citation distributions among the three data sets. However, when the URLs of Web citations with the same hostnames are aggregated, the distributions in the three data sets are consistent with the power law (the Lotka function).
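    A Lotka-type power law relates the number of hosts receiving x citations to x as f(x) = C / x^a; the exponent is commonly estimated by a least-squares fit on the log-log scale. A small sketch under that assumption, with invented counts:

      import numpy as np

      # Hypothetical Lotka-style data: number of hostnames receiving x Web citations.
      citations_x = np.array([1, 2, 3, 4, 5, 10], dtype=float)
      hosts_fx = np.array([420, 110, 48, 27, 17, 4], dtype=float)

      # Fit f(x) = C / x^a by least squares on the log-log scale.
      slope, intercept = np.polyfit(np.log(citations_x), np.log(hosts_fx), 1)
      print(f"estimated exponent a = {-slope:.2f}, C = {np.exp(intercept):.0f}")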
  15. Koehler, W.: Web page change and persistence : a four-year longitudinal study (2002) 0.01
    0.006699813 = product of:
      0.06699813 = sum of:
        0.06699813 = weight(_text_:web in 203) [ClassicSimilarity], result of:
          0.06699813 = score(doc=203,freq=22.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.717526 = fieldWeight in 203, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=203)
      0.1 = coord(1/10)
    
    Abstract
    Changes in the topography of the Web can be expressed in at least four ways: (1) more sites on more servers in more places, (2) more pages and objects added to existing sites and pages, (3) changes in traffic, and (4) modifications to existing text, graphic, and other Web objects. This article does not address the first three factors (more sites, more pages, more traffic) in the growth of the Web. It focuses instead on changes to an existing set of Web documents. The article documents changes to an aging set of Web pages, first identified and "collected" in December 1996 and followed weekly thereafter. Results are reported through February 2001. The article addresses two related phenomena: (1) the life cycle of Web objects, and (2) changes to Web objects. These data reaffirm that the half-life of a Web page is approximately 2 years. There is variation among Web pages by top-level domain and by page type (navigation, content). Web page content appears to stabilize over time; aging pages change less often than they once did.
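    A two-year half-life corresponds to an exponential survival model, S(t) = 0.5^(t / t_half), so the half-life can be read off from the fraction of a sample still unchanged (or still resolving) after t years. A small sketch of that arithmetic; the survival fraction below is illustrative, not taken from the study:

      import math

      def half_life(years_elapsed, fraction_surviving):
          # Exponential survival: S(t) = 0.5 ** (t / t_half)  =>  t_half = t * ln(0.5) / ln(S(t))
          return years_elapsed * math.log(0.5) / math.log(fraction_surviving)

      # E.g. if roughly a quarter of a December 1996 sample were still unchanged four years later:
      print(round(half_life(4.0, 0.25), 1))   # 2.0 years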
  16. Park, H.W.; Barnett, G.A.; Nam, I.-Y.: Hyperlink-affiliation network structure of top Web sites : examining affiliates with hyperlink in Korea (2002) 0.01
    0.006665889 = product of:
      0.06665889 = sum of:
        0.06665889 = weight(_text_:web in 584) [ClassicSimilarity], result of:
          0.06665889 = score(doc=584,freq=16.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.71389294 = fieldWeight in 584, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=584)
      0.1 = coord(1/10)
    
    Abstract
    This article argues that individual Web sites form hyperlink-affiliations with others for the purpose of strengthening their individual trust, expertness, and safety. It describes the hyperlink-affiliation network structure of Korea's top 152 Web sites. The data were obtained from their Web sites for October 2000. The results indicate that financial Web sites, such as credit card and stock Web sites, occupy the most central position in the network. A cluster analysis reveals that the structure of the hyperlink-affiliation network is influenced by the financial Web sites with which others are affiliated. These findings are discussed from the perspective of Web site credibility.
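    Centrality in a hyperlink-affiliation network of this kind can be approximated very simply by counting how many other sites each site is affiliated with. A minimal sketch with plain dictionaries; the site names and links are invented:

      from collections import defaultdict

      # Hypothetical undirected hyperlink affiliations between Web sites.
      affiliations = [("creditcard.example", "stock.example"),
                      ("creditcard.example", "portal.example"),
                      ("stock.example", "portal.example"),
                      ("news.example", "portal.example")]

      degree = defaultdict(int)
      for a, b in affiliations:
          degree[a] += 1
          degree[b] += 1

      # Sites with the most affiliations occupy the most central positions in the network.
      for site, deg in sorted(degree.items(), key=lambda kv: -kv[1]):
          print(site, deg)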
  17. Cothey, V.: Web-crawling reliability (2004) 0.01
    0.0062353685 = product of:
      0.062353685 = sum of:
        0.062353685 = weight(_text_:web in 3089) [ClassicSimilarity], result of:
          0.062353685 = score(doc=3089,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6677857 = fieldWeight in 3089, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3089)
      0.1 = coord(1/10)
    
    Abstract
    In this article, I investigate the reliability, in the social science sense, of collecting informetric data about the World Wide Web by Web crawling. The investigation includes a critical examination of the practice of Web crawling and contrasts the results of content crawling with the results of link crawling. It is shown that Web crawling by search engines is intentionally biased and selective. I also report the results of a large-scale experimental simulation of Web crawling that illustrates the effects of different crawling policies on data collection. It is concluded that the reliability of Web crawling as a data collection technique is improved by fuller reporting of relevant crawling policies.
  18. Vaughan, L.; Shaw, D.: Bibliographic and Web citations : what is the difference? (2003) 0.01
    0.0060695536 = product of:
      0.060695533 = sum of:
        0.060695533 = weight(_text_:web in 5176) [ClassicSimilarity], result of:
          0.060695533 = score(doc=5176,freq=26.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.65002745 = fieldWeight in 5176, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5176)
      0.1 = coord(1/10)
    
    Abstract
    Vaughan and Shaw look at the relationship between traditional citation and Web citation (not hyperlinks but rather textual mentions of published papers). Using English language research journals in ISI's 2000 Journal Citation Report - Information and Library Science category - 1209 full length papers published in 1997 in 46 journals were identified. Each was searched in Social Science Citation Index and on the Web using Google phrase search by entering the title in quotation marks, followed where necessary for distinction with sub-titles, authors' names, and journal title words. After removing obvious false drops, the number of web sites was recorded for comparison with the SSCI counts. A second sample from 1992 was also collected for examination. There were a total of 16,371 web citations to the selected papers. The top and bottom ranked four journals were then examined and every third citation to every third paper was selected and classified as to source type, domain, and country of origin. Web counts are much higher than ISI citation counts. Of the 46 journals from 1997, 26 demonstrated a significant correlation between Web and traditional citation counts, and 11 of the 15 in the 1992 sample also showed significant correlation. Journal impact factor in 1998 and 1999 correlated significantly with average Web citations per journal in the 1997 data, but at a low level. Thirty percent of web citations come from other papers posted on the web, and 30 percent from listings of web-based bibliographic services, while twelve percent come from class reading lists. High web citation journals often have web-accessible tables of contents.
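    The per-journal comparison reported here reduces to correlating two count vectors, Web mentions and SSCI citations, over the same papers. A small sketch with invented counts using a plain Pearson correlation:

      import numpy as np

      # Hypothetical per-paper counts for one journal.
      ssci_citations = np.array([12, 3, 0, 7, 25, 1, 4, 9])
      web_citations = np.array([40, 10, 2, 18, 60, 5, 9, 30])

      r = np.corrcoef(ssci_citations, web_citations)[0, 1]
      print(f"Pearson r between Web and SSCI counts: {r:.2f}")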
  19. Maharana, B.; Nayak, K.; Sahu, N.K.: Scholarly use of web resources in LIS research : a citation analysis (2006) 0.01
    0.0060695536 = product of:
      0.060695533 = sum of:
        0.060695533 = weight(_text_:web in 53) [ClassicSimilarity], result of:
          0.060695533 = score(doc=53,freq=26.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.65002745 = fieldWeight in 53, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=53)
      0.1 = coord(1/10)
    
    Abstract
    Purpose - The essential purpose of this paper is to measure the amount of web resources used for scholarly contributions in the area of library and information science (LIS) in India. It further aims to make an analysis of the nature and type of web resources and studies the various standards for web citations. Design/methodology/approach - In this study, the result of analysis of 292 web citations spread over 95 scholarly papers published in the proceedings of the National Conference of the Society for Information Science, India (SIS-2005) has been reported. All the 292 web citations were scanned and data relating to types of web domains, file formats, styles of citations, etc., were collected through a structured check list. The data thus obtained were systematically analyzed, figurative representations were made and appropriate interpretations were drawn. Findings - The study revealed that 292 (34.88 per cent) out of 837 were web citations, proving a significant correlation between the use of Internet resources and research productivity of LIS professionals in India. The highest number of web citations (35.6 per cent) was from .edu/.ac type domains. Most of the web resources (46.9 per cent) cited in the study were hypertext markup language (HTML) files. Originality/value - The paper is the result of an original analysis of web citations undertaken in order to study the dependence of LIS professionals in India on web sources for their scholarly contributions. This carries research value for web content providers, authors and researchers in LIS.
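    A breakdown of cited URLs by domain type and file format, as reported here, can be reproduced for any list of citations with standard URL parsing. A minimal sketch; the example URLs are invented:

      from collections import Counter
      from urllib.parse import urlparse

      # Invented example URLs standing in for the citations extracted from the papers.
      cited_urls = [
          "http://www.example.edu/lis/paper.html",
          "https://archive.example.ac.uk/report.pdf",
          "http://journal.example.org/issue2/article.html",
      ]

      domain_types = Counter()
      file_formats = Counter()
      for url in cited_urls:
          parsed = urlparse(url)
          domain_types[parsed.netloc.rsplit(".", 1)[-1]] += 1                      # crude TLD bucket
          path = parsed.path
          file_formats[path.rsplit(".", 1)[-1] if "." in path else "(none)"] += 1  # file extension

      print(domain_types)   # e.g. Counter({'edu': 1, 'uk': 1, 'org': 1})
      print(file_formats)   # e.g. Counter({'html': 2, 'pdf': 1})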
  20. Hong, T.: ¬The influence of structural and message features on Web site credibility (2006) 0.01
    0.006060208 = product of:
      0.06060208 = sum of:
        0.06060208 = weight(_text_:web in 5787) [ClassicSimilarity], result of:
          0.06060208 = score(doc=5787,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.64902663 = fieldWeight in 5787, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5787)
      0.1 = coord(1/10)
    
    Abstract
    This article explores the associations that message features and Web structural features have with perceptions of Web site credibility. In a within-subjects experiment, 84 participants actively located health-related Web sites on the basis of two tasks that differed in task specificity and complexity. Web sites that were deemed most credible were content-analyzed for message features and structural features that have been found to be associated with perceptions of source credibility. Regression analyses indicated that message features predicted perceived Web site credibility for both searches when controlling for Internet experience and issue involvement. Advertisements and structural features had no significant effects on perceived Web site credibility. Institution-affiliated domain names (.gov, .org, .edu) predicted Web site credibility, but only in the general search, which was more difficult. Implications of results are discussed in terms of online credibility research and Web site design.

Languages

  • e 51
  • d 4

Types

  • a 53
  • m 2