Search (62 results, page 1 of 4)

  • × theme_ss:"Internet"
  • × theme_ss:"Informetrie"
  1. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.03
    0.03286336 = product of:
      0.11502176 = sum of:
        0.045902856 = weight(_text_:web in 3090) [ClassicSimilarity], result of:
          0.045902856 = score(doc=3090,freq=14.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.57238775 = fieldWeight in 3090, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.04711391 = weight(_text_:indexierung in 3090) [ClassicSimilarity], result of:
          0.04711391 = score(doc=3090,freq=2.0), product of:
            0.13215348 = queryWeight, product of:
              5.377919 = idf(docFreq=554, maxDocs=44218)
              0.024573348 = queryNorm
            0.35650903 = fieldWeight in 3090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.377919 = idf(docFreq=554, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.007099477 = weight(_text_:information in 3090) [ClassicSimilarity], result of:
          0.007099477 = score(doc=3090,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.16457605 = fieldWeight in 3090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.014905514 = weight(_text_:retrieval in 3090) [ClassicSimilarity], result of:
          0.014905514 = score(doc=3090,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 3090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
      0.2857143 = coord(4/14)
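The score breakdown above follows Lucene's ClassicSimilarity (TF-IDF). As a minimal sketch, assuming the ClassicSimilarity defaults idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(termFreq), and taking every numeric input from the explain tree itself, the "web" clause of this record can be reproduced as:

```python
import math

# Sketch of Lucene ClassicSimilarity (TF-IDF) term scoring.
#   idf         = 1 + ln(maxDocs / (docFreq + 1))
#   tf          = sqrt(termFreq)
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   score       = queryWeight * fieldWeight

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = tf * i * field_norm
    return query_weight * field_weight

# "web" clause of doc 3090: freq=14.0, docFreq=4597, maxDocs=44218
print(term_score(14.0, 4597, 44218, 0.024573348, 0.046875))  # ~0.0459
```

The other clauses reproduce the same way (e.g. the "indexierung" clause with freq=2.0 and docFreq=554 yields ~0.0471); the document score is the sum of the matching clauses scaled by coord(4/14).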
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.14, S.1261-1269
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  2. Thelwall, M.; Wilkinson, D.: Finding similar academic Web sites with links, bibliometric couplings and colinks (2004) 0.01
    0.014693491 = product of:
      0.06856962 = sum of:
        0.038794994 = weight(_text_:web in 2571) [ClassicSimilarity], result of:
          0.038794994 = score(doc=2571,freq=10.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.48375595 = fieldWeight in 2571, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2571)
        0.008695048 = weight(_text_:information in 2571) [ClassicSimilarity], result of:
          0.008695048 = score(doc=2571,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.20156369 = fieldWeight in 2571, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2571)
        0.021079581 = weight(_text_:retrieval in 2571) [ClassicSimilarity], result of:
          0.021079581 = score(doc=2571,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.2835858 = fieldWeight in 2571, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2571)
      0.21428572 = coord(3/14)
    
    Abstract
    A common task in both Webmetrics and Web information retrieval is to identify a set of Web pages or sites that are similar in content. In this paper we assess the extent to which links, colinks and couplings can be used to identify similar Web sites. As an experiment, a random sample of 500 pairs of domains from the UK academic Web was taken, and human assessments of site similarity, based upon content type, were compared against ratings for the three concepts. The results show that using a combination of all three gives the highest probability of identifying similar sites, but surprisingly this was only a marginal improvement over using links alone. Another unexpected result was that high values for either colink counts or couplings were associated with only a small increased likelihood of similarity. The principal advantage of using couplings and colinks was found to be greater coverage in terms of a much larger number of pairs of sites being connected by these measures, instead of increased probability of similarity. In information retrieval terminology, this is improved recall rather than improved precision.
    Source
    Information processing and management. 40(2004) no.3, S.515-526
  3. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.01
    0.014605832 = product of:
      0.05112041 = sum of:
        0.024536107 = weight(_text_:web in 2742) [ClassicSimilarity], result of:
          0.024536107 = score(doc=2742,freq=4.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.3059541 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.0050200885 = weight(_text_:information in 2742) [ClassicSimilarity], result of:
          0.0050200885 = score(doc=2742,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.116372846 = fieldWeight in 2742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.014905514 = weight(_text_:retrieval in 2742) [ClassicSimilarity], result of:
          0.014905514 = score(doc=2742,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 2742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.006658699 = product of:
          0.019976096 = sum of:
            0.019976096 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.019976096 = score(doc=2742,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings for improving the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.557-570
  4. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.01
    0.012692357 = product of:
      0.059231002 = sum of:
        0.04089351 = weight(_text_:web in 3091) [ClassicSimilarity], result of:
          0.04089351 = score(doc=3091,freq=16.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.5099235 = fieldWeight in 3091, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
        0.005916231 = weight(_text_:information in 3091) [ClassicSimilarity], result of:
          0.005916231 = score(doc=3091,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 3091, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
        0.012421262 = weight(_text_:retrieval in 3091) [ClassicSimilarity], result of:
          0.012421262 = score(doc=3091,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.16710453 = fieldWeight in 3091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
      0.21428572 = coord(3/14)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.14, S.1239-1249
  5. Stuart, D.: Web metrics for library and information professionals (2014) 0.01
    0.010580572 = product of:
      0.074064 = sum of:
        0.06480364 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.06480364 = score(doc=2274,freq=82.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.009260367 = weight(_text_:information in 2274) [ClassicSimilarity], result of:
          0.009260367 = score(doc=2274,freq=20.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.21466857 = fieldWeight in 2274, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
      0.14285715 = coord(2/14)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    BK
    06.00 Information und Dokumentation: Allgemeines
    Classification
    06.00 Information und Dokumentation: Allgemeines
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
  6. Hassler, M.: Web analytics : Metriken auswerten, Besucherverhalten verstehen, Website optimieren ; [Metriken analysieren und interpretieren ; Besucherverhalten verstehen und auswerten ; Website-Ziele definieren, Webauftritt optimieren und den Erfolg steigern] (2009) 0.01
    0.010554866 = product of:
      0.04925604 = sum of:
        0.03786792 = weight(_text_:web in 3586) [ClassicSimilarity], result of:
          0.03786792 = score(doc=3586,freq=28.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.47219574 = fieldWeight in 3586, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3586)
        0.0041413615 = weight(_text_:information in 3586) [ClassicSimilarity], result of:
          0.0041413615 = score(doc=3586,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.0960027 = fieldWeight in 3586, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3586)
        0.007246764 = product of:
          0.021740291 = sum of:
            0.021740291 = weight(_text_:2010 in 3586) [ClassicSimilarity], result of:
              0.021740291 = score(doc=3586,freq=2.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.18496393 = fieldWeight in 3586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3586)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Web analytics refers to the collection, analysis, and evaluation of website usage data, with the goal of using this information to better understand visitor behaviour and to optimize the website. Depending on the goal of your own website - e.g. conveying a brand value or increasing contact requests, orders, or newsletter subscriptions - web analytics lets you find out where the weaknesses of your website lie and how you can better achieve your goals through appropriate optimizations. Web analytics is not only of interest to website operators and IT departments, but is increasingly becoming usable for marketing and management as well. With this book you will learn how to analyse the usage of your website. You can, for example, investigate which traffic source generates the most revenue or which areas of the website are used particularly often, and much more. In this way you will get to know your visitors, their behaviour, and their motivation better, tailor your website accordingly, and thus increase your success. To derive real added value from web analytics, you need sound knowledge. In his book, Marco Hassler gives you a comprehensive insight into web analytics. He shows you in detail how visitor behaviour is analysed and which metrics you can usefully apply and when. In the last part of the book, the author shows you how to use your evaluation results to optimize the website towards its goals via conversion measurements. The aim of this book is to convey concrete web analytics knowledge and to provide valuable practice-oriented tips. To this end, the book builds bridges to adjacent topic areas such as usability, user-centred design, online branding, online marketing, and search engine optimization. Marco Hassler gives you clear advice and instructions on how to achieve your goals.
    BK
    85.20 / Betriebliche Information und Kommunikation
    Classification
    85.20 / Betriebliche Information und Kommunikation
    Footnote
    Review in: Mitt. VÖB 63(2010) H.1/2, S.147-148 (M. Buzinkay): "Website design and website analysis go hand in hand. Unfortunately, the latter is rarely, if ever, taken into account. Wrongly so, because analysing one's own measures is crucial for correction and optimization. Even once the insight takes hold that analysing websites is important, it is often a long way to implementation. Why? Analysis means continuous effort, and many are not willing, or do not have the time, to make it. If you have come to the conviction that you should nevertheless optimize your web activities, or at least occasionally question them, then it is worth picking up Marco Hassler's 'Web Analytics'. It is definitely not a book for a single evening's reading, but a volume that has to be worked with. Here, too: web analysis means work and intensive engagement (a circumstance that many do not want to understand or accept). The book is very dense and nevertheless remains clear. The organization of the topics - from the basics of data collection, through the definition of metrics, to the optimization of pages and finally to working with web analytics tools - provides a common thread that builds nicely from one topic to the next. This also makes it easy to develop one's own project step by step alongside reading the book. Numerous screenshots and illustrations further ease the understanding of the relationships and explanations in the text. The book also convinces through its depth (except for the chapter on compiling personas) and its pleasant writing style. From me comes an urgent recommendation to everyone concerned with online marketing in general and with measuring the success of websites and web activities in particular."
    RSWK
    Electronic Commerce / Web Site / Verbesserung / Kennzahl
    Subject
    Electronic Commerce / Web Site / Verbesserung / Kennzahl
  7. Yang, S.; Han, R.; Ding, J.; Song, Y.: ¬The distribution of Web citations (2012) 0.01
    0.009303005 = product of:
      0.06512103 = sum of:
        0.060100947 = weight(_text_:web in 2735) [ClassicSimilarity], result of:
          0.060100947 = score(doc=2735,freq=24.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.7494315 = fieldWeight in 2735, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
        0.0050200885 = weight(_text_:information in 2735) [ClassicSimilarity], result of:
          0.0050200885 = score(doc=2735,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.116372846 = fieldWeight in 2735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
      0.14285715 = coord(2/14)
    
    Abstract
    A substantial amount of research has focused on the persistence or availability of Web citations. The present study analyzes Web citation distributions. Web citations are defined as the mentions of the URLs of Web pages (Web resources) as references in academic papers. The present paper primarily focuses on the analysis of the URLs of Web citations and uses three sets of data, namely, Set 1 from the Humanities and Social Science Index in China (CSSCI, 1998-2009), Set 2 from the publications of two international computer science societies, Communications of the ACM and IEEE Computer (1995-1999), and Set 3 from the medical science database, MEDLINE, of the National Library of Medicine (1994-2006). Web citation distributions are investigated based on Web site types, Web page types, URL frequencies, URL depths, URL lengths, and year of article publication. Results show significant differences in the Web citation distributions among the three data sets. However, when the URLs of Web citations with the same hostnames are aggregated, the distributions in the three data sets are consistent with the power law (the Lotka function).
    Source
    Information processing and management. 48(2012) no.4, S.779-790
  8. Park, H.W.; Barnett, G.A.; Nam, I.-Y.: Hyperlink - affiliation network structure of top Web sites : examining affiliates with hyperlink in Korea (2002) 0.01
    0.009015384 = product of:
      0.063107684 = sum of:
        0.057250917 = weight(_text_:web in 584) [ClassicSimilarity], result of:
          0.057250917 = score(doc=584,freq=16.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.71389294 = fieldWeight in 584, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=584)
        0.00585677 = weight(_text_:information in 584) [ClassicSimilarity], result of:
          0.00585677 = score(doc=584,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13576832 = fieldWeight in 584, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=584)
      0.14285715 = coord(2/14)
    
    Abstract
    This article argues that individual Web sites form hyperlink-affiliations with others for the purpose of strengthening their individual trust, expertness, and safety. It describes the hyperlink-affiliation network structure of Korea's top 152 Web sites. The data were obtained from their Web sites for October 2000. The results indicate that financial Web sites, such as credit card and stock Web sites, occupy the most central position in the network. A cluster analysis reveals that the structure of the hyperlink-affiliation network is influenced by the financial Web sites with which others are affiliated. These findings are discussed from the perspective of Web site credibility.
    Source
    Journal of the American Society for Information Science and technology. 53(2002) no.7, S.592-601
  9. Koehler, W.: Web page change and persistence : a four-year longitudinal study (2002) 0.01
    0.008937481 = product of:
      0.06256236 = sum of:
        0.057542272 = weight(_text_:web in 203) [ClassicSimilarity], result of:
          0.057542272 = score(doc=203,freq=22.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.717526 = fieldWeight in 203, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=203)
        0.0050200885 = weight(_text_:information in 203) [ClassicSimilarity], result of:
          0.0050200885 = score(doc=203,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.116372846 = fieldWeight in 203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=203)
      0.14285715 = coord(2/14)
    
    Abstract
    Changes in the topography of the Web can be expressed in at least four ways: (1) more sites on more servers in more places, (2) more pages and objects added to existing sites and pages, (3) changes in traffic, and (4) modifications to existing text, graphic, and other Web objects. This article does not address the first three factors (more sites, more pages, more traffic) in the growth of the Web. It focuses instead on changes to an existing set of Web documents. The article documents changes to an aging set of Web pages, first identified and "collected" in December 1996 and followed weekly thereafter. Results are reported through February 2001. The article addresses two related phenomena: (1) the life cycle of Web objects, and (2) changes to Web objects. These data reaffirm that the half-life of a Web page is approximately 2 years. There is variation among Web pages by top-level domain and by page type (navigation, content). Web page content appears to stabilize over time; aging pages change less often than they once did.
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.2, S.162-171
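The two-year half-life reported in entry 9 implies, under an exponential-decay assumption, a fixed yearly survival rate for a cohort of pages. A small sketch of that arithmetic (the decay model is an assumption added here; the article reports only the half-life itself):

```python
import math

HALF_LIFE_YEARS = 2.0                  # reported half-life of a Web page
lam = math.log(2) / HALF_LIFE_YEARS    # decay constant per year

def surviving_fraction(years):
    # fraction of an initial cohort of pages still alive after `years`
    return math.exp(-lam * years)

# After one half-life half the cohort remains; over roughly 4.2 years
# (Dec 1996 to Feb 2001, the study's observation window) about a quarter would.
print(round(surviving_fraction(2.0), 2))   # 0.5
print(round(surviving_fraction(4.2), 2))   # 0.23
```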
  10. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000) 0.01
    0.008613925 = product of:
      0.060297474 = sum of:
        0.049072213 = weight(_text_:web in 4587) [ClassicSimilarity], result of:
          0.049072213 = score(doc=4587,freq=16.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.6119082 = fieldWeight in 4587, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4587)
        0.01122526 = weight(_text_:information in 4587) [ClassicSimilarity], result of:
          0.01122526 = score(doc=4587,freq=10.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.2602176 = fieldWeight in 4587, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4587)
      0.14285715 = coord(2/14)
    
    Abstract
    This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of the most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. In most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden in the Web, waiting to be extracted from the millions of Web pages.
    Source
    Journal of the American Society for Information Science. 51(2000) no.5, S.432-443
  11. Cothey, V.: Web-crawling reliability (2004) 0.01
    0.0084871575 = product of:
      0.059410103 = sum of:
        0.05355333 = weight(_text_:web in 3089) [ClassicSimilarity], result of:
          0.05355333 = score(doc=3089,freq=14.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.6677857 = fieldWeight in 3089, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3089)
        0.00585677 = weight(_text_:information in 3089) [ClassicSimilarity], result of:
          0.00585677 = score(doc=3089,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13576832 = fieldWeight in 3089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3089)
      0.14285715 = coord(2/14)
    
    Abstract
    In this article, I investigate the reliability, in the social science sense, of collecting informetric data about the World Wide Web by Web crawling. The investigation includes a critical examination of the practice of Web crawling and contrasts the results of content crawling with the results of link crawling. It is shown that Web crawling by search engines is intentionally biased and selective. I also report the results of a large-scale experimental simulation of Web crawling that illustrates the effects of different crawling policies on data collection. It is concluded that the reliability of Web crawling as a data collection technique is improved by fuller reporting of relevant crawling policies.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.14, S.1228-1238
  12. fwt: Webseiten liegen im Schnitt nur 19 Klicks auseinander (2001) 0.01
    0.008312954 = product of:
      0.058190674 = sum of:
        0.030050473 = weight(_text_:web in 5962) [ClassicSimilarity], result of:
          0.030050473 = score(doc=5962,freq=6.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.37471575 = fieldWeight in 5962, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5962)
        0.0281402 = weight(_text_:frankfurt in 5962) [ClassicSimilarity], result of:
          0.0281402 = score(doc=5962,freq=2.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.27552408 = fieldWeight in 5962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.046875 = fieldNorm(doc=5962)
      0.14285715 = coord(2/14)
    
    Abstract
    "Dokumente im World Wide Web liegen durchschnittlich 19 Mausklicks voneinander entfernt - angesichts von schätzungsweise mehr als einer Milliarde Seiten erstaunlich nahe. Albert-Lazlo Barabai vom Institut für Physik der University von Notre Dame (US-Staat Indiana) stellt seine Studie in der britischen Fachzeitschrift Physics World (Juli 2001, S. 33) vor. Der Statistiker konstruierte im Rechner zunächst Modelle von großen Computernetzwerken. Grundlage für diese Abbilder war die Analyse eines kleinen Teils der Verbindungen im Web, die der Wissenschaftler automatisch von einem Programm hatte prüfen lassen. Um seine Ergebnisse zu erklären, vergleicht Barabai das World Wide Web mit den Verbindungen internationaler Fluglinien. Dort gebe es zahlreiche Flughäfen, die meist nur mit anderen Flugplätzen in ihrer näheren Umgebung in Verbindung stünden. Diese kleineren Verteiler stehen ihrerseits mit einigen wenigen großen Airports wie Frankfurt, New York oder Hongkong in Verbindung. Ähnlich sei es im Netz, wo wenige große Server die Verteilung großer Datenmengen übernähmen und weite Entfernungen überbrückten. Damit seien die Online-Wege vergleichsweise kurz. Die Untersuchung spiegelt allerdings die Situation des Jahres 1999 wider. Seinerzeit gab es vermutlich 800 Millionen Knoten."
  13. Amitay, E.; Carmel, D.; Herscovici, M.; Lempel, R.; Soffer, A.: Trend detection through temporal link analysis (2004) 0.01
    0.00831091 = product of:
      0.038784247 = sum of:
        0.020446755 = weight(_text_:web in 3092) [ClassicSimilarity], result of:
          0.020446755 = score(doc=3092,freq=4.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.25496176 = fieldWeight in 3092, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3092)
        0.005916231 = weight(_text_:information in 3092) [ClassicSimilarity], result of:
          0.005916231 = score(doc=3092,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 3092, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3092)
        0.012421262 = weight(_text_:retrieval in 3092) [ClassicSimilarity], result of:
          0.012421262 = score(doc=3092,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.16710453 = fieldWeight in 3092, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3092)
      0.21428572 = coord(3/14)
    
    Abstract
    Although time has been recognized as an important dimension in the co-citation literature, to date it has not been incorporated into the analogous process of link analysis on the Web. In this paper, we discuss several aspects and uses of the time dimension in the context of Web information retrieval. We describe the ideal case, where search engines track and store temporal data for each of the pages in their repository, assigning timestamps to the hyperlinks embedded within the pages. We introduce several applications which benefit from the availability of such timestamps. To demonstrate our claims, we use a somewhat simplistic approach, which dates links by approximating the age of the page's content. We show that by using this crude measure alone it is possible to detect and expose significant events and trends. We predict that by using more robust methods for tracking modifications in the content of pages, search engines will be able to provide results that are more timely and better reflect current real-life trends than those they provide today.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.14, S.1270-1281
  14. Vaughan, L.; Shaw , D.: Bibliographic and Web citations : what Is the difference? (2003) 0.01
    0.008292205 = product of:
      0.058045432 = sum of:
        0.0521292 = weight(_text_:web in 5176) [ClassicSimilarity], result of:
          0.0521292 = score(doc=5176,freq=26.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.65002745 = fieldWeight in 5176, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5176)
        0.005916231 = weight(_text_:information in 5176) [ClassicSimilarity], result of:
          0.005916231 = score(doc=5176,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 5176, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5176)
      0.14285715 = coord(2/14)
    
    Abstract
    Vaughan and Shaw look at the relationship between traditional citation and Web citation (not hyperlinks but rather textual mentions of published papers). From the English-language research journals in the Information and Library Science category of ISI's 2000 Journal Citation Report, 1209 full-length papers published in 1997 in 46 journals were identified. Each was searched in Social Science Citation Index and on the Web using Google phrase search, entering the title in quotation marks and supplementing it where necessary for disambiguation with subtitles, authors' names, and journal title words. After removing obvious false drops, the number of web sites was recorded for comparison with the SSCI counts. A second sample from 1992 was also collected for examination. There were a total of 16,371 web citations to the selected papers. The four top-ranked and four bottom-ranked journals were then examined, and every third citation to every third paper was selected and classified as to source type, domain, and country of origin. Web counts are much higher than ISI citation counts. Of the 46 journals from 1997, 26 demonstrated a significant correlation between Web and traditional citation counts, and 11 of the 15 in the 1992 sample also showed significant correlation. Journal impact factor in 1998 and 1999 correlated significantly with average Web citations per journal in the 1997 data, but at a low level. Thirty percent of web citations come from other papers posted on the web, 30 percent from listings of web-based bibliographic services, and 12 percent from class reading lists. High-web-citation journals often have web-accessible tables of contents.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.14, S.1313-1324
  15. Maharana, B.; Nayak, K.; Sahu, N.K.: Scholarly use of web resources in LIS research : a citation analysis (2006) 0.01
    0.008292205 = product of:
      0.058045432 = sum of:
        0.0521292 = weight(_text_:web in 53) [ClassicSimilarity], result of:
          0.0521292 = score(doc=53,freq=26.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.65002745 = fieldWeight in 53, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=53)
        0.005916231 = weight(_text_:information in 53) [ClassicSimilarity], result of:
          0.005916231 = score(doc=53,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 53, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=53)
      0.14285715 = coord(2/14)
    
    Abstract
    Purpose - The essential purpose of this paper is to measure the amount of web resources used for scholarly contributions in the area of library and information science (LIS) in India. It further aims to make an analysis of the nature and type of web resources and studies the various standards for web citations. Design/methodology/approach - In this study, the result of analysis of 292 web citations spread over 95 scholarly papers published in the proceedings of the National Conference of the Society for Information Science, India (SIS-2005) has been reported. All the 292 web citations were scanned and data relating to types of web domains, file formats, styles of citations, etc., were collected through a structured check list. The data thus obtained were systematically analyzed, figurative representations were made and appropriate interpretations were drawn. Findings - The study revealed that 292 (34.88 per cent) out of 837 were web citations, proving a significant correlation between the use of Internet resources and research productivity of LIS professionals in India. The highest number of web citations (35.6 per cent) was from .edu/.ac type domains. Most of the web resources (46.9 per cent) cited in the study were hypertext markup language (HTML) files. Originality/value - The paper is the result of an original analysis of web citations undertaken in order to study the dependence of LIS professionals in India on web sources for their scholarly contributions. This carries research value for web content providers, authors and researchers in LIS.
  16. Thelwall, M.: Webometrics (2009) 0.01
    0.00816116 = product of:
      0.057128116 = sum of:
        0.045902856 = weight(_text_:web in 3906) [ClassicSimilarity], result of:
          0.045902856 = score(doc=3906,freq=14.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.57238775 = fieldWeight in 3906, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3906)
        0.01122526 = weight(_text_:information in 3906) [ClassicSimilarity], result of:
          0.01122526 = score(doc=3906,freq=10.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.2602176 = fieldWeight in 3906, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3906)
      0.14285715 = coord(2/14)
    
    Abstract
    Webometrics is an information science field concerned with measuring aspects of the World Wide Web (WWW) for a variety of information science research goals. It came into existence about five years after the Web was formed and has since grown to become a significant aspect of information science, at least in terms of published research. Although some webometrics research has focused on the structure or evolution of the Web itself or the performance of commercial search engines, most has used data from the Web to shed light on information provision or online communication in various contexts. Most prominently, techniques have been developed to track, map, and assess Web-based informal scholarly communication, for example, in terms of the hyperlinks between academic Web sites or the online impact of digital repositories. In addition, a range of nonacademic issues and groups of Web users have also been analyzed.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  17. Hong, T.: ¬The influence of structural and message features on Web site credibility (2006) 0.01
    0.008152719 = product of:
      0.05706903 = sum of:
        0.05204894 = weight(_text_:web in 5787) [ClassicSimilarity], result of:
          0.05204894 = score(doc=5787,freq=18.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.64902663 = fieldWeight in 5787, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5787)
        0.0050200885 = weight(_text_:information in 5787) [ClassicSimilarity], result of:
          0.0050200885 = score(doc=5787,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.116372846 = fieldWeight in 5787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5787)
      0.14285715 = coord(2/14)
    
    Abstract
    This article explores the associations that message features and Web structural features have with perceptions of Web site credibility. In a within-subjects experiment, 84 participants actively located health-related Web sites on the basis of two tasks that differed in task specificity and complexity. Web sites that were deemed most credible were content analyzed for message features and structural features that have been found to be associated with perceptions of source credibility. Regression analyses indicated that message features predicted perceived Web site credibility for both searches when controlling for Internet experience and issue involvement. Advertisements and structural features had no significant effects on perceived Web site credibility. Institution-affiliated domain names (.gov, .org, .edu) predicted Web site credibility, but only in the general search, which was more difficult. Implications of results are discussed in terms of online credibility research and Web site design.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.1, S.114-127
  18. Thelwall, M.; Sud, P.: ¬A comparison of methods for collecting web citation data for academic organizations (2011) 0.01
    0.008130126 = product of:
      0.037940584 = sum of:
        0.01445804 = weight(_text_:web in 4626) [ClassicSimilarity], result of:
          0.01445804 = score(doc=4626,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.18028519 = fieldWeight in 4626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4626)
        0.005916231 = weight(_text_:information in 4626) [ClassicSimilarity], result of:
          0.005916231 = score(doc=4626,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 4626, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4626)
        0.017566316 = weight(_text_:retrieval in 4626) [ClassicSimilarity], result of:
          0.017566316 = score(doc=4626,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.23632148 = fieldWeight in 4626, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4626)
      0.21428572 = coord(3/14)
    
    Abstract
    The primary webometric method for estimating the online impact of an organization is to count links to its website. Link counts have been available from commercial search engines for over a decade but this was set to end by early 2012 and so a replacement is needed. This article compares link counts to two alternative methods: URL citations and organization title mentions. New variations of these methods are also introduced. The three methods are compared against each other using Yahoo!. Two of the three methods (URL citations and organization title mentions) are also compared against each other using Bing. Evidence from a case study of 131 UK universities and 49 US Library and Information Science (LIS) departments suggests that Bing's Hit Count Estimates (HCEs) for popular title searches are not useful for webometric research but that Yahoo!'s HCEs for all three types of search and Bing's URL citation HCEs seem to be consistent. For exact URL counts the results of all three methods in Yahoo! and both methods in Bing are also consistent. Four types of accuracy factors are also introduced and defined: search engine coverage, search engine retrieval variation, search engine retrieval anomalies, and query polysemy.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.8, S.1488-1497
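Entry 18's URL-citation and title-mention counts are gathered with ordinary search-engine queries. A minimal sketch of how such queries are typically phrased (the `"…"` phrase and `-site:` exclusion operators are the common syntax, not necessarily the exact queries the authors used):

```python
def url_citation_query(host):
    # pages that quote the URL but are hosted outside the site itself
    return f'"{host}" -site:{host}'

def title_mention_query(name, host):
    # pages that mention the organization's name, excluding its own site
    return f'"{name}" -site:{host}'

print(url_citation_query("wlv.ac.uk"))
# → "wlv.ac.uk" -site:wlv.ac.uk
```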
  19. Cronin, B.: Bibliometrics and beyond : some thoughts on web-based citation analysis (2001) 0.01
    0.007456579 = product of:
      0.05219605 = sum of:
        0.04048251 = weight(_text_:web in 3890) [ClassicSimilarity], result of:
          0.04048251 = score(doc=3890,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.50479853 = fieldWeight in 3890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=3890)
        0.01171354 = weight(_text_:information in 3890) [ClassicSimilarity], result of:
          0.01171354 = score(doc=3890,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.27153665 = fieldWeight in 3890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=3890)
      0.14285715 = coord(2/14)
    
    Source
    Journal of information science. 27(2001) no.1, S.1-7
  20. Thelwall, M.: Conceptualizing documentation on the Web : an evaluation of different heuristic-based models for counting links between university Web sites (2002) 0.01
    0.0074479007 = product of:
      0.052135304 = sum of:
        0.047951896 = weight(_text_:web in 978) [ClassicSimilarity], result of:
          0.047951896 = score(doc=978,freq=22.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.59793836 = fieldWeight in 978, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=978)
        0.004183407 = weight(_text_:information in 978) [ClassicSimilarity], result of:
          0.004183407 = score(doc=978,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.09697737 = fieldWeight in 978, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=978)
      0.14285715 = coord(2/14)
    
    Abstract
    All known previous Web link studies have used the Web page as the primary indivisible source document for counting purposes. Arguments are presented to explain why this is not necessarily optimal and why other alternatives have the potential to produce better results. This is despite the fact that individual Web files are often the only choice if search engines are used for raw data and are the easiest basic Web unit to identify. The central issue is that of defining the Web "document": that which should comprise the single indissoluble unit of coherent material. Three alternative heuristics are defined for the educational arena based upon the directory, the domain, and the whole university site. These are then compared by implementing them on a set of 108 UK university institutional Web sites under the assumption that a more effective heuristic will tend to produce results that correlate more highly with institutional research productivity. It was discovered that the domain and directory models were able to successfully reduce the impact of anomalous linking behavior between pairs of Web sites, with the latter being the method of choice. Reasons are then given as to why a document model on its own cannot eliminate all anomalies in Web linking behavior. Finally, the results from all models give a clear confirmation of the very strong association between the research productivity of a UK university and the number of incoming links from its peers' Web sites.
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.12, S.995-1005
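The directory and domain heuristics in entry 20 amount to collapsing each URL to a coarser unit before counting distinct source-to-target pairs. A minimal sketch of that aggregation (the URLs and the helper are illustrative; the "site" rule crudely keeps the last three host labels, which suits `*.ac.uk` hosts, whereas a real implementation needs a public-suffix list):

```python
from urllib.parse import urlparse

def to_unit(url, model):
    # Collapse a URL to the counting unit of the chosen document model:
    # "page" keeps host+path, "directory" truncates the path at the last '/',
    # "domain" keeps only the host, "site" keeps the registered site.
    p = urlparse(url)
    if model == "page":
        return p.netloc + p.path
    if model == "directory":
        return p.netloc + p.path.rsplit("/", 1)[0] + "/"
    if model == "domain":
        return p.netloc
    if model == "site":
        return ".".join(p.netloc.split(".")[-3:])
    raise ValueError(model)

def count_links(links, model):
    # Count distinct (source unit, target unit) pairs, ignoring self-links
    units = {(to_unit(s, model), to_unit(t, model)) for s, t in links}
    return sum(1 for s, t in units if s != t)

links = [
    ("http://www.a.ac.uk/dept/x.html", "http://www.b.ac.uk/y.html"),
    ("http://www.a.ac.uk/dept/z.html", "http://www.b.ac.uk/y.html"),
]
print(count_links(links, "page"), count_links(links, "directory"))  # 2 1
```

Coarser models merge pages that link for the same underlying reason, which is how the directory and domain heuristics damp anomalous bursts of links between a single pair of sites.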

Languages

  • e 57
  • d 5

Types

  • a 60
  • m 2