Search (21 results, page 1 of 2)

  • × theme_ss:"Informetrie"
  • × theme_ss:"Internet"
  1. fwt: Webseiten liegen im Schnitt nur 19 Klicks auseinander (2001) 0.05
    0.04721324 = product of:
      0.07081986 = sum of:
        0.060435314 = weight(_text_:im in 5962) [ClassicSimilarity], result of:
          0.060435314 = score(doc=5962,freq=10.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.41901952 = fieldWeight in 5962, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.046875 = fieldNorm(doc=5962)
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 5962) [ClassicSimilarity], result of:
              0.031153653 = score(doc=5962,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 5962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5962)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
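     The indented breakdowns above (and under each following hit) are Lucene "explain" output for the classic TF-IDF similarity: each term score is queryWeight × fieldWeight, and partial matches are scaled by coord factors. A minimal Python sketch, recomputing the "im" clause of this entry from the factors printed above (formulas as in Lucene's ClassicSimilarity):

       import math

       def idf(doc_freq, max_docs):
           # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       query_norm = 0.051022716
       query_weight = idf(7115, 44218) * query_norm                   # -> ~0.1442303
       field_weight = math.sqrt(10.0) * idf(7115, 44218) * 0.046875   # tf * idf * fieldNorm -> ~0.41901952
       term_score = query_weight * field_weight                       # -> ~0.060435314
       total = (term_score + 0.031153653 / 3) * (2 / 3)               # coord(1/3) and coord(2/3) -> ~0.04721324
       print(term_score, total)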
    
    Abstract
    "Dokumente im World Wide Web liegen durchschnittlich 19 Mausklicks voneinander entfernt - angesichts von schätzungsweise mehr als einer Milliarde Seiten erstaunlich nahe. Albert-Lazlo Barabai vom Institut für Physik der University von Notre Dame (US-Staat Indiana) stellt seine Studie in der britischen Fachzeitschrift Physics World (Juli 2001, S. 33) vor. Der Statistiker konstruierte im Rechner zunächst Modelle von großen Computernetzwerken. Grundlage für diese Abbilder war die Analyse eines kleinen Teils der Verbindungen im Web, die der Wissenschaftler automatisch von einem Programm hatte prüfen lassen. Um seine Ergebnisse zu erklären, vergleicht Barabai das World Wide Web mit den Verbindungen internationaler Fluglinien. Dort gebe es zahlreiche Flughäfen, die meist nur mit anderen Flugplätzen in ihrer näheren Umgebung in Verbindung stünden. Diese kleineren Verteiler stehen ihrerseits mit einigen wenigen großen Airports wie Frankfurt, New York oder Hongkong in Verbindung. Ähnlich sei es im Netz, wo wenige große Server die Verteilung großer Datenmengen übernähmen und weite Entfernungen überbrückten. Damit seien die Online-Wege vergleichsweise kurz. Die Untersuchung spiegelt allerdings die Situation des Jahres 1999 wider. Seinerzeit gab es vermutlich 800 Millionen Knoten."
  2. Thelwall, M.; Ruschenburg, T.: Grundlagen und Forschungsfelder der Webometrie (2006) 0.04
    0.036313996 = product of:
      0.05447099 = sum of:
        0.03603666 = weight(_text_:im in 77) [ClassicSimilarity], result of:
          0.03603666 = score(doc=77,freq=2.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.24985497 = fieldWeight in 77, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0625 = fieldNorm(doc=77)
        0.01843433 = product of:
          0.055302992 = sum of:
            0.055302992 = weight(_text_:22 in 77) [ClassicSimilarity], result of:
              0.055302992 = score(doc=77,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.30952093 = fieldWeight in 77, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=77)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
     Webometrics is a subfield of information science that currently concentrates on the analysis of link structures. It is strongly shaped by citation analysis, as its empirical focus on the study of science shows. In this paper we discuss the use of link-based measures in a broad informetric context and evaluate various methods, also with regard to their general potential for the social sciences. A general framework for link analyses, including the required working steps, is presented as well. Finally, promising future fields of application for webometrics are identified, with particular attention to the analysis of blogs.
    Date
    4.12.2006 12:12:22
  3. Hassler, M.: Web analytics : Metriken auswerten, Besucherverhalten verstehen, Website optimieren ; [Metriken analysieren und interpretieren ; Besucherverhalten verstehen und auswerten ; Website-Ziele definieren, Webauftritt optimieren und den Erfolg steigern] (2009) 0.03
    0.028016161 = product of:
      0.04202424 = sum of:
        0.031532075 = weight(_text_:im in 3586) [ClassicSimilarity], result of:
          0.031532075 = score(doc=3586,freq=8.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.2186231 = fieldWeight in 3586, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3586)
        0.0104921665 = product of:
          0.031476498 = sum of:
            0.031476498 = weight(_text_:online in 3586) [ClassicSimilarity], result of:
              0.031476498 = score(doc=3586,freq=6.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20327234 = fieldWeight in 3586, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3586)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
     Web analytics refers to the collection, analysis, and evaluation of website usage data, with the goal of using this information to better understand visitor behavior and to optimize the website. Depending on the goal of your own website - for example, communicating a brand value or increasing contact requests, orders, or newsletter subscriptions - web analytics can show you where the weak points of your website lie and how appropriate optimizations will help you better reach your goals. Web analytics is not only of interest to website operators and IT departments; it is also becoming more and more useful for marketing and management. With this book you will learn how to analyze the usage of your website. You can, for example, examine which traffic source generates the most revenue or which areas of the site are used most heavily, and much more. In this way you will get to know your visitors, their behavior, and their motivation, tune your website accordingly, and thus increase your success. To derive real value from web analytics, you need sound knowledge. In his book, Marco Hassler gives you a comprehensive insight into web analytics. He shows in detail how visitor behavior is analyzed and which metrics can sensibly be applied when. In the last part of the book the author shows how to use your analysis results to optimize the website toward its goals via conversion measurements. The aim of this book is to convey concrete web analytics knowledge and to offer valuable practice-oriented tips. To this end it builds bridges to adjacent topics such as usability, user-centered design, online branding, online marketing, and search engine optimization. Marco Hassler gives clear guidance and instructions on how to reach your goals.
    Footnote
     Rez. in Mitt. VÖB 63(2010) H.1/2, S.147-148 (M. Buzinkay): "Website design and website analysis go hand in hand. Unfortunately, the latter is rarely considered, if at all - wrongly so, because analyzing one's own measures is essential for correction and optimization. Even where the insight takes hold that analyzing websites would be important, it is often a long way to actually doing it. Why? Analysis means continuous effort, and many are not willing, or do not have the time resources, to invest it. Once you have come to the conviction that you should optimize your web activities, or at least occasionally question them, it is worth picking up Marco Hassler's "Web Analytics". It is definitely not a book for a single evening's reading, but a volume you have to work with. Here too: web analysis means work and intensive engagement (a circumstance many do not want to understand or accept). The book is very dense and yet remains clearly organized. The arrangement of topics - from the foundations of data collection, through the definition of metrics, to the optimization of pages and finally to working with web analytics tools - provides a common thread that builds nicely from one topic to the next. This also makes it easy to develop one's own project step by step alongside reading the book. Numerous screenshots and illustrations further ease the understanding of the relationships and explanations in the text. The book also convinces with its depth (except for the chapter on assembling personas) and its pleasant writing style. From me, an urgent recommendation to everyone dealing with online marketing in general and with measuring the success of websites and web activities in particular."
  4. Pernik, V.; Schlögl, C.: Möglichkeiten und Grenzen von Web Structure Mining am Beispiel von informationswissenschaftlichen Hochschulinstituten im deutschsprachigen Raum (2006) 0.02
    0.016987845 = product of:
      0.050963532 = sum of:
        0.050963532 = weight(_text_:im in 78) [ClassicSimilarity], result of:
          0.050963532 = score(doc=78,freq=4.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.35334828 = fieldWeight in 78, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0625 = fieldNorm(doc=78)
      0.33333334 = coord(1/3)
    
    Abstract
     This paper presents a webometric study of information science university departments in the German-speaking countries. The aim of the study was, on the one hand, to analyze the link relationships between the departments and, on the other, to identify similarities (for example, due to subject-related, geographic, or institutional circumstances). We describe not only the procedure for such analyses and the resulting findings; in particular, we address the problem areas and limitations associated with the analysis of link structures on the Web.
  5. Tonta, Y.: Scholarly communication and the use of networked information sources (1996) 0.02
    0.0161402 = product of:
      0.0484206 = sum of:
        0.0484206 = product of:
          0.0726309 = sum of:
            0.031153653 = weight(_text_:online in 6389) [ClassicSimilarity], result of:
              0.031153653 = score(doc=6389,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 6389, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6389)
            0.04147724 = weight(_text_:22 in 6389) [ClassicSimilarity], result of:
              0.04147724 = score(doc=6389,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23214069 = fieldWeight in 6389, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6389)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Examines the use of networked information sources in scholarly communication. Networked information sources are defined broadly to cover: documents and images stored on electronic network hosts; data files; newsgroups; listservs; online information services and electronic periodicals. Reports results of a survey to determine how heavily, if at all, networked information sources are cited in scholarly printed periodicals published in 1993 and 1994. 27 printed periodicals, representing a wide range of subjects and the most influential periodicals in their fields, were identified through the Science Citation Index and Social Science Citation Index Journal Citation Reports. 97 articles were selected for further review and references, footnotes and bibliographies were checked for references to networked information sources. Only 2 articles were found to contain such references. Concludes that, although networked information sources facilitate scholars' work to a great extent during the research process, scholars have yet to incorporate such sources in the bibliographies of their published articles.
    Source
    IFLA journal. 22(1996) no.3, S.240-245
  6. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.02
    0.016094714 = product of:
      0.048284143 = sum of:
        0.048284143 = product of:
          0.072426215 = sum of:
            0.03094897 = weight(_text_:retrieval in 2742) [ClassicSimilarity], result of:
              0.03094897 = score(doc=2742,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20052543 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
            0.04147724 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.04147724 = score(doc=2742,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
     In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings for improving the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  7. Thelwall, M.; Sud, P.: ¬A comparison of methods for collecting web citation data for academic organizations (2011) 0.01
    0.013874464 = product of:
      0.04162339 = sum of:
        0.04162339 = product of:
          0.062435087 = sum of:
            0.025961377 = weight(_text_:online in 4626) [ClassicSimilarity], result of:
              0.025961377 = score(doc=4626,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 4626, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4626)
            0.03647371 = weight(_text_:retrieval in 4626) [ClassicSimilarity], result of:
              0.03647371 = score(doc=4626,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23632148 = fieldWeight in 4626, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4626)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The primary webometric method for estimating the online impact of an organization is to count links to its website. Link counts have been available from commercial search engines for over a decade but this was set to end by early 2012 and so a replacement is needed. This article compares link counts to two alternative methods: URL citations and organization title mentions. New variations of these methods are also introduced. The three methods are compared against each other using Yahoo!. Two of the three methods (URL citations and organization title mentions) are also compared against each other using Bing. Evidence from a case study of 131 UK universities and 49 US Library and Information Science (LIS) departments suggests that Bing's Hit Count Estimates (HCEs) for popular title searches are not useful for webometric research but that Yahoo!'s HCEs for all three types of search and Bing's URL citation HCEs seem to be consistent. For exact URL counts the results of all three methods in Yahoo! and both methods in Bing are also consistent. Four types of accuracy factors are also introduced and defined: search engine coverage, search engine retrieval variation, search engine retrieval anomalies, and query polysemy.
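     For orientation, the three methods translate into query patterns roughly like the following; the target organization and domain are hypothetical, and the operators shown (Yahoo!'s linkdomain:, the -site: exclusion) belong to that generation of search engines and have since been retired:

       # Hypothetical target: example.ac.uk / "Example University"
       link_count_query    = 'linkdomain:example.ac.uk -site:example.ac.uk'    # pages linking to the site
       url_citation_query  = '"example.ac.uk" -site:example.ac.uk'             # pages quoting the URL as text
       title_mention_query = '"Example University" -site:example.ac.uk'        # pages naming the organization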
  8. Rohman, A.: ¬The emergence, peak, and abeyance of an online information ground : the lifecycle of a Facebook group for verifying information during violence (2021) 0.01
    0.0077401884 = product of:
      0.023220565 = sum of:
        0.023220565 = product of:
          0.06966169 = sum of:
            0.06966169 = weight(_text_:online in 153) [ClassicSimilarity], result of:
              0.06966169 = score(doc=153,freq=10.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.4498688 = fieldWeight in 153, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=153)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Information grounds emerge as people share information with others in a common place. Many studies have investigated the emergence of information grounds in public places. This study pays attention to the emergence, peak, and abeyance of an online information ground. It investigates a Facebook group used by youth for sharing information when misinformation spread wildly during the 2011 violence in Ambon, Indonesia. The findings demonstrate change and continuity in an online information ground: the group became an information hub when reaching its peak cycle and an information repository when entering abeyance. Despite this period of nonactivity, the friendships and collective memories resulting from information ground interactions last over time and can be used for reactivating the online information ground when new needs emerge. Illuminating the lifecycle of an online information ground, the findings have the potential to explain the dynamics of users' interactions with others and with information in quotidian spaces.
  9. Zhang, Y.: ¬The impact of Internet-based electronic resources on formal scholarly communication in the area of library and information science : a citation analysis (1998) 0.01
    0.0054312674 = product of:
      0.016293801 = sum of:
        0.016293801 = product of:
          0.0488814 = sum of:
            0.0488814 = weight(_text_:22 in 2808) [ClassicSimilarity], result of:
              0.0488814 = score(doc=2808,freq=4.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.27358043 = fieldWeight in 2808, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2808)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    30. 1.1999 17:22:22
  10. Neth, M.: Citation analysis and the Web (1998) 0.01
    0.00537668 = product of:
      0.01613004 = sum of:
        0.01613004 = product of:
          0.048390117 = sum of:
            0.048390117 = weight(_text_:22 in 108) [ClassicSimilarity], result of:
              0.048390117 = score(doc=108,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2708308 = fieldWeight in 108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=108)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    10. 1.1999 16:22:37
  11. Thelwall, M.: Webometrics (2009) 0.00
    0.004895325 = product of:
      0.0146859735 = sum of:
        0.0146859735 = product of:
          0.04405792 = sum of:
            0.04405792 = weight(_text_:online in 3906) [ClassicSimilarity], result of:
              0.04405792 = score(doc=3906,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.284522 = fieldWeight in 3906, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3906)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Webometrics is an information science field concerned with measuring aspects of the World Wide Web (WWW) for a variety of information science research goals. It came into existence about five years after the Web was formed and has since grown to become a significant aspect of information science, at least in terms of published research. Although some webometrics research has focused on the structure or evolution of the Web itself or the performance of commercial search engines, most has used data from the Web to shed light on information provision or online communication in various contexts. Most prominently, techniques have been developed to track, map, and assess Web-based informal scholarly communication, for example, in terms of the hyperlinks between academic Web sites or the online impact of digital repositories. In addition, a range of nonacademic issues and groups of Web users have also been analyzed.
  12. Thelwall, M.; Wilkinson, D.: Finding similar academic Web sites with links, bibliometric couplings and colinks (2004) 0.00
    0.0048631616 = product of:
      0.014589485 = sum of:
        0.014589485 = product of:
          0.043768454 = sum of:
            0.043768454 = weight(_text_:retrieval in 2571) [ClassicSimilarity], result of:
              0.043768454 = score(doc=2571,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2835858 = fieldWeight in 2571, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2571)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    A common task in both Webmetrics and Web information retrieval is to identify a set of Web pages or sites that are similar in content. In this paper we assess the extent to which links, colinks and couplings can be used to identify similar Web sites. As an experiment, a random sample of 500 pairs of domains from the UK academic Web were taken and human assessments of site similarity, based upon content type, were compared against ratings for the three concepts. The results show that using a combination of all three gives the highest probability of identifying similar sites, but surprisingly this was only a marginal improvement over using links alone. Another unexpected result was that high values for either colink counts or couplings were associated with only a small increased likelihood of similarity. The principal advantage of using couplings and colinks was found to be greater coverage in terms of a much larger number of pairs of sites being connected by these measures, instead of increased probability of similarity. In information retrieval terminology, this is improved recall rather than improved precision.
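     On a crawled link graph, the three measures reduce to set operations over each site's inlink and outlink sets; a minimal sketch with hypothetical data:

       def coupling(outlinks_a, outlinks_b):
           # bibliographic coupling: sites that both A and B link to
           return len(outlinks_a & outlinks_b)

       def colink_count(inlinks_a, inlinks_b):
           # colinks: sites that link to both A and B
           return len(inlinks_a & inlinks_b)

       # Example: two department sites sharing two outlink targets
       print(coupling({"x.ac.uk", "y.ac.uk", "z.ac.uk"}, {"y.ac.uk", "z.ac.uk", "w.ac.uk"}))  # -> 2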
  13. Böhm, P.; Rittberger, M.: Einsatz von Webanalyse in überregionalen Informationsinfrastruktureinrichtungen (2016) 0.00
    0.004038437 = product of:
      0.01211531 = sum of:
        0.01211531 = product of:
          0.03634593 = sum of:
            0.03634593 = weight(_text_:online in 3239) [ClassicSimilarity], result of:
              0.03634593 = score(doc=3239,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23471867 = fieldWeight in 3239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3239)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     To be able to assess the usage of their information services and offerings, the information infrastructure institutions of the Leibniz Association employ web analytics. Seven Leibniz information infrastructure institutions were surveyed about their use of web analytics with an online questionnaire and a semi-structured interview. We describe the methods, tools, and metrics used, the available resources, and the future prospects of web analytics at these institutions. Overall, the institutes regard web analytics as highly important. However, the so far limited standardization and the lack of uniformity in metrics and data collection methods considerably impede any comparison of usage data.
  14. Thelwall, M.: Interpreting social science link analysis research : a theoretical framework (2006) 0.00
    0.0034615172 = product of:
      0.010384551 = sum of:
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 4908) [ClassicSimilarity], result of:
              0.031153653 = score(doc=4908,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 4908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4908)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Link analysis in various forms is now an established technique in many different subjects, reflecting the perceived importance of links and of the Web. A critical but very difficult issue is how to interpret the results of social science link analyses. It is argued that the dynamic nature of the Web, its lack of quality control, and the online proliferation of copying and imitation mean that methodologies operating within a highly positivist, quantitative framework are ineffective. Conversely, the sheer variety of the Web makes application of qualitative methodologies and pure reason very problematic to large-scale studies. Methodology triangulation is consequently advocated, in combination with a warning that the Web is incapable of giving definitive answers to large-scale link analysis research questions concerning social factors underlying link creation. Finally, it is claimed that although theoretical frameworks are appropriate for guiding research, a Theory of Link Analysis is not possible.
  15. Hong, T.: ¬The influence of structural and message features on Web site credibility (2006) 0.00
    0.0034615172 = product of:
      0.010384551 = sum of:
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 5787) [ClassicSimilarity], result of:
              0.031153653 = score(doc=5787,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 5787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5787)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     This article explores the associations that message features and Web structural features have with perceptions of Web site credibility. In a within-subjects experiment, 84 participants actively located health-related Web sites on the basis of two tasks that differed in task specificity and complexity. Web sites that were deemed most credible were content analyzed for message features and structural features that have been found to be associated with perceptions of source credibility. Regression analyses indicated that message features predicted perceived Web site credibility for both searches when controlling for Internet experience and issue involvement. Advertisements and structural features had no significant effects on perceived Web site credibility. Institution-affiliated domain names (.gov, .org, .edu) predicted Web site credibility, but only in the general search, which was more difficult. Implications of results are discussed in terms of online credibility research and Web site design.
  16. Barnett, G.A.; Fink, E.L.: Impact of the internet and scholar age distribution on academic citation age (2008) 0.00
    0.0034615172 = product of:
      0.010384551 = sum of:
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 1376) [ClassicSimilarity], result of:
              0.031153653 = score(doc=1376,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 1376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1376)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This article examines the impact of the Internet and the age distribution of research scholars on academic citation age with a mathematical model proposed by Barnett, Fink, and Debus (1989) and a revised model that incorporates information about the online environment and scholar age distribution. The modified model fits the data well, accounting for 99.6% of the variance for science citations and 99.8% for social science citations. The Internet's impact on the aging process of academic citations has been very small, accounting for only 0.1% for the social sciences and 0.8% for the sciences. Rather than resulting in the use of more recent citations, the Internet appears to have lengthened the average life of academic citations by 6 to 8 months. The aging of scholars seems to have a greater impact, accounting for 2.8% of the variance for the sciences and 0.9% for the social sciences. However, because the diffusion of the Internet and the aging of the professoriate are correlated over this time period, differentiating their effects is somewhat problematic.
  17. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.00
    0.0034387745 = product of:
      0.0103163235 = sum of:
        0.0103163235 = product of:
          0.03094897 = sum of:
            0.03094897 = weight(_text_:retrieval in 3090) [ClassicSimilarity], result of:
              0.03094897 = score(doc=3090,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20052543 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  18. Thelwall, M.: Results from a web impact factor crawler (2001) 0.00
    0.0028845975 = product of:
      0.008653793 = sum of:
        0.008653793 = product of:
          0.025961377 = sum of:
            0.025961377 = weight(_text_:online in 4490) [ClassicSimilarity], result of:
              0.025961377 = score(doc=4490,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 4490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4490)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Web impact factors, the proposed web equivalent of impact factors for journals, can be calculated by using search engines. It has been found that the results are problematic because of the variable coverage of search engines as well as their ability to give significantly different results over short periods of time. The fundamental problem is that although some search engines provide a functionality that is capable of being used for impact calculations, this is not their primary task and therefore they do not give guarantees as to performance in this respect. In this paper, a bespoke web crawler designed specifically for the calculation of reliable WIFs is presented. This crawler was used to calculate WIFs for a number of UK universities, and the results of these calculations are discussed. The principal findings were that with certain restrictions, WIFs can be calculated reliably, but do not correlate with accepted research rankings owing to the variety of material hosted on university servers. Changes to the calculations to improve the fit of the results to research rankings are proposed, but there are still inherent problems undermining the reliability of the calculation. These problems still apply if the WIF scores are taken on their own as indicators of the general impact of any area of the Internet, but with care would not apply to online journals.
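     A web impact factor in Ingwersen's original sense divides the number of pages linking to a site (usually counting external pages only) by the number of pages within the site; a minimal sketch of that calculation, with hypothetical counts of the kind such a crawler collects:

       def web_impact_factor(external_inlink_pages, site_pages):
           # WIF = external pages linking to the site / pages within the site
           if site_pages == 0:
               raise ValueError("site has no pages")
           return external_inlink_pages / site_pages

       print(web_impact_factor(1200, 15000))   # -> 0.08 for a hypothetical university site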
  19. Goh, D.H.-L.; Ng, P.K.: Link decay in leading information science journals (2007) 0.00
    0.0028845975 = product of:
      0.008653793 = sum of:
        0.008653793 = product of:
          0.025961377 = sum of:
            0.025961377 = weight(_text_:online in 1334) [ClassicSimilarity], result of:
              0.025961377 = score(doc=1334,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 1334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1334)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Web citations have become common in scholarly publications as the amount of online literature increases. Yet, such links are not persistent and many decay over time, causing accessibility problems for readers. The present study investigates the link decay phenomenon in three leading information science journals. Articles spanning a period of 7 years (1997-2003) were downloaded, and their links were extracted. From these, a measure of link decay, the half-life, was computed to be approximately 5 years, which compares favorably against other disciplines (1.4-4.8 years). The study also investigated types of link accessibility errors encountered as well as examined characteristics of links that may be associated with decay. It was found that approximately 31% of all citations were not accessible during the time of testing, and the majority of errors were due to missing content (HTTP Error Code 404). Citations from the edu domain were also found to have the highest failure rates of 36% when compared with other popular top-level domains. Results indicate that link decay is a problem that cannot be ignored, and implications for journal authors and readers are discussed.
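     Assuming link death is exponential, the half-life follows from the survival fraction after a known interval via fraction_alive = 0.5 ** (t / half_life); a minimal sketch, where the ~69% survival rate is the abstract's 31% failure figure and the 2.7-year mean citation age is an illustrative assumption chosen to land near the reported 5-year half-life:

       import math

       def half_life(years_elapsed, fraction_alive):
           # solve fraction_alive = 0.5 ** (t / half_life) for half_life
           return years_elapsed * math.log(0.5) / math.log(fraction_alive)

       print(half_life(2.7, 0.69))   # -> ~5 years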
  20. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 3091) [ClassicSimilarity], result of:
              0.025790809 = score(doc=3091,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 3091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3091)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
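     Search engine overlap, one of the attributes analyzed, is commonly quantified as the proportion of URLs two engines retrieve in common; a minimal sketch using Jaccard overlap, with engine names from the abstract and hypothetical result sets:

       def jaccard_overlap(results_a, results_b):
           # |A ∩ B| / |A ∪ B| over the sets of retrieved URLs
           union = results_a | results_b
           return len(results_a & results_b) / len(union) if union else 0.0

       google = {"url1", "url2", "url3"}
       altavista = {"url2", "url3", "url4", "url5"}
       print(jaccard_overlap(google, altavista))   # -> 0.4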