Search (6 results, page 1 of 1)

  • author_ss:"Vaughan, L."
  1. Vaughan, L.; Yang, R.: Web data as academic and business quality estimates : a comparison of three data sources (2012) 0.02
    0.021019576 = product of:
      0.10509788 = sum of:
        0.10509788 = weight(_text_:business in 452) [ClassicSimilarity], result of:
          0.10509788 = score(doc=452,freq=6.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.4839962 = fieldWeight in 452, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.0390625 = fieldNorm(doc=452)
      0.2 = coord(1/5)
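
The explain tree above is Lucene's ClassicSimilarity (TF-IDF) scoring: tf = sqrt(termFreq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and the final value is their product scaled by the query coordination factor coord(1/5). The following is a minimal sketch (Python, not part of the catalog software) that recomputes the reported score from the factors listed in the tree:

import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
    """Recompute a single-term Lucene ClassicSimilarity score."""
    tf = math.sqrt(freq)                   # 2.4494898 for freq=6.0
    query_weight = idf * query_norm        # 0.21714608
    field_weight = tf * idf * field_norm   # 0.4839962
    return query_weight * field_weight * coord

score = classic_similarity_score(
    freq=6.0,
    idf=5.0583196,            # idf(docFreq=763, maxDocs=44218)
    query_norm=0.042928502,
    field_norm=0.0390625,
    coord=1 / 5,              # coord(1/5): one of five query clauses matched
)
print(score)  # ~0.0210196, i.e. the reported 0.021019576 up to rounding of the displayed factors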
    
    Abstract
    Earlier studies found that web hyperlink data contain various types of information, ranging from academic to political, that can be used to analyze a variety of social phenomena. Specifically, the numbers of inlinks to academic websites are associated with academic performance, while the counts of inlinks to company websites correlate with business variables. However, the scarcity of sources from which to collect inlink data in recent years has required us to seek new data sources. The recent demise of the inlink search function of Yahoo! made this need more pressing. Different alternative variables or data sources have been proposed. This study compared three types of web data to determine which are better as academic and business quality estimates, and what are the relationships among the three data sources. The study found that Alexa inlink and Google URL citation data can replace Yahoo! inlink data and that the former is better than the latter. Alexa is even better than Yahoo!, which has been the main data source in recent years. The unique nature of Alexa data could explain its relative advantages over other data sources.
  2. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    0.015037919 = product of:
      0.0751896 = sum of:
        0.0751896 = weight(_text_:great in 4279) [ClassicSimilarity], result of:
          0.0751896 = score(doc=4279,freq=2.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            0.31105953 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.2 = coord(1/5)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  3. Vaughan, L.; Thelwall, M.: Search engine coverage bias : evidence and possible causes (2004) 0.01
    0.014562788 = product of:
      0.07281394 = sum of:
        0.07281394 = weight(_text_:business in 2536) [ClassicSimilarity], result of:
          0.07281394 = score(doc=2536,freq=2.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.33532238 = fieldWeight in 2536, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.046875 = fieldNorm(doc=2536)
      0.2 = coord(1/5)
    
    Abstract
    Commercial search engines are now playing an increasingly important role in Web information dissemination and access. Of particular interest to business and national governments is whether the big engines have coverage biased towards the US or other countries. In our study we tested for national biases in three major search engines and found significant differences in their coverage of commercial Web sites. The US sites were much better covered than the others in the study: sites from China, Taiwan and Singapore. We then examined the possible technical causes of the differences and found that the language of a site does not affect its coverage by search engines. However, the visibility of a site, measured by the number of links to it, affects its chance to be covered by search engines. We conclude that the coverage bias does exist but this is due not to deliberate choices of the search engines but occurs as a natural result of cumulative advantage effects of US sites on the Web. Nevertheless, the bias remains a cause for international concern.
  4. Vaughan, L.: Uncovering information from social media hyperlinks (2016) 0.01
    0.012135657 = product of:
      0.06067828 = sum of:
        0.06067828 = weight(_text_:business in 2892) [ClassicSimilarity], result of:
          0.06067828 = score(doc=2892,freq=2.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.2794353 = fieldWeight in 2892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2892)
      0.2 = coord(1/5)
    
    Abstract
    Analyzing hyperlink patterns has been a major research topic since the early days of the web. Numerous studies reported uncovering rich information and methodological advances. However, very few studies thus far examined hyperlinks in the rapidly developing sphere of social media. This paper reports a study that helps fill this gap. The study analyzed links originating from tweets to the websites of 3 types of organizations (government, education, and business). Data were collected over an 8-month period to observe the fluctuation and reliability of the individual data set. Hyperlink data from the general web (not social media sites) were also collected and compared with social media data. The study found that the 2 types of hyperlink data correlated significantly and that analyzing the 2 together can help organizations see their relative strength or weakness in the two platforms. The study also found that both types of inlink data correlated with offline measures of organizations' performance. Twitter data from a relatively short period were fairly reliable in estimating performance measures. The timelier nature of social media data as well as the date/time stamps on tweets make this type of data potentially more valuable than that from the general web.
  5. Vaughan, L.; Ninkov, A.: ¬A new approach to web co-link analysis (2018) 0.01
    0.012135657 = product of:
      0.06067828 = sum of:
        0.06067828 = weight(_text_:business in 4256) [ClassicSimilarity], result of:
          0.06067828 = score(doc=4256,freq=2.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.2794353 = fieldWeight in 4256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4256)
      0.2 = coord(1/5)
    
    Abstract
    Numerous web co-link studies have analyzed a wide variety of websites ranging from those in the academic and business arena to those dealing with politics and governments. Such studies uncover rich information about these organizations. In recent years, however, there has been a dearth of co-link analysis, mainly due to the lack of sources from which co-link data can be collected directly. Although several commercial services such as Alexa provide inlink data, none provide co-link data. We propose a new approach to web co-link analysis that can alleviate this problem so that researchers can continue to mine the valuable information contained in co-link data. The proposed approach has two components: (a) generating co-link data from inlink data using a computer program; (b) analyzing co-link data at the site level in addition to the page level that previous co-link analyses have used. The site-level analysis has the potential of expanding co-link data sources. We tested this proposed approach by analyzing a group of websites focused on vaccination using Moz inlink data. We found that the approach is feasible, as we were able to generate co-link data from inlink data and analyze the co-link data with multidimensional scaling.
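
Component (a) of the proposed approach, generating co-link data from inlink data, can be illustrated with a minimal sketch (hypothetical site names and data layout, not the authors' program): at the site level, two target sites are co-linked whenever some third site links to both, so pairwise co-link counts can be derived from an inlink export shaped as {target_site: [linking_page_url, ...]}.

from collections import Counter
from itertools import combinations
from urllib.parse import urlparse

# Hypothetical inlink export: target site -> pages that link to it.
inlinks = {
    "vaccine-a.example": ["http://news.example.com/p1", "http://blog.example.net/x"],
    "vaccine-b.example": ["http://news.example.com/p2", "http://forum.example.org/t9"],
}

def site_of(url):
    """Reduce a linking page URL to its site (hostname)."""
    return urlparse(url).netloc

# Invert the inlink data: linking site -> set of target sites it points to.
sources = {}
for target, pages in inlinks.items():
    for page in pages:
        sources.setdefault(site_of(page), set()).add(target)

# Two targets are co-linked once per linking site that points to both.
colinks = Counter()
for targets in sources.values():
    for pair in combinations(sorted(targets), 2):
        colinks[pair] += 1

print(colinks)  # Counter({('vaccine-a.example', 'vaccine-b.example'): 1})

The resulting pairwise counts form the site-level co-link matrix that component (b) then analyzes, for example with multidimensional scaling as described in the abstract.
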
  6. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.00
    0.0029081097 = product of:
      0.014540548 = sum of:
        0.014540548 = product of:
          0.029081097 = sum of:
            0.029081097 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.029081097 = score(doc=1605,freq=2.0), product of:
                0.1503283 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042928502 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
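
This last explain tree differs from the others above: the matched term is 22 (which also appears in the Source field below), and an inner coord(1/2) records that only one of two clauses in a nested subquery matched. Multiplied out, the reported factors give:

$$
\text{score} \;=\; 0.029081097 \times \underbrace{\tfrac{1}{2}}_{\mathrm{coord}(1/2)} \times \underbrace{\tfrac{1}{5}}_{\mathrm{coord}(1/5)} \;=\; 0.0029081097
$$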
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, pp.13-22