Search (7 results, page 1 of 1)

  • author_ss:"Vaughan, L."
  1. Vaughan, L.: Uncovering information from social media hyperlinks (2016) 0.02
    0.022747558 = product of:
      0.09099023 = sum of:
        0.09099023 = weight(_text_:social in 2892) [ClassicSimilarity], result of:
          0.09099023 = score(doc=2892,freq=10.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.49257156 = fieldWeight in 2892, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2892)
      0.25 = coord(1/4)
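
    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: the weight of a query term is queryWeight (idf × queryNorm) multiplied by fieldWeight (√tf × idf × fieldNorm), and the document score is that weight scaled by the coordination factor coord. A minimal Python sketch that reproduces this entry's score from the values shown in the tree (the function name is illustrative, not part of any search API):

      import math

      def classic_similarity(freq, idf, query_norm, field_norm, coord):
          """Recompute a single-term ClassicSimilarity score from an explain tree."""
          query_weight = idf * query_norm       # query-side normalization
          tf = math.sqrt(freq)                  # term-frequency factor
          field_weight = tf * idf * field_norm  # document-side weight
          return query_weight * field_weight * coord

      # Values from result 1 (term "social", doc 2892):
      score = classic_similarity(freq=10.0, idf=3.9875789,
                                 query_norm=0.046325076,
                                 field_norm=0.0390625, coord=0.25)
      print(score)  # ~0.0227476, matching the displayed 0.022747558 up to rounding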
    
    Abstract
    Analyzing hyperlink patterns has been a major research topic since the early days of the web. Numerous studies reported uncovering rich information and methodological advances. However, very few studies thus far examined hyperlinks in the rapidly developing sphere of social media. This paper reports a study that helps fill this gap. The study analyzed links originating from tweets to the websites of 3 types of organizations (government, education, and business). Data were collected over an 8-month period to observe the fluctuation and reliability of the individual data set. Hyperlink data from the general web (not social media sites) were also collected and compared with social media data. The study found that the 2 types of hyperlink data correlated significantly and that analyzing the 2 together can help organizations see their relative strength or weakness in the two platforms. The study also found that both types of inlink data correlated with offline measures of organizations' performance. Twitter data from a relatively short period were fairly reliable in estimating performance measures. The timelier nature of social media data as well as the date/time stamps on tweets make this type of data potentially more valuable than that from the general web.
  2. Romero-Frías, E.; Vaughan, L.: European political trends viewed through patterns of Web linking (2010) 0.01
    0.014386819 = product of:
      0.057547275 = sum of:
        0.057547275 = weight(_text_:social in 4002) [ClassicSimilarity], result of:
          0.057547275 = score(doc=4002,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.3115296 = fieldWeight in 4002, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4002)
      0.25 = coord(1/4)
    
    Abstract
    This study explored the feasibility of using Web hyperlink data to study European political Web sites. Ninety-six European Union (EU) political parties belonging to a wide range of ideological, historical, and linguistic backgrounds were included in the study. Various types of data on Web links to party Web sites were collected. The Web colink data were visualized using multidimensional scaling (MDS), while the inlink data were analyzed with a 2-way analysis of variance test. The results showed that Web hyperlink data did reflect some political patterns in the EU. The MDS maps showed clusters of political parties along ideological, historical, linguistic, and social lines. Statistical analysis based on inlink counts further confirmed that there was a significant difference along the line of the political history of a country, such that left-wing parties in the former communist countries received considerably fewer inlinks to their Web sites than left-wing parties in countries without a history of communism did. The study demonstrated the possibility of using Web hyperlink data to gain insights into political situations in the EU. This suggests the richness of Web hyperlink data and its potential in studying social-political phenomena.
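
    As a rough illustration of the colink/MDS step described above, the sketch below maps a small, hypothetical symmetric colink matrix into two dimensions; scikit-learn's MDS and the sample counts are used purely for illustration and are not the study's data or tooling.

      import numpy as np
      from sklearn.manifold import MDS

      # Hypothetical symmetric colink counts among four party websites
      # (how often a pair of sites is linked from the same third page).
      parties = ["Party A", "Party B", "Party C", "Party D"]
      colinks = np.array([[ 0, 40,  5,  3],
                          [40,  0,  4,  2],
                          [ 5,  4,  0, 30],
                          [ 3,  2, 30,  0]], dtype=float)

      # Turn colink counts into dissimilarities: more colinks -> smaller distance.
      dissim = 1.0 / (1.0 + colinks)
      np.fill_diagonal(dissim, 0.0)

      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(dissim)
      for name, (x, y) in zip(parties, coords):
          print(f"{name}: ({x:.2f}, {y:.2f})")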
  3. Vaughan, L.; Shaw, D.: Bibliographic and Web citations : what is the difference? (2003) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 5176) [ClassicSimilarity], result of:
          0.040692065 = score(doc=5176,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 5176, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5176)
      0.25 = coord(1/4)
    
    Abstract
    Vaughan and Shaw look at the relationship between traditional citations and Web citations (not hyperlinks, but textual mentions of published papers). From the English-language research journals in the Information and Library Science category of ISI's 2000 Journal Citation Reports, 1,209 full-length papers published in 1997 in 46 journals were identified. Each was searched for in the Social Science Citation Index and on the Web using a Google phrase search, entering the title in quotation marks and adding sub-titles, authors' names, and journal title words where needed to distinguish it. After removing obvious false drops, the number of web sites was recorded for comparison with the SSCI counts. A second sample, from 1992, was also collected for examination. There were a total of 16,371 web citations to the selected papers. The four top-ranked and four bottom-ranked journals were then examined, and every third citation to every third paper was selected and classified by source type, domain, and country of origin. Web counts are much higher than ISI citation counts. Of the 46 journals from 1997, 26 demonstrated a significant correlation between Web and traditional citation counts, and 11 of the 15 in the 1992 sample also showed a significant correlation. Journal impact factors in 1998 and 1999 correlated significantly with average Web citations per journal in the 1997 data, but at a low level. Thirty percent of web citations come from other papers posted on the web, 30 percent from listings of web-based bibliographic services, and 12 percent from class reading lists. Journals with high web citation counts often have web-accessible tables of contents.
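
    The core comparison in the abstract, per-journal Web citation counts against SSCI citation counts, is a correlation between two count series. A minimal sketch with made-up per-journal averages (the study's actual counts and its choice of correlation statistic are not reproduced here):

      from scipy.stats import pearsonr, spearmanr

      # Hypothetical mean citation counts for six journals.
      web_citations  = [12.4, 8.1, 5.6, 3.9, 2.2, 1.7]
      ssci_citations = [ 3.1, 2.4, 1.0, 1.2, 0.5, 0.4]

      r, p_r = pearsonr(web_citations, ssci_citations)
      rho, p_rho = spearmanr(web_citations, ssci_citations)
      print(f"Pearson r = {r:.2f} (p = {p_r:.3f}), Spearman rho = {rho:.2f} (p = {p_rho:.3f})")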
  4. Leydesdorff, L.; Vaughan, L.: Co-occurrence matrices and their applications in information science : extending ACA to the Web environment (2006) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 6113) [ClassicSimilarity], result of:
          0.040692065 = score(doc=6113,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 6113, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6113)
      0.25 = coord(1/4)
    
    Abstract
    Co-occurrence matrices, such as cocitation, coword, and colink matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of these data. The underlying problem, in our opinion, involved understanding the nature of various types of matrices. This article discusses the difference between a symmetrical cocitation matrix and an asymmetrical citation matrix as well as the appropriate statistical techniques that can be applied to each of these matrices, respectively. Similarity measures (such as the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical cocitation matrix but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment, in which the nature of the available data and thus data collection methods are different from those of traditional databases such as the Science Citation Index. A set of data collected with the Google Scholar search engine is analyzed by using both the traditional methods of multivariate analysis and the new visualization software Pajek, which is based on social network analysis and graph theory.
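
    The methodological point above, that similarity measures should be applied to the asymmetrical citation matrix rather than to the symmetrical cocitation matrix, can be sketched as follows with a hypothetical citation matrix and scikit-learn's cosine similarity (an illustration, not the authors' code):

      import numpy as np
      from sklearn.metrics.pairwise import cosine_similarity

      # Hypothetical asymmetric occurrence matrix: rows are citing documents,
      # columns are cited authors (the ACA setting).
      citations = np.array([[1, 0, 2, 0],
                            [0, 3, 1, 0],
                            [2, 1, 0, 1],
                            [0, 0, 1, 2],
                            [1, 2, 0, 0]])

      # Cosine over the authors' citation profiles (columns) yields the
      # proximity matrix; it is not computed from the cocitation matrix itself.
      proximity = cosine_similarity(citations.T)
      print(np.round(proximity, 2))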
  5. Vaughan, L.; Yang, R.: Web data as academic and business quality estimates : a comparison of three data sources (2012) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 452) [ClassicSimilarity], result of:
          0.040692065 = score(doc=452,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 452, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=452)
      0.25 = coord(1/4)
    
    Abstract
    Earlier studies found that web hyperlink data contain various types of information, ranging from academic to political, that can be used to analyze a variety of social phenomena. Specifically, the numbers of inlinks to academic websites are associated with academic performance, while the counts of inlinks to company websites correlate with business variables. However, the scarcity of sources from which to collect inlink data in recent years has required us to seek new data sources. The recent demise of the inlink search function of Yahoo! made this need more pressing. Different alternative variables or data sources have been proposed. This study compared three types of web data to determine which are better as academic and business quality estimates, and what are the relationships among the three data sources. The study found that Alexa inlink and Google URL citation data can replace Yahoo! inlink data and that the former is better than the latter. Alexa is even better than Yahoo!, which has been the main data source in recent years. The unique nature of Alexa data could explain its relative advantages over other data sources.
  6. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 4279) [ClassicSimilarity], result of:
              0.052280862 = score(doc=4279,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 4279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4279)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science, such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., .com, .edu, and .org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although .com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  7. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google Trends and Baidu Index (2015) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.031382043 = score(doc=1605,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, pp.13-22