Search (2578 results, page 129 of 129)

  • × theme_ss:"Internet"
  • × type_ss:"a"
  1. Larosiliere, G.D.; Carter, L.D.; Meske, C.: How does the world connect? : exploring the global diffusion of social network sites (2017) 0.00
    1.8014197E-4 = product of:
      0.0030624135 = sum of:
        0.0030624135 = weight(_text_:in in 3753) [ClassicSimilarity], result of:
          0.0030624135 = score(doc=3753,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.09017298 = fieldWeight in 3753, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3753)
      0.05882353 = coord(1/17)
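    The scoring tree above is Lucene's ClassicSimilarity "explain" output for the single query term `_text_:in`. Its arithmetic can be reproduced in a few lines of Python (a sketch; the function name is ours, not Lucene's):

```python
import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
    """Recompute Lucene ClassicSimilarity 'explain' arithmetic
    for a single-term query."""
    tf = math.sqrt(freq)                    # 1.4142135 for freq=2.0
    query_weight = idf * query_norm         # 0.033961542
    field_weight = tf * idf * field_norm    # 0.09017298
    return query_weight * field_weight * coord

score = classic_similarity_score(freq=2.0, idf=1.3602545,
                                 query_norm=0.024967048,
                                 field_norm=0.046875, coord=1 / 17)
# score ≈ 1.8014197e-4, the value shown for result 1 above
```

    Every result on this page is scored the same way against the same low-idf term, which is why all the displayed scores round to 0.00.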
    
    Abstract
    This study explores the main determinants of social network adoption at the country level. We use the technology-organization-environment (TOE) framework to investigate factors influencing social network adoption, drawing on cross-sectional data from 130 countries. The results indicate that social network adoption at the country level is positively influenced by technological maturity, public readiness, and the sophistication of information and communication technology law. Technological, organizational, and environmental factors together accounted for 67% of the variance in social network adoption. These findings provide a first insight into the use of social network sites at the country level, as well as the main factors that influence public adoption. Implications for research and practice are discussed.
  2. Shmargad, Y.: Structural diversity and tie strength in the purchase of a social networking app (2018) 0.00
    
  3. Wang, X.; Zhang, M.; Fan, W.; Zhao, K.: Understanding the spread of COVID-19 misinformation on social media : the effects of topics and a political leader's nudge (2022) 0.00
    
    Abstract
    The spread of misinformation on social media has become a major societal issue in recent years. In this work, we used the ongoing COVID-19 pandemic as a case study to systematically investigate, based on the heuristic-systematic model, factors associated with the spread of multi-topic misinformation related to a single event on social media. Among factors related to systematic processing of information, we discovered that the topic of a misinformation story matters, with conspiracy theories being the most likely to be retweeted. As for factors related to heuristic processing of information, such as when citizens look up to their leaders during such a crisis, our results demonstrated that the behavior of a political leader, former US President Donald J. Trump, may have nudged people's sharing of COVID-19 misinformation. The outcomes of this study help social media platforms and users better understand and prevent the spread of misinformation on social media.
  4. Rice, R.: Putting sample indexes on your Web site (2000) 0.00
    
    Abstract
    Why do you need samples of your indexing work on your Web site? Think about these situations:
    Scenario 1: You've contacted a potential client who says he has a project ready to be assigned. He requests some samples of your work. You fax them to him right away and call back a few hours later. "Oh," he says, "I didn't get the fax, but anyway I already assigned the project. I can keep your name for future reference, though."
    Scenario 2: Another potential client asks you to send her some samples and, if they're satisfactory, she'll put you on the freelance list. You mail them to her, or even FedEx them if you can spend the money. You wait a week and call her back. She does not remember who you are, and has not seen the samples. If she can find them, she says, she will file them for future reference.
    Scenario 3: You contacted a potential client who has asked to see some samples of your work. As it happens, she has a project ready to go, and if your work is acceptable, you can have the job. You can FedEx her some samples, or you can fax them, she says. You think about FedEx and faxing costs, mail and faxes that never get to her desk, and the risk of losing the assignment if she calls someone else later today, which she almost surely will, and you suggest an alternative. If she has Internet access, she can see a list of the indexes you've completed, and some samples of your indexes, instantly. She is impressed that you have the know-how to create a Web site, and agrees to take a look and call you back shortly. You give her your URL and your phone number, and stand by. In five minutes she calls you back, says she is pleased with what she saw, and asks for your address so she can send the job out to you today.
    Issue
    Beyond book indexing: how to get started in Web indexing, embedded indexing and other computer-based media. Ed. by D. Brenner u. M. Rowland.
  5. Clower, T.: ¬A review of regulatory barriers : is the information superhighway really imminent? (1994) 0.00
    
    Abstract
    There is much excitement in the information sciences about the imminent deployment of a nationwide information superhighway. However, many obstacles remain to developing the infrastructure to support the superhighway, not the least of which is the hodgepodge of regulations administered by state utility commissions and other regulatory agencies. This paper compares state communications regulatory policies and their potential impacts on the development of the physical infrastructure to support an information superhighway. The paper also examines the possibility of federal intervention into state policymaking where there is resistance to formulating policies consistent with the federal administration's goal of information infrastructure development. Finally, using government and private industry data, the direct impacts of infrastructure construction to support a nationwide information superhighway are estimated, including direct spending, the creation of construction-related jobs, and other potential social impacts. The findings indicate that state regulatory policies will have an impact on the speed and cost of infrastructure development; however, court rulings have limited the likelihood of federal intervention into state agencies that remain intractable. For the United States, the construction of a fiber-optic telecommunications network will represent direct spending of more than $400 billion and create more than one million direct and indirect temporary jobs. The paper concludes with a call for additional study on the social and economic impacts of a telecommunications superhighway.
  6. Scull, C.; Milewski, A.; Millen, D.: Envisioning the Web : user expectations about the cyber-experience (1999) 0.00
    
    Abstract
    An exploratory research project was undertaken to understand how novice college students and Web-savvy librarians initially envisioned the Internet and how these representations changed over time and with experience. Users' representations of the Internet typically contained few meaningful reference points except for "landmarks" such as search sites and frequently visited sites. For many of the users, the representation was largely procedural, and therefore organized primarily by time. All novice users conceptualized search engines as literally searching the entire Internet when a query was issued. Web-savvy librarians understood the limitations of search engines better, but still expected search engines to follow familiar organizational schemes and to indicate their cataloguing system. Although all users initially approached the Internet with high expectations of information credibility, expert users learned early on that "anyone can publish." In response to the lack of clear credibility conventions, librarians applied the same criteria they used with traditional sources. However, novice users retained high credibility expectations because their exposure was limited to the subscription-based services within their college library. Finally, during an assigned search task, new users expected "step by step" instructions and self-evident cues to interaction. They were also overwhelmed and confused by the amount of information "help" displayed, and became impatient when a context-appropriate solution to their problem was not immediately offered.
  7. Thelwall, M.: Conceptualizing documentation on the Web : an evaluation of different heuristic-based models for counting links between university Web sites (2002) 0.00
    
    Abstract
    All known previous Web link studies have used the Web page as the primary indivisible source document for counting purposes. Arguments are presented to explain why this is not necessarily optimal and why other alternatives have the potential to produce better results. This is despite the fact that individual Web files are often the only choice if search engines are used for raw data, and are the easiest basic Web unit to identify. The central issue is defining the Web "document": that which should comprise the single indissoluble unit of coherent material. Three alternative heuristics are defined for the educational arena, based upon the directory, the domain, and the whole university site. These are then compared by implementing them on a set of 108 UK university institutional Web sites, under the assumption that a more effective heuristic will tend to produce results that correlate more highly with institutional research productivity. It was discovered that the domain and directory models were able to successfully reduce the impact of anomalous linking behavior between pairs of Web sites, with the latter being the method of choice. Reasons are then given as to why a document model on its own cannot eliminate all anomalies in Web linking behavior. Finally, the results from all models give a clear confirmation of the very strong association between the research productivity of a UK university and the number of incoming links from its peers' Web sites.
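    The page, directory, and domain heuristics described above amount to de-duplicating link sources at different granularities. A minimal sketch (the key construction is our own illustration, not the paper's implementation):

```python
from urllib.parse import urlsplit

def collapse(links, model="directory"):
    """Count links between sites once per source 'document', where the
    document unit is chosen by the heuristic: page, directory, or domain.
    links: iterable of (source_url, target_site) pairs."""
    seen = set()
    for url, target in links:
        parts = urlsplit(url)
        if model == "page":
            key = (url, target)
        elif model == "directory":        # strip the filename
            key = (parts.netloc + parts.path.rsplit("/", 1)[0], target)
        else:                             # "domain"
            key = (parts.netloc, target)
        seen.add(key)
    return len(seen)

links = [
    ("http://www.a.ac.uk/dept/p1.html", "b.ac.uk"),
    ("http://www.a.ac.uk/dept/p2.html", "b.ac.uk"),   # same directory as p1
    ("http://www.a.ac.uk/other/p3.html", "b.ac.uk"),  # different directory
]
# page model counts 3 links; directory model 2; domain model 1
```

    Under the directory model the two `/dept/` pages count as a single linking document, which is how anomalous bursts of links between one pair of sites get damped.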
  8. Özel, S.A.; Altingövde, I.S.; Ulusoy, Ö.; Özsoyoglu, G.; Özsoyoglu, Z.M.: Metadata-Based Modeling of Information Resources on the Web (2004) 0.00
    
    Abstract
    This paper deals with the problem of modeling Web information resources using expert knowledge and personalized user information for improved Web searching capabilities. We propose a "Web information space" model, which is composed of Web-based information resources (HTML/XML [Hypertext Markup Language/Extensible Markup Language] documents on the Web), expert advice repositories (domain-expert-specified metadata for information resources), and personalized information about users (captured as user profiles that indicate users' preferences about experts as well as users' knowledge about topics). Expert advice, the heart of the Web information space model, is specified using topics and relationships among topics (called metalinks), along the lines of the recently proposed topic maps. Topics and metalinks constitute metadata that describe the contents of the underlying HTML/XML Web resources. The metadata specification process is semiautomated, and it exploits XML DTDs (Document Type Definitions) to allow domain-expert-guided mapping of DTD elements to topics and metalinks. The expert advice is stored in an object-relational database management system (DBMS). To demonstrate the practicality and usability of the proposed Web information space model, we created a prototype expert advice repository of more than one million topics/metalinks for the DBLP (Database and Logic Programming) Bibliography data set. We also present a query interface that provides sophisticated querying facilities for DBLP Bibliography resources using the expert advice repository.
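    The topic-and-metalink metadata described above can be pictured as a small typed graph over topics. A toy sketch (the ids and the `coversTopic` link type are invented for illustration; the real model is far richer):

```python
# Topics: id -> human-readable name (illustrative ids, not the paper's)
topics = {"t1": "databases", "t2": "logic programming", "t3": "DBLP Bibliography"}

# Metalinks: typed relationships among topics
metalinks = [("t3", "coversTopic", "t1"), ("t3", "coversTopic", "t2")]

def related(topic_id, link_type):
    """Follow metalinks of one type outward from a topic."""
    return [dst for src, typ, dst in metalinks
            if src == topic_id and typ == link_type]
```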
  9. Kwon, N.: Community networks : community capital or merely an affordable Internet access tool? (2005) 0.00
    
    Abstract
    In this study a perceived gap between the ideal and the reality of a community network (CN) is examined. Most proponents of CNs state that building a better physical community is their major service goal. However, there has been a concern that citizens might use the service simply as a means to connect to the Internet rather than as a means to connect to their communities. Using a survey research method (n = 213), users' perceptions of the community aspects of CN service and the influence of such perceptions on their use were investigated. User demographics and alternative service accessibility were also examined as predictors of use. The present study found that the respondents were using the service mainly for general Internet features. More than two thirds of the respondents were not aware of the community content aspect of the service. Approximately 20% of respondents were identified as those whose perceptions of the community aspects actually affected their use of the service. They were both aware of community contents and using an additional Internet service provider. Findings suggest that the providers did not fully communicate the community aspects of the service to the users, while user perception of community aspects is a key to further promotion of the service.
  10. Hupfer, M.E.; Detlor, B.: Gender and Web information seeking : a self-concept orientation model (2006) 0.00
    
    Abstract
    Adapting the consumer behavior selectivity model to the Web environment, this paper's key contribution is the introduction of a self-concept orientation model of Web information seeking. This model, which addresses gender, effort, and information content factors, questions the commonly assumed equivalence of sex and gender by specifying the measurement of gender-related self-concept traits known as self- and other-orientation. Regression analyses identified associations between self-orientation, other-orientation, and self-reported search frequencies for content with identical subject domain (e.g., medical information, government information) and differing relevance (i.e., important to the individual personally versus important to someone close to him or her). Self- and other-orientation interacted such that when individuals were highly self-oriented, their frequency of search for both self- and other-relevant information depended on their level of other-orientation. Specifically, high-self/high-other individuals, with a comprehensive processing strategy, searched most often, whereas high-self/low-other respondents, with an effort-minimization strategy, reported the lowest search frequencies. This interaction pattern was even more pronounced for other-relevant information seeking. We found no sex differences in search frequency for either self-relevant or other-relevant information.
  11. Williams, P.; Nicholas, D.: ¬The migration of news to the web (1999) 0.00
    
    Abstract
    Virtually all UK and US newspapers and the vast majority of regional and even local titles are now represented on the web. Indeed, the Yahoo news and media directory lists no fewer than 114 UK newspapers online (as of November 1998). Broadcasters from the BBC and Sky downwards, and all the famous news agencies (Press Association, Reuters, etc.), also boast comprehensive Internet services. With such an array of sources available, the future of mass access to the Internet, possibly via TV terminals, suggests that more and more people may soon opt for this medium to receive the bulk of their news information. This paper gives an overview of the characteristics of the medium, illustrated with examples of how these are being used to both facilitate and enhance the content and dissemination of the news product. These characteristics include hyperlinking to external information sources, providing archive access to past reports, reader interactivity, and other features not possible to incorporate into more passive media such as the hardcopy newspaper. From a survey of UK and US news providers, it is clear that American newspapers are exploiting the advantages of web information dissemination to a far greater extent than their British counterparts, with the notable exception of The Electronic Telegraph. UK broadcasters, however, generally appear to have adapted better to the new medium, with the BBC rivaling CNN in the depth and extent of its news coverage, use of links, and other elements.
  12. D'Elia, G.; Abbas, J.; Bishop, K.; Jacobs, D.; Rodger, E.J.: ¬The impact of youth's use of the internet on their use of the public library (2007) 0.00
    
    Abstract
    A survey of 4,032 youth in grades 5 through 12 was conducted to determine the impact youth's use of the Internet was having on their use of the public library. Results indicated that 100% of the youth had access to the Internet from one or more locations, and that although one quarter of the youth accessed the Internet at the public library, the public library was the least frequently used source of Internet access. For youth without Internet access at home, the public library was also the least used alternate source of access. Approximately 69% of the youth reported that they had visited a public library during the school year. Having Internet access at home did not affect whether or not youth visited the library; however, Internet access at home appears to have affected the frequency with which youth visit the library. Youth without Internet access at home visited the library more frequently, whereas youth with Internet access at home visited the library less frequently. Use of the Internet also appeared to have diminished youth's need to use the public library as a source of personal information; however, use of the Internet appeared not to have affected their use of the public library for school work or for recreation. Among youth, use of the Internet and use of the public library appear to be complementary activities.
  13. Li, R.: ¬The representation of national political freedom on Web interface design : the indicators (2009) 0.00
    
    Abstract
    This study is designed to validate 10 Power Distance indicators identified from previous research on cultural dimensions, to establish a measurement for determining a country's national political freedom as represented in Web content and interface design. Two coders performed content analysis on 156 college/university Web sites selected from 39 countries. One-way analysis of variance was applied to each of the 10 proposed indicators to detect statistically significant differences among the means of the three freedom groups (free-country group, partly-free-country group, and not-free-country group). The results indicated that 6 of the 10 proposed indicators could be used to measure a country's national political freedom on Web interface design. The seventh indicator, symmetric layout, demonstrated a negative correlation between the freedom level and the Web representation of Power Distance. The last three proposed indicators failed to show any significant differences among the treatment means, and there are no clear trend patterns for the treatment means of the three freedom groups. By examining national political freedom as represented on Web pages, this study not only provides an insight into cultural dimensions and Web interface design but also advances our knowledge in sociological and cultural studies of the Web.
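    The one-way ANOVA applied to each indicator reduces to comparing between-group and within-group variance. A minimal pure-Python sketch with hypothetical indicator scores (not the study's data):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k groups (pure Python)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # variance of group means around the grand mean, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # variance of observations around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical indicator scores for sites from the three freedom groups
free, partly_free, not_free = [4, 5, 6, 5], [3, 4, 3, 4], [1, 2, 1, 2]
F = one_way_anova_F([free, partly_free, not_free])  # large F => group means differ
```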
  14. Fu, T.; Abbasi, A.; Chen, H.: ¬A focused crawler for Dark Web forums (2010) 0.00
    
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
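    The incremental crawler with recall improvement can be caricatured as a revisit queue in which pages that changed on the last visit come due again sooner. A rough sketch (the intervals and the change test are our assumptions, not the authors' design):

```python
import heapq

class IncrementalCrawler:
    """Sketch of an incremental revisit policy: URLs come back on a
    schedule, and pages that changed last time are revisited sooner
    (a crude stand-in for a recall-improvement mechanism)."""

    def __init__(self, base_interval=3600.0):
        self.base = base_interval
        self.heap = []        # (next_due_time, url)
        self.last_hash = {}   # url -> hash of last fetched content

    def add(self, url):
        heapq.heappush(self.heap, (0.0, url))  # due immediately

    def crawl_once(self, fetch, now):
        due, url = heapq.heappop(self.heap)
        if due > now:                          # nothing due yet
            heapq.heappush(self.heap, (due, url))
            return None
        content = fetch(url)
        changed = self.last_hash.get(url) != hash(content)
        self.last_hash[url] = hash(content)
        # changed pages come due sooner; stable pages back off
        interval = self.base * (0.5 if changed else 2.0)
        heapq.heappush(self.heap, (now + interval, url))
        return url, changed
```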
  15. Kim, J.H.; Barnett, G.A.; Park, H.W.: ¬A hyperlink and issue network analysis of the United States Senate : a rediscovery of the Web as a relational and topical medium (2010) 0.00
    
    Abstract
    Politicians' Web sites have been considered a medium for organizing, mobilizing, and agenda-setting, but the extant literature lacks a systematic approach to interpreting the Web sites of senators - a new medium for political communication. This study classifies the role of political Web sites into relational (hyperlinking) and topical (shared-issues) aspects. The two aspects may be viewed from a social-embeddedness perspective and the three facets suggested by K. Foot and S. Schneider (2002). This study employed network analysis, a set of research procedures for identifying structures in social systems based on the relations among the system's components rather than the attributes of individuals. Hyperlink and issue data were gathered from the United States Senate Web site and Yahoo. Major findings include: (a) the hyperlinks are more targeted at Democratic senators than at Republicans and are a means of communication for senators and users; (b) the issue network found on the Web is used for discussing public agendas and is more highly utilized by Republican senators; (c) the hyperlink and issue networks are correlated; and (d) social relationships and issue ecologies can be effectively detected by these two networks. The need for further research is addressed.
  16. Pereira, D.A.; Ribeiro-Neto, B.; Ziviani, N.; Laender, A.H.F.; Gonçalves, M.A.: ¬A generic Web-based entity resolution framework (2011) 0.00
    
    Abstract
    Web data repositories usually contain references to thousands of real-world entities from multiple sources. It is not uncommon that multiple entities share the same label (polysemes) and that distinct label variations are associated with the same entity (synonyms), which frequently leads to ambiguous interpretations. Further, spelling variants, acronyms, abbreviated forms, and misspellings compound to worsen the problem. Solving this problem requires identifying which labels correspond to the same real-world entity, a process known as entity resolution. One approach to solve the entity resolution problem is to associate an authority identifier and a list of variant forms with each entity - a data structure known as an authority file. In this work, we propose a generic framework for implementing a method for generating authority files. Our method uses information from the Web to improve the quality of the authority file and, because of that, is referred to as WER (Web-based Entity Resolution). Our contribution here is threefold: (a) we discuss how to implement the WER framework, which is flexible and easy to adapt to new domains; (b) we run extended experimentation with our WER framework to show that it outperforms selected baselines; and (c) we compare the results of a specialized solution for author name resolution with those produced by the generic WER framework, and show that the WER results remain competitive.
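The authority-file data structure that this abstract describes (an authority identifier paired with the entity's known variant forms) can be sketched roughly as follows. This is an illustrative simplification, not the WER implementation; the class, method names, and the `ENT-001` identifier are all hypothetical.

```python
class AuthorityFile:
    """Toy authority file: canonical entity IDs plus their variant labels."""

    def __init__(self):
        self._entities = {}  # authority id -> set of variant forms
        self._index = {}     # normalized variant -> authority id

    @staticmethod
    def _norm(label):
        # Normalize case and whitespace so trivial variants match.
        return " ".join(label.lower().split())

    def add(self, auth_id, variants):
        forms = self._entities.setdefault(auth_id, set())
        for v in variants:
            forms.add(v)
            self._index[self._norm(v)] = auth_id

    def resolve(self, label):
        """Return the authority id for a variant label, or None if unknown."""
        return self._index.get(self._norm(label))


# Hypothetical usage: two name variants resolve to one authority record.
af = AuthorityFile()
af.add("ENT-001", ["A. H. F. Laender", "Alberto H. F. Laender"])
```

A real system would add similarity matching (e.g. for misspellings) on top of this exact-lookup skeleton, which is where Web evidence, as the abstract argues, can help disambiguate polysemes and synonyms.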
  17. Bae, Y.; Lee, H.: Sentiment analysis of twitter audiences : measuring the positive or negative influence of popular twitterers (2012)
    
    Abstract
    Twitter is a popular microblogging service that is used to read and write millions of short messages on any topic within a 140-character limit. Popular or influential users tweet their status and are retweeted, mentioned, or replied to by their audience. Sentiment analysis of the tweets by popular users and their audience reveals whether the audience is favorable to popular users. We analyzed over 3,000,000 tweets mentioning or replying to the 13 most influential users to determine audience sentiment. Twitter messages reflect the landscape of sentiment toward its most popular users. We used the sentiment analysis technique as a valid popularity indicator or measure. First, we distinguished between the positive and negative audiences of popular users. Second, we found that the sentiments expressed in the tweets by popular users influenced the sentiment of their audience. Third, from the above two findings we developed a positive-negative measure for this influence. Finally, using a Granger causality analysis, we found that the time-series-based positive-negative sentiment change of the audience was related to the real-world sentiment landscape of popular users. We believe that the positive-negative influence measure between popular users and their audience provides new insights into the influence of a user and is related to the real world.
  18. Son, J.; Lee, J.; Larsen, I.; Nissenbaum, K.R.; Woo, J.: Understanding the uncertainty of disaster tweets and its effect on retweeting : the perspectives of uncertainty reduction theory and information entropy (2020)
    
    Abstract
    The rapid and wide dissemination of up-to-date, localized information is a central issue during disasters. Owing to its original 140-character length limit, Twitter provides its users with quick-posting and easy-forwarding features that facilitate the timely dissemination of warnings and alerts. However, a concern arises with respect to the terseness of tweets, which restricts the amount of information conveyed in a tweet and thus increases a tweet's uncertainty. We tackle this concern by proposing entropy as a measure of a tweet's uncertainty. Based on the perspectives of Uncertainty Reduction Theory (URT), we theorize that the more uncertain a disaster tweet's information, the higher its entropy, which will lead to a lower retweet count. By leveraging statistical and predictive analyses, we provide evidence supporting that entropy validly and reliably assesses the uncertainty of a tweet. This study contributes to improving our understanding of information propagation on Twitter during disasters. Academically, we offer entropy as a new variable for measuring a tweet's uncertainty, an important factor influencing the retweeting of disaster tweets. Entropy also plays a critical role in better understanding URLs and emoticons as means of conveying information. Practically, this research suggests a set of guidelines for effectively crafting disaster messages on Twitter.
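The entropy measure this abstract proposes can be illustrated with a generic Shannon-entropy sketch over a tweet's word distribution. This is a simplification for intuition only; the paper's exact formulation (tokenization, treatment of URLs and emoticons) may differ, and the sample tweets are invented.

```python
import math
from collections import Counter

def tweet_entropy(text):
    """Shannon entropy (in bits) of the tweet's word-frequency distribution."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive tweet concentrates probability mass on few words (low entropy),
# while a varied tweet spreads it across many words (high entropy).
low = tweet_entropy("flood warning flood warning flood warning")
high = tweet_entropy("flash flood on main street avoid downtown until evening")
```

Under the paper's theorizing, a higher-entropy (more uncertain) disaster tweet would be associated with a lower retweet count, so a measure like this could serve as a predictor in a retweeting model.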
