Search (240 results, page 2 of 12)

  • theme_ss:"Internet"
  • year_i:[2010 TO 2020}
  1. Höhn, S.: Stalin's bathroom in Wikipedia : the makers of the Internet encyclopedia discuss responsibility and transparency. Brockhaus, by contrast, is returning to a printed edition. (2012) 0.01
    
    Content
    The new publisher of the Brockhaus, a subsidiary of Bertelsmann, has meanwhile announced a return to the printed encyclopedia. The 22nd edition is scheduled to appear around the beginning of 2015. In times of virtual information overload there is a need for orientation and for guidance on relevance, says managing director Christoph Hünermann. Of all companies, it was Bertelsmann that in 2008 printed a Wikipedia encyclopedia of almost 1,000 pages containing the 50,000 most frequently searched terms. As a precaution, an editorial team of experts checked the entries beforehand, but is said to have found hardly any errors.
    Source
    Frankfurter Rundschau. No. 76, 29 March 2012, pp. 22-23
    Type
    a
  2. Social Media and Web Science : the Web as a living space, Düsseldorf, 22-23 March 2012, proceedings, ed. by Marlies Ockenfeld, Isabella Peters and Katrin Weller. DGI, Frankfurt am Main 2012 (2012) 0.01
    
  3. Firnkes, M.: Brave new world : the content of the future will be determined by algorithms (2015) 0.01
    
    Date
    5.7.2015 22:02:31
  4. Song, L.; Tso, G.; Fu, Y.: Click behavior and link prioritization : multiple demand theory application for web improvement (2019) 0.00
    
    Abstract
    A common problem encountered in Web improvement is how to arrange the homepage links of a Website. This study analyses Web information search behavior, and applies the multiple demand theory to propose two models to help a visitor allocate time for multiple links. The process of searching is viewed as a formal choice problem in which the visitor attempts to choose from multiple Web links to maximize the total utility. The proposed models are calibrated to clickstream data collected from an educational institute over a seven-and-a-half month period. Based on the best fit model, a metric, utility loss, is constructed to measure the performance of each link and arrange them accordingly. Empirical results show that the proposed metric is highly efficient for prioritizing the links on a homepage and the methodology can also be used to study the feasibility of introducing a new function in a Website.
    Type
    a
  5. Ghosh, J.; Kshitij, A.: ¬An integrated examination of collaboration coauthorship networks through structural cohesion, holes, hierarchy, and percolating clusters (2014) 0.00
    
    Abstract
    Structural cohesion, hierarchy, holes, and percolating clusters share a complementary existence in many social networks. Although the individual influences of these attributes on the structure and function of a network have been analyzed in detail, a more accurate picture emerges in proper perspective and context only when research methods are employed to integrate their collective impacts on the network. In a major research project, we have undertaken this examination. This paper presents an extract from this project, using a global network assessment of these characteristics. We apply our methods to analyze the collaboration networks of a subset of researchers in India through their coauthored papers in peer-reviewed journals and conference proceedings in management science, including related areas of information technology and economics. We find the Indian networks to be currently suffering from a high degree of fragmentation, which severely restricts researchers' long-range connectivities in the networks. Comparisons are made with networks of a similar sample of researchers working in the United States.
    Type
    a
  6. Gorgeon, A.; Swanson, E.B.: Web 2.0 according to Wikipedia : capturing an organizing vision (2011) 0.00
    
    Abstract
    Is Web 2.0 more than a buzzword? In recent years, technologists and others have heatedly debated this question, even in Wikipedia, itself an example of Web 2.0. From the perspective of the present study, Web 2.0 may indeed be a buzzword, but more substantially it is also an example of an organizing vision that drives a community's discourse about certain new Information Technology (IT), serving to advance the technology's adoption and diffusion. Every organizing vision has a career that reflects its construction over time, and in the present study we examine Web 2.0's career as captured in its Wikipedia entry over a 5-year period, finding that it falls into three distinct periods termed Germination, Growth, and Maturation. The findings reveal how Wikipedia, as a discourse vehicle, treats new IT and its many buzzwords, and more broadly captures the careers of their organizing visions. They also further our understanding of Wikipedia as a new encyclopedic form, providing novel insights into its uses, its community of contributors, and their editing activities, as well as the dynamics of article construction.
    Type
    a
  7. Gauducheau, N.: ¬An exploratory study of the information-seeking activities of adolescents in a discussion forum (2016) 0.00
    
    Abstract
    The aim of this study is to understand how teenagers use Internet forums to search for information. The activities of asking for and providing information in a forum were explored, and a set of messages extracted from a French forum targeting adolescents was analyzed. Results show that the messages initiating the threads are often requests for information. Teenagers mainly ask for peers' opinions on personal matters and specific verifiable information. The discussions following these requests take the form of an exchange of advice (question/answer) or a coconstruction of the final answer between the participants (with assessments of participants' responses, requests for explanations, etc.). The results suggest that discussion forums present different advantages for adolescents' information-seeking activities. The first is that this social medium allows finding specialized information on topics specific to this age group. The second is that the collaborative aspect of information seeking in a forum allows these adolescents to overcome difficulties commonly associated with the search process (making a precise request, evaluating a result).
    Type
    a
  8. Mahesh, K.; Karanth, P.: ¬A novel knowledge organization scheme for the Web : superlinks with semantic roles (2012) 0.00
    
    Abstract
    We discuss the needs of a knowledge organization scheme for supporting Web-based software applications. We show how it differs from traditional knowledge organization schemes due to the virtual, dynamic, ad-hoc, user-specific and application-specific nature of Web-based knowledge. The sheer size of Web resources also adds to the complexity of organizing knowledge on the Web. As such, a standard, global scheme such as a single ontology for classifying and organizing all Web-based content is unrealistic. There is nevertheless a strong and immediate need for effective knowledge organization schemes to improve the efficiency and effectiveness of Web-based applications. In this context, we propose a novel knowledge organization scheme wherein concepts in the ontology of a domain are semantically interlinked with specific pieces of Web-based content using a rich hyper-linking structure known as Superlinks with well-defined semantic roles. We illustrate how such a knowledge organization scheme improves the efficiency and effectiveness of a Web-based e-commerce retail store.
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
    Type
    a
  9. Kong, S.; Ye, F.; Feng, L.; Zhao, Z.: Towards the prediction problems of bursting hashtags on Twitter (2015) 0.00
    
    Abstract
    Hundreds of thousands of hashtags are generated every day on Twitter. Only a few will burst and become trending topics. In this article, we provide the definition of a bursting hashtag and conduct a systematic study of a series of challenging prediction problems that span the entire life cycles of bursting hashtags. Around the problem of "how to build a system to predict bursting hashtags," we explore different types of features and present machine learning solutions. On real data sets from Twitter, experiments are conducted to evaluate the effectiveness of the proposed solutions and the contributions of features.
    Type
    a
  10. Savolainen, R.: ¬The structure of argument patterns on a social Q&A site (2012) 0.00
    
    Abstract
    This study investigates the argument patterns in Yahoo! Answers, a major question and answer (Q&A) site. Mainly drawing on the ideas of Toulmin (), argument pattern is conceptualized as a set of 5 major elements: claim, counterclaim, rebuttal, support, and grounds. The combinations of these elements result in diverse argument patterns. Failed opening consists of an initial claim only, whereas nonoppositional argument pattern also includes indications of support. Oppositional argument pattern contains the elements of counterclaim and rebuttal. Mixed argument pattern entails all 5 elements. The empirical data were gathered by downloading from Yahoo! Answers 100 discussion threads discussing global warming, a controversial topic providing a fertile ground for arguments for and against. Of the argument patterns, failed openings were most frequent, followed by oppositional, nonoppositional, and mixed patterns. In most cases, the participants grounded their arguments by drawing on personal beliefs and facts. The findings suggest that oppositional and mixed argument patterns provide more opportunities for the assessment of the quality and credibility of answers, as compared to failed openings and nonoppositional argument patterns.
    Type
    a
  11. Kim, J.H.; Barnett, G.A.; Park, H.W.: ¬A hyperlink and issue network analysis of the United States Senate : a rediscovery of the Web as a relational and topical medium (2010) 0.00
    
    Abstract
    Politicians' Web sites have been considered a medium for organizing, mobilizing, and agenda-setting, but extant literature lacks a systematic approach to interpret the Web sites of senators - a new medium for political communication. This study classifies the role of political Web sites into relational (hyperlinking) and topical (shared-issues) aspects. The two aspects may be viewed from a social embeddedness perspective and three facets, as K. Foot and S. Schneider (2002) suggested. This study employed network analysis, a set of research procedures for identifying structures in social systems, as the basis of the relations among the system's components rather than the attributes of individuals. Hyperlink and issue data were gathered from the United States Senate Web site and Yahoo. Major findings include: (a) The hyperlinks are more targeted at Democratic senators than at Republicans and are a means of communication for senators and users; (b) the issue network found from the Web is used for discussing public agendas and is more highly utilized by Republican senators; (c) the hyperlink and issue networks are correlated; and (d) social relationships and issue ecologies can be effectively detected by these two networks. The need for further research is addressed.
    Type
    a
  12. Sood, S.O.; Churchill, E.F.; Antin, J.: Automatic identification of personal insults on social news sites (2012) 0.00
    
    Abstract
    As online communities grow and the volume of user-generated content increases, the need for community management also rises. Community management has three main purposes: to create a positive experience for existing participants, to promote appropriate, socionormative behaviors, and to encourage potential participants to make contributions. Research indicates that the quality of content a potential participant sees on a site is highly influential; off-topic, negative comments with malicious intent are a particularly strong barrier to participation or set the tone for encouraging similar contributions. A problem for community managers, therefore, is the detection and elimination of such undesirable content. As a community grows, this undertaking becomes more daunting. Can an automated system aid community managers in this task? In this paper, we address this question through a machine learning approach to automatic detection of inappropriate negative user contributions. Our training corpus is a set of comments from a news commenting site that we tasked Amazon Mechanical Turk workers with labeling. Each comment is labeled for the presence of profanity, insults, and the object of the insults. Support vector machines trained on these data are combined with relevance and valence analysis systems in a multistep approach to the detection of inappropriate negative user contributions. The system shows great potential for semiautomated community management.
    Type
    a
  13. Aksoy, C.; Can, F.; Kocberber, S.: Novelty detection for topic tracking (2012) 0.00
    
    Abstract
    Multisource web news portals provide various advantages such as richness in news content and an opportunity to follow developments from different perspectives. However, in such environments, news variety and quantity can have an overwhelming effect. New-event detection and topic-tracking studies address this problem. They examine news streams and organize stories according to their events; however, several tracking stories of an event/topic may contain no new information (i.e., no novelty). We study the novelty detection (ND) problem on the tracking news of a particular topic. For this purpose, we build a Turkish ND test collection called BilNov-2005 and propose the usage of three ND methods: a cosine-similarity (CS)-based method, a language-model (LM)-based method, and a cover-coefficient (CC)-based method. For the LM-based ND method, we show that a simpler smoothing approach, Dirichlet smoothing, can have similar performance to a more complex smoothing approach, Shrinkage smoothing. We introduce a baseline that shows the performance of a system with random novelty decisions. In addition, a category-based threshold learning method is used for the first time in ND literature. The experimental results show that the LM-based ND method significantly outperforms the CS- and CC-based methods, and category-based threshold learning achieves promising results when compared to general threshold learning.
    Type
    a
  14. Thelwall, M.; Goriunova, O.; Vis, F.; Faulkner, S.; Burns, A.; Aulich, J.; Mas-Bleda, A.; Stuart, E.; D'Orazio, F.: Chatting through pictures : a classification of images tweeted in one week in the UK and USA (2016) 0.00
    
    Abstract
    Twitter is used by a substantial minority of the populations of many countries to share short messages, sometimes including images. Nevertheless, despite some research into specific images, such as selfies, and a few news stories about specific tweeted photographs, little is known about the types of images that are routinely shared. In response, this article reports a content analysis of random samples of 800 images tweeted from the UK or USA during a week at the end of 2014. Although most images were photographs, a substantial minority were hybrid or layered image forms: phone screenshots, collages, captioned pictures, and pictures of text messages. About half were primarily of one or more people, including 10% that were selfies, but a wide variety of other things were also pictured. Some of the images were for advertising or to share a joke but in most cases the purpose of the tweet seemed to be to share the minutiae of daily lives, performing the function of chat or gossip, sometimes in innovative ways.
    Type
    a
  15. Doran, D.; Gokhale, S.S.: ¬A classification framework for web robots (2012) 0.00
    
    Abstract
    The behavior of modern web robots varies widely when they crawl for different purposes. In this article, we present a framework to classify these web robots from two orthogonal perspectives, namely, their functionality and the types of resources they consume. Applying the classification framework to a year-long access log from the UConn SoE web server, we present trends that point to significant differences in their crawling behavior.
    Type
    a
  16. Bhavnani, S.K.; Peck, F.A.: Scatter matters : regularities and implications for the scatter of healthcare information on the Web (2010) 0.00
    
    Abstract
    Despite the development of huge healthcare Web sites and powerful search engines, many searchers end their searches prematurely with incomplete information. Recent studies suggest that users often retrieve incomplete information because of the complex scatter of relevant facts about a topic across Web pages. However, little is understood about regularities underlying such information scatter. To probe regularities within the scatter of facts across Web pages, this article presents the results of two analyses: (a) a cluster analysis of Web pages that reveals the existence of three page clusters that vary in information density and (b) a content analysis that suggests the role each of the above-mentioned page clusters plays in providing comprehensive information. These results provide implications for the design of Web sites, search tools, and training to help users find comprehensive information about a topic and for a hypothesis describing the underlying mechanisms causing the scatter. We conclude by briefly discussing how the analysis of information scatter, at the granularity of facts, complements existing theories of information-seeking behavior.
    Type
    a
  17. Villela Dantas, J.R.; Muniz Farias, P.F.: Conceptual navigation in knowledge management environments using NavCon (2010) 0.00
    
    Abstract
    This article presents conceptual navigation and NavCon, an architecture that implements this navigation in World Wide Web pages. NavCon architecture makes use of ontology as metadata to contextualize user search for information. Based on ontologies, NavCon automatically inserts conceptual links in Web pages. By using these links, the user may navigate in a graph representing ontology concepts and their relationships. By browsing this graph, it is possible to reach documents associated with the user desired ontology concept. We call this Web navigation, supported by ontology concepts, conceptual navigation. Conceptual navigation is a technique to browse Web sites within a context. The context filters relevant retrieved information. The context also drives user navigation through paths that meet his needs. A company may implement conceptual navigation to improve user search for information in a knowledge management environment. We suggest that the use of an ontology to conduct navigation in an Intranet may help the user to have a better understanding about the knowledge structure of the company.
    Type
    a
  18. Burford, S.: Complexity and the practice of web information architecture (2011)
    Abstract
    This article describes the outcomes of research that examined the practice of web information architecture (IA) in large organizations. Using a grounded theory approach, seven large organizations were investigated and the data were analyzed for emerging themes and concepts. The research finds that the practice of web IA is characterized by unpredictability, multiple perspectives, and a need for responsiveness, agility, and negotiation. This article claims that web IA occurs in a complex environment and has emergent, self-organizing properties, and that there is value in examining the practice as a complex adaptive system. Under this metaphor, a pre-determined, structured methodology that delivers a documented, enduring information design for the web is found inadequate; dominant and traditional thinking and practice in the organization of information are challenged.
    Type
    a
  19. Wijnhoven, F.: The Hegelian inquiring system and a critical triangulation tool for the Internet information slave : a design science study (2012)
    Abstract
    This article discusses people's understanding of reality through representations from the Internet. The Hegelian inquiring system is used here to explain the nature of informing on the Internet as activities of information masters to influence information slaves' opinions and as activities of information slaves to become well informed. The key Hegelian assumption regarding information is that information has no value independent of the interests and worldviews (theses) it supports. As part of the dialectic process of generating syntheses, we propose a role for information science in offering methods to critically evaluate the master's information and thereby develop an opinion (thesis) independent of the master's power. To this end we offer multiple methods for information criticism, collectively named triangulation, which may help users evaluate a master's evidence. This article also presents a prototype of a Hegelian information triangulator tool for information slaves (i.e., nonexperts). The article concludes with suggestions for further research on informative triangulation.
    Type
    a
  20. Zubiaga, A.; Spina, D.; Martínez, R.; Fresno, V.: Real-time classification of Twitter trends (2015)
    
    Abstract
    In this work, we explore the types of triggers that spark trends on Twitter, introducing a typology with the following four types: news, ongoing events, memes, and commemoratives. While previous research has analyzed trending topics over the long term, we look at the earliest tweets that produce a trend, with the aim of categorizing trends early on. This allows us to provide a filtered subset of trends to end users. We experiment with a set of straightforward, language-independent features based on the social spread of trends and categorize the trends using the typology. Our method provides an efficient way to accurately categorize trending topics without the need for external data, enabling news organizations to discover breaking news in real time, or to quickly identify viral memes that might inform marketing decisions, among other applications. The analysis of social features also reveals patterns associated with each type of trend, such as tweets about ongoing events being shorter, as many were likely sent from mobile devices, or memes having more retweets originating from a few trend-setters.
    Type
    a

Languages

  • e 159
  • d 80

Types

  • a 221
  • el 26
  • m 14
  • s 1
