Search (120 results, page 1 of 6)

  • theme_ss:"Internet"
  • year_i:[2010 TO 2020}
  1. Spink, A.; Danby, S.; Mallan, K.; Butler, C.: Exploring young children's web searching and technoliteracy (2010) 0.15
    0.15437317 = product of:
      0.2058309 = sum of:
        0.100764915 = weight(_text_:web in 3623) [ClassicSimilarity], result of:
          0.100764915 = score(doc=3623,freq=24.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6245262 = fieldWeight in 3623, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3623)
        0.06598687 = weight(_text_:search in 3623) [ClassicSimilarity], result of:
          0.06598687 = score(doc=3623,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3840117 = fieldWeight in 3623, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3623)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 3623) [ClassicSimilarity], result of:
              0.07815824 = score(doc=3623,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 3623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3623)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
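    A note on the scoring: each relevance breakdown in this list is ordinary Lucene ClassicSimilarity (TF-IDF) arithmetic and can be recomputed from the numbers shown. A minimal sketch in Python (the function name is ours; the constants are read straight off the explain tree above):

        import math

        def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
            tf = math.sqrt(freq)                             # term-frequency component
            idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # inverse document frequency
            query_weight = idf * query_norm                  # query-side normalization
            field_weight = tf * idf * field_norm             # document-side weight
            return query_weight * field_weight

        # the 'web' clause of hit 1 (doc 3623)
        s = term_score(freq=24.0, doc_freq=4597, max_docs=44218,
                       field_norm=0.0390625, query_norm=0.049439456)
        print(s)  # ~0.100765, matching the 0.100764915 shown above

        # a hit's total is coord(matched clauses / total clauses) times the sum
        # of its clause scores, e.g. 0.15437317 = 0.75 * 0.2058309 for hit 1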
    
    Abstract
    Purpose - This paper aims to report findings from an exploratory study investigating the web interactions and technoliteracy of children in the early childhood years. Previous research has studied aspects of older children's technoliteracy and web searching; however, few studies have analyzed web search data from children younger than six years of age. Design/methodology/approach - The study explored the Google web searching and technoliteracy of young children who are enrolled in a "preparatory classroom" or kindergarten (the year before young children begin compulsory schooling in Queensland, Australia). Young children were video- and audio-taped while conducting Google web searches in the classroom. The data were qualitatively analysed to understand the young children's web search behaviour. Findings - The findings show that young children engage in complex web searches, including keyword searching and browsing, query formulation and reformulation, relevance judgments, successive searches, information multitasking and collaborative behaviours. The study results provide significant initial insights into young children's web searching and technoliteracy. Practical implications - The use of web search engines by young children is an important research area with implications for educators and web technologies developers. Originality/value - This is the first study of young children's interaction with a web search engine.
  2. Stuart, D.: Web metrics for library and information professionals (2014) 0.15
    0.14830285 = product of:
      0.19773714 = sum of:
        0.13037933 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.13037933 = score(doc=2274,freq=82.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.04000242 = weight(_text_:search in 2274) [ClassicSimilarity], result of:
          0.04000242 = score(doc=2274,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.23279473 = fieldWeight in 2274, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.027355384 = product of:
          0.05471077 = sum of:
            0.05471077 = weight(_text_:engine in 2274) [ClassicSimilarity], result of:
              0.05471077 = score(doc=2274,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.20686457 = fieldWeight in 2274, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2274)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate a wider impact of a researcher's work than traditional citation databases can show, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants not only to understand the impact of content but also to demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
  3. Rogers, R.: Digital methods (2013) 0.13
    0.12834272 = product of:
      0.17112362 = sum of:
        0.08707084 = weight(_text_:web in 2354) [ClassicSimilarity], result of:
          0.08707084 = score(doc=2354,freq=28.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5396523 = fieldWeight in 2354, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
        0.052789498 = weight(_text_:search in 2354) [ClassicSimilarity], result of:
          0.052789498 = score(doc=2354,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 2354, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
        0.031263296 = product of:
          0.06252659 = sum of:
            0.06252659 = weight(_text_:engine in 2354) [ClassicSimilarity], result of:
              0.06252659 = score(doc=2354,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23641664 = fieldWeight in 2354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2354)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such "methods of the medium" as crawling and crowd sourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By "thinking along" with devices and the objects they handle, digital research methods can follow the evolving methods of the medium. Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates' social media "friends," and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web's objects of study, from tiny particles (hyperlinks) to large masses (social media).
    Content
    The end of the virtual : digital methods -- The link and the politics of Web space -- The website as archived object -- Googlization and the inculpable engine -- Search as research -- National Web studies -- Social media and post-demographics -- Wikipedia as cultural reference -- After cyberspace : big data, small data.
    LCSH
    Web search engines
    World Wide Web / Research
    RSWK
    Internet / Recherche / World Wide Web 2.0
    Subject
    Internet / Recherche / World Wide Web 2.0
    Web search engines
    World Wide Web / Research
  4. Thelwall, M.; Sud, P.: ¬A comparison of methods for collecting web citation data for academic organizations (2011) 0.13
    0.12791318 = product of:
      0.17055091 = sum of:
        0.029088326 = weight(_text_:web in 4626) [ClassicSimilarity], result of:
          0.029088326 = score(doc=4626,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 4626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4626)
        0.07377557 = weight(_text_:search in 4626) [ClassicSimilarity], result of:
          0.07377557 = score(doc=4626,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 4626, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4626)
        0.06768702 = product of:
          0.13537404 = sum of:
            0.13537404 = weight(_text_:engine in 4626) [ClassicSimilarity], result of:
              0.13537404 = score(doc=4626,freq=6.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.51185703 = fieldWeight in 4626, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4626)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The primary webometric method for estimating the online impact of an organization is to count links to its website. Link counts have been available from commercial search engines for over a decade but this was set to end by early 2012 and so a replacement is needed. This article compares link counts to two alternative methods: URL citations and organization title mentions. New variations of these methods are also introduced. The three methods are compared against each other using Yahoo!. Two of the three methods (URL citations and organization title mentions) are also compared against each other using Bing. Evidence from a case study of 131 UK universities and 49 US Library and Information Science (LIS) departments suggests that Bing's Hit Count Estimates (HCEs) for popular title searches are not useful for webometric research but that Yahoo!'s HCEs for all three types of search and Bing's URL citation HCEs seem to be consistent. For exact URL counts the results of all three methods in Yahoo! and both methods in Bing are also consistent. Four types of accuracy factors are also introduced and defined: search engine coverage, search engine retrieval variation, search engine retrieval anomalies, and query polysemy.
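    The three methods boil down to differently shaped queries sent to a commercial search engine. A sketch of how the query strings might be assembled (Python; the helper name and the example organization are ours, and linkdomain: is the historical Yahoo! link-search syntax whose withdrawal motivated the comparison):

        def webometric_queries(domain, org_title):
            return {
                # link counts: engine-specific link operator (since withdrawn)
                "link_count": f"linkdomain:{domain} -site:{domain}",
                # URL citations: the URL mentioned as text outside the site itself
                "url_citation": f'"{domain}" -site:{domain}',
                # title mentions: the organization named outside its own site
                "title_mention": f'"{org_title}" -site:{domain}',
            }

        print(webometric_queries("wlv.ac.uk", "University of Wolverhampton"))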
  5. Sanchiz, M.; Chin, J.; Chevalier, A.; Fu, W.T.; Amadieu, F.; He, J.: Searching for information on the web : impact of cognitive aging, prior domain knowledge and complexity of the search problems (2017) 0.12
    0.12073888 = product of:
      0.16098517 = sum of:
        0.03490599 = weight(_text_:web in 3294) [ClassicSimilarity], result of:
          0.03490599 = score(doc=3294,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 3294, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3294)
        0.07918424 = weight(_text_:search in 3294) [ClassicSimilarity], result of:
          0.07918424 = score(doc=3294,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.460814 = fieldWeight in 3294, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=3294)
        0.04689494 = product of:
          0.09378988 = sum of:
            0.09378988 = weight(_text_:engine in 3294) [ClassicSimilarity], result of:
              0.09378988 = score(doc=3294,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.35462496 = fieldWeight in 3294, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3294)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This study focuses on the impact of age, prior domain knowledge and cognitive abilities on performance, query production and navigation strategies during information searching. Twenty older adults and nineteen young adults had to answer 12 information search problems of varying nature within two knowledge domains: health and manga. In each domain, participants had to perform two simple fact-finding problems (keywords provided and answer directly accessible on the search engine results page), two difficult fact-finding problems (keywords had to be inferred) and two open-ended information search problems (multiple answers possible and navigation necessary). Results showed that prior domain knowledge helped older adults improve navigation (i.e. reduced the number of webpages visited and thus decreased the feeling of disorientation), query production and reformulation (i.e. they formulated semantically more specific queries, and they inferred a greater number of new keywords).
  6. Spink, A.; Du, J.T.: Toward a Web search model : integrating multitasking, cognitive coordination, and cognitive shifts (2011) 0.09
    0.087796874 = product of:
      0.17559375 = sum of:
        0.08227421 = weight(_text_:web in 4624) [ClassicSimilarity], result of:
          0.08227421 = score(doc=4624,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5099235 = fieldWeight in 4624, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4624)
        0.093319535 = weight(_text_:search in 4624) [ClassicSimilarity], result of:
          0.093319535 = score(doc=4624,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.54307455 = fieldWeight in 4624, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4624)
      0.5 = coord(2/4)
    
    Abstract
    Limited research has investigated the role of multitasking, cognitive coordination, and cognitive shifts during web search. Understanding these three behaviors is crucial to web search model development. This study aims to explore characteristics of multitasking behavior, types of cognitive shifts, and levels of cognitive coordination as well as the relationship between them during web search. Data collection included pre- and postquestionnaires, think-aloud protocols, web search logs, observations, and interviews with 42 graduate students who conducted 315 web search sessions with 221 information problems. Results show that web search is a dynamic interaction including the ordering of multiple information problems and the generation of evolving information problems, including task switching, multitasking, explicit task and implicit mental coordination, and cognitive shifting. Findings show that explicit task-level coordination is closely linked to multitasking, and implicit cognitive-level coordination is related to the task-coordination process, including information problem development and task switching. Coordination mechanisms directly result in cognitive state shifts including strategy, evaluation, and view states that affect users' holistic shifts in information problem understanding and knowledge contribution. A web search model integrating multitasking, cognitive coordination, and cognitive shifts (MCC model) is presented. Implications and further research are also discussed.
  7. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.09
    0.08756581 = product of:
      0.17513162 = sum of:
        0.095947385 = weight(_text_:web in 425) [ClassicSimilarity], result of:
          0.095947385 = score(doc=425,freq=34.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.59466785 = fieldWeight in 425, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=425)
        0.07918424 = weight(_text_:search in 425) [ClassicSimilarity], result of:
          0.07918424 = score(doc=425,freq=18.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.460814 = fieldWeight in 425, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=425)
      0.5 = coord(2/4)
    
    Abstract
    The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The more well structured the Web content is, the easier it is for search engines to understand Web content and provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require too much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thus help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines to understand Web content is Microformats, a technique for marking up structured data about specific types of entities, such as tags, blog posts, people, or reviews, within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets which combine, for instance, data from multiple sources. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
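    As a concrete illustration of such markup, the sketch below embeds a schema.org review in an HTML fragment using RDFa Lite and extracts it with the third-party extruct library (the fragment and product name are invented; this shows one way to inspect what an engine could extract, not the engines' own pipeline):

        import extruct  # third-party: pip install extruct

        html = """
        <div vocab="https://schema.org/" typeof="Review">
          <span property="itemReviewed" typeof="Product">
            <span property="name">ExampleCam X100</span>
          </span>
          <span property="reviewRating" typeof="Rating">
            rated <span property="ratingValue">4</span> out of
            <span property="bestRating">5</span>
          </span>
        </div>
        """

        # extract() returns a dict keyed by syntax; RDFa triples land under 'rdfa'
        data = extruct.extract(html, base_url="http://example.org/", syntaxes=["rdfa"])
        print(data["rdfa"])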
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
  8. Elsweiler, D.; Harvey, M.: Engaging and maintaining a sense of being informed : understanding the tasks motivating twitter search (2015) 0.07
    0.07169047 = product of:
      0.14338094 = sum of:
        0.029088326 = weight(_text_:web in 1635) [ClassicSimilarity], result of:
          0.029088326 = score(doc=1635,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 1635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1635)
        0.114292614 = weight(_text_:search in 1635) [ClassicSimilarity], result of:
          0.114292614 = score(doc=1635,freq=24.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.66512775 = fieldWeight in 1635, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1635)
      0.5 = coord(2/4)
    
    Abstract
    Micro-blogging services such as Twitter represent constantly evolving, user-generated sources of information. Previous studies show that users search such content regularly but are often dissatisfied with current search facilities. We argue that an enhanced understanding of the motivations for search would aid the design of improved search systems, better reflecting what people need. Building on previous research, we present qualitative analyses of two sources of data regarding how and why people search Twitter. The first, a diary study (p = 68), provides descriptions of Twitter information needs (n = 117) and important meta-data from active study participants. The second data set was established by collecting first-person descriptions of search behavior (n = 388) tweeted by twitter users themselves (p = 381) and complements the first data set by providing similar descriptions from a more plentiful source. The results of our analyses reveal numerous characteristics of Twitter search that differentiate it from more commonly studied search domains, such as web search. The findings also shed light on some of the difficulties users encounter. By highlighting examples that go beyond those previously published, this article adds to the understanding of how and why people search such content. Based on these new insights, we conclude with a discussion of possible design implications for search systems that index micro-blogging content.
  9. Bhavnani, S.K.; Peck, F.A.: Scatter matters : regularities and implications for the scatter of healthcare information on the Web (2010) 0.07
    0.070746794 = product of:
      0.14149359 = sum of:
        0.08550187 = weight(_text_:web in 3433) [ClassicSimilarity], result of:
          0.08550187 = score(doc=3433,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5299281 = fieldWeight in 3433, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3433)
        0.055991717 = weight(_text_:search in 3433) [ClassicSimilarity], result of:
          0.055991717 = score(doc=3433,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3258447 = fieldWeight in 3433, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=3433)
      0.5 = coord(2/4)
    
    Abstract
    Despite the development of huge healthcare Web sites and powerful search engines, many searchers end their searches prematurely with incomplete information. Recent studies suggest that users often retrieve incomplete information because of the complex scatter of relevant facts about a topic across Web pages. However, little is understood about regularities underlying such information scatter. To probe regularities within the scatter of facts across Web pages, this article presents the results of two analyses: (a) a cluster analysis of Web pages that reveals the existence of three page clusters that vary in information density and (b) a content analysis that suggests the role each of the above-mentioned page clusters plays in providing comprehensive information. These results provide implications for the design of Web sites, search tools, and training to help users find comprehensive information about a topic and for a hypothesis describing the underlying mechanisms causing the scatter. We conclude by briefly discussing how the analysis of information scatter, at the granularity of facts, complements existing theories of information-seeking behavior.
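    The first analysis is a standard clustering exercise and could be reproduced on one's own crawl along these lines (a scikit-learn sketch over invented per-page feature vectors; the article's actual features are not replicated, only its finding of three density clusters is mirrored):

        import numpy as np
        from sklearn.cluster import KMeans

        # invented features per page: (distinct relevant facts found, length in words)
        pages = np.array([[9, 400], [8, 350], [3, 900], [2, 850], [1, 120], [1, 150]])

        # the article reports three page clusters that vary in information density
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pages)
        print(labels)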
  10. MacKay, B.; Watters, C.: ¬An examination of multisession web tasks (2012) 0.06
    0.06473425 = product of:
      0.1294685 = sum of:
        0.09647507 = weight(_text_:web in 255) [ClassicSimilarity], result of:
          0.09647507 = score(doc=255,freq=22.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.59793836 = fieldWeight in 255, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=255)
        0.032993436 = weight(_text_:search in 255) [ClassicSimilarity], result of:
          0.032993436 = score(doc=255,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 255, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=255)
      0.5 = coord(2/4)
    
    Abstract
    Today, people perform many types of tasks on the web, including those that require multiple web sessions. In this article, we build on research about web tasks and present an in-depth evaluation of the types of tasks people perform on the web over multiple web sessions. Multisession web tasks are goal-based tasks that often contain subtasks requiring more than one web session to complete. We will detail the results of two longitudinal studies that we conducted to explore this topic. The first study was a weeklong web-diary study where participants self-reported information on their own multisession tasks. The second study was a monthlong field study where participants used a customized version of Firefox, which logged their interactions for both their own multisession tasks and their other web activity. The results from both studies found that people perform eight different types of multisession tasks, that these tasks often consist of several subtasks and last different lengths of time, and that users have unique strategies for continuing them, involving a variety of web and browser tools such as search engines and bookmarks and external applications such as Notepad or Word. Using the results from these studies, we have suggested three guidelines for developers to consider when designing browser-tool features to help people perform these types of tasks: (a) to maintain a list of current multisession tasks, (b) to support multitasking, and (c) to manage task-related information between sessions.
  11. Villela Dantas, J.R.; Muniz Farias, P.F.: Conceptual navigation in knowledge management environments using NavCon (2010) 0.06
    0.06290185 = product of:
      0.1258037 = sum of:
        0.06981198 = weight(_text_:web in 4230) [ClassicSimilarity], result of:
          0.06981198 = score(doc=4230,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 4230, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4230)
        0.055991717 = weight(_text_:search in 4230) [ClassicSimilarity], result of:
          0.055991717 = score(doc=4230,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3258447 = fieldWeight in 4230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4230)
      0.5 = coord(2/4)
    
    Abstract
    This article presents conceptual navigation and NavCon, an architecture that implements this navigation in World Wide Web pages. The NavCon architecture makes use of ontology as metadata to contextualize user search for information. Based on ontologies, NavCon automatically inserts conceptual links in Web pages. By using these links, the user may navigate in a graph representing ontology concepts and their relationships. By browsing this graph, it is possible to reach documents associated with the user's desired ontology concept. We call this Web navigation supported by ontology concepts conceptual navigation. Conceptual navigation is a technique to browse Web sites within a context. The context filters relevant retrieved information. The context also drives user navigation through paths that meet their needs. A company may implement conceptual navigation to improve user search for information in a knowledge management environment. We suggest that the use of an ontology to conduct navigation in an Intranet may help the user to gain a better understanding of the knowledge structure of the company.
  12. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.06
    0.05941207 = product of:
      0.11882414 = sum of:
        0.09872905 = weight(_text_:web in 2158) [ClassicSimilarity], result of:
          0.09872905 = score(doc=2158,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6119082 = fieldWeight in 2158, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.04019018 = score(doc=2158,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
  13. Huvila, I.: Mining qualitative data on human information behaviour from the Web (2010) 0.06
    0.05836313 = product of:
      0.11672626 = sum of:
        0.07053544 = weight(_text_:web in 4676) [ClassicSimilarity], result of:
          0.07053544 = score(doc=4676,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 4676, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4676)
        0.046190813 = weight(_text_:search in 4676) [ClassicSimilarity], result of:
          0.046190813 = score(doc=4676,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 4676, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4676)
      0.5 = coord(2/4)
    
    Abstract
    This paper discusses an approach to collecting qualitative data on human information behaviour that is based on mining web data using search engines. The approach is technically the same as that used for some time in webometric research to make statistical inferences on web data, but the present paper shows how the same tools and data-collecting methods can be used to gather data for qualitative analysis of human information behaviour.
  14. Lee, L.-H.; Chen, H.-H.: Mining search intents for collaborative cyberporn filtering (2012) 0.06
    0.05819038 = product of:
      0.11638076 = sum of:
        0.029088326 = weight(_text_:web in 4988) [ClassicSimilarity], result of:
          0.029088326 = score(doc=4988,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 4988, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4988)
        0.08729243 = weight(_text_:search in 4988) [ClassicSimilarity], result of:
          0.08729243 = score(doc=4988,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.5079997 = fieldWeight in 4988, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4988)
      0.5 = coord(2/4)
    
    Abstract
    This article presents a search-intent-based method to generate pornographic blacklists for collaborative cyberporn filtering. A novel porn-detection framework that can find newly appearing pornographic web pages by mining search query logs is proposed. First, suspected queries are identified along with their clicked URLs by an automatically constructed lexicon. Then, a candidate URL is determined if the number of clicks satisfies majority voting rules. Finally, a candidate whose URL contains at least one categorical keyword will be included in a blacklist. Several experiments are conducted on an MSN search porn dataset to demonstrate the effectiveness of our method. The resulting blacklist generated by our search-intent-based method achieves high precision (0.701) while maintaining a favorably low false-positive rate (0.086). The experiments of a real-life filtering simulation reveal that our proposed method with its accumulative update strategy can achieve a macro-averaged blocking rate of 44.15% when the update frequency is set to 1 day. In addition, the overblocking rates remain below 9% over time, owing to the strong advantages of our search-intent-based method. This user-behavior-oriented method can be easily applied to search engines to incorporate implicit collective intelligence from query logs without additional effort. In practice, it is complementary to intelligent content analysis for keeping up with the changing trails of objectionable websites from users' perspectives.
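    The proposed pipeline can be paraphrased in a few lines. A toy sketch under our own simplifications (the lexicon, click threshold, and log format are invented stand-ins; the paper's voting rules are more elaborate):

        from collections import Counter

        LEXICON = {"xxx", "porn"}            # stand-in for the auto-constructed lexicon
        CATEGORY_KEYWORDS = ("sex", "xxx")   # stand-in categorical keywords
        MIN_CLICKS = 3                       # stand-in majority-voting threshold

        def build_blacklist(query_log):
            """query_log: iterable of (query, clicked_url) pairs."""
            # step 1: count clicks that follow a suspected query
            votes = Counter(url for query, url in query_log
                            if LEXICON & set(query.lower().split()))
            # steps 2-3: enough votes, and a categorical keyword in the URL
            return {url for url, n in votes.items()
                    if n >= MIN_CLICKS and any(k in url for k in CATEGORY_KEYWORDS)}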
  15. Song, L.; Tso, G.; Fu, Y.: Click behavior and link prioritization : multiple demand theory application for web improvement (2019) 0.05
    0.05470205 = product of:
      0.1094041 = sum of:
        0.06981198 = weight(_text_:web in 5322) [ClassicSimilarity], result of:
          0.06981198 = score(doc=5322,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 5322, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5322)
        0.03959212 = weight(_text_:search in 5322) [ClassicSimilarity], result of:
          0.03959212 = score(doc=5322,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 5322, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5322)
      0.5 = coord(2/4)
    
    Abstract
    A common problem encountered in Web improvement is how to arrange the homepage links of a Website. This study analyses Web information search behavior and applies the multiple demand theory to propose two models that help a visitor allocate time across multiple links. The process of searching is viewed as a formal choice problem in which the visitor attempts to choose from multiple Web links to maximize the total utility. The proposed models are calibrated to clickstream data collected from an educational institute over a seven-and-a-half-month period. Based on the best-fitting model, a metric, utility loss, is constructed to measure the performance of each link and arrange links accordingly. Empirical results show that the proposed metric is highly efficient for prioritizing the links on a homepage and that the methodology can also be used to study the feasibility of introducing a new function on a Website.
  16. Thelwall, M.: ¬A comparison of link and URL citation counting (2011) 0.05
    0.05376438 = product of:
      0.10752876 = sum of:
        0.050382458 = weight(_text_:web in 4533) [ClassicSimilarity], result of:
          0.050382458 = score(doc=4533,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 4533, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4533)
        0.057146307 = weight(_text_:search in 4533) [ClassicSimilarity], result of:
          0.057146307 = score(doc=4533,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.33256388 = fieldWeight in 4533, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4533)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - Link analysis is an established topic within webometrics. It normally uses counts of links between sets of web sites or to sets of web sites. These link counts are derived from web crawlers or commercial search engines with the latter being the only alternative for some investigations. This paper compares link counts with URL citation counts in order to assess whether the latter could be a replacement for the former if the major search engines withdraw their advanced hyperlink search facilities. Design/methodology/approach - URL citation counts are compared with link counts for a variety of data sets used in previous webometric studies. Findings - The results show a high degree of correlation between the two but with URL citations being much less numerous, at least outside academia and business. Research limitations/implications - The results cover a small selection of 15 case studies and so the findings are only indicative. Significant differences between results indicate that the difference between link counts and URL citation counts will vary between webometric studies. Practical implications - Should link searches be withdrawn, then link analyses of less well linked non-academic, non-commercial sites would be seriously weakened, although citations based on e-mail addresses could help to make citations more numerous than links for some business and academic contexts. Originality/value - This is the first systematic study of the difference between link counts and URL citation counts in a variety of contexts and it shows that there are significant differences between the two.
  17. Social Media und Web Science : das Web als Lebensraum, Düsseldorf, 22. - 23. März 2012, Proceedings, hrsg. von Marlies Ockenfeld, Isabella Peters und Katrin Weller. DGI, Frankfurt am Main 2012 (2012) 0.05
    0.052445795 = product of:
      0.10489159 = sum of:
        0.08144732 = weight(_text_:web in 1517) [ClassicSimilarity], result of:
          0.08144732 = score(doc=1517,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.50479853 = fieldWeight in 1517, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 1517) [ClassicSimilarity], result of:
              0.046888545 = score(doc=1517,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 1517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1517)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    RSWK
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
    Subject
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
  18. Zielinski, K.; Nielek, R.; Wierzbicki, A.; Jatowt, A.: Computing controversy : formal model and algorithms for detecting controversy on Wikipedia and in search queries (2018) 0.05
    0.051431946 = product of:
      0.10286389 = sum of:
        0.029088326 = weight(_text_:web in 5093) [ClassicSimilarity], result of:
          0.029088326 = score(doc=5093,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 5093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5093)
        0.07377557 = weight(_text_:search in 5093) [ClassicSimilarity], result of:
          0.07377557 = score(doc=5093,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 5093, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5093)
      0.5 = coord(2/4)
    
    Abstract
    Controversy is a complex concept that has been attracting the attention of scholars from diverse fields. In the era of the Internet and social media, detecting controversy and controversial concepts by automatic methods is especially important. Web searchers could be alerted when the contents they consume are controversial or when they attempt to acquire information on disputed topics. Presenting users with indications and explanations of the controversy should offer them the chance to see the "wider picture" rather than letting them obtain one-sided views. In this work we first introduce a formal model of controversy as the basis of computational approaches to detecting controversial concepts. Then we propose a classification-based method for automatic detection of controversial articles and categories in Wikipedia. Next, we demonstrate how to use the obtained results for the estimation of the controversy level of search queries. The proposed method can be incorporated into search engines as a component responsible for detection of queries related to controversial topics. The method is independent of the search engine's retrieval and search results recommendation algorithms, and is therefore unaffected by a possible filter bubble. Our approach can also be applied in Wikipedia or other knowledge bases for supporting the detection of controversy and content maintenance. Finally, we believe that our results could be useful for social science researchers in understanding the complex nature of controversy and in fostering their studies.
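    The final step, scoring a query by the controversy of what it retrieves, reduces to a small aggregation. A sketch under our own simplification (the classifier's per-article scores are assumed given, and query controversy is taken as the mean over the top-k results):

        def query_controversy(results, article_scores, k=10):
            """results: ranked article titles returned for a query;
            article_scores: title -> controversy score in [0, 1]."""
            top = results[:k]
            scores = [article_scores.get(t, 0.0) for t in top]
            return sum(scores) / len(scores) if scores else 0.0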
  19. Jansen, B.J.; Liu, Z.; Simon, Z.: ¬The effect of ad rank on the performance of keyword advertising campaigns (2013) 0.05
    0.050962992 = product of:
      0.101925984 = sum of:
        0.046659768 = weight(_text_:search in 1095) [ClassicSimilarity], result of:
          0.046659768 = score(doc=1095,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 1095, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1095)
        0.05526622 = product of:
          0.11053244 = sum of:
            0.11053244 = weight(_text_:engine in 1095) [ClassicSimilarity], result of:
              0.11053244 = score(doc=1095,freq=4.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.41792953 = fieldWeight in 1095, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1095)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The goal of this research is to evaluate the effect of ad rank on the performance of keyword advertising campaigns. We examined a large-scale data file comprising nearly 7,000,000 records spanning 33 consecutive months of a major US retailer's search engine marketing campaign. The theoretical foundation is the serial position effect, which explains searcher behavior when interacting with ranked ad listings. We control for temporal effects and use one-way analysis of variance (ANOVA) with Tamhane's T2 tests to examine the effect of ad rank on critical keyword advertising metrics, including clicks, cost-per-click, sales revenue, orders, items sold, and advertising return on investment. Our findings show a significant ad rank effect on most of those metrics, although less effect on conversion rates. A primacy effect was found on both clicks and sales, indicating a generally compelling performance of top-ranked ads listed on the first results page. Conversion rates, on the other hand, follow a relatively stable distribution except for the top 2 ads, which had significantly higher conversion rates. However, examining conversion potential (the joint effect of clicks and conversion rate), we show that ad rank has a significant effect on the performance of keyword advertising campaigns. Conversion potential is a more accurate measure of the impact of an ad's position. In fact, the first ad position generates about 80% of the total profits, after controlling for advertising costs. In addition to providing theoretical grounding, the research results reported in this paper are beneficial to companies using search engine marketing as they strive to design more effective advertising campaigns.
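    The core test is a one-way ANOVA of a performance metric grouped by ad rank. A minimal sketch with invented click samples (scipy supplies the F test; the Tamhane's T2 post-hoc comparisons used in the article are not in scipy and are omitted):

        from scipy.stats import f_oneway

        # invented clicks-per-day samples for ads shown at ranks 1-3
        rank1 = [120, 132, 118, 141]
        rank2 = [80, 95, 88, 76]
        rank3 = [42, 51, 39, 47]

        f_stat, p_value = f_oneway(rank1, rank2, rank3)
        print(f"F={f_stat:.2f}, p={p_value:.4f}")  # a small p suggests rank matters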
  20. Bodoff, D.; Raban, D.: User models as revealed in web-based research services (2012) 0.04
    0.044478323 = product of:
      0.08895665 = sum of:
        0.049364526 = weight(_text_:web in 76) [ClassicSimilarity], result of:
          0.049364526 = score(doc=76,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 76, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=76)
        0.03959212 = weight(_text_:search in 76) [ClassicSimilarity], result of:
          0.03959212 = score(doc=76,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=76)
      0.5 = coord(2/4)
    
    Abstract
    The user-centered approach to information retrieval emphasizes the importance of a user model in determining what information will be most useful to a particular user, given their context. Mediated search provides an opportunity to elaborate on this idea, as an intermediary's elicitations reveal what aspects of the user model they think are worth inquiring about. However, empirical evidence is divided over whether intermediaries actually work to develop a broadly conceived user model. Our research revisits the issue in a web research services setting, whose characteristics are expected to result in more thorough user modeling on the part of intermediaries. Our empirical study confirms that intermediaries engage in rich user modeling. While intermediaries behave differently across settings, our interpretation is that the underlying user model characteristics that intermediaries inquire about in our setting are applicable to other settings as well.

Languages

  • e 76
  • d 43

Types

  • a 101
  • m 14
  • el 10
  • s 3
  • x 1
