Search (14 results, page 1 of 1)

  • × theme_ss:"Suchmaschinen"
  1. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.05
    0.046494167 = product of:
      0.11623542 = sum of:
        0.073567346 = weight(_text_:study in 4872) [ClassicSimilarity], result of:
          0.073567346 = score(doc=4872,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.50803196 = fieldWeight in 4872, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.078125 = fieldNorm(doc=4872)
        0.042668078 = product of:
          0.085336156 = sum of:
            0.085336156 = weight(_text_:22 in 4872) [ClassicSimilarity], result of:
              0.085336156 = score(doc=4872,freq=4.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.54716086 = fieldWeight in 4872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4872)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
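    The explain tree above can be checked arithmetically. A minimal sketch using Lucene ClassicSimilarity's formulas (tf = √freq, idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight × fieldWeight, scaled by the coord factors); the helper names are illustrative, not Lucene's actual API, and queryNorm is simply taken from the tree:

    ```python
    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def tf(freq):
        # ClassicSimilarity tf: square root of the term frequency
        return math.sqrt(freq)

    query_norm = 0.044537213  # copied from the explain output

    # weight(_text_:study in 4872): freq=4, docFreq=4653, fieldNorm=0.078125
    idf_study = idf(4653, 44218)                     # ≈ 3.2514048
    query_weight = idf_study * query_norm            # ≈ 0.1448085
    field_weight = tf(4.0) * idf_study * 0.078125    # ≈ 0.50803196
    w_study = query_weight * field_weight            # ≈ 0.073567346

    # weight(_text_:22 in 4872): freq=4, docFreq=3622, fieldNorm=0.078125
    idf_22 = idf(3622, 44218)                        # ≈ 3.5018296
    w_22 = (idf_22 * query_norm) * (tf(4.0) * idf_22 * 0.078125)  # ≈ 0.085336156

    # coord(1/2) halves the inner clause; coord(2/5) scales the outer sum
    score = (w_study + w_22 * 0.5) * (2 / 5)         # ≈ 0.046494167
    print(score)
    ```

    Running this reproduces the document score 0.046494167 (up to floating-point rounding), confirming how the nested product-of/sum-of/coord nodes combine.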
    
    Abstract
    Describes the development of EEVL and outlines the services offered. The potential market for EEVL is discussed, and a case study of promotional activities is presented
    Date
    22. 6.2002 19:40:22
  2. Lewandowski, D.; Sünkler, S.: What does Google recommend when you want to compare insurance offerings? (2019) 0.03
    0.026842168 = product of:
      0.06710542 = sum of:
        0.052019972 = weight(_text_:study in 5288) [ClassicSimilarity], result of:
          0.052019972 = score(doc=5288,freq=8.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.35923287 = fieldWeight in 5288, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5288)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 5288) [ClassicSimilarity], result of:
              0.03017089 = score(doc=5288,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 5288, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5288)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: The purpose of this paper is to describe a new method to improve the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons.
    Design/methodology/approach: The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform.
    Findings: Google's top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists.
    Research limitations/implications: The authors demonstrate the feasibility of this approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries.
    Originality/value: The proposed method allows large-scale analysis of the composition of the top results from commercial search engines. It allows using valid empirical data to determine what users actually see on the search engine result pages.
    Date
    20. 1.2015 18:30:22
  3. Fong, W.W.: Searching the World Wide Web (1996) 0.03
    0.026301075 = product of:
      0.065752685 = sum of:
        0.041615978 = weight(_text_:study in 6597) [ClassicSimilarity], result of:
          0.041615978 = score(doc=6597,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.2873863 = fieldWeight in 6597, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0625 = fieldNorm(doc=6597)
        0.02413671 = product of:
          0.04827342 = sum of:
            0.04827342 = weight(_text_:22 in 6597) [ClassicSimilarity], result of:
              0.04827342 = score(doc=6597,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.30952093 = fieldWeight in 6597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6597)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Reviews the availability on the WWW of search engines designed to organize various web information sources. Discusses the differences and similarities of each search engine and their advantages and disadvantages. Search engines included in the study were: AltaVista, CUI W3 Catalog, InfoSeek, Lycos, Magellan, Yahoo
    Source
    Journal of library and information science. 22(1996) no.1, S.15-36
  4. Gossen, T.: Search engines for children : search user interfaces and information-seeking behaviour (2016) 0.03
    0.026072312 = product of:
      0.06518078 = sum of:
        0.05462097 = weight(_text_:study in 2752) [ClassicSimilarity], result of:
          0.05462097 = score(doc=2752,freq=18.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.3771945 = fieldWeight in 2752, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2752)
        0.010559811 = product of:
          0.021119623 = sum of:
            0.021119623 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.021119623 = score(doc=2752,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.1354154 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Contents: Acknowledgments; Abstract; Zusammenfassung; Contents; List of Figures; List of Tables; List of Acronyms; Chapter 1 Introduction; 1.1 Research Questions; 1.2 Thesis Outline; Part I Fundamentals; Chapter 2 Information Retrieval for Young Users; 2.1 Basics of Information Retrieval; 2.1.1 Architecture of an IR System; 2.1.2 Relevance Ranking; 2.1.3 Search User Interfaces; 2.1.4 Targeted Search Engines; 2.2 Aspects of Child Development Relevant for Information Retrieval Tasks; 2.2.1 Human Cognitive Development; 2.2.2 Information Processing Theory; 2.2.3 Psychosocial Development; 2.3 User Studies and Evaluation; 2.3.1 Methods in User Studies; 2.3.2 Types of Evaluation; 2.3.3 Evaluation with Children; 2.4 Discussion; Chapter 3 State of the Art; 3.1 Children's Information-Seeking Behaviour; 3.1.1 Querying Behaviour; 3.1.2 Search Strategy; 3.1.3 Navigation Style; 3.1.4 User Interface; 3.1.5 Relevance Judgement; 3.2 Existing Algorithms and User Interface Concepts for Children; 3.2.1 Query; 3.2.2 Content; 3.2.3 Ranking; 3.2.4 Search Result Visualisation; 3.3 Existing Information Retrieval Systems for Children; 3.3.1 Digital Book Libraries; 3.3.2 Web Search Engines; 3.4 Summary and Discussion; Part II Studying Open Issues; Chapter 4 Usability of Existing Search Engines for Young Users; 4.1 Assessment Criteria; 4.1.1 Criteria for Matching the Motor Skills; 4.1.2 Criteria for Matching the Cognitive Skills; 4.2 Results; 4.2.1 Conformance with Motor Skills; 4.2.2 Conformance with the Cognitive Skills; 4.2.3 Presentation of Search Results; 4.2.4 Browsing versus Searching; 4.2.5 Navigational Style; 4.3 Summary and Discussion; Chapter 5 Large-scale Analysis of Children's Queries and Search Interactions; 5.1 Dataset; 5.2 Results; 5.3 Summary and Discussion; Chapter 6 Differences in Usability and Perception of Targeted Web Search Engines between Children and Adults; 6.1 Related Work; 6.2 User Study; 6.3 Study Results; 6.4 Summary and Discussion; Part III Tackling the Challenges; Chapter 7 Search User Interface Design for Children; 7.1 Conceptual Challenges and Possible Solutions; 7.2 Knowledge Journey Design; 7.3 Evaluation; 7.3.1 Study Design; 7.3.2 Study Results; 7.4 Voice-Controlled Search: Initial Study; 7.4.1 User Study; 7.5 Summary and Discussion; Chapter 8 Addressing User Diversity; 8.1 Evolving Search User Interface; 8.1.1 Mapping Function; 8.1.2 Evolving Skills; 8.1.3 Detection of User Abilities; 8.1.4 Design Concepts; 8.2 Adaptation of a Search User Interface towards User Needs; 8.2.1 Design & Implementation; 8.2.2 Search Input; 8.2.3 Result Output; 8.2.4 General Properties; 8.2.5 Configuration and Further Details; 8.3 Evaluation; 8.3.1 Study Design; 8.3.2 Study Results; 8.3.3 Preferred UI Settings; 8.3.4 User Satisfaction; 8.4 Knowledge Journey Exhibit; 8.4.1 Hardware; 8.4.2 Frontend; 8.4.3 Backend; 8.5 Summary and Discussion; Chapter 9 Supporting Visual Searchers in Processing Search Results; 9.1 Related Work
    Date
    1. 2.2016 18:25:22
    Series
    Study in computer science and media design
  5. Bilal, D.: Children's use of the Yahooligans! Web search engine : III. Cognitive and physical behaviors on fully self-generated search tasks (2002) 0.02
    0.024897177 = product of:
      0.06224294 = sum of:
        0.04414041 = weight(_text_:study in 5228) [ClassicSimilarity], result of:
          0.04414041 = score(doc=5228,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.3048192 = fieldWeight in 5228, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.046875 = fieldNorm(doc=5228)
        0.018102532 = product of:
          0.036205065 = sum of:
            0.036205065 = weight(_text_:22 in 5228) [ClassicSimilarity], result of:
              0.036205065 = score(doc=5228,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.23214069 = fieldWeight in 5228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5228)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this third part of her Yahooligans! study, Bilal looks at children's performance with self-generated search tasks, as compared to previously assigned search tasks, looking for differences in success, cognitive behavior, physical behavior, and task preference. Lotus ScreenCam was used to record interactions, and post-search interviews were used to record impressions. The subjects, the same 22 seventh-grade children as in the previous studies, generated topics of interest that were mediated with the researcher into more specific topics where necessary. Fifteen usable sessions form the basis of the study. Eleven children were successful in finding information, a rate of 73% compared to 69% on assigned research questions and 50% on assigned fact-finding questions. Eighty-seven percent began with one or two keyword searches. Spelling was a problem. Successful children made fewer keyword searches, and the number of search moves averaged 5.5, as compared to 2.4 on the research-oriented task and 3.49 on the factual one. Backtracking and looping were common. The self-generated task was preferred by 47% of the subjects.
  6. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.02
    0.024758985 = product of:
      0.061897464 = sum of:
        0.020807989 = weight(_text_:study in 4497) [ClassicSimilarity], result of:
          0.020807989 = score(doc=4497,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.14369315 = fieldWeight in 4497, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.03125 = fieldNorm(doc=4497)
        0.041089475 = product of:
          0.08217895 = sum of:
            0.08217895 = weight(_text_:teaching in 4497) [ClassicSimilarity], result of:
              0.08217895 = score(doc=4497,freq=4.0), product of:
                0.24199244 = queryWeight, product of:
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.044537213 = queryNorm
                0.33959305 = fieldWeight in 4497, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4497)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Key Features - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly evolving state of the Internet and Internet technologies - it is not about technological 'tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in web skills of novice users as well as practical techniques for teaching such users. The Authors Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer. Readership The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet. Contents Fundamentals of information retrieval from the Internet - why learn web searching technique; types of information requests; patterns for information retrieval; leveraging the technology: Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable': Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies: Query reformulation and longer-term strategies - how to interact with your search engine; foraging for information; long-term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work: Assessing the quality of results - how to assess and ensure quality: The novice user and teaching Internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second-hand' web information.
  7. Su, L.T.: ¬A comprehensive and systematic model of user evaluation of Web search engines : II. An evaluation by undergraduates (2003) 0.02
    0.024054427 = product of:
      0.060136065 = sum of:
        0.04505062 = weight(_text_:study in 2117) [ClassicSimilarity], result of:
          0.04505062 = score(doc=2117,freq=6.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.3111048 = fieldWeight in 2117, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2117)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 2117) [ClassicSimilarity], result of:
              0.03017089 = score(doc=2117,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 2117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2117)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with them. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performance (user-related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction, along with reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post-search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines, and also significant engine-by-discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and a significant engine-by-discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on the strengths and weaknesses of search engines for system improvement
    Date
    24. 1.2004 18:27:22
  8. Aloteibi, S.; Sanderson, M.: Analyzing geographic query reformulation : an exploratory study (2014) 0.02
    0.020747647 = product of:
      0.051869117 = sum of:
        0.036783673 = weight(_text_:study in 1177) [ClassicSimilarity], result of:
          0.036783673 = score(doc=1177,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.25401598 = fieldWeight in 1177, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1177)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 1177) [ClassicSimilarity], result of:
              0.03017089 = score(doc=1177,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 1177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1177)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Search engine users typically engage in multiquery sessions in their quest to fulfill their information needs. Despite a plethora of research findings suggesting that a significant group of users look for information within a specific geographical scope, existing reformulation studies lack a focused analysis of how users reformulate geographic queries. This study comprehensively investigates the ways in which users reformulate such needs in an attempt to fill this gap in the literature. Reformulated sessions were sampled from a query log of a major search engine to extract 2,400 entries that were manually inspected to filter geo sessions. This filter identified 471 search sessions that included geographical intent, and these sessions were analyzed quantitatively and qualitatively. The results revealed that one in five of the users who reformulated their queries were looking for geographically related information. They reformulated their queries by changing the content of the query rather than the structure. Users were not following a unified sequence of modifications and instead performed a single reformulation action. However, in some cases it was possible to anticipate their next move. A number of tasks in geo modifications were identified, including standard, multi-needs, multi-places, and hybrid approaches. The research concludes that it is important to specialize query reformulation studies to focus on particular query types rather than generically analyzing them, as it is apparent that geographic queries have their special reformulation characteristics.
    Date
    26. 1.2014 18:48:22
  9. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.02
    0.020747647 = product of:
      0.051869117 = sum of:
        0.036783673 = weight(_text_:study in 1605) [ClassicSimilarity], result of:
          0.036783673 = score(doc=1605,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.25401598 = fieldWeight in 1605, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.03017089 = score(doc=1605,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  10. Sachse, J.: ¬The influence of snippet length on user behavior in mobile web search (2019) 0.02
    0.020747647 = product of:
      0.051869117 = sum of:
        0.036783673 = weight(_text_:study in 5493) [ClassicSimilarity], result of:
          0.036783673 = score(doc=5493,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.25401598 = fieldWeight in 5493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5493)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 5493) [ClassicSimilarity], result of:
              0.03017089 = score(doc=5493,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 5493, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5493)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: Web search is more and more moving into mobile contexts. However, screen size of mobile devices is limited and search engine result pages face a trade-off between offering informative snippets and optimal use of space. One factor clearly influencing this trade-off is snippet length. The purpose of this paper is to find out what snippet size to use in mobile web search.
    Design/methodology/approach: For this purpose, an eye-tracking experiment was conducted showing participants search interfaces with snippets of one, three or five lines on a mobile device to analyze 17 dependent variables. In total, 31 participants took part in the study. Each of the participants solved informational and navigational tasks.
    Findings: Results indicate a strong influence of page fold on scrolling behavior and attention distribution across search results. Regardless of query type, short snippets seem to provide too little information about the result, so that search performance and subjective measures are negatively affected. Long snippets of five lines lead to better performance than medium snippets for navigational queries, but to worse performance for informational queries.
    Originality/value: Although space in mobile search is limited, this study shows that longer snippets improve usability and user experience. It further emphasizes that page fold plays a stronger role in mobile than in desktop search for attention distribution.
    Date
    20. 1.2015 18:30:22
  11. Garcés, P.J.; Olivas, J.A.; Romero, F.P.: Concept-matching IR systems versus word-matching information retrieval systems : considering fuzzy interrelations for indexing Web pages (2006) 0.02
    0.016438173 = product of:
      0.041095432 = sum of:
        0.026009986 = weight(_text_:study in 5288) [ClassicSimilarity], result of:
          0.026009986 = score(doc=5288,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 5288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5288)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 5288) [ClassicSimilarity], result of:
              0.03017089 = score(doc=5288,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 5288, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5288)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article presents a semantic-based Web retrieval system that is capable of retrieving the Web pages that are conceptually related to the implicit concepts of the query. The notion of a concept is handled from a fuzzy point of view by means of semantic areas. In this context, the proposed system improves on most search engines that are based on matching words. The key to the system is to use a new version of the Fuzzy Interrelations and Synonymy-Based Concept Representation Model (FIS-CRM) to extract and represent the concepts contained in both the Web pages and the user query. This model, which was integrated into other tools such as the Fuzzy Interrelations and Synonymy based Searcher (FISS) metasearcher and the fz-mail system, considers the fuzzy synonymy and the fuzzy generality interrelations as a means of representing word interrelations (stored in a fuzzy synonymy dictionary and ontologies). The new version of the model, which is based on the study of the cooccurrences of synonyms, integrates a soft method for disambiguating word senses. This method also considers the context of the word to be disambiguated and the thematic ontologies and sets of synonyms stored in the dictionary.
    Date
    22. 7.2006 17:14:12
  12. Golderman, G.M.; Connolly, B.: Between the book covers : going beyond OPAC keyword searching with the deep linking capabilities of Google Scholar and Google Book Search (2004/05) 0.02
    0.016438173 = product of:
      0.041095432 = sum of:
        0.026009986 = weight(_text_:study in 731) [ClassicSimilarity], result of:
          0.026009986 = score(doc=731,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 731) [ClassicSimilarity], result of:
              0.03017089 = score(doc=731,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 731, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=731)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    One finding of the 2006 OCLC study of College Students' Perceptions of Libraries and Information Resources was that students expressed equal levels of trust in libraries and search engines when it came to meeting their information needs in a way that they felt was authoritative. Seeking to incorporate this insight into our own instructional methodology, Schaffer Library at Union College has attempted to engineer a shift from Google to Google Scholar among our student users by representing Scholar as a viable adjunct to the catalog and to more traditional electronic resources. By attempting to engage student researchers on their own terms, we have discovered that most of them react enthusiastically to the revelation that the Google they think they know so well is, it turns out, a multifaceted resource that is capable of delivering the sort of scholarly information that will meet with their professors' approval. Specifically, this article focuses on the fact that many Google Scholar searches link back to our own Web catalog, where they identify useful book titles that direct OPAC keyword searches have missed.
    Date
    2.12.2007 19:39:22
  13. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.02
    0.016438173 = product of:
      0.041095432 = sum of:
        0.026009986 = weight(_text_:study in 2564) [ClassicSimilarity], result of:
          0.026009986 = score(doc=2564,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 2564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.03017089 = score(doc=2564,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t**k*alpha**t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
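    The role of the damping factor described in this abstract can be reproduced with a few lines of power iteration. The sketch below is a generic textbook formulation of PageRank on a toy four-node graph (the graph and iteration count are illustrative assumptions, not the authors' experimental setup): with probability alpha rank flows along out-links, and with probability 1-alpha it teleports uniformly.

```python
# Power-method PageRank with an explicit damping factor alpha.
# Toy graph and parameters are illustrative assumptions.

def pagerank(links, alpha=0.85, iters=100):
    """links: node -> list of out-neighbours. Returns node -> PageRank value."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Every node receives the teleportation share (1 - alpha) / n.
        new = {v: (1.0 - alpha) / n for v in nodes}
        for v in nodes:
            out = links[v]
            if out:                      # spread damped rank along out-links
                share = alpha * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:                        # dangling node: spread rank uniformly
                for w in nodes:
                    new[w] += alpha * rank[v] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for alpha in (0.85, 0.99):
    r = pagerank(graph, alpha)
    print(alpha, sorted(r, key=r.get, reverse=True))
```

    Varying alpha between the classical 0.85 and values close to 1 and comparing the resulting orderings is exactly the kind of experiment the paper analyses in closed form.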
  14. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for linkbased ranking algorithms (2006) 0.02
    0.016438173 = product of:
      0.041095432 = sum of:
        0.026009986 = weight(_text_:study in 2565) [ClassicSimilarity], result of:
          0.026009986 = score(doc=2565,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 2565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2565)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.03017089 = score(doc=2565,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact rank quality and convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
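    The family of rankings described above can be sketched as a truncated path sum: rank = sum over path lengths t of damping(t) times the uniform start vector propagated t steps along the link graph. The stdlib-only sketch below is a schematic illustration on a toy graph with an assumed truncation length T, not the paper's algorithms; the exponential weights reproduce PageRank's decay, while the linear weights are one of the new alternatives discussed.

```python
# Link-based ranking with a pluggable damping function over path length t:
# rank = sum_t damping(t) * (uniform vector propagated t steps).
# Toy graph and truncation length T are illustrative assumptions.

def propagate(vec, links, nodes):
    """One step of importance propagation along out-links."""
    new = {v: 0.0 for v in nodes}
    for v in nodes:
        out = links[v] or nodes          # dangling node: link to everyone
        for w in out:
            new[w] += vec[v] / len(out)
    return new

def damped_rank(links, damping, T=50):
    """Truncated path sum: longer paths endorse less, as damping(t) decays."""
    nodes = list(links)
    vec = {v: 1.0 / len(nodes) for v in nodes}
    rank = {v: 0.0 for v in nodes}
    for t in range(T + 1):
        d = damping(t)
        for v in nodes:
            rank[v] += d * vec[v]
        vec = propagate(vec, links, nodes)
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
alpha, T = 0.85, 50

def exponential(t):                      # PageRank's decay, sums to ~1
    return (1 - alpha) * alpha ** t

def linear(t):                           # linear decay, normalised to sum to 1
    return 2 * (T + 1 - t) / ((T + 1) * (T + 2))

for name, f in [("exponential", exponential), ("linear", linear)]:
    r = damped_rank(graph, f, T)
    print(name, sorted(r, key=r.get, reverse=True))
```

    On real Web graphs the paper compares such orderings with Kendall's tau; on this toy graph both decays already induce very similar orderings, which is the paper's point about linear approximations of PageRank.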