Search (87 results, page 2 of 5)

  • theme_ss:"Suchmaschinen"
  • year_i:[2010 TO 2020}
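The year facet above uses Lucene/Solr range-query notation, in which a square bracket is inclusive and a curly brace exclusive, so `year_i:[2010 TO 2020}` matches 2010 through 2019. A minimal sketch of that half-open semantics (the sample years are invented):

```python
# Half-open range filter mirroring Solr's [2010 TO 2020} syntax:
# '[' includes the lower bound, '}' excludes the upper bound.
def in_year_range(year, lower=2010, upper=2020):
    return lower <= year < upper

sample_years = [2009, 2010, 2015, 2019, 2020]
matched = [y for y in sample_years if in_year_range(y)]  # → [2010, 2015, 2019]
```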
  1. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Analysis of change in users' assessment of search results over time (2017) 0.01
    Abstract
    We present the first systematic study of the influence of time on user judgements for rankings and relevance grades of web search engine results. The goal of this study is to evaluate the change in user assessment of search results and explore how users' judgements change. To this end, we conducted a large-scale user study with 86 participants who evaluated 2 different queries and 4 diverse result sets twice with an interval of 2 months. To analyze the results we investigate whether 2 types of patterns of user behavior from the theory of categorical thinking hold for the case of evaluation of search results: (a) coarseness and (b) locality. To quantify these patterns we devised 2 new measures of change in user judgements and distinguish between local (when users swap between close ranks and relevance values) and nonlocal changes. Two types of judgements were considered in this study: (a) relevance on a 4-point scale, and (b) ranking on a 10-point scale without ties. We found that users tend to change their judgements of the results over time in about 50% of cases for relevance and in 85% of cases for ranking. However, the majority of these changes were local.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1137-1148
  2. Vaughan, L.; Romero-Frías, E.: Web search volume as a predictor of academic fame : an exploration of Google trends (2014) 0.00
    Abstract
    Searches conducted on web search engines reflect the interests of users and society. Google Trends, which provides information about the queries searched by users of the Google web search engine, is a rich data source from which a wealth of information can be mined. We investigated the possibility of using web search volume data from Google Trends to predict academic fame. As queries are language-dependent, we studied universities from two countries with different languages, the United States and Spain. We found a significant correlation between the search volume of a university name and the university's academic reputation or fame. We also examined the effect of some Google Trends features, namely, limiting the search to a specific country or topic category on the search volume data. Finally, we examined the effect of university sizes on the correlations found to gain a deeper understanding of the nature of the relationships.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.707-720
  3. Lewandowski, D.: Query understanding (2011) 0.00
    Date
    18. 9.2018 18:22:18
  4. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.00
    Date
    13. 9.2014 14:45:22
  5. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung (2019) 0.00
    Date
    19. 2.2019 17:22:00
  6. Johnson, F.; Rowley, J.; Sbaffi, L.: Exploring information interactions in the context of Google (2016) 0.00
    Abstract
    The study sets out to explore the factors that influence the evaluation of information and the judgments made in the process of finding useful information in web search contexts. Based on a diary study of 2 assigned tasks to search on Google and Google Scholar, factor analysis identified the core constructs of content, relevance, scope, and style, as well as informational and system "ease of use" as influencing the judgment that useful information had been found. Differences were found in the participants' evaluation of information across the search tasks on Google and on Google Scholar when identified by the factors related to both content and ease of use. The findings from this study suggest how searchers might critically evaluate information, and the study identifies a relation between the user's involvement in the information interaction and the influences of the perceived system ease of use and information design.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.824-840
  7. Truran, M.; Schmakeit, J.-F.; Ashman, H.: The effect of user intent on the stability of search engine results (2011) 0.00
    Abstract
    Previous work has established that search engine queries can be classified according to the intent of the searcher (i.e., why is the user searching, what specifically do they intend to do). In this article, we describe an experiment in which four sets of queries, each set representing a different user intent, are repeatedly submitted to three search engines over a period of 60 days. Using a variety of measurements, we describe the overall stability of the search engine results recorded for each group. Our findings suggest that search engine results for informational queries are significantly more stable than the results obtained using transactional, navigational, or commercial queries.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.7, S.1276-1287
  8. Ortiz-Cordova, A.; Jansen, B.J.: Classifying web search queries to identify high revenue generating customers (2012) 0.00
    Abstract
    Traffic from search engines is important for most online businesses, with the majority of visitors to many websites being referred by search engines. Therefore, an understanding of this search engine traffic is critical to the success of these websites. Understanding search engine traffic means understanding the underlying intent of the query terms and the corresponding user behaviors of searchers submitting keywords. In this research, using 712,643 query keywords from a popular Spanish music website relying on contextual advertising as its business model, we use a k-means clustering algorithm to categorize the referral keywords with similar characteristics of onsite customer behavior, including attributes such as clickthrough rate and revenue. We identified 6 clusters of consumer keywords. Clusters range from a large number of users who are low impact to a small number of high impact users. We demonstrate how online businesses can leverage this segmentation clustering approach to provide a more tailored consumer experience. Implications are that businesses can effectively segment customers to develop better business models to increase advertising conversion rates.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1426-1441
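The clustering step described in the abstract above can be sketched in a few lines. This is not the study's actual pipeline: the (clickthrough-rate, revenue) feature pairs and the plain k-means implementation below are illustrative assumptions only.

```python
# Illustrative k-means over invented (clickthrough rate, revenue) pairs;
# a sketch of the clustering technique named in the abstract, not the
# study's data or code.
def kmeans(points, k, iters=20):
    centroids = list(points[:k])  # simple deterministic initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# two obvious behaviour segments: low-impact vs. high-impact keywords
keyword_features = [(0.010, 0.1), (0.020, 0.2), (0.015, 0.15),
                    (0.200, 5.0), (0.220, 4.8), (0.190, 5.2)]
centroids, clusters = kmeans(keyword_features, k=2)
```

With well-separated toy data the two recovered centroids correspond to the low-revenue and high-revenue keyword groups, which is the kind of segmentation the abstract describes at larger scale.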
  9. Joint, N.: The one-stop shop search engine : a transformational library technology? ANTAEUS (2010) 0.00
    Abstract
    Purpose - The purpose of this paper is to form one of a series which will give an overview of so-called "transformational" areas of digital library technology. The aim will be to assess how much real transformation these applications are bringing about, in terms of creating genuine user benefit and also changing everyday library practice.
    Design/methodology/approach - An overview of the present state of development of the one-stop shop library search engine, with particular reference to its relationship with the underlying bibliographic databases to which it provides a simplified single interface.
    Findings - The paper finds that the success of federated searching has proved valuable but limited to date in creating a one-stop shop search engine to rival Google Scholar; but the persistent value of the bibliographic databases sitting underneath a federated search system means that a harvesting search engine could well answer the need for a true one-stop search engine for academic and scholarly information.
    Research limitations/implications - This paper is based on the hypothesis that Google's success in providing such an apparently high degree of access to electronic journal services is not what it seems, and that it does not render library discovery tools obsolete. It argues that Google has not diminished the pre-eminent role of library bibliographic databases in mediating access to e-journal text, although this hypothesis needs further research to validate or disprove it.
    Practical implications - The paper affirms the value of bibliographic databases to practitioner librarians and the potential of single interface discovery tools in library practice.
    Originality/value - The paper uses statistics from US LIS sources to shed light on UK discovery tool issues.
  10. Makris, C.; Plegas, Y.; Stamou, S.: Web query disambiguation using PageRank (2012) 0.00
    Abstract
    In this article, we propose new word sense disambiguation strategies for resolving the senses of polysemous query terms issued to Web search engines, and we explore the application of those strategies when used in a query expansion framework. The novelty of our approach lies in the exploitation of the Web page PageRank values as indicators of the significance the different senses of a term carry when employed in search queries. We also aim at scalable query sense resolution techniques that can be applied without loss of efficiency to large data sets such as those on the Web. Our experimental findings validate that the proposed techniques perform more accurately than do the traditional disambiguation strategies and improve the quality of the search results, when involved in query expansion.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.8, S.1581-1592
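PageRank itself, which the article above reuses as a sense-significance signal, can be computed by simple power iteration. The three-page link graph below is an invented toy example, not data from the article.

```python
# Power-iteration PageRank on an invented three-page link graph. The article
# exploits pages' PageRank values as indicators of sense significance; this
# sketch only shows how such values are computed.
def pagerank(links, damping=0.85, iters=100):
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u, outs in links.items():
            share = rank[u] / (len(outs) or n)
            for v in (outs or nodes):      # dangling pages spread rank evenly
                new[v] += damping * share
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

Page "c", which receives links from both "a" and "b", ends up with the highest rank, and the ranks sum to 1 as expected of a probability distribution.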
  11. Web search engine research (2012) 0.00
    Abstract
    "Web Search Engine Research", edited by Dirk Lewandowski, provides an understanding of Web search engines from the unique perspective of Library and Information Science. The book explores a range of topics including retrieval effectiveness, user satisfaction, the evaluation of search interfaces, the impact of search on society, reliability of search results, query log analysis, user guidance in the search process, and the influence of search engine optimization (SEO) on results quality. While research in computer science has mainly focused on technical aspects of search engines, LIS research is centred on users' behaviour when using search engines and how this interaction can be evaluated. LIS research provides a unique perspective in intermediating between the technical aspects, user aspects and their impact on their role in knowledge acquisition. This book is directly relevant to researchers and practitioners in library and information science, computer science, including Web researchers.
    Footnote
    Further review in: Journal of Documentation, 69(2013) no.4, S.594-596 (A. MacFarlane)
  12. Bramer, W.M.: Variation in number of hits for complex searches in Google Scholar (2016) 0.00
    Abstract
    Google Scholar is often used to search for medical literature. The numbers of results reported by Google Scholar far exceed those reported by traditional databases. How reliable are these numbers? Why are the available 1,000 references often not all shown? Methods: For several complex search strategies used in systematic review projects, the number of citations and the total number of versions were calculated. Several search strategies were followed over a two-year period, registering fluctuations in reported search results. Results: Changes in the numbers of reported search results varied enormously between search strategies and dates. Theories for the calculation of the reported and shown numbers of hits could not be confirmed. Conclusions: The number of hits reported in Google Scholar is an unreliable measure; its repeatability is therefore problematic, at least when reproducible results are needed.
    Source
    Journal of the Medical Library Association. 104(2016) no.2, S.143-145
  13. Zhao, Y.; Ma, F.; Xia, X.: Evaluating the coverage of entities in knowledge graphs behind general web search engines : Poster (2017) 0.00
    Abstract
    Web search engines, such as Google and Bing, constantly employ results from knowledge organization and various visualization features to improve their search services. A knowledge graph, a large repository of structured knowledge represented in formal languages such as RDF (Resource Description Framework), is used to support the entity search features of Google and Bing (Demartini, 2016). When a user searches for an entity, such as a person, an organization, or a place, in Google or Bing, a knowledge card is likely to be presented on the right sidebar of the search engine result pages (SERPs). For example, when a user searches for the entity Benedict Cumberbatch on Google, the knowledge card shows basic structured information about this person, including his date of birth, height, spouse, parents, and his movies. The knowledge card, which is used to present the result of entity search, is generated from knowledge graphs. Therefore, the quality of knowledge graphs is essential to the performance of entity search. However, studies on the quality of knowledge graphs from the angle of entity coverage are scant in the literature. This study aims to investigate the coverage of entities in the knowledge graphs behind Google and Bing.
  14. Lewandowski, D.; Kerkmann, F.; Rümmele, S.; Sünkler, S.: ¬An empirical investigation on search engine ad disclosure (2018) 0.00
    Abstract
    This representative study of German search engine users (N = 1,000) focuses on the ability of users to distinguish between organic results and advertisements on Google results pages. We combine questions about Google's business with task-based studies in which users were asked to distinguish between ads and organic results in screenshots of results pages. We find that only a small percentage of users can reliably distinguish between ads and organic results, and that user knowledge of Google's business model is very limited. We conclude that ads are insufficiently labelled as such, and that many users may click on ads assuming that they are selecting organic results.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.420-437
  15. Ke, W.: Decentralized search and the clustering paradox in large scale information networks (2012) 0.00
    Abstract
    Amid the rapid growth of information today is the increasing challenge for people to navigate its magnitude. Dynamics and heterogeneity of large information spaces such as the Web raise important questions about information retrieval in these environments. Collection of all information in advance and centralization of IR operations are extremely difficult, if not impossible, because systems are dynamic and information is distributed. The chapter discusses some of the key issues facing classic information retrieval models and presents a decentralized, organic view of information systems pertaining to search in large scale networks. It focuses on the impact of network structure on search performance and discusses a phenomenon we refer to as the Clustering Paradox, in which the topology of interconnected systems imposes a scalability limit.
  16. Berget, G.; Sandnes, F.E.: Do autocomplete functions reduce the impact of dyslexia on information-searching behavior? : the case of Google (2016) 0.00
    Abstract
    Dyslexic users often do not exhibit spelling and reading skills at a level required to perform effective search. To explore whether autocomplete functions reduce the impact of dyslexia on information searching, 20 participants with dyslexia and 20 controls solved 10 predefined tasks in the search engine Google. Eye-tracking and screen-capture documented the searches. There were no significant differences between the dyslexic students and the controls in time usage, number of queries, query lengths, or the use of the autocomplete function. However, participants with dyslexia made more misspellings and looked less at the screen and the autocomplete suggestions lists while entering the queries. The results indicate that although the autocomplete function supported the participants in the search process, a more extensive use of the autocomplete function would have reduced misspellings. Further, the high tolerance for spelling errors considerably reduced the effect of dyslexia, and may be as important as the autocomplete function.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2320-2328
  17. Milonas, E.: An examination of facets within search engine result pages (2017) 0.00
    Source
    Dimensions of knowledge: facets for knowledge organization. Eds.: R.P. Smiraglia, u. H.-L. Lee
  18. Kucukyilmaz, T.; Cambazoglu, B.B.; Aykanat, C.; Baeza-Yates, R.: ¬A machine learning approach for result caching in web search engines (2017) 0.00
    Abstract
    A commonly used technique for improving search engine performance is result caching. In result caching, precomputed results (e.g., URLs and snippets of best matching pages) of certain queries are stored in fast-access storage. Future occurrences of a query whose results are already stored in the cache can be served directly from the result cache, eliminating the need to process the query using costly computing resources. Although other performance metrics are possible, the main metric for evaluating the success of a result cache is hit rate. In this work, we present a machine learning approach that improves the hit rate of a result cache by exploiting a large number of features extracted from search engine query logs. We then apply the proposed machine learning approach to static, dynamic, and static-dynamic caching. Compared to previous methods in the literature, the proposed approach improves the hit rate of the result cache by up to 0.66%, which corresponds to 9.60% of the potential room for improvement.
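Hit rate, the evaluation metric named in the abstract above, is easy to illustrate with a plain LRU result cache. The query log and cache size below are invented, and the paper's learned admission policies are not reproduced here.

```python
# Toy LRU result cache with hit-rate accounting; the query stream and
# capacity are illustrative assumptions, not the paper's setup.
from collections import OrderedDict

class ResultCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # query -> cached result page
        self.hits = self.misses = 0

    def lookup(self, query, compute):
        if query in self.entries:
            self.entries.move_to_end(query)    # refresh LRU position
            self.hits += 1
            return self.entries[query]
        self.misses += 1
        result = compute(query)                # "costly" backend processing
        self.entries[query] = result
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        return result

    def hit_rate(self):
        return self.hits / (self.hits + self.misses)

cache = ResultCache(capacity=2)
query_log = ["news", "weather", "news", "maps", "news", "weather"]
for q in query_log:
    cache.lookup(q, lambda q: f"results for {q}")
# 2 hits out of 6 lookups -> hit rate of 1/3
```

The learned approach in the paper aims to raise exactly this ratio by deciding, per query, whether caching its results is worthwhile.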
  19. Ozcan, R.; Altingovde, I.S.; Ulusoy, O.: Exploiting navigational queries for result presentation and caching in Web search engines (2011) 0.00
    Abstract
    Caching of query results is an important mechanism for efficiency and scalability of web search engines. Query results are cached and presented in terms of pages, which typically include 10 results each. In navigational queries, users seek a particular website, which would be typically listed at the top ranks (maybe, first or second) by the search engine, if found. For this type of query, caching and presenting results in the 10-per-page manner may waste cache space and network bandwidth. In this article, we propose nonuniform result page models with varying numbers of results for navigational queries. The experimental results show that our approach reduces the cache miss count by up to 9.17% (because of better utilization of cache space). Furthermore, bandwidth usage, which is measured in terms of number of snippets sent, is also reduced by 71% for navigational queries. This means a considerable reduction in the number of transmitted network packets, i.e., a crucial gain especially for mobile-search scenarios. A user study reveals that users easily adapt to the proposed result page model and that the efficiency gains observed in the experiments can be carried over to real-life situations.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.4, S.714-726
  20. Thelwall, M.: Assessing web search engines : a webometric approach (2011) 0.00
    Abstract
    Information Retrieval (IR) research typically evaluates search systems in terms of the standard precision and recall measures, and in terms of F-measures that weight the relative importance of precision and recall (e.g. van Rijsbergen, 1979). All of these assess the extent to which a system returns good matches for a query. In contrast, webometric measures are designed specifically for web search engines: they monitor changes in results over time and probe various aspects of the internal logic by which search engines select the results to be returned. This chapter introduces a range of webometric measurements and illustrates them with case studies of Google, Bing and Yahoo! This is a very fertile area for simple and complex new investigations into search engine results.
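The standard measures named in this abstract can be written down compactly. The result sets below are invented toy data; `beta` is van Rijsbergen's weight, with beta > 1 favouring recall.

```python
# Set-based precision, recall and weighted F-measure for a single query.
def precision_recall_f(retrieved, relevant, beta=1.0):
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
    return precision, recall, f

# 4 documents retrieved, 3 relevant overall, 2 of them retrieved
p, r, f1 = precision_recall_f(["d1", "d2", "d3", "d4"], ["d1", "d2", "d5"])
# p = 0.5, r = 2/3, f1 = 4/7
```

Webometric measures, by contrast, compare whole result lists across time or across engines rather than scoring a single list against relevance judgements.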

Languages

  • e 72
  • d 14

Types

  • a 72
  • el 14
  • m 7
  • s 3
  • r 1
  • x 1