Search (120 results, page 2 of 6)

  • theme_ss:"Suchmaschinen"
  • year_i:[2010 TO 2020}
  1. Gillitzer, B.: Yewno (2017) 0.01
    0.0062400475 = product of:
      0.012480095 = sum of:
        0.012480095 = product of:
          0.02496019 = sum of:
            0.02496019 = weight(_text_:22 in 3447) [ClassicSimilarity], result of:
              0.02496019 = score(doc=3447,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15476047 = fieldWeight in 3447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3447)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2017 10:16:49
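The indented breakdown above (and in the entries that follow) is Lucene's ClassicSimilarity "explain" output: the displayed score is queryWeight × fieldWeight, scaled by the coord factors. A minimal sketch, with the values copied from entry 1 and variable names of my own, reproduces the arithmetic:

```python
import math

# Values copied from the explain output above (term "22" in doc 3447).
freq = 2.0             # term frequency in the matching field
idf = 3.5018296        # 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 3623)
query_norm = 0.046056706
field_norm = 0.03125   # length normalisation encoded for this field
coord = 0.5 * 0.5      # the two coord(1/2) factors in the breakdown

tf = math.sqrt(freq)                       # 1.4142135
query_weight = idf * query_norm            # 0.16128273
field_weight = tf * idf * field_norm       # 0.15476047
raw_score = query_weight * field_weight    # 0.02496019

print(raw_score * coord)                   # ~0.0062400475, the value shown for entry 1
```

The same pattern holds for the remaining entries; only the matched term, freq, idf, and fieldNorm change.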
  2. Bauckhage, C.: Marginalizing over the PageRank damping factor (2014) 0.00
    0.0033826875 = product of:
      0.006765375 = sum of:
        0.006765375 = product of:
          0.01353075 = sum of:
            0.01353075 = weight(_text_:a in 928) [ClassicSimilarity], result of:
              0.01353075 = score(doc=928,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.25478977 = fieldWeight in 928, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=928)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this note, we show how to marginalize over the damping parameter of the PageRank equation so as to obtain a parameter-free version known as TotalRank. Our discussion is meant as a reference and intended to provide a guided tour towards an interesting result that has applications in information retrieval and classification.
    Type
    a
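The abstract above refers to TotalRank, i.e. PageRank with the damping factor integrated out. A small sketch on a toy graph of my own (not code from the paper) illustrates the idea: the closed-form series sum_k v·P^k / ((k+1)(k+2)) should roughly match PageRank averaged over many damping factors.

```python
import numpy as np

# Tiny row-stochastic toy graph (my own example, not taken from the paper).
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0]])
n = P.shape[0]
v = np.full(n, 1.0 / n)  # uniform teleportation vector

def pagerank(alpha):
    # Solve pi = (1 - alpha) * v + alpha * pi @ P exactly.
    return np.linalg.solve(np.eye(n) - alpha * P.T, (1 - alpha) * v)

def totalrank(terms=500):
    # Damping factor integrated out: sum_k v P^k / ((k + 1) * (k + 2)).
    term, total = v.copy(), np.zeros(n)
    for k in range(terms):
        total += term / ((k + 1) * (k + 2))
        term = term @ P
    return total

# The closed form should roughly match PageRank averaged over many damping factors.
print(totalrank())
print(np.mean([pagerank(a) for a in np.linspace(0.0, 0.999, 400)], axis=0))
```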
  3. Kucukyilmaz, T.; Cambazoglu, B.B.; Aykanat, C.; Baeza-Yates, R.: A machine learning approach for result caching in web search engines (2017) 0.00
    0.0030444188 = product of:
      0.0060888375 = sum of:
        0.0060888375 = product of:
          0.012177675 = sum of:
            0.012177675 = weight(_text_:a in 5100) [ClassicSimilarity], result of:
              0.012177675 = score(doc=5100,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22931081 = fieldWeight in 5100, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5100)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A commonly used technique for improving search engine performance is result caching. In result caching, precomputed results (e.g., URLs and snippets of best matching pages) of certain queries are stored in a fast-access storage. The future occurrences of a query whose results are already stored in the cache can be directly served by the result cache, eliminating the need to process the query using costly computing resources. Although other performance metrics are possible, the main performance metric for evaluating the success of a result cache is hit rate. In this work, we present a machine learning approach to improve the hit rate of a result cache by leveraging a large number of features extracted from search engine query logs. We then apply the proposed machine learning approach to static, dynamic, and static-dynamic caching. Compared to the previous methods in the literature, the proposed approach improves the hit rate of the result cache up to 0.66%, which corresponds to 9.60% of the potential room for improvement.
    Type
    a
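As an illustration of the hit-rate metric discussed above, here is a minimal sketch that replays a synthetic, skewed query stream against a plain LRU result cache; the paper's learned admission models are not reproduced here.

```python
from collections import OrderedDict
import random

def hit_rate(queries, capacity):
    """Replay a query stream against an LRU result cache and report the hit rate."""
    cache, hits = OrderedDict(), 0
    for q in queries:
        if q in cache:
            hits += 1
            cache.move_to_end(q)           # refresh recency on a hit
        else:
            cache[q] = "cached result"     # stand-in for URLs/snippets
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used entry
    return hits / len(queries)

# Synthetic, skewed query stream: a few head queries repeat, the tail is mostly unique.
random.seed(0)
stream = [f"q{int(random.paretovariate(1.2))}" for _ in range(50_000)]
print(hit_rate(stream, capacity=1_000))
```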
  4. Berri, J.; Benlamri, R.: Context-aware mobile search engine (2012) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 104) [ClassicSimilarity], result of:
              0.011481222 = score(doc=104,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 104, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=104)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Exploiting context information in a web search engine helps fine-tuning web services and applications to deliver custom-made information to end users. While context, including user and environment information, cannot be exploited efficiently in the wired Internet interaction type, it is becoming accessible with the mobile web where users have an intimate relationship with their handsets. In this type of interaction, context plays a significant role enhancing information search and therefore, allowing a search engine to detect relevant content in all digital forms and formats. This chapter proposes a context model and an architecture that promote integration of context information for individuals and social communities to add value to their interaction with the mobile web. The architecture relies on efficient knowledge management of multimedia resources for a wide range of applications and web services. The research is illustrated with a corporate case study showing how efficient context integration improves usability of a mobile search engine.
    Type
    a
  5. Vaughan, L.; Romero-Frías, E.: Web search volume as a predictor of academic fame : an exploration of Google trends (2014) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 1233) [ClassicSimilarity], result of:
              0.011481222 = score(doc=1233,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 1233, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1233)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Searches conducted on web search engines reflect the interests of users and society. Google Trends, which provides information about the queries searched by users of the Google web search engine, is a rich data source from which a wealth of information can be mined. We investigated the possibility of using web search volume data from Google Trends to predict academic fame. As queries are language-dependent, we studied universities from two countries with different languages, the United States and Spain. We found a significant correlation between the search volume of a university name and the university's academic reputation or fame. We also examined the effect of some Google Trends features, namely, limiting the search to a specific country or topic category on the search volume data. Finally, we examined the effect of university sizes on the correlations found to gain a deeper understanding of the nature of the relationships.
    Type
    a
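A hedged sketch of the kind of computation behind such a study: a rank correlation between search volume and reputation rank, using made-up numbers and SciPy's spearmanr; nothing here is taken from the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical data: yearly search volume for five university names and their
# positions in a reputation ranking (lower rank = better reputation).
search_volume = [820_000, 640_000, 310_000, 150_000, 90_000]
reputation_rank = [1, 2, 4, 3, 5]

rho, p_value = spearmanr(search_volume, reputation_rank)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strongly negative rho would indicate that higher search volume goes with
# better (numerically smaller) reputation ranks.
```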
  6. Vidinli, I.B.; Ozcan, R.: New query suggestion framework and algorithms : a case study for an educational search engine (2016) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 3185) [ClassicSimilarity], result of:
              0.011481222 = score(doc=3185,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 3185, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3185)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Query suggestion is generally an integrated part of web search engines. In this study, we first redefine and reduce the query suggestion problem as "comparison of queries". We then propose a general modular framework for query suggestion algorithm development. We also develop new query suggestion algorithms which are used in our proposed framework, exploiting query, session and user features. As a case study, we use query logs of a real educational search engine that targets K-12 students in Turkey. We also exploit educational features (course, grade) in our query suggestion algorithms. We test our framework and algorithms over a set of queries by an experiment and demonstrate a 66-90% statistically significant increase in relevance of query suggestions compared to a baseline method.
    Type
    a
  7. Luo, M.M.; Nahl, D.: Let's Google : uncertainty and bilingual search (2019) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 5363) [ClassicSimilarity], result of:
              0.011481222 = score(doc=5363,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 5363, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5363)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study applies Kuhlthau's Information Search Process stage (ISP) model to understand bilingual users' Internet search experience. We conducted a quasi-field experiment with 30 bilingual searchers, and the results suggested that the ISP model was applicable in studying searchers' information retrieval behavior in simple search tasks. However, searchers' emotional responses differed from those of the ISP model for a complex task. By testing searchers using different search strategies, the results suggested that search engines with multilanguage search functions provide an advantage for bilingual searchers in the Internet's multilingual environment. The findings showed that when searchers used a search engine as a tool for problem solving, they might experience different feelings in each ISP stage than when searching for information for a term paper using a library. The results echo other research findings that indicate that information seeking is a multifaceted phenomenon.
    Type
    a
  8. Ortiz-Cordova, A.; Jansen, B.J.: Classifying web search queries to identify high revenue generating customers (2012) 0.00
    0.0026849252 = product of:
      0.0053698504 = sum of:
        0.0053698504 = product of:
          0.010739701 = sum of:
            0.010739701 = weight(_text_:a in 279) [ClassicSimilarity], result of:
              0.010739701 = score(doc=279,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20223314 = fieldWeight in 279, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=279)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Traffic from search engines is important for most online businesses, with the majority of visitors to many websites being referred by search engines. Therefore, an understanding of this search engine traffic is critical to the success of these websites. Understanding search engine traffic means understanding the underlying intent of the query terms and the corresponding user behaviors of searchers submitting keywords. In this research, using 712,643 query keywords from a popular Spanish music website relying on contextual advertising as its business model, we use a k-means clustering algorithm to categorize the referral keywords with similar characteristics of onsite customer behavior, including attributes such as clickthrough rate and revenue. We identified 6 clusters of consumer keywords. Clusters range from a large number of users who are low impact to a small number of high impact users. We demonstrate how online businesses can leverage this segmentation clustering approach to provide a more tailored consumer experience. Implications are that businesses can effectively segment customers to develop better business models to increase advertising conversion rates.
    Type
    a
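A minimal sketch of the clustering step described above, using synthetic click-through-rate and revenue features and scikit-learn's KMeans with six clusters; the feature construction is my own stand-in, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for per-keyword behaviour: click-through rate and revenue per visit.
rng = np.random.default_rng(0)
ctr = rng.beta(2, 20, size=5_000)                  # mostly low CTR, a few high-impact keywords
revenue = rng.gamma(2.0, 0.05, size=5_000) * ctr   # revenue loosely tied to CTR
X = StandardScaler().fit_transform(np.column_stack([ctr, revenue]))

# Six clusters, matching the number of consumer-keyword segments reported above.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
for c in range(6):
    members = labels == c
    print(c, int(members.sum()), round(float(ctr[members].mean()), 4), round(float(revenue[members].mean()), 4))
```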
  9. Kruschwitz, U.; Lungley, D.; Albakour, M-D.; Song, D.: Deriving query suggestions for site search (2013) 0.00
    0.0026742492 = product of:
      0.0053484985 = sum of:
        0.0053484985 = product of:
          0.010696997 = sum of:
            0.010696997 = weight(_text_:a in 1085) [ClassicSimilarity], result of:
              0.010696997 = score(doc=1085,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20142901 = fieldWeight in 1085, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1085)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Modern search engines have been moving away from simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features are now integral parts of web search engines. However, generating good query modification suggestions remains a challenging issue. Query log analysis is one of the major strands of work in this direction. Although much research has been performed on query logs collected on the web as a whole, query log analysis to enhance search on smaller and more focused collections has attracted less attention, despite its increasing practical importance. In this article, we report on a systematic study of different query modification methods applied to a substantial query log collected on a local website that already uses an interactive search engine. We conducted experiments in which we asked users to assess the relevance of potential query modification suggestions that have been constructed using a range of log analysis methods and different baseline approaches. The experimental results demonstrate the usefulness of log analysis to extract query modification suggestions. Furthermore, our experiments demonstrate that a more fine-grained approach than grouping search requests into sessions allows for extraction of better refinement terms from query log files.
    Type
    a
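A toy sketch of one log-analysis idea mentioned above: group a query log into per-user sessions with a time-gap cutoff and count which follow-up queries occur after each query, treating frequent successors as refinement suggestions. The log entries, gap threshold, and field layout are invented for illustration.

```python
from collections import defaultdict
from itertools import groupby

# Toy query log: (user, timestamp in seconds, query). In a real log, sessions are
# usually cut when the gap between consecutive queries exceeds a threshold.
log = [
    ("u1", 0,    "library opening hours"),
    ("u1", 40,   "library opening hours saturday"),
    ("u2", 10,   "exam timetable"),
    ("u2", 2000, "campus map"),               # long gap: treated as a new session
    ("u3", 5,    "library opening hours"),
    ("u3", 70,   "library opening hours holidays"),
]
SESSION_GAP = 30 * 60  # 30 minutes

suggestions = defaultdict(lambda: defaultdict(int))
for _, entries in groupby(sorted(log), key=lambda e: e[0]):   # group by user
    entries = list(entries)
    for (_, t1, q1), (_, t2, q2) in zip(entries, entries[1:]):
        if t2 - t1 <= SESSION_GAP and q1 != q2:
            suggestions[q1][q2] += 1          # q2 is a candidate refinement of q1

for query, refinements in suggestions.items():
    best = max(refinements, key=refinements.get)
    print(f"{query!r} -> suggest {best!r}")
```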
  10. Sarigil, E.; Sengor Altingovde, I.; Blanco, R.; Barla Cambazoglu, B.; Ozcan, R.; Ulusoy, Ö.: Characterizing, predicting, and handling web search queries that match very few or no results (2018) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 4039) [ClassicSimilarity], result of:
              0.010148063 = score(doc=4039,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 4039, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4039)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A non-negligible fraction of user queries end up with very few or even no matching results in leading commercial web search engines. In this work, we provide a detailed characterization of such queries and show that search engines try to improve such queries by showing the results of related queries. Through a user study, we show that these query suggestions are usually perceived as relevant. Also, through a query log analysis, we show that users are dissatisfied after submitting a query that matches no results at least 88.5% of the time. As a first step towards solving these no-answer queries, we devised a large number of features that can be used to identify such queries and built machine-learning models. These models can be useful for scenarios such as mobile or meta-search, where identifying a query that will retrieve no results at the client device (i.e., even before submitting it to the search engine) may yield gains in terms of bandwidth usage, power consumption, and/or monetary costs. Experiments over query logs indicate that, despite the heavy skew in class sizes, our models achieve good prediction quality, with accuracy (in terms of area under the curve) up to 0.95.
    Type
    a
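A hedged sketch of the modelling setup described above: synthetic query features, a skewed no-result label, a logistic-regression classifier, and AUC as the quality measure. The features and the label-generating rule are assumptions for illustration, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for query features: length in terms, share of rare terms,
# and whether the query contains digits. Labels mark queries that return no results.
rng = np.random.default_rng(1)
n = 20_000
length = rng.integers(1, 10, size=n)
rare_share = rng.random(n)
has_digits = rng.integers(0, 2, size=n)
# Heavily skewed classes, as in the study: no-result queries are a small minority.
p_no_result = 1 / (1 + np.exp(-(-4.0 + 2.5 * rare_share + 0.2 * length)))
y = rng.random(n) < p_no_result
X = np.column_stack([length, rare_share, has_digits])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```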
  11. Waller, V.: Not just information : who searches for what on the search engine Google? (2011) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 4373) [ClassicSimilarity], result of:
              0.00994303 = score(doc=4373,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 4373, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4373)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on a transaction log analysis of the type and topic of search queries entered into the search engine Google (Australia). Two aspects, in particular, set this apart from previous studies: the sampling and analysis take account of the distribution of search queries, and lifestyle information of the searcher was matched with each search query. A surprising finding was that there was no observed statistically significant difference in search type or topics for different segments of the online population. It was found that queries about popular culture and Ecommerce accounted for almost half of all search engine queries and that half of the queries were entered with a particular Website in mind. The findings of this study also suggest that the Internet search engine is not only an interface to information or a shortcut to Websites, it is equally a site of leisure. This study has implications for the design and evaluation of search engines as well as our understanding of search engine use.
    Type
    a
  12. Thelwall, M.: Assessing web search engines : a webometric approach (2011) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 10) [ClassicSimilarity], result of:
              0.00994303 = score(doc=10,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 10, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=10)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information Retrieval (IR) research typically evaluates search systems in terms of the standard precision, recall and F-measures, the last of which weights the relative importance of precision and recall (e.g. van Rijsbergen, 1979). All of these assess the extent to which the system returns good matches for a query. In contrast, webometric measures are designed specifically for web search engines and monitor changes in results over time as well as various aspects of the internal logic of the way in which search engines select the results to be returned. This chapter introduces a range of webometric measurements and illustrates them with case studies of Google, Bing and Yahoo! This is a very fertile area for simple and complex new investigations into search engine results.
    Source
    Innovations in information retrieval: perspectives for theory and practice. Eds.: A. Foster and P. Rafferty
    Type
    a
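Since the chapter starts from precision, recall, and the F-measure that weights their relative importance, a small worked sketch of those set-based measures (with an invented result list) may help:

```python
def precision_recall_f(retrieved, relevant, beta=1.0):
    """Standard set-based IR measures; beta > 1 weights recall more heavily than precision."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f = (1 + beta**2) * p * r / (beta**2 * p + r) if (p + r) else 0.0
    return p, r, f

# Ten results returned for a query, six of them among the eight relevant documents.
retrieved = [f"d{i}" for i in range(10)]
relevant = ["d0", "d1", "d2", "d3", "d4", "d5", "d20", "d21"]
print(precision_recall_f(retrieved, relevant))          # precision 0.6, recall 0.75, F1 ~0.67
print(precision_recall_f(retrieved, relevant, beta=2))  # recall-weighted F2
```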
  13. Ortega, J.L.; Aguillo, I.F.: Microsoft academic search and Google scholar citations : comparative analysis of author profiles (2014) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 1284) [ClassicSimilarity], result of:
              0.00994303 = score(doc=1284,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 1284, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1284)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article offers a comparative analysis of the personal profiling capabilities of the two most important free citation-based academic search engines, namely, Microsoft Academic Search (MAS) and Google Scholar Citations (GSC). Author profiles can be useful for evaluation purposes once the advantages and the shortcomings of these services are described and taken into consideration. In total, 771 personal profiles appearing in both the MAS and the GSC databases were analyzed. Results show that the GSC profiles include more documents and citations than those in MAS but with a strong bias toward the information and computing sciences, whereas the MAS profiles are disciplinarily better balanced. MAS shows technical problems such as a higher number of duplicated profiles and a lower updating rate than GSC. It is concluded that both services could be used for evaluation proposes only if they are applied along with other citation indices as a way to supplement that information.
    Type
    a
  14. Haynes, M.: Your Google algorithm cheat sheet : Panda, Penguin, and Hummingbird (2013) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 2542) [ClassicSimilarity], result of:
              0.00994303 = score(doc=2542,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 2542, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2542)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    If you're reading the Moz blog, then you probably have a decent understanding of Google and its algorithm changes. However, there is probably a good percentage of the Moz audience that is still confused about the effects that Panda, Penguin, and Hummingbird can have on your site. I did write a post last year about the main differences between Penguin and a Manual Unnatural Links Penalty, and if you haven't read that, it'll give you a good primer. The point of this article is to explain very simply what each of these algorithms is meant to do. It is hopefully a good reference that you can point your clients to if you want to explain an algorithm change and not overwhelm them with technical details about 301s, canonicals, crawl errors, and other confusing SEO terminologies.
  15. Joint, N.: The one-stop shop search engine : a transformational library technology? ANTAEUS (2010) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 4201) [ClassicSimilarity], result of:
              0.009567685 = score(doc=4201,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 4201, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4201)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to form one of a series which will give an overview of so-called "transformational" areas of digital library technology. The aim will be to assess how much real transformation these applications are bringing about, in terms of creating genuine user benefit and also changing everyday library practice.
    Design/methodology/approach - An overview of the present state of development of the one-stop shop library search engine, with particular reference to its relationship with the underlying bibliographic databases to which it provides a simplified single interface.
    Findings - The paper finds that the success of federated searching has proved valuable but limited to date in creating a one-stop shop search engine to rival Google Scholar; but the persistent value of the bibliographic databases sitting underneath a federated search system means that a harvesting search engine could well answer the need for a true one-stop search engine for academic and scholarly information.
    Research limitations/implications - This paper is based on the hypothesis that Google's success in providing such an apparently high degree of access to electronic journal services is not what it seems, and that it does not render library discovery tools obsolete. It argues that Google has not diminished the pre-eminent role of library bibliographic databases in mediating access to e-journal text, although this hypothesis needs further research to validate or disprove it.
    Practical implications - The paper affirms the value of bibliographic databases to practitioner librarians and the potential of single interface discovery tools in library practice.
    Originality/value - The paper uses statistics from US LIS sources to shed light on UK discovery tool issues.
    Type
    a
  16. Truran, M.; Schmakeit, J.-F.; Ashman, H.: ¬The effect of user intent on the stability of search engine results (2011) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 4478) [ClassicSimilarity], result of:
              0.009471525 = score(doc=4478,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 4478, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4478)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Previous work has established that search engine queries can be classified according to the intent of the searcher (i.e., why is the user searching, what specifically do they intend to do). In this article, we describe an experiment in which four sets of queries, each set representing a different user intent, are repeatedly submitted to three search engines over a period of 60 days. Using a variety of measurements, we describe the overall stability of the search engine results recorded for each group. Our findings suggest that search engine results for informational queries are significantly more stable than the results obtained using transactional, navigational, or commercial queries.
    Type
    a
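One simple way to quantify the result stability discussed above is the overlap of the top-k results returned for the same query on different days; a minimal sketch with invented URL lists:

```python
def top_k_overlap(results_a, results_b, k=10):
    """Jaccard overlap of the top-k URLs returned for the same query on two days."""
    a, b = set(results_a[:k]), set(results_b[:k])
    return len(a & b) / len(a | b)

# Hypothetical top-5 results for one query captured on two different days.
day_1 = ["u1", "u2", "u3", "u4", "u5"]
day_2 = ["u1", "u3", "u2", "u6", "u7"]
print(top_k_overlap(day_1, day_2, k=5))   # 3 shared URLs out of 7 distinct -> ~0.43
```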
  17. Hodson, H.: Google's fact-checking bots build vast knowledge bank (2014) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 1700) [ClassicSimilarity], result of:
              0.009374379 = score(doc=1700,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 1700, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1700)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world's facts. Google is building the largest store of knowledge in human history - and it's doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.
    Type
    a
  18. Bressan, M.; Peserico, E.: Choose the damping, choose the ranking? (2010) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 2563) [ClassicSimilarity], result of:
              0.009374379 = score(doc=2563,freq=24.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 2563, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    To what extent can changes in PageRank's damping factor affect node ranking? We prove that, at least on some graphs, the top k nodes assume all possible k! orderings as the damping factor varies, even if it varies within an arbitrarily small interval (e.g. [0.84999, 0.85001]). Thus, the rank of a node for a given (finite set of discrete) damping factor(s) provides very little information about the rank of that node as the damping factor varies over a continuous interval. We bypass this problem by introducing lineage analysis and proving that there is a simple condition, with a "natural" interpretation independent of PageRank, that allows one to verify "in one shot" if a node outperforms another simultaneously for all damping factors and all damping variables (informally, time variant damping factors). The novel notions of strong rank and weak rank of a node provide a measure of the fuzziness of the rank of that node, of the objective orderability of a graph's nodes, and of the quality of results returned by different ranking algorithms based on the random surfer model. We deploy our analytical tools on a 41M node snapshot of the .it Web domain and on a 0.7M node snapshot of the CiteSeer citation graph. Among other findings, we show that rank is indeed relatively stable in both graphs; that "classic" PageRank (d=0.85) marginally outperforms Weighted In-degree (d->0), mainly due to its ability to ferret out "niche" items; and that, for both the Web and CiteSeer, the ideal damping factor appears to be 0.8-0.9 to obtain those items of high importance to at least one (model of randomly surfing) user, but only 0.5-0.6 to obtain those items important to every (model of randomly surfing) user.
    Type
    a
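A small sketch of the phenomenon described above, using the same kind of toy PageRank setup as in the sketch after entry 2: compute PageRank at several damping factors and compare the induced orderings (the graph is my own example, not one from the paper).

```python
import numpy as np

# Small row-stochastic toy graph (my own example).
P = np.array([[0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0, 0.0, 0.0]])
n = P.shape[0]
v = np.full(n, 1.0 / n)

def pagerank(alpha):
    # Solve pi = (1 - alpha) * v + alpha * pi @ P exactly.
    return np.linalg.solve(np.eye(n) - alpha * P.T, (1 - alpha) * v)

# How stable is the ordering as the damping factor varies?
for alpha in (0.05, 0.5, 0.85, 0.99):
    ranking = np.argsort(-pagerank(alpha))
    print(f"d = {alpha:.2f}: nodes ranked {ranking.tolist()}")
```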
  19. Hoeber, O.: Human-centred Web search (2012) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 102) [ClassicSimilarity], result of:
              0.009076704 = score(doc=102,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 102, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=102)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    People commonly experience difficulties when searching the Web, arising from an incomplete knowledge regarding their information needs, an inability to formulate accurate queries, and a low tolerance for considering the relevance of the search results. While simple and easy to use interfaces have made Web search universally accessible, they provide little assistance for people to overcome the difficulties they experience when their information needs are more complex than simple fact-verification. In human-centred Web search, the purpose of the search engine expands from a simple information retrieval engine to a decision support system. People are empowered to take an active role in the search process, with the search engine supporting them in developing a deeper understanding of their information needs, assisting them in crafting and refining their queries, and aiding them in evaluating and exploring the search results. In this chapter, recent research in this domain is outlined and discussed.
    Type
    a
  20. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines (2012) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 106) [ClassicSimilarity], result of:
              0.009076704 = score(doc=106,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 106, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This chapter presents a theoretical framework for evaluating next generation search engines. The author focuses on search engines whose results presentation is enriched with additional information and does not merely present the usual list of "10 blue links," that is, of ten links to results, accompanied by a short description. While Web search is used as an example here, the framework can easily be applied to search engines in any other area. The framework not only addresses the results presentation, but also takes into account an extension of the general design of retrieval effectiveness tests. The chapter examines the ways in which this design might influence the results of such studies and how a reliable test is best designed.
    Type
    a

Languages

  • e 72
  • d 46

Types

  • a 106
  • el 22
  • m 6
  • r 2
  • s 2
  • x 1