Search (12 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Suchmaschinen"
  • theme_ss:"Suchtaktik"
  1. Morville, P.: Ambient findability : what we find changes who we become (2005) 0.02
    0.018344399 = product of:
      0.027516596 = sum of:
        0.015604332 = weight(_text_:im in 312) [ClassicSimilarity], result of:
          0.015604332 = score(doc=312,freq=6.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.10819038 = fieldWeight in 312, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.015625 = fieldNorm(doc=312)
        0.011912264 = product of:
          0.03573679 = sum of:
            0.03573679 = weight(_text_:retrieval in 312) [ClassicSimilarity], result of:
              0.03573679 = score(doc=312,freq=24.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23154683 = fieldWeight in 312, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.015625 = fieldNorm(doc=312)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
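    The indented figures above are Lucene ClassicSimilarity "explain" output for the query terms "im" and "retrieval" in document 312. As a rough illustration of how the numbers fit together - a minimal sketch, not Lucene's API, with helper names of our own - the arithmetic can be reproduced as follows (tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1, queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm):
    import math

    def classic_tf(freq):
        # ClassicSimilarity term frequency: square root of the raw term count
        return math.sqrt(freq)

    def classic_idf(doc_freq, max_docs):
        # inverse document frequency: ln(maxDocs / (docFreq + 1)) + 1
        return math.log(max_docs / (doc_freq + 1)) + 1

    def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
        idf = classic_idf(doc_freq, max_docs)
        query_weight = idf * query_norm                      # queryWeight
        field_weight = classic_tf(freq) * idf * field_norm   # fieldWeight
        return query_weight * field_weight

    # values copied from the explain tree for document 312 above
    query_norm = 0.051022716
    im = clause_score(6.0, 7115, 44218, query_norm, 0.015625)
    retrieval = clause_score(24.0, 5836, 44218, query_norm, 0.015625)

    # the retrieval clause sits in a nested query scaled by coord(1/3);
    # the sum of both clauses is then scaled by coord(2/3)
    print((im + retrieval / 3) * (2 / 3))   # ~0.0183444, the score shown for result 1
    The same recipe, applied to the frequencies and norms shown in their respective explain trees, reproduces the scores of the other results below.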
    
    Footnote
    The second chapter ("A Brief History of Wayfinding") describes how people find their way in their surroundings. This is interesting insofar as it does not start with information systems or the WWW but presents general findings, for example on orientation in natural environments. Many typical behaviours of information-system users can be explained in this way. As interesting as the topic is, however, the chapter unfortunately reads like a compilation of second-hand information. It is obvious that Morville has not done research on these topics himself but has written up the results of others (albeit in an appealing way). This impression is confirmed in later chapters: a fluently written text that in places lacks substance. Chapter three, "Information Interaction", opens with Calvin Mooers' central statement from 1959: "An information retrieval system will tend not to be used whenever it is more painful and troublesome for a customer to have information than for him not to have it." This should indeed always be kept in mind when building information systems; the list of systems that have failed at precisely this hurdle is long. The rest of the chapter introduces some central concepts of information science (the definition of information, an outline of information retrieval, knowledge representation, information-seeking behaviour), though without any claim to completeness. Rather, the author appears to pick out the concepts that suit his purpose and to set competing approaches aside. Just one example: the section "Information Interaction" presents Marcia J. Bates's concept of berrypicking in some detail, but sells it almost as the only model of its kind, which it is far from being. Of course it cannot be this book's task to give a complete overview of all theories of human search behaviour (that has been done exemplarily elsewhere), but at least a pointer to a few central approaches would have been appropriate. By this chapter at the latest it becomes clear that the book is definitely not aimed at information scientists, who on the one hand should already be familiar with the basics and on the other would expect rather more depth. This raises the question - and it is central to the assessment of the work as a whole - of who the book is actually written for.
    The chapter on the "Sociosemantic Web" outlines the broad basics of classification theory and then goes into newer approaches to indexing the Web, such as social tagging and folksonomies, in some detail. This chapter, too, offers an overview rather than deeper material for readers already familiar with the subject. The final chapter is devoted to how decisions are made, to network culture and information overload, and finally arrives at "inspired decisions" - decisions based both on "factual information" (the classic ingredients of "informed decisions") and on information drawn from networks, such as recommendations from friends or community members of some kind. In summary, what is most remarkable about Morville's text is that, after several years in which Web search was treated as a problem of searching unstructured data, approaches that fall back on classical indexing instruments are again being advocated. They are not meant to be applied in their original form, since users cannot be expected to engage with the corresponding rules, but even behind a folksonomy, which at first glance looks chaotic, the principle of classification can be recognised. For these modern approaches to succeed, however, information professionals are urgently needed who combine the "best of both worlds" to build modern information systems that are optimal for the user. The overall verdict on the book follows the criticisms already made of individual chapters: above all, the book stays too close to the surface and feels somehow "written up" rather than the result of deep engagement with the subject. As an introduction to emerging technologies around search, however, it is quite suitable - and the text is certainly easy to read.
    LCSH
    Information storage and retrieval systems
    RSWK
    Information Retrieval (GBV)
    Information Retrieval / Ubiquitous Computing (GBV)
    Information Retrieval / Datenbanksystem / Suchmaschine (GBV)
    Information Retrieval / Datenbanksystem (BVB)
  2. Rieh, S.Y.; Kim, Y.-M.; Markey, K.: Amount of invested mental effort (AIME) in online searching (2012) 0.02
    0.015723832 = product of:
      0.047171496 = sum of:
        0.047171496 = product of:
          0.07075724 = sum of:
            0.04496643 = weight(_text_:online in 2726) [ClassicSimilarity], result of:
              0.04496643 = score(doc=2726,freq=6.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.29038906 = fieldWeight in 2726, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2726)
            0.025790809 = weight(_text_:retrieval in 2726) [ClassicSimilarity], result of:
              0.025790809 = score(doc=2726,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 2726, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2726)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This research investigates how people's perceptions of information retrieval (IR) systems, their perceptions of search tasks, and their perceptions of self-efficacy influence the amount of invested mental effort (AIME) they put into using two different IR systems: a Web search engine and a library system. It also explores the impact of mental effort on an end user's search experience. To assess AIME in online searching, two experiments were conducted using these methods: Experiment 1 relied on self-reports and Experiment 2 employed the dual-task technique. In both experiments, data were collected through search transaction logs, a pre-search background questionnaire, a post-search questionnaire and an interview. Important findings are these: (1) subjects invested greater mental effort searching a library system than searching the Web; (2) subjects put little effort into Web searching because of their high sense of self-efficacy in their searching ability and their perception of the easiness of the Web; (3) subjects did not recognize that putting mental effort into searching was something needed to improve the search results; and (4) data collected from multiple sources proved to be effective for assessing mental effort in online searching.
  3. Drabenstott, K.M.: Web search strategies (2000) 0.01
    0.01072981 = product of:
      0.03218943 = sum of:
        0.03218943 = product of:
          0.048284143 = sum of:
            0.020632647 = weight(_text_:retrieval in 1188) [ClassicSimilarity], result of:
              0.020632647 = score(doc=1188,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13368362 = fieldWeight in 1188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1188)
            0.027651496 = weight(_text_:22 in 1188) [ClassicSimilarity], result of:
              0.027651496 = score(doc=1188,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.15476047 = fieldWeight in 1188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1188)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Surfing the World Wide Web used to be cool, dude, real cool. But things have gotten hot - so hot that finding something useful on the Web is no longer cool. It is suffocating Web searchers in the smoke and debris of mountain-sized lists of hits, decisions about which search engines they should use, whether they will get lost in the dizzying maze of a subject directory, use the right syntax for the search engine at hand, enter keywords that are likely to retrieve hits on the topics they have in mind, or enlist a browser that has sufficient functionality to display the most promising hits. When it comes to Web searching, in a few short years we have gone from the cool image of surfing the Web into the frying pan of searching the Web. We can turn down the heat by rethinking what Web searchers are doing and introduce some order into the chaos. Web search strategies that are tool-based - oriented to specific Web searching tools such as search engines, subject directories, and meta search engines - have been widely promoted, and these strategies are just not working. It is time to dissect what Web searching tools expect from searchers and adjust our search strategies to these new tools. This discussion offers Web searchers help in the form of search strategies that are based on strategies that librarians have been using for a long time to search commercial information retrieval systems like Dialog, NEXIS, Wilsonline, FirstSearch, and Data-Star.
    Date
    22. 9.1997 19:16:05
  4. Hoeber, O.: Human-centred Web search (2012) 0.00
    0.0048631616 = product of:
      0.014589485 = sum of:
        0.014589485 = product of:
          0.043768454 = sum of:
            0.043768454 = weight(_text_:retrieval in 102) [ClassicSimilarity], result of:
              0.043768454 = score(doc=102,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2835858 = fieldWeight in 102, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=102)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    People commonly experience difficulties when searching the Web, arising from an incomplete knowledge regarding their information needs, an inability to formulate accurate queries, and a low tolerance for considering the relevance of the search results. While simple and easy to use interfaces have made Web search universally accessible, they provide little assistance for people to overcome the difficulties they experience when their information needs are more complex than simple fact-verification. In human-centred Web search, the purpose of the search engine expands from a simple information retrieval engine to a decision support system. People are empowered to take an active role in the search process, with the search engine supporting them in developing a deeper understanding of their information needs, assisting them in crafting and refining their queries, and aiding them in evaluating and exploring the search results. In this chapter, recent research in this domain is outlined and discussed.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  5. Zorn, P.: Advanced web searching : tricks of the trade (1996) 0.00
    0.004615356 = product of:
      0.013846068 = sum of:
        0.013846068 = product of:
          0.0415382 = sum of:
            0.0415382 = weight(_text_:online in 5142) [ClassicSimilarity], result of:
              0.0415382 = score(doc=5142,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2682499 = fieldWeight in 5142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5142)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Online. 20(1996) no.3, S.15-28
  6. Notess, G.R.: Internet search techniques and strategies (1997) 0.00
    0.004615356 = product of:
      0.013846068 = sum of:
        0.013846068 = product of:
          0.0415382 = sum of:
            0.0415382 = weight(_text_:online in 389) [ClassicSimilarity], result of:
              0.0415382 = score(doc=389,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2682499 = fieldWeight in 389, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0625 = fieldNorm(doc=389)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Online. 21(1997) no.4, S.63-66
  7. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.00
    0.004585033 = product of:
      0.013755098 = sum of:
        0.013755098 = product of:
          0.041265294 = sum of:
            0.041265294 = weight(_text_:retrieval in 4497) [ClassicSimilarity], result of:
              0.041265294 = score(doc=4497,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 4497, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4497)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Key Features - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly evolving state of the Internet and Internet technologies - it is not about technological 'tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in the web skills of novice users as well as practical techniques for teaching such users.
    The Authors: Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer.
    Readership: The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet.
    Contents: Fundamentals of information retrieval from the Internet - why learn web searching technique; types of information requests; patterns for information retrieval; leveraging the technology: Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable': Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies: Query reformulation and longer-term strategies - how to interact with your search engine; foraging for information; long-term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work: Assessing the quality of results - how to assess and ensure quality: The novice user and teaching internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second-hand' web information.
  8. White, R.W.; Jose, J.M.; Ruthven, I.: ¬A task-oriented study on the influencing effects of query-biased summarisation in web searching (2003) 0.00
    0.004052635 = product of:
      0.012157904 = sum of:
        0.012157904 = product of:
          0.03647371 = sum of:
            0.03647371 = weight(_text_:retrieval in 1081) [ClassicSimilarity], result of:
              0.03647371 = score(doc=1081,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23632148 = fieldWeight in 1081, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1081)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The aim of the work described in this paper is to evaluate the influencing effects of query-biased summaries in web searching. For this purpose, a summarisation system has been developed, and a summary tailored to the user's query is generated automatically for each document retrieved. The system aims to provide a better means of assessing document relevance than the titles or abstracts typical of many web search result lists. By visiting each result page at retrieval time, the system gives the user an idea of the current page content and thus deals with the dynamic nature of the web. To examine the effectiveness of this approach, a task-oriented, comparative evaluation between four different web retrieval systems was performed; two that use query-biased summarisation, and two that use the standard ranked titles/abstracts approach. The results from the evaluation indicate that query-biased summarisation techniques appear to be more useful and effective in helping users gauge document relevance than the traditional ranked titles/abstracts approach. The same methodology was used to compare the effectiveness of two of the web's major search engines: AltaVista and Google.
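    Query-biased summarisation of the kind evaluated in this study can be approximated by scoring each sentence of a retrieved page by its overlap with the query and returning the best-scoring sentences. The sketch below is only a minimal illustration of that idea; it is not the authors' system, and the tokenisation and scoring choices are assumptions of ours.
    import re

    def query_biased_summary(page_text, query, max_sentences=1):
        # rank sentences by how many distinct query terms they contain
        query_terms = set(re.findall(r"\w+", query.lower()))
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", page_text) if s.strip()]

        def overlap(sentence):
            return len(set(re.findall(r"\w+", sentence.lower())) & query_terms)

        ranked = sorted(sentences, key=overlap, reverse=True)
        chosen = set(ranked[:max_sentences])
        # present the chosen sentences in their original document order
        return " ".join(s for s in sentences if s in chosen)

    print(query_biased_summary(
        "Web searching is hard. Query-biased summaries show the sentences that match "
        "the query. They are built at retrieval time by fetching the result page.",
        "query-biased summaries"))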
  9. Aloteibi, S.; Sanderson, M.: Analyzing geographic query reformulation : an exploratory study (2014) 0.00
    0.0038404856 = product of:
      0.011521457 = sum of:
        0.011521457 = product of:
          0.03456437 = sum of:
            0.03456437 = weight(_text_:22 in 1177) [ClassicSimilarity], result of:
              0.03456437 = score(doc=1177,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.19345059 = fieldWeight in 1177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1177)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    26. 1.2014 18:48:22
  10. Sachse, J.: ¬The influence of snippet length on user behavior in mobile web search (2019) 0.00
    0.0038404856 = product of:
      0.011521457 = sum of:
        0.011521457 = product of:
          0.03456437 = sum of:
            0.03456437 = weight(_text_:22 in 5493) [ClassicSimilarity], result of:
              0.03456437 = score(doc=5493,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.19345059 = fieldWeight in 5493, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5493)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22
  11. Ford, N.; Miller, D.; Moss, N.: ¬The role of individual differences in Internet searching : an empirical study (2001) 0.00
    0.0034387745 = product of:
      0.0103163235 = sum of:
        0.0103163235 = product of:
          0.03094897 = sum of:
            0.03094897 = weight(_text_:retrieval in 6978) [ClassicSimilarity], result of:
              0.03094897 = score(doc=6978,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20052543 = fieldWeight in 6978, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6978)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This article reports the results of a study of the role of individual differences in Internet searching. The dimensions of individual differences forming the focus of the research consisted of: cognitive styles; levels of prior experience; Internet perceptions; study approaches; age; and gender. Sixty-nine Masters students searched for information on a prescribed topic using the AltaVista search engine. Results were assessed using simple binary relevance judgements. Factor analysis and multiple regression revealed interesting differences, retrieval effectiveness being linked to: male gender; low cognitive complexity; an imager (as opposed to verbalizer) cognitive style; and a number of Internet perceptions and study approaches grouped here as indicating low self-efficacy. The implications of these findings for system development and for future research are discussed.
  12. Kang, X.; Wu, Y.; Ren, W.: Toward action comprehension for searching : mining actionable intents in query entities (2020) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 5613) [ClassicSimilarity], result of:
              0.025790809 = score(doc=5613,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 5613, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5613)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Understanding search engine users' intents has been a popular study in information retrieval, which directly affects the quality of retrieved information. One of the fundamental problems in this field is to find a connection between the entity in a query and the potential intents of the users, the latter of which would further reveal important information for facilitating the users' future actions. In this article, we present a novel research method for mining the actionable intents for search users, by generating a ranked list of the potentially most informative actions based on a massive pool of action samples. We compare different search strategies and their combinations for retrieving the action pool and develop three criteria for measuring the informativeness of the selected action samples, that is, the significance of an action sample within the pool, the representativeness of an action sample for the other candidate samples, and the diverseness of an action sample with respect to the selected actions. Our experiment, based on the Action Mining (AM) query entity data set from the Actionable Knowledge Graph (AKG) task at NTCIR-13, suggests that the proposed approach is effective in generating an informative and early-satisfying ranking of potential actions for search users.
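    The three informativeness criteria described above (significance within the pool, representativeness for the other candidates, and diverseness with respect to already selected actions) lend themselves to a greedy selection loop. The sketch below is only an illustration of that general idea under assumed significance and similarity functions; it is not the authors' implementation, and the weights are placeholders.
    def rank_actions(candidates, significance, similarity, k=10,
                     w_sig=0.5, w_rep=0.3, w_div=0.2):
        # greedy ranking balancing significance, representativeness and diverseness
        selected = []
        pool = list(candidates)
        while pool and len(selected) < k:
            def utility(action):
                others = [b for b in pool if b != action]
                representativeness = (sum(similarity(action, b) for b in others) / len(others)) if others else 0.0
                redundancy = max((similarity(action, s) for s in selected), default=0.0)
                return w_sig * significance(action) + w_rep * representativeness - w_div * redundancy
            best = max(pool, key=utility)
            selected.append(best)
            pool.remove(best)
        return selected

    # toy usage with made-up scoring functions (Jaccard word overlap as similarity)
    actions = ["book a ticket", "buy a ticket", "check the schedule", "rent a car"]
    sig = lambda a: 1.0
    sim = lambda a, b: len(set(a.split()) & set(b.split())) / len(set(a.split()) | set(b.split()))
    print(rank_actions(actions, sig, sim, k=2))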