Search (78 results, page 4 of 4)

  • Filter: theme_ss:"Suchmaschinen"
  • Filter: type_ss:"m"
  1. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.00
    Relevance score: 0.0019955188 (Lucene ClassicSimilarity on _text_:in; tf=10, idf=1.3602545, fieldNorm=0.046875, queryNorm=0.043654136, coord=1/6)
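    The score shown for each hit follows Lucene's classic TF-IDF ("ClassicSimilarity") formula, as the breakdown above indicates: the square root of the term frequency times the inverse document frequency and a field-length norm, scaled by the query normalisation and a coordination factor for the matched query clauses. A minimal sketch that reproduces the first hit's value from the figures quoted above (an illustration of the standard formula, not code from the database itself):

      import math

      # Minimal sketch: recompute hit 1's relevance score with Lucene's classic
      # TF-IDF formula. All numeric inputs are taken from the score breakdown above;
      # queryNorm is taken as given rather than derived from the full query.
      def classic_similarity(term_freq, doc_freq, max_docs, query_norm, field_norm, coord):
          tf = math.sqrt(term_freq)                          # 10 -> 3.1622777
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # -> 1.3602545
          query_weight = idf * query_norm                    # -> 0.059380736
          field_weight = tf * idf * field_norm               # -> 0.20163295
          return coord * query_weight * field_weight

      score = classic_similarity(term_freq=10, doc_freq=30841, max_docs=44218,
                                 query_norm=0.043654136, field_norm=0.046875,
                                 coord=1.0 / 6.0)            # 1 of 6 query clauses matched
      print(f"{score:.10f}")                                 # ~0.0019955188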
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science, and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare, and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open-source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end-of-chapter problems (instructors only) / Test collections for exercises / Galago search engine
  2. Auletta, K.: Googled : the end of the world as we know it (2009) 0.00
    Relevance score: 0.0015740865
    
    Abstract
    There are companies that create waves and those that ride or are drowned by them. This is a ride on the Google wave, and the fullest account of how it formed and crashed into traditional media businesses. With unprecedented access to Google's founders and executives, as well as to those in media who are struggling to keep their heads above water, Ken Auletta reveals how the industry is being disrupted and redefined. On one level Auletta uses Google as a stand-in for the digital revolution as a whole - and goes inside Google's closed-door meetings, introducing Google's notoriously private founders, Larry Page and Sergey Brin, as well as those who work with - and against - them. In "Googled", the reader discovers the 'secret sauce' of the company's success and why the worlds of 'new' and 'old' media often communicate as if residents of different planets. It may send chills down traditionalists' spines, but it's a crucial roadmap to the future of the media business: the Google story may well be the canary in the coal mine. "Googled" is candid, objective and authoritative - based on extensive research, including research conducted in-house at Google HQ. Crucially, it's not just history or reportage: it's forward-looking. This book is ahead of the curve, unlike other books on Google, which tend to be near-histories, somewhat starstruck, now out of date, or which fail to look at the full synthesis of business and technology.
    Content
    Messing with the magic -- Starting in a garage -- Buzz but few $'s -- Prepping the Google rocket -- Innocence or arrogance? -- Google goes public -- The new evil empire? -- Chasing the fox -- War on multiple fronts -- Waking the government bear -- Google enters adolescence -- Is old media drowning? -- Compete or collaborate? -- Happy birthday -- Googled -- Where is the wave taking old media? -- Where is the wave taking Google? -- Media maxims.
    Footnote
    Review in: Frankfurter Rundschau, no. 287, 10 December 2009, p. 34-35, under the title: Baker, N.: Seelenverkäufer oder Helden?: Ken Aulettas Buch über die weltbeherrschende Suchmaschine Google.
  3. Philippus, T.: Informationssuche im Internet : Tips für Profis (1997) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: nfd 48(1997), no.5, p.314-315 (H. Nerlich)
  4. Babiak, U.: Effektive Suche im Internet : Suchstrategien, Methoden, Quellen (1999) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: Internet Professionell 1999, August, p.19 (GO)
  5. Bradley, P.: Advanced Internet searcher's handbook (1998) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: Information world review 1999, no.146, p.26 (D. Parr)
  6. Hock, R.: The extreme searcher's guide to Web search engines : a handbook for the serious searcher (1999) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: Electronic library 17(1999), no.5, p.334-335 (P. Barker)
  7. Cooke, A.: A guide to finding quality information on the Internet : selection and evaluation strategies (1999) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: Australian library journal 49(2000), no.1, p.76-77 (R. Cullen)
  8. Babiak, U.: Effektive Suche im Internet : Suchstrategien, Methoden, Quellen (1997) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: nfd 48(1997), no.5, p.315-316 (H.-C. Hobohm)
  9. Cheong, F.C.: Internet agents : spiders, wanderers, brokers, and bots (1996) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: Database 19(1996), no.4, p.102 (J.A. Copler)
  10. Burke, J.: IntroNet : a beginner's guide to searching the Internet (1999) 0.00
    Relevance score: 0.0014873719
    
    Footnote
    Review in: Electronic library 18(2000), no.4, p.294-295 (J. Edwards); Teacher librarian 27(1999), p.50 (P. Genco)
  11. Next generation search engines : advanced models for information retrieval (2012) 0.00
    Relevance score: 0.0014873719
    
    Abstract
    The main goal of this book is to transfer new research results from the fields of advanced computer science and information science to the design of new search engines. The readers will have a better idea of the new trends in applied research. The achievement of relevant, organized, sorted, and workable answers - to name but a few - from a search is becoming a daily need for enterprises and organizations, and, to a greater extent, for anyone. It does not consist of getting access to structured information as in standard databases; nor does it consist of searching information strictly by way of a combination of key words. It goes far beyond that. Whatever its modality, the information sought should be identified by the topics it contains, that is to say by its textual, audio, video or graphical contents. This is not a new issue. However, recent technological advances have completely changed the techniques being used. New Web technologies, the emergence of Intranet systems and the abundance of information on the Internet have created the need for efficient search and information access tools.
    Recent technological progress in computer science, Web technologies, and constantly evolving information available on the Internet has drastically changed the landscape of search and access to information. Web search has significantly evolved in recent years. In the beginning, web search engines such as Google and Yahoo! were only providing search service over text documents. Aggregated search was one of the first steps to go beyond text search, and was the beginning of a new era for information seeking and retrieval. These days, new web search engines support aggregated search over a number of verticals, and blend different types of documents (e.g., images, videos) in their search results. New search engines employ advanced techniques involving machine learning, computational linguistics and psychology, user interaction and modeling, information visualization, Web engineering, artificial intelligence, distributed systems, social networks, statistical analysis, semantic analysis, and technologies over query sessions. Documents no longer exist on their own; they are connected to other documents, they are associated with users and their position in a social network, and they can be mapped onto a variety of ontologies. Similarly, retrieval tasks have become more interactive and are solidly embedded in a user's geospatial, social, and historical context. It is conjectured that new breakthroughs in information retrieval will not come from smarter algorithms that better exploit existing information sources, but from new retrieval algorithms that can intelligently use and combine new sources of contextual metadata.
    With the rapid growth of web-based applications, such as search engines, Facebook, and Twitter, the development of effective and personalized information retrieval techniques and of user interfaces is essential. The amount of shared information and of social networks has also considerably grown, requiring metadata for new sources of information, like Wikipedia and ODP. These metadata have to provide classification information for a wide range of topics, as well as for social networking sites like Twitter and Facebook, each of which provides additional preferences, tagging information and social contexts. Due to the explosion of social networks and other metadata sources, it is an opportune time to identify ways to exploit such metadata in IR tasks such as user modeling, query understanding, and personalization, to name a few. Although the use of traditional metadata such as HTML text, web page titles, and anchor text is fairly well understood, the use of category information, user behavior data, and geographical information is just beginning to be studied. This book is intended for scientists and decision-makers who wish to gain working knowledge about search engines in order to evaluate available solutions and to dialogue with software and data providers.
    Content
    Includes the contributions: Das, A., A. Jain: Indexing the World Wide Web: the journey so far. Ke, W.: Decentralized search and the clustering paradox in large scale information networks. Roux, M.: Metadata for search engines: what can be learned from e-Sciences? Fluhr, C.: Crosslingual access to photo databases. Djioua, B., J.-P. Desclés and M. Alrahabi: Searching and mining with semantic categories. Ghorbel, H., A. Bahri and R. Bouaziz: Fuzzy ontologies building platform for Semantic Web: FOB platform. Lassalle, E., E. Lassalle: Semantic models in information retrieval. Berry, M.W., R. Esau and B. Kiefer: The use of text mining techniques in electronic discovery for legal matters. Sleem-Amer, M., I. Bigorgne, S. Brizard et al.: Intelligent semantic search engines for opinion and sentiment mining. Hoeber, O.: Human-centred Web search.
    Vert, S.: Extensions of Web browsers useful to knowledge workers. Chen, L.-C.: Next generation search engine for the result clustering technology. Biskri, I., L. Rompré: Using association rules for query reformulation. Habernal, I., M. Konopík and O. Rohlík: Question answering. Grau, B.: Finding answers to questions, in text collections or Web, in open domain or specialty domains. Berri, J., R. Benlamri: Context-aware mobile search engine. Bouidghaghen, O., L. Tamine: Spatio-temporal based personalization for mobile search. Chaudiron, S., M. Ihadjadene: Studying Web search engines from a user perspective: key concepts and main approaches. Karaman, F.: Artificial intelligence enabled search engines (AIESE) and the implications. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines.
  12. Battelle, J.: The search : how Google and its rivals rewrote the rules of business and transformed our culture (2005) 0.00
    Relevance score: 0.0014573209
    
    Abstract
    If you pick your books by their popularity--how many and which other people are reading them--then know this about The Search: it's probably on Bill Gates' reading list, and that of almost every venture capitalist and startup-hungry entrepreneur in Silicon Valley. In its sweeping survey of the history of Internet search technologies, its gossip about and analysis of Google, and its speculation on the larger cultural implications of a Web-connected world, it will likely receive attention from a variety of businesspeople, technology futurists, journalists, and interested observers of mid-2000s zeitgeist. This ambitious book comes with a strong pedigree. Author John Battelle was a founder of The Industry Standard and then one of the original editors of Wired, two magazines which helped shape our early perceptions of the wild world of the Internet. Battelle clearly drew from his experience and contacts in writing The Search. In addition to the sure-handed historical perspective and easy familiarity with such dot-com stalwarts as AltaVista, Lycos, and Excite, he speckles his narrative with conversational asides from a cast of fascinating characters, such as Google's founders, Larry Page and Sergey Brin; Yahoo's Jerry Yang and David Filo; key executives at Microsoft and different VC firms on the famed Sand Hill Road; and numerous other insiders, particularly at the company which currently sits atop the search world, Google. The Search is not exactly the corporate history of Google. At the book's outset, Battelle specifically indicates his desire to understand what he calls the cultural anthropology of search, and to analyze search engines' current role as the "database of our intentions"--the repository of humanity's curiosity, exploration, and expressed desires. Interesting though that beginning is, Battelle's story really picks up speed when he starts dishing the inside scoop on the darling business story of the decade, Google. To Battelle's credit, though, he doesn't stop just with historical retrospective: the final part of his book focuses on the potential future directions of Google and its products' development. In what Battelle himself acknowledges might just be a "digital fantasy train", he describes the possibility that Google will become the centralizing platform for our entire lives and quotes one early employee on the weightiness of Google's potential impact: "Sometimes I feel like I am on a bridge, twenty thousand feet up in the air. If I look down I'm afraid I'll fall. I don't feel like I can think about all the implications." Some will shrug at such words; after all, similar hype has accompanied other technologies and other companies before. Many others, though, will search Battelle's story for meaning--and fast.
  13. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.00
    Relevance score: 0.0012620769
    
    Abstract
    Why doesn't your home page appear on the first page of search results, even when you query your own name? How do other Web pages always appear at the top? What creates these powerful rankings? And how? The first book ever about the science of Web page rankings, "Google's PageRank and Beyond" supplies the answers to these and other questions. The book serves two very different audiences: the curious science reader and the technical computational reader. The chapters build in mathematical sophistication, so that the first five are accessible to the general academic reader. While other chapters are much more mathematical in nature, each one contains something for both audiences. For example, the authors include entertaining asides such as how search engines make money and how the Great Firewall of China influences research. The book includes an extensive background chapter designed to help readers learn more about the mathematics of search engines, and it contains several MATLAB codes and links to sample Web data sets. The philosophy throughout is to encourage readers to experiment with the ideas and algorithms in the text. Any business seriously interested in improving its rankings in the major search engines can benefit from the clear examples, sample code, and list of resources provided. It includes: many illustrative examples and entertaining asides; MATLAB code; an accessible and informal style; and a complete and self-contained section for mathematics review.
    Content
    Contents: Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to v^T - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alphaS) - 7.2 Properties of (I - alphaH) - 7.3 Proof of the PageRank Sparse Linear System Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
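    As a concrete point of reference for the chapter titles above, a minimal power-iteration sketch of the basic PageRank model (damping factor alpha, summation over in-linking pages) might look as follows; the toy link graph and alpha = 0.85 are illustrative and are not taken from the book:

      # Minimal sketch of the basic PageRank iteration referred to in chapters 4-6;
      # dangling-node handling and convergence criteria (chapter 8) are omitted.
      def pagerank(links, alpha=0.85, iterations=50):
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iterations):
              rank = {
                  p: (1.0 - alpha) / n
                     + alpha * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
                  for p in pages
              }
          return rank

      # Toy web of four pages (illustrative only).
      links = {"A": {"B", "C"}, "B": {"C"}, "C": {"A"}, "D": {"C"}}
      print(pagerank(links))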
  14. Internet searching and indexing : the subject approach (2000) 0.00
    Relevance score: 0.0011898974
    
    Footnote
    Review in: International cataloguing and bibliographic control 30(2001), no.3, p.59 (I.C. McIlwaine)
  15. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.00
    Relevance score: 0.0011898974
    
    Abstract
    This book provides practical strategies which enable the advanced web user to locate information effectively and to form a precise evaluation of the accuracy of that information. Although the book provides a brief but thorough review of the technologies which are currently available for these purposes, most of the book concerns practical 'future-proof' techniques which are independent of changes in the tools available. For example, the book covers: how to retrieve salient information quickly; how to remove or compensate for bias; and tuition of novice Internet users.
    Content
    Key Features - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly-evolving state of the Internet and Internet technologies - it is not about technological 'tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in web skills of novice users as well as practical techniques for teaching such users. The Authors: Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer. Readership: The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet. Contents: Fundamentals of information retrieval from the Internet - why learn web searching technique; types of information requests; patterns for information retrieval; leveraging the technology: Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable': Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies: Query reformulation and longer term strategies - how to interact with your search engine; foraging for information; long term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work: Assessing the quality of results - how to assess and ensure quality: The novice user and teaching internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second hand' web information.
  16. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.00
    Relevance score: 0.0011898974
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example, the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on user interface has been rewritten to specifically focus on search engine usability. In addition, the authors have added new recommendations for further reading and expanded the bibliography, and have updated and streamlined the index to make it more reader friendly.
    Content
    Contents: Introduction Document File Preparation - Manual Indexing - Information Extraction - Vector Space Modeling - Matrix Decompositions - Query Representations - Ranking and Relevance Feedback - Searching by Link Structure - User Interface - Book Format Document File Preparation Document Purification and Analysis - Text Formatting - Validation - Manual Indexing - Automatic Indexing - Item Normalization - Inverted File Structures - Document File - Dictionary List - Inversion List - Other File Structures Vector Space Models Construction - Term-by-Document Matrices - Simple Query Matching - Design Issues - Term Weighting - Sparse Matrix Storage - Low-Rank Approximations Matrix Decompositions QR Factorization - Singular Value Decomposition - Low-Rank Approximations - Query Matching - Software - Semidiscrete Decomposition - Updating Techniques Query Management Query Binding - Types of Queries - Boolean Queries - Natural Language Queries - Thesaurus Queries - Fuzzy Queries - Term Searches - Probabilistic Queries Ranking and Relevance Feedback Performance Evaluation - Precision - Recall - Average Precision - Genetic Algorithms - Relevance Feedback Searching by Link Structure HITS Method - HITS Implementation - HITS Summary - PageRank Method - PageRank Adjustments - PageRank Implementation - PageRank Summary User Interface Considerations General Guidelines - Search Engine Interfaces - Form Fill-in - Display Considerations - Progress Indication - No Penalties for Error - Results - Test and Retest - Final Considerations Further Reading
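    As a pointer to what "Vector Space Models", "Term-by-Document Matrices" and "Simple Query Matching" cover, a minimal sketch of cosine matching over raw term-frequency vectors follows; the toy documents and the plain term-frequency weighting are illustrative only and are not drawn from the book:

      import math
      from collections import Counter

      # Minimal sketch: raw term-frequency document vectors and cosine query matching.
      docs = {
          "d1": "search engines rank web pages",
          "d2": "matrix decompositions for text retrieval",
          "d3": "search and retrieval with vector space models",
      }

      def vectorize(text):
          return Counter(text.lower().split())   # term -> frequency

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a if t in b)
          norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      doc_vectors = {name: vectorize(text) for name, text in docs.items()}
      query = vectorize("vector space search")

      # Rank documents by cosine similarity to the query vector.
      for name in sorted(doc_vectors, key=lambda d: cosine(query, doc_vectors[d]), reverse=True):
          print(name, round(cosine(query, doc_vectors[name]), 3))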
  17. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.00
    Relevance score: 0.0011898974
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
  18. Sherman, C.: Google power : Unleash the full potential of Google (2005) 0.00
    Relevance score: 0.00089242304
    
    Abstract
    With this title, readers learn to push the search engine to its limits and extract the best content from Google, without having to learn complicated code. "Google Power" takes Google users under the hood, and teaches them a wide range of advanced web search techniques, through practical examples. Its content is organised by topic, so the reader learns how to conduct in-depth searches on the most popular search topics, from health to government listings to people.
