Search (5 results, page 1 of 1)

  • subject_ss:"Web search engines"
  1. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.01
    0.005464649 = product of:
      0.027323244 = sum of:
        0.027323244 = weight(_text_:it in 3346) [ClassicSimilarity], result of:
          0.027323244 = score(doc=3346,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.18076637 = fieldWeight in 3346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
      0.2 = coord(1/5)
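
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output for the query term "it". As a rough cross-check, the displayed score can be re-derived from the listed factors; the short Python sketch below redoes that arithmetic (the variable names are ours, not part of the index, and the idf line uses the standard ClassicSimilarity formula):

      import math

      # Factors copied from the explain tree for doc 3346 and query term "it".
      freq = 4.0                                    # termFreq of "it" in the field
      idf = 1 + math.log(44218 / (6664 + 1))        # ~2.892262 (docFreq=6664, maxDocs=44218)
      query_norm = 0.052260913
      field_norm = 0.03125                          # stored length normalization for this field
      coord = 1 / 5                                 # 1 of 5 query clauses matched

      tf = math.sqrt(freq)                          # 2.0 = tf(freq=4.0)
      query_weight = idf * query_norm               # ~0.15115225
      field_weight = tf * idf * field_norm          # ~0.18076637
      score = query_weight * field_weight * coord   # ~0.005464649, the value shown for this hit
      print(score)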
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating the relevant parts of this vast body of information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets, ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together, and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
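
    Belew's claim that the book "explains how to build the tools that are useful for searching collections of text" reduces, at its simplest, to an inverted index. The fragment below is a minimal illustrative sketch of that idea, not code from the book or its CD-ROM; the toy corpus, query, and function names are ours:

      from collections import defaultdict

      def build_index(docs):
          """Map each term to the set of document ids that contain it."""
          index = defaultdict(set)
          for doc_id, text in docs.items():
              for term in text.lower().split():
                  index[term].add(doc_id)
          return index

      docs = {
          1: "finding out about search engines",
          2: "a cognitive perspective on information retrieval",
          3: "search engines and the web",
      }
      index = build_index(docs)
      query = ["search", "engines"]                                 # conjunctive (AND) query
      print(sorted(set.intersection(*(index[t] for t in query))))   # [1, 3]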
  2. Hearst, M.A.: Search user interfaces (2009) 0.01
    0.005464649 = product of:
      0.027323244 = sum of:
        0.027323244 = weight(_text_:it in 4029) [ClassicSimilarity], result of:
          0.027323244 = score(doc=4029,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.18076637 = fieldWeight in 4029, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=4029)
      0.2 = coord(1/5)
    
    Abstract
    This book outlines the human side of the information-seeking process and focuses on the aspects of this process that can best be supported by the user interface. It describes the methods behind user interface design generally, and search interface design in particular, with an emphasis on how best to evaluate search interfaces. It discusses research results and current practices surrounding user interfaces for query specification, display of retrieval results, grouping retrieval results, navigation of information collections, query reformulation, search personalization, and the broader tasks of sensemaking and text analysis. Much of the discussion pertains to Web search engines, but the book also covers the special considerations surrounding search of other information collections.
  3. Rogers, R.: Digital methods (2013) 0.01
    0.005464649 = product of:
      0.027323244 = sum of:
        0.027323244 = weight(_text_:it in 2354) [ClassicSimilarity], result of:
          0.027323244 = score(doc=2354,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.18076637 = fieldWeight in 2354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
      0.2 = coord(1/5)
    
    Abstract
    In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such "methods of the medium" as crawling and crowdsourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By "thinking along" with devices and the objects they handle, digital research methods can follow the evolving methods of the medium. Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate-change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates' social media "friends," and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web's objects of study, from tiny particles (hyperlinks) to large masses (social media).
  4. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.00
    0.0038640907 = product of:
      0.019320453 = sum of:
        0.019320453 = weight(_text_:it in 7) [ClassicSimilarity], result of:
          0.019320453 = score(doc=7,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.12782113 = fieldWeight in 7, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=7)
      0.2 = coord(1/5)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software, as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods, for example the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on user interfaces has been rewritten to focus specifically on search engine usability. In addition, the authors have added new recommendations for further reading, expanded the bibliography, and updated and streamlined the index to make it more reader-friendly.
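
    The new chapter on link-structure algorithms mentioned above concerns methods of which PageRank is the best-known example. The sketch below is a generic power-iteration implementation on a toy link graph, offered only as an illustration of the idea; the graph, damping factor, and names are our assumptions, not material from the book:

      def pagerank(links, damping=0.85, iterations=50):
          """Power iteration for PageRank on a dict {page: [outgoing links]}."""
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iterations):
              new_rank = {p: (1.0 - damping) / n for p in pages}
              for page, outgoing in links.items():
                  if not outgoing:                  # dangling page: spread its rank evenly
                      for p in pages:
                          new_rank[p] += damping * rank[page] / n
                  else:
                      share = damping * rank[page] / len(outgoing)
                      for target in outgoing:
                          new_rank[target] += share
              rank = new_rank
          return rank

      toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
      print(pagerank(toy_web))                      # "c" collects the most link weight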
  5. Libraries and Google (2005) 0.00
    0.0038640907 = product of:
      0.019320453 = sum of:
        0.019320453 = weight(_text_:it in 1973) [ClassicSimilarity], result of:
          0.019320453 = score(doc=1973,freq=8.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.12782113 = fieldWeight in 1973, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.015625 = fieldNorm(doc=1973)
      0.2 = coord(1/5)
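
    Although this hit has freq=8.0 rather than the 2.0 of the previous entry, the two clause weights are identical (0.019320453): the smaller fieldNorm of the longer field (0.015625 vs. 0.03125) exactly cancels the larger square-root term frequency. A two-line check, again only re-deriving the fieldWeight values shown in the trees above:

      import math

      idf = 2.892262
      print(math.sqrt(2.0) * idf * 0.03125)    # entry 4: ~0.12782113
      print(math.sqrt(8.0) * idf * 0.015625)   # entry 5: ~0.12782113, hence the tie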
    
    Footnote
    ... This book is written by library professionals and aimed at librarians in particular, but it will be useful to others who may be interested in knowing what libraries are up to in the age of Google. It is intended for library science educators and students, library administrators, publishers, and university presses. It is well organized, well researched, and easily readable. Article titles are descriptive, allowing the reader to find what he needs by scanning the table of contents or by consulting the index. The only flaw in this book is the lack of summaries or conclusions in a few articles. The book is in paperback and has 240 pages. This book is a significant contribution, and I highly recommend it."
