Search (4 results, page 1 of 1)

  • theme_ss:"Internet"
  • theme_ss:"Literaturübersicht"
  • type_ss:"a"
  1. Chowdhury, G.G.: The Internet and information retrieval research : a brief review (1999) 0.02
    0.018694704 = product of:
      0.03738941 = sum of:
        0.03738941 = product of:
          0.07477882 = sum of:
            0.07477882 = weight(_text_:searching in 3424) [ClassicSimilarity], result of:
              0.07477882 = score(doc=3424,freq=2.0), product of:
                0.2091384 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.051699217 = queryNorm
                0.3575566 = fieldWeight in 3424, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3424)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
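
    The indented breakdown above is Lucene's score explanation for its classic TF-IDF similarity. As a minimal sketch (assuming Lucene's standard ClassicSimilarity formulas, which the listing itself does not spell out), the numbers for this record can be reproduced as follows:

        import math

        # Values taken from the explain tree for doc 3424 above
        docFreq, maxDocs = 2103, 44218
        freq, fieldNorm, queryNorm = 2.0, 0.0625, 0.051699217

        idf = 1.0 + math.log(maxDocs / (docFreq + 1))   # 4.0452914
        tf = math.sqrt(freq)                            # 1.4142135
        queryWeight = idf * queryNorm                   # 0.2091384
        fieldWeight = tf * idf * fieldNorm              # 0.3575566
        raw = queryWeight * fieldWeight                 # 0.07477882
        score = raw * 0.5 * 0.5                         # two coord(1/2) factors
        print(score)                                    # ~0.0186947

    The same computation, with different freq and fieldNorm values, accounts for the breakdowns shown under the other three records.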
    
    Abstract
    The Internet and related information services attract increasing interest from information retrieval researchers. A survey of recent publications shows that frequent topics are the effectiveness of search engines, information validation and quality, user studies, design of user interfaces, data structures and metadata, classification and vocabulary-based aids, and indexing and search agents. Current research in these areas is briefly discussed. The changing balance between CD-ROM sources and traditional online searching is also noted as an important development.
  2. Rasmussen, E.M.: Indexing and retrieval for the Web (2002) 0.01
    0.014166329 = product of:
      0.028332658 = sum of:
        0.028332658 = product of:
          0.056665316 = sum of:
            0.056665316 = weight(_text_:searching in 4285) [ClassicSimilarity], result of:
              0.056665316 = score(doc=4285,freq=6.0), product of:
                0.2091384 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.051699217 = queryNorm
                0.2709465 = fieldWeight in 4285, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4285)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The introduction and growth of the World Wide Web (WWW, or Web) have resulted in a profound change in the way individuals and organizations access information. In terms of volume, nature, and accessibility, the characteristics of electronic information are significantly different from those of even five or six years ago. Control of, and access to, this flood of information rely heavily on automated techniques for indexing and retrieval. According to Gudivada, Raghavan, Grosky, and Kasanagottu (1997, p. 58), "The ability to search and retrieve information from the Web efficiently and effectively is an enabling technology for realizing its full potential." Almost 93 percent of those surveyed consider the Web an "indispensable" Internet technology, second only to e-mail (Graphic, Visualization & Usability Center, 1998). Although there are other ways of locating information on the Web (browsing or following directory structures), 85 percent of users identify Web pages by means of a search engine (Graphic, Visualization & Usability Center, 1998). A more recent study conducted by the Stanford Institute for the Quantitative Study of Society confirms the finding that searching for information is second only to e-mail as an Internet activity (Nie & Erbring, 2000, online). In fact, Nie and Erbring conclude, "... the Internet today is a giant public library with a decidedly commercial tilt. The most widespread use of the Internet today is as an information search utility for products, travel, hobbies, and general information. Virtually all users interviewed responded that they engaged in one or more of these information gathering activities."
    Techniques for automated indexing and information retrieval (IR) have been developed, tested, and refined over the past 40 years, and are well documented (see, for example, Agosti & Smeaton, 1996; Baeza-Yates & Ribeiro-Neto, 1999a; Frakes & Baeza-Yates, 1992; Korfhage, 1997; Salton, 1989; Witten, Moffat, & Bell, 1999). With the introduction of the Web, and the capability to index and retrieve via search engines, these techniques have been extended to a new environment. They have been adopted, altered, and in some cases extended to include new methods. "In short, search engines are indispensable for searching the Web, they employ a variety of relatively advanced IR techniques, and there are some peculiar aspects of search engines that make searching the Web different than more conventional information retrieval" (Gordon & Pathak, 1999, p. 145). The environment for information retrieval on the World Wide Web differs from that of "conventional" information retrieval in a number of fundamental ways. The collection is very large and changes continuously, with pages being added, deleted, and altered. Wide variability between the size, structure, focus, quality, and usefulness of documents makes Web documents much more heterogeneous than a typical electronic document collection. The wide variety of document types includes images, video, audio, and scripts, as well as many different document languages. Duplication of documents and sites is common. Documents are interconnected through networks of hyperlinks. Because of the size and dynamic nature of the Web, preprocessing all documents requires considerable resources and is often not feasible, certainly not on the frequent basis required to ensure currency. Query length is usually much shorter than in other environments (only a few words), and user behavior differs from that in other environments. These differences make the Web a novel environment for information retrieval (Baeza-Yates & Ribeiro-Neto, 1999b; Bharat & Henzinger, 1998; Huang, 2000).
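
    The automated indexing referred to above rests on the inverted index, a mapping from each term to the documents in which it occurs. A minimal, purely illustrative sketch (the toy documents and function names below are invented for the example, not taken from the chapter):

        from collections import defaultdict

        # Toy corpus: doc id -> text
        docs = {
            "d1": "searching the web with a search engine",
            "d2": "indexing and retrieval for the web",
        }

        # Build the inverted index: term -> {doc id: term frequency}
        index = defaultdict(dict)
        for doc_id, text in docs.items():
            for term in text.split():
                index[term][doc_id] = index[term].get(doc_id, 0) + 1

        # Answer a conjunctive query by intersecting posting lists
        def search(query):
            postings = [set(index.get(t, {})) for t in query.lower().split()]
            return set.intersection(*postings) if postings else set()

        print(search("web retrieval"))   # {'d2'}

    Ranking the matching documents then applies a weighting scheme such as the TF-IDF computation shown under the first record above.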
  3. Woodward, J.: Cataloging and classifying information resources on the Internet (1996) 0.01
    0.014021028 = product of:
      0.028042056 = sum of:
        0.028042056 = product of:
          0.05608411 = sum of:
            0.05608411 = weight(_text_:searching in 7397) [ClassicSimilarity], result of:
              0.05608411 = score(doc=7397,freq=2.0), product of:
                0.2091384 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.051699217 = queryNorm
                0.26816747 = fieldWeight in 7397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7397)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    State of the art review exploring the problem of bibliographic citations to resources that exist only in electronic form, where the cited items may no longer be locatable at the URL indicated. Notes that the Internet is currently in a state of near chaos in terms of access and organization, while searching, usually performed with word-based search engines, is generally not adequate for the needs of most users. Reviews strategies used by librarians for cataloguing and classifying information resources on the Internet. Techniques used include automatic classification projects and classified subject trees, such as the BUBL Subject Tree, CyberDewey, and the WWW Virtual Library. Considers OPAC-like library catalogues such as the UK's CATRIONA Project and OCLC's InterCat. Explores retrieval tools used with concept analysis and other non-traditional proposals, which include some library expertise, usually the use of one of the major library classifications. Pays particular attention to the UDC.
  4. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.01
    0.014021028 = product of:
      0.028042056 = sum of:
        0.028042056 = product of:
          0.05608411 = sum of:
            0.05608411 = weight(_text_:searching in 4242) [ClassicSimilarity], result of:
              0.05608411 = score(doc=4242,freq=2.0), product of:
                0.2091384 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.051699217 = queryNorm
                0.26816747 = fieldWeight in 4242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, Web server logs' information about user access patterns can be used for information personalization or improving Web page design.
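
    The closing example about server logs can be made concrete: access patterns are typically recovered by parsing the request log and aggregating requests per visitor. A minimal sketch, assuming Apache-style common log format lines (the file name, regular expression, and field choices are illustrative assumptions, not taken from the chapter):

        import re
        from collections import Counter, defaultdict

        # Common log format: host ident user [time] "METHOD path protocol" status bytes
        LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)[^"]*" (\d{3}) \S+')

        def access_patterns(path):
            pages_per_visitor = defaultdict(Counter)
            with open(path) as log:
                for line in log:
                    m = LOG_LINE.match(line)
                    if m and m.group(3).startswith("2"):   # keep successful requests only
                        host, page = m.group(1), m.group(2)
                        pages_per_visitor[host][page] += 1
            return pages_per_visitor

        # e.g. access_patterns("access.log")["203.0.113.7"].most_common(5)
        # lists one visitor's five most-requested pages, the kind of pattern that
        # feeds personalization or page-redesign decisions.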