Search (290 results, page 1 of 15)

  • theme_ss:"Suchmaschinen"
  1. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.19
    
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. Cf. also: http://www2002.org/CDROM/refereed/643/.
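    For context, HITS is the link-analysis scheme this paper builds on: every page gets a hub score and an authority score, refined by alternating updates (a good authority is linked from good hubs; a good hub links to good authorities). A minimal Python sketch of the basic iteration follows; the toy graph and normalization details are illustrative, not taken from the paper.

      # Minimal HITS iteration over a directed link graph.
      def hits(graph, iterations=50):
          """graph: dict mapping page -> list of pages it links to."""
          pages = set(graph) | {p for targets in graph.values() for p in targets}
          auth = {p: 1.0 for p in pages}
          hub = {p: 1.0 for p in pages}
          for _ in range(iterations):
              # Authority: sum of hub scores of pages linking to the page.
              auth = {p: sum(hub[q] for q in graph if p in graph.get(q, []))
                      for p in pages}
              norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
              auth = {p: a / norm for p, a in auth.items()}
              # Hub: sum of authority scores of the pages it links to.
              hub = {p: sum(auth[q] for q in graph.get(p, [])) for p in pages}
              norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
              hub = {p: h / norm for p, h in hub.items()}
          return hub, auth

      if __name__ == "__main__":
          toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # illustrative graph
          hub, auth = hits(toy)
          print(sorted(auth, key=auth.get, reverse=True))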
  2. Duval, B.K.; Main, L.: Searching on the Net : general overview (1996) 0.12
    
    Abstract
    First of a 3-part series discussing how to access and use Web search engines on the Internet. Distinguishes between FTP sites, Gopher sites, Usenet News sites and Web sites. Considers subject searching versus keyword searching; how to improve search strategies and success rates; bookmarks; and Yahoo!, Lycos, InfoSeek, Magellan, Excite, Inktomi, HotBot and AltaVista.
    Date
    6. 3.1997 16:22:15
  3. Alqaraleh, S.; Ramadan, O.; Salamah, M.: Efficient watcher based web crawler design (2015) 0.12
    
    Abstract
    Purpose: The purpose of this paper is to design a watcher-based crawler (WBC) that has the ability of crawling static and dynamic web sites, and can download only the updated and newly added web pages. Design/methodology/approach: In the proposed WBC crawler, a watcher file, which can be uploaded to the web sites' servers, prepares a report that contains the addresses of the updated and the newly added web pages. In addition, the WBC is split into five units, where each unit is responsible for performing a specific crawling process. Findings: Several experiments have been conducted and it has been observed that the proposed WBC increases the number of uniquely visited static and dynamic web sites as compared with the existing crawling techniques. In addition, the proposed watcher file not only allows the crawlers to visit the updated and newly added web pages, but also solves the crawlers' overlapping and communication problems. Originality/value: The proposed WBC performs all crawling processes automatically, in the sense that it detects all updated and newly added pages without any explicit human intervention and without downloading the entire web sites.
    Date
    20. 1.2015 18:30:22
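    A minimal Python sketch of the watcher-file idea described in entry 3 above: the crawler fetches a site-hosted report of updated and newly added page addresses and downloads only those pages. The report URL and the one-URL-per-line format are assumptions for illustration; the abstract does not specify them.

      import urllib.request

      def read_watcher_report(site):
          """Assume the watcher file serves one changed-page URL per line."""
          with urllib.request.urlopen(f"{site}/watcher_report.txt") as resp:
              return [line.strip() for line in resp.read().decode().splitlines()
                      if line.strip()]

      def crawl_changed_pages(sites, seen):
          """Download only pages the watcher reports, skipping known URLs."""
          for site in sites:
              for url in read_watcher_report(site):
                  if url not in seen:          # avoids overlapping downloads
                      seen.add(url)
                      with urllib.request.urlopen(url) as resp:
                          yield url, resp.read()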
  4. Gourbin, G.: ¬Une nouvelle profession : cyber-documentaliste l'exemple de Nomade (1998) 0.08
    
    Abstract
    Users who want to exploit all the information sources on the Web will need an efficient search and selection tool e.g. a directory or search engine. Directories list Web sites and analyze their contents. Describes the behind-the-scenes work of documentalists specialized in surfing, tracking and indexing French language sites for the directory Nomade. Describes the creation of Nomade, its functioning and indexing, and how this new profession of 'cyber-documentalist' is changing the practices and functions of information professionals as they become Internet information organizers
    Date
    1. 8.1996 22:01:00
  5. Carrière, S.J.; Kazman, R.: Webquery : searching and visualising the Web through connectivity (1997) 0.07
    
    Abstract
    The WebQuery system offers a powerful new method for searching the Web based on connectivity and content. Examines links among the nodes returned in a keyword-based query. Ranks the nodes, giving the highest rank to the most highly connected nodes. By doing so, finds hot spots on the Web that contain information germane to a user's query. WebQuery not only ranks and filters the results of a Web query; it also extends the result set beyond what the search engine retrieves, by finding interesting sites that are highly connected to those sites returned by the original query. Even with WebQuery filtering and ranking query results, the result set can be enormous. Explores techniques for visualizing the returned information and discusses the criteria for using each of the techniques.
    Date
    1. 8.1996 22:08:06
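    A minimal Python sketch in the spirit of the WebQuery approach described above: extend a keyword result set with pages directly linked to it, then rank everything by how highly connected it is. The edge-set graph representation is an illustrative assumption, not the paper's data structure.

      def rank_by_connectivity(results, links):
          """results: pages from a keyword query; links: set of (src, dst) pairs."""
          neighbourhood = set(results)
          for src, dst in links:                  # pull in pages directly
              if src in results or dst in results:  # connected to the results
                  neighbourhood.update((src, dst))
          degree = {p: 0 for p in neighbourhood}
          for src, dst in links:                  # count connections inside
              if src in neighbourhood and dst in neighbourhood:
                  degree[src] += 1
                  degree[dst] += 1
          # most highly connected pages first
          return sorted(neighbourhood, key=degree.get, reverse=True)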
  6. Conhaim, W.W.: Search tools (1996) 0.07
    
    Abstract
    Describes the 3 most popular searching tools for the WWW: InfoSeek, Yahoo and Lycos. Searching Internet directories can also be a useful search technique. Lists other search engines. Points out a number of evaluations of these search engines published on the WWW. A number of search tools are available for specialized areas. Sites are available that enable parallel searching using several tools at once. Describes WWW pages with information about search engines.
    Date
    1. 8.1996 22:39:31
  7. Berinstein, P.: Turning visual : image search engines on the Web (1998) 0.07
    
    Abstract
    Gives an overview of image search engines on the Web. They work by: looking for graphics files; looking for a caption; looking for Web sites whose titles indicate the presence of pictures on a certain subject; or employing human intervention. Describes the image search capabilities of: AltaVista; Amazing Picture Machine (http://www.ncrtec.org/picture.htm); HotBot; ImageSurfer (http://ipix.yahoo.com); Lycos; Web Clip Art Search Engine and WebSEEK. The search engines employing human intervention provide the best results.
    Source
    Online. 22(1998) no.3, S.37-38,40-42
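    The heuristics listed in the abstract above translate into very simple checks. A toy Python sketch; the page fields and the extension list are illustrative assumptions, not taken from the article.

      IMAGE_EXTENSIONS = (".gif", ".jpg", ".jpeg", ".png")

      def looks_like_image_hit(page, subject):
          """page: dict with 'links' (list of URLs), 'title', optional 'caption'."""
          subject = subject.lower()
          # strategy 1: page links to graphics files
          has_graphics = any(link.lower().endswith(IMAGE_EXTENSIONS)
                             for link in page.get("links", []))
          # strategy 2: a caption mentions the subject
          caption_match = subject in page.get("caption", "").lower()
          # strategy 3: the title signals pictures on the subject
          title = page.get("title", "").lower()
          title_match = "picture" in title and subject in title
          return has_graphics or caption_match or title_match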
  8. Lawrence, S.; Giles, C.L.: Accessibility and distribution of information on the Web (1999) 0.07
    
    Abstract
    Search engine coverage relative to the estimated size of the publicly indexable web has decreased substantially since December 1997, with no engine indexing more than about 16% of the estimated size of the publicly indexable web. (Note that many queries can be satisfied with a relatively small database). Search engines are typically more likely to index sites that have more links to them (more 'popular' sites). They are also typically more likely to index US sites than non-US sites (AltaVista is an exception), and more likely to index commercial sites than educational sites. Indexing of new or modified pages by just one of the major search engines can take months. 83% of sites contain commercial content and 6% contain scientific or educational content. Only 1.5% of sites contain pornographic content. The publicly indexable web contains an estimated 800 million pages as of February 1999, encompassing about 15 terabytes of information or about 6 terabytes of text after removing HTML tags, comments, and extra whitespace. The simple HTML "keywords" and "description" metatags are only used on the homepages of 34% of sites. Only 0.3% of sites use the Dublin Core metadata standard.
  9. Collins, B.R.: Webwatch (1997) 0.06
    
    Abstract
    The Internet and WWW can be searched by using search engines such as Yahoo! or by using review directories, that is, sites which review and rate thousands of web sites and provide proprietary search engines to the sites. Describes and evaluates a number of these review directories, including Magellan Internet Guide, CyberHound, NetGuide and Excite.
  10. Herrera-Viedma, E.; Pasi, G.: Soft approaches to information retrieval and information access on the Web : an introduction to the special topic section (2006) 0.05
    
    Abstract
    The World Wide Web is a popular and interactive medium used to collect, disseminate, and access an increasingly huge amount of information, which constitutes the mainstay of the so-called information and knowledge society. Because of its spectacular growth, related to both Web resources (pages, sites, and services) and number of users, the Web is nowadays the main information repository and provides some automatic systems for locating, accessing, and retrieving information. However, an open and crucial question remains: how to provide fast and effective retrieval of the information relevant to specific users' needs. This is a very hard and complex task, since it is pervaded with subjectivity, vagueness, and uncertainty. The expression soft computing refers to techniques and methodologies that work synergistically with the aim of providing flexible information processing tolerant of imprecision, vagueness, partial truth, and approximation. So, soft computing represents a good candidate to design effective systems for information access and retrieval on the Web. One of the most representative tools of soft computing is fuzzy set theory. This special topic section collects research articles witnessing some recent advances in improving the processes of information access and retrieval on the Web by using soft computing tools, and in particular, by using fuzzy sets and/or integrating them with other soft computing tools. In this introductory article, we first review the problem of Web retrieval and the concept of soft computing technology. We then briefly introduce the articles in this section and conclude by highlighting some future research directions that could benefit from the use of soft computing technologies.
    Date
    22. 7.2006 16:59:33
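    As a concrete illustration of the fuzzy-set machinery the introduction above refers to: documents receive a graded membership in "relevant to term t" rather than a binary match, and Boolean AND/OR are commonly modelled as min/max over the membership grades. A minimal Python sketch with made-up grades:

      index = {                      # term -> {doc: membership grade in [0, 1]}
          "fuzzy": {"d1": 0.9, "d2": 0.4},
          "retrieval": {"d1": 0.7, "d2": 0.8, "d3": 0.6},
      }

      def fuzzy_and(term1, term2):
          """AND as the minimum of the two membership grades."""
          docs = set(index[term1]) & set(index[term2])
          return {d: min(index[term1][d], index[term2][d]) for d in docs}

      def fuzzy_or(term1, term2):
          """OR as the maximum, with grade 0 for unindexed documents."""
          docs = set(index[term1]) | set(index[term2])
          return {d: max(index[term1].get(d, 0.0), index[term2].get(d, 0.0))
                  for d in docs}

      print(fuzzy_and("fuzzy", "retrieval"))   # {'d1': 0.7, 'd2': 0.4}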
  11. Fischer, T.; Neuroth, H.: SSG-FI - special subject gateways to high quality Internet resources for scientific users (2000) 0.05
    
    Abstract
    Project SSG-FI at SUB Göttingen provides special subject gateways to international high quality Internet resources for scientific users. Internet sites are selected by subject specialists and described using an extension of qualified Dublin Core metadata. A basic evaluation is added. These descriptions are freely available and can be searched and browsed. There are now subject gateways for 3 subject areas: earth sciences (GeoGuide); mathematics (MathGuide); and Anglo-American culture (split into HistoryGuide and AnglistikGuide). Together they receive about 3,300 'hard' requests per day, thus reaching over 1 million requests per year. The project SSG-FI behind these guides is open to collaboration. Institutions and private persons wishing to contribute can notify the SSG-FI team or send full data sets. Regular contributors can request registration with the project to access the database via the Internet and create and edit records.
    Date
    22. 6.2002 19:40:42
  12. ap: Konkurrenz für Google : Neue Suchmaschine "Teoma" gestartet (2002) 0.05
    
    Content
    "Die Suchmaschine Google gilt oft als der beste Weg, um schnell etwas im Internet zu finden. Das war einmal, behauptet Apostolos Gerasoulis, jetzt gebe es www.teoma. com. "Wir sind die nächste Generation der Suchmaschinen", verspricht der Mathematikprofessor. Die Web-Sites von Google und Teoma sind ähnlich aufgemacht. Beide bieten eine weitgehend weiße Startseite mit wenigen, klaren Farben. Beide Suchmaschinen benutzen bei ihrer Arbeit zur Analyse der Anfragen einen komplizierten Algorithmus. Teoma hält den eigenen Ansatz aber für besser, weil dabei das Internet in Gruppen von Online-Gemeinschaften unterteilt wird. Dies liefere bessere Ergebnisse und erlaube eine nützlichere Auswahl. Zu einem Suchbegriff erscheinen bei Teoma zuerst links oben die bezahlten Verweise, darunter dann' alle anderen gefundenen Web-Seiten. Rechts erscheinen Vorschläge zur Verfeinerung der Suchanfrage, darunter manchmal Links von "Experten und Enthusiasten". Diese qualifizierten Antworten sind eine der Stärken, mit denen Teoma wuchern möchte. Sie sind besonders für Anfänger nützlich, die nach allgemeinen Themen wie Afrika" oder "Fußball" suchen. Allerdings könnte dieser Ergebnisdienst Nutzer auch überfordern, gerade wenn sie an das einfache Google gewöhnt seien, kritsiert Rob Lancaster von der Yankee Group."
    Date
    3. 5.1997 8:44:22
  13. Fryxell, D.A.: ¬9 Web search sites examined (1996) 0.05
    
    Abstract
    WWW search engines continue to proliferate. Tests 9 sites by carrying out 5 different searches on each. Covers: Yahoo!, OpenText, AltaVista, Lycos, WebCrawler, Excite, InfoSeek, Magellan and Point
  14. Raeder, A.: Finding Web sites (1995) 0.05
    
    Abstract
    WWW sites provide graphical hyperlinked views of Internet information. Reviews selected sites that offer access to the Internet. Discusses the services offered by O'Reilly and Associates Inc's Whole Internet Guide; WebCrawler from the University of Washington; Yahoo's Guide to WWW; Library of Congress' Global Electronic Library; The Internet Scout Report; Commerce Net; Commercial Yellow pages; the Virtual Tourist; Geographic Directory of WWW servers; and the Hot, Hot List.
  15. Blake, P.: CyberHound sniffs out the right sites (1996) 0.05
    
    Abstract
    Describes Gale Research's new search engine for the Web. CyberHound currently covers more than 30,000 sites and will expand to over 50,000 by the end of the year. CyberHound is designed for information professionals, providing reviewed and rated sites, serious searching and editorial analysis. The service offers a search capability based on Personal Library's PL Web search engine.
  16. El-Ramly, N.; Peterson. R.E.; Volonino, L.: Top ten Web sites using search engines : the case of the desalination industry (1996) 0.05
    
    Abstract
    The desalination industry involves the desalting of sea or brackish water and achieves the purpose of increasing the world's effective water supply. There are approximately 4,000 desalination Web sites. The six major Internet search engines were used to determine, according to each of the six, the top twenty sites for desalination. Each site was visited and the 120 gross returns were pared down to the final ten - the 'Top Ten'. The Top Ten were then analyzed to determine what it was that made the sites useful and informative. The major attributes were: a) currency (up-to-date); b) search site capability; c) access to articles on desalination; d) newsletters; e) databases; f) product information; g) online conferencing; h) valuable links to other sites; i) communication links; j) site maps; and k) case studies. Reasons for having a Web site and the current status and prospects for Internet commerce are discussed.
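    The "120 gross returns" above are simply 6 engines x 20 hits each. A Python sketch of the mechanical first step of such a study, merging and deduplicating the per-engine lists before any qualitative analysis; the engine names and lists are placeholders, not the study's data.

      from collections import Counter

      def merge_top_lists(lists_by_engine):
          """lists_by_engine: engine name -> ordered list of site URLs."""
          votes = Counter()
          for hits in lists_by_engine.values():
              votes.update(set(hits))      # one vote per engine per site
          # sites named by the most engines float to the top
          return [site for site, _ in votes.most_common()]

      gross = {"engine_a": ["s1", "s2"], "engine_b": ["s2", "s3"]}  # placeholders
      print(merge_top_lists(gross))        # ['s2', 's1', 's3']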
  17. Toms, E.G.; Taves, A.R.: Measuring user perceptions of Web site reputation (2004) 0.05
    
    Abstract
    In this study, we compare a search tool, TOPIC, with three other widely used tools that retrieve information from the Web: AltaVista, Google, and Lycos. These tools use different techniques for outputting and ranking Web sites: external link structure (TOPIC and Google) and semantic content analysis (AltaVista and Lycos). TOPIC purports to output, and highly rank within its hit list, reputable Web sites for searched topics. In this study, 80 participants reviewed the output (i.e., highly ranked sites) from each tool and assessed the quality of retrieved sites. The 4800 individual assessments of 240 sites that represent 12 topics indicated that Google tends to identify and highly rank significantly more reputable Web sites than TOPIC, which, in turn, outputs more than AltaVista and Lycos, but this was not consistent from topic to topic. Metrics derived from reputation research were used in the assessment and a factor analysis was employed to identify a key factor, which we call 'repute'. The results of this research include insight into the factors that Web users consider in formulating perceptions of Web site reputation, and insight into which search tools are outputting reputable sites for Web users. Our findings, we believe, have implications for Web users and suggest the need for future research to assess the relationship between Web page characteristics and their perceived reputation.
  18. Jacobs, M.: Criteria for evaluating alternative MEDLINE search engines (1998) 0.04
    
    Abstract
    Reports results of a study, undertaken at the J. Otto Lottes Health Sciences Library, Missouri University at Columbia, to derive a set of evaluation criteria to assist librarians in determining the positive and negative aspects of alternative Web sites available for searching MEDLINE via the WWW. A set of searches was used systematically to compare MEDLINE Web sites, including: Avicenna; America Online; HealthGate; PubMed; Medscape; and Physicians' Online. Focuses on the principal features of the search engines used in the sites: default fields and operators; MeSH; subheadings; stopwords protected in MeSH; truncation and stemming. Describes the group processes used to arrive at the evaluation criteria and some general conclusions which will help librarians in directing their users to a particular MEDLINE site.
  19. Notess, G.R.: Custom search engines : tools and tips (2008) 0.04
    
    Abstract
    The basic steps to build one are fairly simple:
      • Sign up
      • Pick a search engine name
      • Choose a list of sites
      • Add the sites
      • Publish
    That quickly, a search engine can be created to search a specific portion of the web, such as local government sites, childcare resources, or historical archives. It is easy to create a simple customized vertical search engine as well as support much more advanced capabilities (see the Google AJAX search API article). Try these tools and tips and build a customized search engine or two for your own users to help them find more targeted and relevant web information.
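    Under the hood, "choose a list of sites" means every query is confined to the chosen domains. A minimal Python sketch of that idea using the site: operator that major engines support; the domain list is hypothetical.

      SITES = ["example-city.gov", "example-county.gov"]   # hypothetical list

      def restrict_to_sites(query, sites=SITES):
          """Rewrite a query so a general engine searches only the chosen sites."""
          scope = " OR ".join(f"site:{s}" for s in sites)
          return f"{query} ({scope})"

      print(restrict_to_sites("parking permits"))
      # parking permits (site:example-city.gov OR site:example-county.gov)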
  20. Campbell, K.: Understanding and comparing search engines (1996) 0.04
    
    Abstract
    A meta-list of 11 other sites that critique search engines

Languages

  • e 181
  • d 104
  • nl 3
  • f 2

Types

  • a 257
  • el 20
  • m 14
  • s 3
  • x 3
  • p 2
  • r 2