Search (674 results, page 1 of 34)

  • theme_ss:"Suchmaschinen"
  1. Metzger, C.: Gratis-Bildmaterial aus dem Web (2005) 0.11
    0.105149016 = product of:
      0.15772352 = sum of:
        0.045240454 = weight(_text_:web in 3412) [ClassicSimilarity], result of:
          0.045240454 = score(doc=3412,freq=14.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.28619388 = fieldWeight in 3412, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3412)
        0.11248306 = sum of:
          0.09279523 = weight(_text_:designer in 3412) [ClassicSimilarity], result of:
            0.09279523 = score(doc=3412,freq=2.0), product of:
              0.36824805 = queryWeight, product of:
                7.602543 = idf(docFreq=59, maxDocs=44218)
                0.048437484 = queryNorm
              0.2519911 = fieldWeight in 3412, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.602543 = idf(docFreq=59, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3412)
          0.01968783 = weight(_text_:22 in 3412) [ClassicSimilarity], result of:
            0.01968783 = score(doc=3412,freq=2.0), product of:
              0.16961981 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.048437484 = queryNorm
              0.116070345 = fieldWeight in 3412, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3412)
      0.6666667 = coord(2/3)
    
    Abstract
    No more boring web pages, uninformative homepages, or sites with a drab background: you can download suitable image material from the Internet free of charge. Anyone who works a lot with texts of all kinds knows that, for some documents, it is the appealing design with inserted graphics that invites reading in the first place. But text illustrations, photos, and graphics do more than break up the monotony of the page layout. Well-chosen images in the right place underscore a document's key messages, in print as well as on the Web. Digital photographers, too, sometimes need third-party image material, for instance for a photo montage, or to copy certain image areas for retouching. Web designers use image elements in page layouts or for expressive navigation elements. Yet the right image for document design or creative photo editing is by no means always available in one's own collection.
    Content
    Tracking down royalty-free images with a search engine: Almost every website on the Internet contains images that you can save to your hard disk in the browser and reuse in other applications. The total supply of photos, graphics, and clip-art elements is correspondingly huge. However, you may use graphic elements embedded in a website for your own purposes only if the copyright holder expressly permits it. Such permission is usually marked with a term like "rechtefrei", "lizenzfrei", "zur freien Nutzung" or, in English, "royalty-free". The problem: most websites provide no copyright notices for the embedded images. The easiest way to find royalty-free web images is with a search engine optimized for graphics and photos, such as Google (www.google.de), Fotos.de (www.fotos.de), or Picsearch (www.picsearch.com). To index photos, search engines normally use the text of the web page on which the image appears. Duplicate hits are filtered out automatically, and the highest-quality images are placed at the top of the result list. According to its operator, Google currently has 880 million graphics indexed. You reach the image search service from the Google home page by clicking the "Bilder" tab. Enter one or more search terms, separated by spaces, into the search field and click the "Google Suche" button. Hits are displayed as thumbnail previews. Clicking the desired image opens the website containing the photo. To save a graphic to your hard disk, right-click it and choose "Bild speichern unter" from the context menu.
    You can also track down royalty-free images, or entire online photo galleries, without a dedicated image search function, using a standard query in a search engine such as Alltheweb (www.alltheweb.com). To do so, enter a term like "Foto", "Bilder", or "Picture" combined with "lizenzfrei" or "royalty-free" into the search field of the engine you are using.
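The query recipe described above (a subject term combined with a license keyword) can be sketched in a few lines of Python; the function name and default term list are illustrative assumptions, not part of the article:

```python
# Build search queries that pair a subject with the license keywords
# the article recommends ("lizenzfrei", "royalty-free").
def build_license_queries(subject, license_terms=("lizenzfrei", "royalty-free")):
    """Return one search query string per license keyword."""
    return [f"{subject} {term}" for term in license_terms]

queries = build_license_queries("Berge Foto")
# e.g. ["Berge Foto lizenzfrei", "Berge Foto royalty-free"]
```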
    Date
    22. 5.2005 10:06:58
    Footnote
    Web image services:
    - www.72px.de The site offers free images for non-commercial projects. Registered users can publish their own photos.
    - www.fotodatabase.net In this free photo community, anyone can contribute their own images and resell usage rights, unrestricted in time and territory, to interested parties for 9.90 euros.
    - www.fotodatenbank.com This photo website offers a commenting feature. Private and commercial reuse of the images is free of charge, provided the image source is credited.
    - www.fotos-direkt.de Usage rights for the high-resolution images cost 9.90 euros; low-resolution photos are free. You can also order themed photo CDs for around 40 euros.
    - www.photobox.ru On this photo website with an English-language interface, image rights cost between 5 and 35 euros, depending on resolution.
    - www.photocase.de Photos by ambitious hobby photographers, available at a minimum resolution of 1800 x 1400 pixels. Downloads are limited by a bonus-point system.
    - www.pixelquelle.de All images can be used free of charge for commercial and non-commercial projects alike. There is also a photo upload function.
    - www.sxc.hu In this photo-sharing community for royalty-free images, every visitor can contribute their own images and download and use other users' photos.
    - www.visipix.ch The website offers photographic reproductions of paintings; the collection comprises around 90,000 images in total. Most motifs are free for both private and commercial use. A search engine makes it easier to find motifs.
  2. Berinstein, P.: Turning visual : image search engines on the Web (1998) 0.09
    0.08547392 = product of:
      0.12821087 = sum of:
        0.10196043 = weight(_text_:web in 3595) [ClassicSimilarity], result of:
          0.10196043 = score(doc=3595,freq=10.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.6450079 = fieldWeight in 3595, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3595)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 3595) [ClassicSimilarity], result of:
              0.05250088 = score(doc=3595,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 3595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3595)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Gives an overview of image search engines on the Web. They work by: looking for graphics files; looking for a caption; looking for Web sites whose titles indicate the presence of pictures on a certain subject; or employing human intervention. Describes the image search capabilities of: AltaVista; Amazing Picture Machine (http://www.ncrtec.org/picture.htm); HotBot; ImageSurfer (http://ipix.yahoo.com); Lycos; Web Clip Art Search Engine and WebSEEK. The search engines employing human intervention provide the best results.
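One of the techniques listed above, indexing images by nearby caption text, can be sketched with the standard library's HTML parser; here the `alt` attribute stands in for a caption, and the class name is an invented example:

```python
from html.parser import HTMLParser

class ImgAltIndexer(HTMLParser):
    """Index image URLs under the words of their alt text (a caption proxy)."""
    def __init__(self):
        super().__init__()
        self.index = {}  # keyword -> list of image URLs

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        for word in a.get("alt", "").lower().split():
            self.index.setdefault(word, []).append(a.get("src"))

indexer = ImgAltIndexer()
indexer.feed('<p>A photo: <img src="cat.jpg" alt="Sleeping cat"></p>')
```

Real engines combine several such signals (file names, page titles, surrounding text); this shows only the caption-matching idea.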
    Object
    Web Clip Art Search Engine
    Source
    Online. 22(1998) no.3, S.37-38,40-42
  3. Vidmar, D.J.: Darwin on the Web : the evolution of search tools (1999) 0.08
    0.083823286 = product of:
      0.12573493 = sum of:
        0.07979666 = weight(_text_:web in 3175) [ClassicSimilarity], result of:
          0.07979666 = score(doc=3175,freq=2.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.50479853 = fieldWeight in 3175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=3175)
        0.045938272 = product of:
          0.091876544 = sum of:
            0.091876544 = weight(_text_:22 in 3175) [ClassicSimilarity], result of:
              0.091876544 = score(doc=3175,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.5416616 = fieldWeight in 3175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3175)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Computers in libraries. 19(1999) no.5, S.22-28
  4. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.08
    0.083530426 = product of:
      0.12529564 = sum of:
        0.076931566 = product of:
          0.2307947 = sum of:
            0.2307947 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.2307947 = score(doc=2514,freq=2.0), product of:
                0.41065353 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.048437484 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
        0.048364073 = weight(_text_:web in 2514) [ClassicSimilarity], result of:
          0.048364073 = score(doc=2514,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.3059541 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. Cf. also: http://www2002.org/CDROM/refereed/643/.
    Source
    WWW '02: Proceedings of the 11th International Conference on World Wide Web, May 7-11, 2002, Honolulu, Hawaii, USA
  5. Höfer, W.: Detektive im Web (1999) 0.08
    0.08272182 = product of:
      0.12408273 = sum of:
        0.06839713 = weight(_text_:web in 4007) [ClassicSimilarity], result of:
          0.06839713 = score(doc=4007,freq=2.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.43268442 = fieldWeight in 4007, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=4007)
        0.0556856 = product of:
          0.1113712 = sum of:
            0.1113712 = weight(_text_:22 in 4007) [ClassicSimilarity], result of:
              0.1113712 = score(doc=4007,freq=4.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.6565931 = fieldWeight in 4007, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4007)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 8.1999 20:22:06
  6. Rogers, I.: ¬The Google Pagerank algorithm and how it works (2002) 0.07
    0.07055211 = product of:
      0.10582816 = sum of:
        0.028498804 = weight(_text_:web in 2548) [ClassicSimilarity], result of:
          0.028498804 = score(doc=2548,freq=2.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.18028519 = fieldWeight in 2548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2548)
        0.07732935 = product of:
          0.1546587 = sum of:
            0.1546587 = weight(_text_:designer in 2548) [ClassicSimilarity], result of:
              0.1546587 = score(doc=2548,freq=2.0), product of:
                0.36824805 = queryWeight, product of:
                  7.602543 = idf(docFreq=59, maxDocs=44218)
                  0.048437484 = queryNorm
                0.41998512 = fieldWeight in 2548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.602543 = idf(docFreq=59, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2548)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    PageRank is a topic much discussed by Search Engine Optimisation (SEO) experts. At the heart of PageRank is a mathematical formula that seems scary to look at but is actually fairly simple to understand. Despite this, many people seem to get it wrong! In particular, "Chris Ridings of www.searchenginesystems.net" has written a paper entitled "PageRank Explained: Everything you've always wanted to know about PageRank", pointed to by many people, that contains a fundamental mistake early on in the explanation! Unfortunately this means some of the recommendations in the paper are not quite accurate. By showing code to correctly calculate real PageRank I hope to achieve several things in this response: - Clearly explain how PageRank is calculated. - Go through every example in Chris' paper, and add some more of my own, showing the correct PageRank for each diagram. By showing the code used to calculate each diagram I've opened myself up to peer review - mostly in an effort to make sure the examples are correct, but also because the code can help explain the PageRank calculations. - Describe some principles and observations on website design based on these correctly calculated examples. Any good web designer should take the time to fully understand how PageRank really works - if you don't, then your site's layout could be seriously hurting your Google listings! [Note: I have nothing in particular against Chris. If I find any other papers on the subject I'll try to comment evenly]
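The calculation the abstract refers to can be sketched as a short iterative Python function, using the standard damped formula PR(p) = (1-d)/N + d * sum over in-linking pages q of PR(q)/outdegree(q); the three-page example graph and damping factor are illustrative, not taken from Rogers' article:

```python
# Minimal iterative PageRank sketch for a small link graph.
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}          # uniform starting ranks
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Sum the rank flowing in from every page that links to p.
            inbound = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * inbound
        pr = new
    return pr

# Three pages in a simple cycle: every page ends up with the same rank (1/3).
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["A"]})
```

(This sketch ignores dangling pages with no outlinks, which a full implementation must handle.)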
  7. Drabenstott, K.M.: Web search strategies (2000) 0.07
    0.06954759 = product of:
      0.10432139 = sum of:
        0.09119617 = weight(_text_:web in 1188) [ClassicSimilarity], result of:
          0.09119617 = score(doc=1188,freq=32.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.5769126 = fieldWeight in 1188, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1188)
        0.01312522 = product of:
          0.02625044 = sum of:
            0.02625044 = weight(_text_:22 in 1188) [ClassicSimilarity], result of:
              0.02625044 = score(doc=1188,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.15476047 = fieldWeight in 1188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1188)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Surfing the World Wide Web used to be cool, dude, real cool. But things have gotten hot - so hot that finding something useful on the Web is no longer cool. It is suffocating Web searchers in the smoke and debris of mountain-sized lists of hits, decisions about which search engines they should use, whether they will get lost in the dizzying maze of a subject directory, use the right syntax for the search engine at hand, enter keywords that are likely to retrieve hits on the topics they have in mind, or enlist a browser that has sufficient functionality to display the most promising hits. When it comes to Web searching, in a few short years we have gone from the cool image of surfing the Web into the frying pan of searching the Web. We can turn down the heat by rethinking what Web searchers are doing and introduce some order into the chaos. Web search strategies that are tool-based - oriented to specific Web searching tools such as search engines, subject directories, and meta search engines - have been widely promoted, and these strategies are just not working. It is time to dissect what Web searching tools expect from searchers and adjust our search strategies to these new tools. This discussion offers Web searchers help in the form of search strategies that are based on strategies that librarians have been using for a long time to search commercial information retrieval systems like Dialog, NEXIS, Wilsonline, FirstSearch, and Data-Star.
    Content
    "Web searching is different from searching commercial IR systems. We can learn from search strategies recommended for searching IR systems, but most won't be effective for Web searching. Web searchers need strate gies that let search engines do the job they were designed to do. This article presents six new Web searching strategies that do just that."
    Date
    22. 9.1997 19:16:05
  8. Alqaraleh, S.; Ramadan, O.; Salamah, M.: Efficient watcher based web crawler design (2015) 0.06
    0.064675555 = product of:
      0.097013324 = sum of:
        0.080606796 = weight(_text_:web in 1627) [ClassicSimilarity], result of:
          0.080606796 = score(doc=1627,freq=16.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.5099235 = fieldWeight in 1627, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1627)
        0.016406527 = product of:
          0.032813054 = sum of:
            0.032813054 = weight(_text_:22 in 1627) [ClassicSimilarity], result of:
              0.032813054 = score(doc=1627,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.19345059 = fieldWeight in 1627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1627)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: The purpose of this paper is to design a watcher-based crawler (WBC) that has the ability of crawling static and dynamic web sites, and can download only the updated and newly added web pages.
    Design/methodology/approach: In the proposed WBC crawler, a watcher file, which can be uploaded to the web sites' servers, prepares a report that contains the addresses of the updated and the newly added web pages. In addition, the WBC is split into five units, where each unit is responsible for performing a specific crawling process.
    Findings: Several experiments have been conducted and it has been observed that the proposed WBC increases the number of uniquely visited static and dynamic web sites as compared with the existing crawling techniques. In addition, the proposed watcher file not only allows the crawlers to visit the updated and newly added pages, but also solves the crawlers' overlapping and communication problems.
    Originality/value: The proposed WBC performs all crawling processes in the sense that it detects all updated and newly added pages automatically, without any explicit human intervention or downloading of the entire web sites.
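The watcher idea, where a site-side file reports changed pages so the crawler fetches only those, can be sketched as follows; the report format (one URL per line) and the function name are assumptions for illustration, since the paper does not publish its format here:

```python
# Parse a watcher report into a deduplicated fetch list, so the crawler
# visits only the pages the site reports as updated or newly added.
def pages_to_crawl(watcher_report):
    """watcher_report: text with one URL per line; returns unique URLs in order."""
    seen, todo = set(), []
    for line in watcher_report.splitlines():
        url = line.strip()
        if url and url not in seen:
            seen.add(url)
            todo.append(url)
    return todo

report = "/index.html\n/index.html\n/news/2015-01.html\n"
todo = pages_to_crawl(report)
```

Fetching only this list, rather than re-crawling the whole site, is what lets the WBC skip unchanged pages.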
    Date
    20. 1.2015 18:30:22
  9. Carrière, S.J.; Kazman, R.: Webquery : searching and visualising the Web through connectivity (1997) 0.06
    0.064105436 = product of:
      0.096158154 = sum of:
        0.07647032 = weight(_text_:web in 2674) [ClassicSimilarity], result of:
          0.07647032 = score(doc=2674,freq=10.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.48375595 = fieldWeight in 2674, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2674)
        0.01968783 = product of:
          0.03937566 = sum of:
            0.03937566 = weight(_text_:22 in 2674) [ClassicSimilarity], result of:
              0.03937566 = score(doc=2674,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.23214069 = fieldWeight in 2674, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2674)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The WebQuery system offers a powerful new method for searching the Web based on connectivity and content. Examines links among the nodes returned in a keyword-based query. Ranks the nodes, giving the highest rank to the most highly connected nodes. By doing so, finds hot spots on the Web that contain information germane to a user's query. WebQuery not only ranks and filters the results of a Web query; it also extends the result set beyond what the search engine retrieves, by finding interesting sites that are highly connected to those sites returned by the original query. Even with WebQuery filtering and ranking query results, the result set can be enormous. Explores techniques for visualizing the returned information and discusses the criteria for using each of the techniques.
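The core ranking step described above, giving the highest rank to the most highly connected nodes, can be sketched in a few lines; the example graph is invented, and WebQuery's actual scoring and result-set expansion are more involved:

```python
# Rank result nodes by degree: the number of links incident to each node.
def rank_by_connectivity(nodes, edges):
    """edges: list of (a, b) pairs; returns nodes sorted by degree, highest first."""
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(nodes, key=lambda n: degree[n], reverse=True)

# p2 is incident to both links, so it ranks first.
ranked = rank_by_connectivity(["p1", "p2", "p3"], [("p1", "p2"), ("p2", "p3")])
```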
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
  10. Nicholson, S.; Sierra, T.; Eseryel, U.Y.; Park, J.-H.; Barkow, P.; Pozo, E.J.; Ward, J.: How much of it is real? : analysis of paid placement in Web search engine results (2006) 0.06
    0.064105436 = product of:
      0.096158154 = sum of:
        0.07647032 = weight(_text_:web in 5278) [ClassicSimilarity], result of:
          0.07647032 = score(doc=5278,freq=10.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.48375595 = fieldWeight in 5278, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5278)
        0.01968783 = product of:
          0.03937566 = sum of:
            0.03937566 = weight(_text_:22 in 5278) [ClassicSimilarity], result of:
              0.03937566 = score(doc=5278,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.23214069 = fieldWeight in 5278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5278)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Most Web search tools integrate sponsored results with results from their internal editorial database in providing results to users. The goal of this research is to get a better idea of how much of the screen real estate displays real editorial results as compared to sponsored results. The overall average results are that 40% of all results presented on the first screen are real results, and when the entire first Web page is considered, 67% of the results are nonsponsored results. For general search tools such as Google, 56% of the first screen and 82% of the first Web page contain nonsponsored results. Other results include that query structure makes a significant difference in the percentage of nonsponsored results returned by a search. Similarly, the topic of the query also can have a significant effect on the percentage of sponsored results displayed by most Web search tools.
    Date
    22. 7.2006 16:32:57
  11. Marchiori, M.: ¬The quest for correct information on the Web : hyper search engines (1997) 0.06
    0.061383378 = product of:
      0.092075065 = sum of:
        0.06910593 = weight(_text_:web in 7453) [ClassicSimilarity], result of:
          0.06910593 = score(doc=7453,freq=6.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.43716836 = fieldWeight in 7453, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7453)
        0.022969136 = product of:
          0.045938272 = sum of:
            0.045938272 = weight(_text_:22 in 7453) [ClassicSimilarity], result of:
              0.045938272 = score(doc=7453,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.2708308 = fieldWeight in 7453, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7453)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Presents a novel method to extract from a web object its hyper informative content, in contrast with current search engines, which only deal with the textual information content. This method is not only valuable per se, but it is shown to be able to considerably increase the precision of current search engines. It integrates with existing search engine technology since it can be implemented on top of every search engine, acting as a post-processor, thus automatically transforming a search engine into its corresponding hyper version. Shows how the hyper information can be usefully employed to face the search engines' persuasion problem.
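The post-processor arrangement the abstract describes, re-scoring an engine's hits by blending textual and hyper (link-derived) information, might be sketched as below; the blend weight, input format, and function name are assumptions, not Marchiori's actual formula:

```python
# Re-rank search hits by a weighted blend of textual and link-derived scores.
def hyper_rescore(hits, alpha=0.5):
    """hits: list of (url, text_score, hyper_score); returns URLs, best first."""
    scored = sorted(hits, key=lambda h: alpha * h[1] + (1 - alpha) * h[2],
                    reverse=True)
    return [url for url, _, _ in scored]

# A page weak on text but strong on hyper information can overtake one
# that only matches textually.
reranked = hyper_rescore([("a.html", 0.9, 0.1), ("b.html", 0.4, 0.8)])
```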
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
  12. Kurzke, C.; Galle, M.; Bathelt, M.: WebAssistant : a user profile specific information retrieval assistant (1998) 0.06
    0.061383378 = product of:
      0.092075065 = sum of:
        0.06910593 = weight(_text_:web in 3559) [ClassicSimilarity], result of:
          0.06910593 = score(doc=3559,freq=6.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.43716836 = fieldWeight in 3559, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3559)
        0.022969136 = product of:
          0.045938272 = sum of:
            0.045938272 = weight(_text_:22 in 3559) [ClassicSimilarity], result of:
              0.045938272 = score(doc=3559,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.2708308 = fieldWeight in 3559, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3559)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes the concept of a proxy-based information classification and filtering utility, named WebAssistant. On behalf of users, a private view of the WWW is generated based on a previously determined profile. This profile is created by monitoring the user and group activities when browsing WWW pages. Additional features are integrated to allow for easy interoperability among workgroups with similar project interests, to maintain personal and common hotlists with automatic modification checks, and to provide a sophisticated search engine front-end.
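The profile-based filtering at the heart of such an assistant can be sketched minimally; representing a profile as a set of keywords, and the function name, are assumptions made for illustration:

```python
# Keep only the pages whose text matches at least one profile keyword,
# as a proxy-side assistant might when building a user's private view.
def filter_by_profile(pages, profile):
    """pages: dict of url -> page text; profile: set of lowercase keywords."""
    return [url for url, text in pages.items()
            if any(term in text.lower() for term in profile)]

kept = filter_by_profile(
    {"a.html": "Search engine news", "b.html": "Cooking recipes"},
    profile={"search", "retrieval"},
)
```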
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Theme
    Web-Agenten
  13. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.06
    0.061383378 = product of:
      0.092075065 = sum of:
        0.06910593 = weight(_text_:web in 1673) [ClassicSimilarity], result of:
          0.06910593 = score(doc=1673,freq=6.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.43716836 = fieldWeight in 1673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.022969136 = product of:
          0.045938272 = sum of:
            0.045938272 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.045938272 = score(doc=1673,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; vgl. auch: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
  14. Loia, V.; Pedrycz, W.; Senatore, S.; Sessa, M.I.: Web navigation support by means of proximity-driven assistant agents (2006) 0.06
    0.06120485 = product of:
      0.091807276 = sum of:
        0.07540075 = weight(_text_:web in 5283) [ClassicSimilarity], result of:
          0.07540075 = score(doc=5283,freq=14.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.47698978 = fieldWeight in 5283, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5283)
        0.016406527 = product of:
          0.032813054 = sum of:
            0.032813054 = weight(_text_:22 in 5283) [ClassicSimilarity], result of:
              0.032813054 = score(doc=5283,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.19345059 = fieldWeight in 5283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5283)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The explosive growth of the Web and the resulting demands of Web personalization have made it a key goal to customize Web information to the needs of specific users, taking advantage of the knowledge acquired from the analysis of the user's navigational behavior (usage data) in correlation with other information collected in the Web context, namely, structure, content, and user profile data. This work presents an agent-based framework designed to help a user achieve personalized navigation by recommending related documents according to the user's responses in similar-pages searching mode. Our agent-based approach is grounded in the integration of different techniques and methodologies into a unique platform featuring user profiling, fuzzy multisets, proximity-oriented fuzzy clustering, and knowledge-based discovery technologies. Each of these methodologies serves to solve one facet of the general problem (discovering documents relevant to the user by searching the Web) and is treated by specialized agents that ultimately achieve the final functionality through cooperation and task distribution.
    Date
    22. 7.2006 16:59:13
    Footnote
    Beitrag in einer Special Topic Section on Soft Approaches to Information Retrieval and Information Access on the Web
  15. Fong, W.W.: Searching the World Wide Web (1996) 0.06
    0.06049058 = product of:
      0.09073587 = sum of:
        0.06448543 = weight(_text_:web in 6597) [ClassicSimilarity], result of:
          0.06448543 = score(doc=6597,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.4079388 = fieldWeight in 6597, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=6597)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 6597) [ClassicSimilarity], result of:
              0.05250088 = score(doc=6597,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 6597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6597)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reviews the availability on the WWW of search engines designed to organize various Web information sources. Discusses the differences and similarities of each search engine and their advantages and disadvantages. Search engines included in the study were: AltaVista, CUI W3 Catalog, InfoSeek, Lycos, Magellan, Yahoo
    Source
    Journal of library and information science. 22(1996) no.1, S.15-36
  16. Duval, B.K.; Main, L.: Searching on the Net : general overview (1996) 0.06
    0.06049058 = product of:
      0.09073587 = sum of:
        0.06448543 = weight(_text_:web in 7268) [ClassicSimilarity], result of:
          0.06448543 = score(doc=7268,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.4079388 = fieldWeight in 7268, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=7268)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 7268) [ClassicSimilarity], result of:
              0.05250088 = score(doc=7268,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 7268, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7268)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    First of a 3-part series discussing how to access and use Web search engines on the Internet. Distinguishes between FTP sites, Gopher sites, Usenet News sites and Web sites. Considers subject searching versus keyword searching; how to improve search strategies and success rates; and bookmarks. Covers Yahoo!, Lycos, InfoSeek, Magellan, Excite, Inktomi, HotBot and AltaVista
    Date
    6. 3.1997 16:22:15
  17. Notess, G.R.: Toward more comprehensive Web searching : single searching versus megasearching (1998) 0.06
    0.06049058 = product of:
      0.09073587 = sum of:
        0.06448543 = weight(_text_:web in 3278) [ClassicSimilarity], result of:
          0.06448543 = score(doc=3278,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.4079388 = fieldWeight in 3278, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3278)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 3278) [ClassicSimilarity], result of:
              0.05250088 = score(doc=3278,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 3278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3278)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In spite of their size, the major Web indexes are not comprehensive. Considers approaches to carrying out comprehensive searches, highlighting their advantages and disadvantages. In the single search tool approach, users search the largest of the databases one by one, using the command language unique to each to increase the precision of the search. In the megasearch approach, search engines use one form that simultaneously sends a single query to a number of search engines and then presents the results. Inference Find, Dogpile and MetaFind are examples of good metasearch engines
    Source
    Online. 22(1998) no.2, S.73-76
  18. Amato, G.; Rabitti, F.; Savino, P.: Multimedia document search on the Web (1998) 0.06
    0.06049058 = product of:
      0.09073587 = sum of:
        0.06448543 = weight(_text_:web in 3605) [ClassicSimilarity], result of:
          0.06448543 = score(doc=3605,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.4079388 = fieldWeight in 3605, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3605)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 3605) [ClassicSimilarity], result of:
              0.05250088 = score(doc=3605,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 3605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3605)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  19. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.06
    0.06049058 = product of:
      0.09073587 = sum of:
        0.06448543 = weight(_text_:web in 261) [ClassicSimilarity], result of:
          0.06448543 = score(doc=261,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.4079388 = fieldWeight in 261, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=261)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 261) [ClassicSimilarity], result of:
              0.05250088 = score(doc=261,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=261)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The German search engines Abacho, Acoon, Fireball and Lycos, as well as the Web catalogues Web.de and Yahoo!, are subjected to a quality test measuring relative recall, precision and availability. The retrieval test methods are presented. On average, at a cut-off value of 25, a recall of around 22%, a precision of just under 19% and an availability of 24% are achieved
  20. Eggeling, T.; Kroschel, A.: Alles finden im Web (2000) 0.06
    0.059873775 = product of:
      0.08981066 = sum of:
        0.05699761 = weight(_text_:web in 4884) [ClassicSimilarity], result of:
          0.05699761 = score(doc=4884,freq=2.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.36057037 = fieldWeight in 4884, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=4884)
        0.032813054 = product of:
          0.06562611 = sum of:
            0.06562611 = weight(_text_:22 in 4884) [ClassicSimilarity], result of:
              0.06562611 = score(doc=4884,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.38690117 = fieldWeight in 4884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4884)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    9. 7.2000 14:06:22

Languages

  • e 407
  • d 256
  • f 4
  • nl 4
  • ja 1
  • sp 1

Types

  • a 576
  • el 59
  • m 47
  • x 11
  • s 9
  • r 3
  • p 2
