Search (30 results, page 1 of 2)

  • × theme_ss:"Suchmaschinen"
  • × type_ss:"el"
  • × year_i:[2000 TO 2010}
  1. Austin, D.: How Google finds your needle in the Web's haystack : as we'll see, the trick is to ask the web itself to rank the importance of pages... (2006) 0.09
    0.09150508 = product of:
      0.13725762 = sum of:
        0.033718713 = weight(_text_:wide in 93) [ClassicSimilarity], result of:
          0.033718713 = score(doc=93,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.171337 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
        0.05174041 = weight(_text_:web in 93) [ClassicSimilarity], result of:
          0.05174041 = score(doc=93,freq=16.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 93, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
        0.02293892 = weight(_text_:computer in 93) [ClassicSimilarity], result of:
          0.02293892 = score(doc=93,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.14131951 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
        0.028859572 = product of:
          0.057719145 = sum of:
            0.057719145 = weight(_text_:programs in 93) [ClassicSimilarity], result of:
              0.057719145 = score(doc=93,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.22416902 = fieldWeight in 93, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=93)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
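     The breakdown above is Lucene's ClassicSimilarity explain output: for each matching term, fieldWeight = tf x idf x fieldNorm and queryWeight = idf x queryNorm, the term's contribution is their product, a nested optional clause gets an inner coord factor, and the document score is the sum of the contributions scaled by the outer coordination factor coord(matching clauses / total clauses). A minimal Python sketch (the helper names are ours, not Lucene's; the constants are copied from the explanation above) reproduces the 0.09150508 shown for this entry:

     from math import sqrt

     def term_score(freq, idf, field_norm, query_norm):
         # ClassicSimilarity term contribution: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
         return (idf * query_norm) * (sqrt(freq) * idf * field_norm)

     QUERY_NORM, FIELD_NORM = 0.044416238, 0.02734375      # values from the explanation above
     terms = {                                             # term: (termFreq in doc 93, idf)
         "wide": (2.0, 4.4307585),
         "web": (16.0, 3.2635105),
         "computer": (2.0, 3.6545093),
         "programs": (2.0, 5.79699),
     }
     scores = {t: term_score(f, idf, FIELD_NORM, QUERY_NORM) for t, (f, idf) in terms.items()}
     scores["programs"] *= 0.5              # inner coord(1/2) on the nested clause
     print(sum(scores.values()) * 4 / 6)    # outer coord(4/6) -> ~0.0915051, the score shown above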
    
    Abstract
     Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it? Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one), so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb. Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.) Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list. One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias. Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.
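     Since the abstract describes the PageRank idea, here is a minimal power-iteration sketch of it (illustrative only, not Google's implementation; the four-page link structure is made up):

     import numpy as np

     def pagerank(links, damping=0.85, tol=1e-10):
         """links[i] lists the pages that page i links to (pages numbered 0..n-1)."""
         n = len(links)
         M = np.zeros((n, n))                # M[j, i] = 1/outdegree(i) if i links to j
         for i, outs in enumerate(links):
             if outs:
                 M[outs, i] = 1.0 / len(outs)
             else:
                 M[:, i] = 1.0 / n           # dangling page: treat as linking everywhere
         rank = np.full(n, 1.0 / n)
         while True:
             new = (1 - damping) / n + damping * M @ rank
             if np.abs(new - rank).sum() < tol:
                 return new
             rank = new

     # toy web of four pages: page 0 links to pages 1 and 2, page 1 to page 2, and so on
     print(pagerank([[1, 2], [2], [0], [0, 2]]))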
  2. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.05
    0.050085746 = product of:
      0.10017149 = sum of:
        0.04816959 = weight(_text_:wide in 2564) [ClassicSimilarity], result of:
          0.04816959 = score(doc=2564,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 2564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.036957435 = weight(_text_:web in 2564) [ClassicSimilarity], result of:
          0.036957435 = score(doc=2564,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 2564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2564,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
     PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t^k alpha^t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
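     A small sketch of the question studied here, on a hypothetical four-page graph (not data from the paper): compute PageRank for several values of alpha and compare the orderings they induce:

     import numpy as np

     def pagerank(P, alpha, iters=200):
         """Power iteration for (1 - alpha) * v * sum_k alpha^k P^k with uniform v."""
         n = len(P)
         v = np.full(n, 1.0 / n)
         x = v.copy()
         for _ in range(iters):
             x = alpha * x @ P + (1 - alpha) * v
         return x

     P = np.array([[0.0, 1.0, 0.0, 0.0],     # row-stochastic link matrix of a toy graph
                   [0.5, 0.0, 0.5, 0.0],
                   [0.0, 0.0, 0.0, 1.0],
                   [0.5, 0.5, 0.0, 0.0]])
     for alpha in (0.5, 0.85, 0.99):
         pr = pagerank(P, alpha)
         print(alpha, "ranking:", np.argsort(-pr))   # page ordering induced by this alpha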
    Date
    16. 1.2016 10:22:28
    Source
    http://vigna.di.unimi.it/ftp/papers/PageRankAsFunction.pdf [Proceedings of the ACM World Wide Web Conference (WWW), 2005]
  3. Dambeck, H.: Wie Google mit Milliarden Unbekannten rechnet : Teil.1 (2009) 0.04
    0.03962797 = product of:
      0.11888391 = sum of:
        0.07707134 = weight(_text_:wide in 3081) [ClassicSimilarity], result of:
          0.07707134 = score(doc=3081,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 3081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=3081)
        0.041812565 = weight(_text_:web in 3081) [ClassicSimilarity], result of:
          0.041812565 = score(doc=3081,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2884563 = fieldWeight in 3081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3081)
      0.33333334 = coord(2/6)
    
    Abstract
     A life without search engines? For anyone who spends much time on the World Wide Web, a downright absurd idea. To compute its result lists, Google uses an astonishingly simple mathematical procedure that copes with even billions of web pages.
  4. Smith, A.G.: Search features of digital libraries (2000) 0.04
    0.037701976 = product of:
      0.11310592 = sum of:
        0.0817465 = weight(_text_:wide in 940) [ClassicSimilarity], result of:
          0.0817465 = score(doc=940,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.4153836 = fieldWeight in 940, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=940)
        0.031359423 = weight(_text_:web in 940) [ClassicSimilarity], result of:
          0.031359423 = score(doc=940,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 940, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=940)
      0.33333334 = coord(2/6)
    
    Abstract
    Traditional on-line search services such as Dialog, DataStar and Lexis provide a wide range of search features (boolean and proximity operators, truncation, etc). This paper discusses the use of these features for effective searching, and argues that these features are required, regardless of advances in search engine technology. The literature on on-line searching is reviewed, identifying features that searchers find desirable for effective searching. A selective survey of current digital libraries available on the Web was undertaken, identifying which search features are present. The survey indicates that current digital libraries do not implement a wide range of search features. For instance: under half of the examples included controlled vocabulary, under half had proximity searching, only one enabled browsing of term indexes, and none of the digital libraries enable searchers to refine an initial search. Suggestions are made for enhancing the search effectiveness of digital libraries; for instance, by providing a full range of search operators, enabling browsing of search terms, enhancement of records with controlled vocabulary, enabling the refining of initial searches, etc.
  5. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.04
    0.037471574 = product of:
      0.11241472 = sum of:
        0.08619881 = weight(_text_:web in 4709) [ClassicSimilarity], result of:
          0.08619881 = score(doc=4709,freq=34.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.59466785 = fieldWeight in 4709, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
        0.02621591 = weight(_text_:computer in 4709) [ClassicSimilarity], result of:
          0.02621591 = score(doc=4709,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.16150802 = fieldWeight in 4709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
      0.33333334 = coord(2/6)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
    Source
    http://www.searchenginejournal.com/swoogle-an-engine-for-the-semantic-web/5469/
    Theme
    Semantic Web
  6. Lossau, N.: Search engine technology and digital libraries : libraries need to discover the academic internet (2004) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 1161) [ClassicSimilarity], result of:
          0.067437425 = score(doc=1161,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1161)
        0.036585998 = weight(_text_:web in 1161) [ClassicSimilarity], result of:
          0.036585998 = score(doc=1161,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1161)
      0.33333334 = coord(2/6)
    
    Abstract
    With the development of the World Wide Web, the "information search" has grown to be a significant business sector of a global, competitive and commercial market. Powerful players have entered this market, such as commercial internet search engines, information portals, multinational publishers and online content integrators. Will Google, Yahoo or Microsoft be the only portals to global knowledge in 2010? If libraries do not want to become marginalized in a key area of their traditional services, they need to acknowledge the challenges that come with the globalisation of scholarly information, the existence and further growth of the academic internet
  7. Körber, S.: Suchmuster erfahrener und unerfahrener Suchmaschinennutzer im deutschsprachigen World Wide Web (2000) 0.03
    0.03023614 = product of:
      0.09070842 = sum of:
        0.05449767 = weight(_text_:wide in 5938) [ClassicSimilarity], result of:
          0.05449767 = score(doc=5938,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.2769224 = fieldWeight in 5938, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=5938)
        0.036210746 = weight(_text_:web in 5938) [ClassicSimilarity], result of:
          0.036210746 = score(doc=5938,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.24981049 = fieldWeight in 5938, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5938)
      0.33333334 = coord(2/6)
    
    Abstract
     In a laboratory experiment, a total of eighteen students were confronted with two open web research tasks. While they worked on these tasks with a search engine, they were covertly observed via proxy logfile recording. They provided demographic information and details of their web usage habits, rated task, performance and search engine characteristics in questionnaires, and took a multiple-choice test on their knowledge of search engines. The participants were recruited and assigned deliberately: into an experienced and an inexperienced subgroup of nine participants each. The study rests on the comparison of the two groups, focusing on the bookmarks they saved as solutions, their assessments from the questionnaires, their search phrases, and the patterns of their search engine interaction and navigation on target pages. These sequential action patterns, obtained from the logfiles, were visualized, counted and interpreted comparatively. First, the World Wide Web is described as an information space that is complex in both structure and content. The author then examines the general tasks and types of meta-media applications as well as the components of index-based search engines. The perspective then shifts from the structural-medial side to aspects of use. The author describes the use of meta-media applications as a co-selection between user and search engine based on decisions and develops a simple, dynamic phase model; the influence of different kinds of knowledge on the selection process is also considered. Building on this, general research questions and hypotheses for the experiment are formulated. The design of the experiment is the subsequent topic, centring on the observation instrument of logfile analysis, the choice of search service, the formulation of the tasks, the design of the questionnaires, and the procedure. The author then presents the results under three headings: first with respect to performance, which allows the hypotheses to be tested; second with respect to the participants' ratings, comments and search phrases; and third with respect to the visual and computational analysis of the search patterns, which provides insight into the participants' search behaviour. Summarizing interpretations and an outlook conclude the thesis.
  8. Dodge, M.: ¬A map of Yahoo! (2000) 0.02
    0.023535319 = product of:
      0.070605956 = sum of:
        0.047902312 = weight(_text_:web in 1555) [ClassicSimilarity], result of:
          0.047902312 = score(doc=1555,freq=42.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3304682 = fieldWeight in 1555, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
        0.022703644 = weight(_text_:computer in 1555) [ClassicSimilarity], result of:
          0.022703644 = score(doc=1555,freq=6.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.13987005 = fieldWeight in 1555, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
      0.33333334 = coord(2/6)
    
    Content
    "Introduction Yahoo! is the undisputed king of the Web directories, providing one of the key information navigation tools on the Internet. It has maintained its popularity over many Internet-years as the most visited Web site, against intense competition. This is because it does a good job of shifting, cataloguing and organising the Web [1] . But what would a map of Yahoo!'s hierarchical classification of the Web look like? Would an interactive map of Yahoo!, rather than the conventional listing of sites, be more useful as navigational tool? We can get some idea what a map of Yahoo! might be like by taking a look at ET-Map, a prototype developed by Hsinchun Chen and colleagues in the Artificial Intelligence Lab [2] at the University of Arizona. ET-Map was developed in 1995 as part of innovative research in automatic Internet homepage categorization and it charts a large chunk of Yahoo!, from the entertainment section representing some 110,000 different Web links. The map is a two-dimensional, multi-layered category map; its aim is to provide an intuitive visual information browsing tool. ET-Map can be browsed interactively, explored and queried, using the familiar point-and-click navigation style of the Web to find information of interest.
     The View From Above Browsing for a particular piece of information on the Web can often feel like being stuck in an unfamiliar part of town, walking around at street level looking for a particular store. You know the store is around there somewhere, but your viewpoint at ground level is constrained. What you really want is to get above the streets, hovering half a mile or so up in the air, to see the whole neighbourhood. This kind of bird's-eye view function has been memorably described by David D. Clark, Senior Research Scientist at MIT's Laboratory for Computer Science and the Chairman of the Invisible Worlds Protocol Advisory Board, as the missing "up button" on the browser [3]. ET-Map is a nice example of a prototype for Clark's "up-button" view of an information space. The goal of information maps, like ET-Map, is to provide the browser with a sense of the lie of the information landscape: what is where, the location of clusters and hotspots, what is related to what. Ideally, this 'big-picture' all-in-one visual summary needs to fit on a single standard computer screen. ET-Map is one of my favourite examples, but there are many other interesting information maps being developed by other researchers and companies (see the inset at the bottom of this page). How does ET-Map work? Here is a sequence of screenshots of a typical browsing session with ET-Map, which ends with access to Web pages on jazz musician Miles Davis. You can also try out ET-Map for yourself, using a fully working demo on the AI Lab's website [4]. We begin with the top-level map showing forty-odd broad entertainment 'subject regions' represented by regularly shaped tiles. Each tile is a visual summary of a group of Web pages with similar content. These tiles are shaded different colours to differentiate them, while labels identify the subject of each tile and a number in brackets tells you how many individual Web page links it contains. ET-Map uses two important, but common-sense, spatial concepts in its organisation and representation of the Web. Firstly, the size of a 'subject region' is directly related to the number of Web pages in that category. For example, the 'MUSIC' subject area contains over 11,000 pages and so has a much larger area than the neighbouring area of 'LIVE', which only has 4,300-odd pages. This is intuitively meaningful, as the largest tiles are visually more prominent on the map and are likely to be more significant as they contain the most links. In addition, a second spatial concept, that of neighbourhood proximity, is applied, so 'subject regions' closely related in terms of content are plotted close to each other on the map. For example, 'FILM' and 'YEAR'S OSCARS', at the bottom left, are neighbours in both semantic and spatial space. This makes sense, as many things in the real world are ordered in this way, with things that are alike being spatially close together (e.g. the layout of goods in a store, or books in a library). Importantly, ET-Map is also a multi-layer map, with sub-maps showing greater informational resolution through a finer degree of categorization. So for any subject region that contains more than two hundred Web pages, a second-level map with more detailed categories is generated. This subdivision of information space is repeated down the hierarchy as far as necessary. In the example, the user selected the 'MUSIC' subject region which, not surprisingly, contained many thousands of pages. A second-level map with numerous different music categories is then presented to the user.
     Delving deeper, the user wants to learn more about jazz music, so clicking on the 'JAZZ' tile leads to a third-level map, a fine-grained map of jazz-related Web pages. Finally, selecting the 'MILES DAVIS' subject region leads to a more conventional-looking ranking of pages, from which the user selects one to download.
     ET-Map was created using a sophisticated AI technique called the Kohonen self-organizing map, a neural network approach that has been used for automatic analysis and classification of the semantic content of text documents like Web pages. I do not pretend to fully understand how this technique works; I tend to think of it as a clever 'black box' that groups together things that are alike [5]. It is a real challenge to automatically classify pages from a very heterogeneous information collection like the Web into categories that will match the conceptions of a typical user. Directories like Yahoo! tend to rely on the skill of human editors to achieve this. ET-Map is an interesting prototype that I think highlights well the potential for a map-based approach to Web browsing. I am surprised none of the major search engines or directories have introduced the option of mapping results, although I am sure many are working on ideas. People certainly need all the help they can get, as Web growth shows no sign of slowing. Just last month it was reported that the Web had surpassed one billion indexable pages [6].
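     For readers curious about the technique mentioned here, the following is a minimal self-organizing-map sketch in Python/NumPy (a toy illustration of the general Kohonen algorithm, not ET-Map's actual implementation; the grid size, learning-rate schedule and random 'documents' are all made up):

     import numpy as np

     def train_som(docs, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
         """Train a toy Kohonen self-organizing map on document vectors."""
         rng = np.random.default_rng(seed)
         rows, cols = grid
         weights = rng.random((rows * cols, docs.shape[1]))
         coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
         total_steps, step = epochs * len(docs), 0
         for _ in range(epochs):
             for x in docs[rng.permutation(len(docs))]:
                 bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
                 frac = step / total_steps
                 lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                 dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distance to the BMU
                 h = np.exp(-dist2 / (2 * sigma ** 2))               # Gaussian neighbourhood
                 weights += lr * h[:, None] * (x - weights)          # pull nearby units toward x
                 step += 1
         return weights, coords

     # toy usage: 30 random "documents" over a 5-term vocabulary
     docs = np.random.default_rng(1).random((30, 5))
     weights, coords = train_som(docs)
     print("document 0 maps to grid cell", coords[np.argmin(((weights - docs[0]) ** 2).sum(axis=1))])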
     Research Prototypes:
     • Visual SiteMap - developed by Xia Lin, based at the College of Library and Information Science, Drexel University.
     • CVG - cyberspace geography visualization, developed by Luc Girardin at The Graduate Institute of International Studies, Switzerland.
     • WEBSOM - maps the thousands of articles posted on Usenet newsgroups; being developed by researchers at the Neural Networks Research Centre, Helsinki University of Technology, Finland.
     • TreeMaps - developed by Brian Johnson, Ben Shneiderman and colleagues in the Human-Computer Interaction Lab at the University of Maryland.
     Commercial Information Maps:
     • NewsMaps - provides interactive information landscapes summarizing daily news stories, developed by Cartia, Inc.
     • Web Squirrel - creates maps known as information farms, developed by Eastgate Systems, Inc.
     • Umap - produces interactive maps of Web searches.
     • Map of the Market - an interactive map of the market performance of the stocks of major US corporations, developed by SmartMoney.com."
  9. Bates, M.E.: Quick answers to odd questions (2004) 0.02
    0.018686606 = product of:
      0.056059815 = sum of:
        0.028901752 = weight(_text_:wide in 3071) [ClassicSimilarity], result of:
          0.028901752 = score(doc=3071,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.14686027 = fieldWeight in 3071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3071)
        0.027158061 = weight(_text_:web in 3071) [ClassicSimilarity], result of:
          0.027158061 = score(doc=3071,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18735787 = fieldWeight in 3071, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3071)
      0.33333334 = coord(2/6)
    
    Content
    "One of the things I enjoyed the most when I was a reference librarian was the wide range of questions my clients sent my way. What was the original title of the first Godzilla movie? (Gojira, released in 1954) Who said 'I'm as pure as the driven slush'? (Tallulah Bankhead) What percentage of adults have gone to a jazz performance in the last year? (11%) I have found that librarians, speech writers and journalists have one thing in common - we all need to find information on all kinds of topics, and we usually need the answers right now. The following are a few of my favorite sites for finding answers to those there-must-be-an-answer-out-there questions. - For the electronic equivalent to the "ready reference" shelf of resources that most librarians keep hidden behind their desks, check out RefDesk . It is particularly good for answering factual questions - Where do I get the new Windows XP Service Pack? Where is the 386 area code? How do I contact my member of Congress? - Another resource for lots of those quick-fact questions is InfoPlease, the publishers of the Information Please almanac .- Right now, it's full of Olympics data, but it also has links to facts and factoids that you would look up in an almanac, atlas, or encyclopedia. - If you want numbers, start with the Statistical Abstract of the US. This source, produced by the U.S. Census Bureau, gives you everything from the divorce rate by state to airline cost indexes going back to 1980. It is many librarians' secret weapon for pulling numbers together quickly. - My favorite question is "how does that work?" Haven't you ever wondered how they get that Olympic torch to continue to burn while it is being carried by runners from one city to the next? Or how solar sails manage to propel a spacecraft? For answers, check out the appropriately-named How Stuff Works. - For questions about movies, my first resource is the Internet Movie Database. It is easy to search, is such a popular site that mistakes are corrected quickly, and is a fun place to catch trailers of both upcoming movies and those dating back to the 30s. - When I need to figure out who said what, I still tend to rely on the print sources such as Bartlett's Familiar Quotations . No, the current edition is not available on the web, but - and this is the librarian in me - I really appreciate the fact that I not only get the attribution but I also see the source of the quote. There are far too many quotes being attributed to a celebrity, but with no indication of the publication in which the quote appeared. Take, for example, the much-cited quote of Margaret Meade, "Never doubt that a small group of thoughtful committed people can change the world; indeed, it's the only thing that ever has!" Then see the page on the Institute for Intercultural Studies site, founded by Meade, and read its statement that it has never been able to verify this alleged quote from Meade. While there are lots of web-based sources of quotes (see QuotationsPage.com and Bartleby, for example), unless the site provides the original source for the quotation, I wouldn't rely on the citation. Of course, if you have a hunch as to the source of a quote, and it was published prior to 1923, head over to Project Gutenberg , which includes the full text of over 12,000 books that are in the public domain. When I needed to confirm a quotation of the Red Queen in "Through the Looking Glass", this is where I started. 
- And if you are stumped as to where to go to find information, instead of Googling it, try the Librarians' Index to the Internet. While it is somewhat US-centric, it is a great directory of web resources."
  10. Broder, A.; Kumar, R.; Maghoul, F.; Raghavan, P.; Rajagopalan, S.; Stata, R.; Tomkins, A.; Wiener, J.: Graph structure in the Web (2000) 0.02
    0.015582624 = product of:
      0.09349574 = sum of:
        0.09349574 = weight(_text_:web in 5595) [ClassicSimilarity], result of:
          0.09349574 = score(doc=5595,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6450079 = fieldWeight in 5595, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=5595)
      0.16666667 = coord(1/6)
    
    Abstract
    The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200M pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale
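     As a toy illustration of the kind of macroscopic structure the authors report (a strongly connected core with IN and OUT sides, plus disconnected pieces), here is a small sketch using networkx on a hypothetical eight-page graph; it is not the paper's data or code:

     import networkx as nx

     # hypothetical mini web: nodes are pages, directed edges are hyperlinks
     edges = [("a", "b"), ("b", "c"), ("c", "a"),   # strongly connected core
              ("d", "a"),                            # 'd' reaches the core (IN side)
              ("c", "e"), ("e", "f"),                # reachable from the core (OUT side)
              ("g", "h")]                            # disconnected piece
     G = nx.DiGraph(edges)

     core = max(nx.strongly_connected_components(G), key=len)
     seed = next(iter(core))
     out_side = nx.descendants(G, seed) - core    # pages reachable from the core
     in_side = nx.ancestors(G, seed) - core       # pages that can reach the core
     rest = set(G) - core - out_side - in_side    # tendrils / disconnected components

     print("core:", sorted(core), "IN:", sorted(in_side),
           "OUT:", sorted(out_side), "other:", sorted(rest))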
  11. Spink, A.; Gunar, O.: E-Commerce Web queries : Excite and AskJeeves study (2001) 0.01
    0.013937522 = product of:
      0.08362513 = sum of:
        0.08362513 = weight(_text_:web in 910) [ClassicSimilarity], result of:
          0.08362513 = score(doc=910,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5769126 = fieldWeight in 910, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.125 = fieldNorm(doc=910)
      0.16666667 = coord(1/6)
    
  12. Gerhart, S.L.: Do Web search engines suppress controversy? : Simulating the exchange process (2004) 0.01
    0.013937522 = product of:
      0.08362513 = sum of:
        0.08362513 = weight(_text_:web in 8164) [ClassicSimilarity], result of:
          0.08362513 = score(doc=8164,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5769126 = fieldWeight in 8164, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.125 = fieldNorm(doc=8164)
      0.16666667 = coord(1/6)
    
  13. Ding, L.; Finin, T.; Joshi, A.; Peng, Y.; Cost, R.S.; Sachs, J.; Pan, R.; Reddivari, P.; Doshi, V.: Swoogle : a Semantic Web search and metadata engine (2004) 0.01
    0.013828207 = product of:
      0.08296924 = sum of:
        0.08296924 = weight(_text_:web in 4704) [ClassicSimilarity], result of:
          0.08296924 = score(doc=4704,freq=14.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.57238775 = fieldWeight in 4704, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4704)
      0.16666667 = coord(1/6)
    
    Abstract
    Swoogle is a crawler-based indexing and retrieval system for the Semantic Web, i.e., for Web documents in RDF or OWL. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is rank, a measure of the importance of a Semantic Web document.
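     As a rough illustration of the character-n-gram option mentioned in the abstract (our own toy code and documents, not Swoogle's implementation), an index based on n-gram sets with Jaccard overlap as the similarity measure might look like this:

     def ngrams(text, n=3):
         """Character n-grams of a document (lower-cased)."""
         text = text.lower()
         return {text[i:i + n] for i in range(len(text) - n + 1)}

     docs = {   # toy 'Semantic Web documents'
         "doc1": '<http://example.org/Person> rdf:type owl:Class .',
         "doc2": '<http://example.org/Person> rdfs:label "Person" .',
         "doc3": '<http://example.org/Place> rdf:type owl:Class .',
     }
     index = {name: ngrams(body) for name, body in docs.items()}

     def similarity(a, b):
         """Jaccard overlap of the two documents' n-gram sets."""
         return len(index[a] & index[b]) / len(index[a] | index[b])

     print(similarity("doc1", "doc2"), similarity("doc1", "doc3"))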
    Content
     See: http://www.dblab.ntua.gr/~bikakis/LD/5.pdf See also: http://swoogle.umbc.edu/. See also: http://ebiquity.umbc.edu/paper/html/id/183/. See also: Radhakrishnan, A.: Swoogle : an engine for the Semantic Web, at: http://www.searchenginejournal.com/swoogle-an-engine-for-the-semantic-web/5469/.
    Theme
    Semantic Web
  14. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for linkbased ranking algorithms (2006) 0.01
    0.013725774 = product of:
      0.04117732 = sum of:
        0.026132854 = weight(_text_:web in 2565) [ClassicSimilarity], result of:
          0.026132854 = score(doc=2565,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 2565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2565)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2565,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact on rank quality and on convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
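     A small sketch of the family of rankings described here (toy three-page graph, our own reading of the damping-function idea: rank = sum over path lengths t of damping(t) x v P^t, with exponential decay recovering PageRank):

     import numpy as np

     def damped_rank(P, damping, t_max=60):
         """rank = sum_t damping(t) * v P^t, with uniform start vector v."""
         n = len(P)
         v = np.full(n, 1.0 / n)
         walk, rank = v.copy(), np.zeros(n)
         for t in range(t_max + 1):
             rank += damping(t) * walk      # paths of length t, weighted by damping(t)
             walk = walk @ P
         return rank

     alpha, L = 0.85, 60
     exponential = lambda t: (1 - alpha) * alpha ** t    # PageRank's decay
     linear = lambda t: 2 * (L - t) / (L * (L + 1))      # linear decay, sums to 1 over 0..L
     hyperbolic = lambda t: 1.0 / (t + 1) ** 2           # hyperbolic decay (unnormalised; only the ordering matters)

     P = np.array([[0.0, 1.0, 0.0],                       # row-stochastic link matrix of a toy graph
                   [0.5, 0.0, 0.5],
                   [1.0, 0.0, 0.0]])
     for name, d in [("exponential", exponential), ("linear", linear), ("hyperbolic", hyperbolic)]:
         print(name, np.round(damped_rank(P, d), 4))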
    Date
    16. 1.2016 10:22:28
  15. Dambeck, H.: Wie Google mit Milliarden Unbekannten rechnet : Teil 2: Ausgerechnet: Der Page Rank für ein Mini-Web aus drei Seiten (2009) 0.01
    0.012319146 = product of:
      0.07391487 = sum of:
        0.07391487 = weight(_text_:web in 3080) [ClassicSimilarity], result of:
          0.07391487 = score(doc=3080,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5099235 = fieldWeight in 3080, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=3080)
      0.16666667 = coord(1/6)
    
    Abstract
     A simple example of a mini internet consisting of three web pages illustrates how this ranking system works in practice.
  16. Bradley, P.: ¬The relevance of underpants to searching the Web (2000) 0.01
    0.012195333 = product of:
      0.073171996 = sum of:
        0.073171996 = weight(_text_:web in 3961) [ClassicSimilarity], result of:
          0.073171996 = score(doc=3961,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.50479853 = fieldWeight in 3961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=3961)
      0.16666667 = coord(1/6)
    
  17. Entlich, R.: FAQ: Image Search Engines (2001) 0.01
    0.011686969 = product of:
      0.07012181 = sum of:
        0.07012181 = weight(_text_:web in 155) [ClassicSimilarity], result of:
          0.07012181 = score(doc=155,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.48375595 = fieldWeight in 155, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=155)
      0.16666667 = coord(1/6)
    
    Abstract
     Everyone loves images. The web wasn't anything until images came along; then it was an overnight success. So how does one find a specific image on the web? By using one of a burgeoning number of image-focused search engines. These search engines are simply optimized versions of typical web indexes, with crawlers that go around sucking down web content and indexing it. But image search engines focus on images only, and on the web page text that may describe them. As information professionals, we know that this is a clumsy approach at best, but as the author puts it, until more sophisticated methods become available, the tools profiled here will "have to suffice." Seven search engines are thoroughly tested in this review article, with Google's Image Search (http://www.google.com/imghp?hl=en) being the highest rated
  18. Khare, R.; Cutting, D.; Sitaker, K.; Rifkin, A.: Nutch: a flexible and scalable open-source Web search engine (2004) 0.01
    0.010453141 = product of:
      0.062718846 = sum of:
        0.062718846 = weight(_text_:web in 852) [ClassicSimilarity], result of:
          0.062718846 = score(doc=852,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 852, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=852)
      0.16666667 = coord(1/6)
    
    Abstract
    Nutch is an open-source Web search engine that can be used at global, local, and even personal scale. Its initial design goal was to enable a transparent alternative for global Web search in the public interest - one of its signature features is the ability to "explain" its result rankings. Recent work has emphasized how it can also be used for intranets; by local communities with richer data models, such as the Creative Commons metadata-enabled search for licensed content; on a personal scale to index a user's files, email, and web-surfing history; and we also report on several other research projects built on Nutch. In this paper, we present how the architecture of the Nutch system enables it to be more flexible and scalable than other comparable systems today.
  19. Semantische Suche über 500 Millionen Web-Dokumente (2009) 0.01
    0.010453141 = product of:
      0.062718846 = sum of:
        0.062718846 = weight(_text_:web in 2434) [ClassicSimilarity], result of:
          0.062718846 = score(doc=2434,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 2434, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2434)
      0.16666667 = coord(1/6)
    
    Content
    "Wissenschaftler an der University of Washington haben eine neue Suchmaschinen-Engine geschrieben, die Zusammenhänge und Fakten aus mehr als 500 Millionen einzelner Web-Seiten zusammentragen kann. Das Werkzeug extrahiert dabei Informationen aus Milliarden von Textzeilen, indem die grundlegenden sprachlichen Beziehungen zwischen Wörtern analysiert werden. Experten glauben, dass solche Systeme zur automatischen Informationsgewinnung eines Tages die Grundlage deutlich smarterer Suchmaschinen bilden werden, als sie heute verfügbar sind. Dazu werden die wichtigsten Datenhappen zunächst von einem Algorithmus intern begutachtet und dann intelligent kombiniert, berichtet Technology Review in seiner Online-Ausgabe. Das Projekt US-Forscher stellt eine deutliche Ausweitung einer zuvor an der gleichen Hochschule entwickelten Technik namens TextRunner dar. Sowohl die Anzahl analysierbarer Seiten als auch die Themengebiete wurden dabei stark erweitert. "TextRunner ist deshalb so bedeutsam, weil es skaliert, ohne dass dabei ein Mensch eingreifen müsste", sagt Peter Norvig, Forschungsdirektor bei Google. Der Internet-Konzern spendete dem Projekt die riesige Datenbank aus einzelnen Web-Seiten, die TextRunner analysiert. "Das System kann Millionen von Beziehungen erkennen und erlernen - und zwar nicht nur jede einzeln. Einen Betreuer braucht die Software nicht, die Informationen werden selbstständig ermittelt.""
    Source
    http://www.heise.de/newsticker/Semantische-Suche-ueber-500-Millionen-Web-Dokumente--/meldung/140630
  20. Web search service features (2002) 0.01
    0.009855317 = product of:
      0.059131898 = sum of:
        0.059131898 = weight(_text_:web in 923) [ClassicSimilarity], result of:
          0.059131898 = score(doc=923,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4079388 = fieldWeight in 923, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=923)
      0.16666667 = coord(1/6)
    
    Abstract
    The table shows some of the features and techniques for the most common general Web search services to show how to use them and to help decide which may be the most appropriate. See the notes below that explain the headings. Each service also provides more detailed instructions. Note that some features will be available under an 'advanced', 'power' or other further search option and not from the main page.