Search (18 results, page 1 of 1)

  • theme_ss:"Suchmaschinen"
  • type_ss:"el"
  1. Place, E.: Internationale Zusammenarbeit bei Internet Subject Gateways (1999) 0.01
    0.008208264 = product of:
      0.024624791 = sum of:
        0.008898375 = product of:
          0.01779675 = sum of:
            0.01779675 = weight(_text_:science in 4189) [ClassicSimilarity], result of:
              0.01779675 = score(doc=4189,freq=2.0), product of:
                0.10191755 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03869132 = queryNorm
                0.17461908 = fieldWeight in 4189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
        0.015726417 = product of:
          0.031452835 = sum of:
            0.031452835 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.031452835 = score(doc=4189,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.23214069 = fieldWeight in 4189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Quite a number of libraries in Europe are engaged in developing Internet subject gateways - a service intended to help users find high-quality Internet resources. Subject gateways such as SOSIG (The Social Science Information Gateway) have been available on the Internet for several years and offer an alternative to Internet search engines such as AltaVista and directories such as Yahoo. Significantly, subject gateways draw on the skills, practices and standards of the international library world and apply them to information from the Internet. This paper therefore argues that librarians are ideally placed to take a leading role in building search services for Internet resources, and that information gateways are one way of doing so. It outlines some of the subject gateway initiatives in Europe and describes the tools and technologies developed by the DESIRE project to support the development of new gateways in other countries. It also discusses how IMesh, a group for gateways from around the world, is working towards an international strategy for gateways and trying to develop standards to implement it.
    Date
    22. 6.2002 19:35:09
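    A note on the score displays: each result in this list carries a Lucene ClassicSimilarity "explain" tree, and the numbers can be reproduced by hand. In LaTeX form the formula is

      \[
      \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d) \sum_{t \in q} \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\mathrm{queryWeight}} \cdot \underbrace{\sqrt{\mathrm{freq}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}}
      \]

    with additional coord factors for nested boolean clauses. Worked through for result 1 (doc 4189), using the queryWeight and fieldWeight values shown above:

      \[
      \tfrac{2}{6}\left(\tfrac{1}{2}\cdot 0.10191755 \cdot 0.17461908 \;+\; \tfrac{1}{2}\cdot 0.1354904 \cdot 0.23214069\right) = 0.008208264
      \]

    which matches the displayed total.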
  2. Dunning, A.: Do we still need search engines? (1999) 0.01
    0.006115829 = product of:
      0.036694974 = sum of:
        0.036694974 = product of:
          0.07338995 = sum of:
            0.07338995 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.07338995 = score(doc=6021,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Ariadne. 1999, no.22
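    The same arithmetic covers every explain tree on this page. A minimal Python sketch to check them (the function name and clause-tuple layout are ours, not Lucene's):

      import math

      def classic_score(query_norm, clauses, coord):
          """Recompute a Lucene ClassicSimilarity 'explain' total.

          clauses: one (idf, freq, field_norm, inner_coord) tuple per
          matching query term, read straight off the explain tree."""
          total = 0.0
          for idf, freq, field_norm, inner_coord in clauses:
              query_weight = idf * query_norm                    # idf * queryNorm
              field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
              total += query_weight * field_weight * inner_coord
          return coord * total

      # Result 1 (doc 4189) and result 2 (doc 6021) from this page:
      r1 = classic_score(0.03869132, [(2.6341193, 2.0, 0.046875, 0.5),
                                      (3.5018296, 2.0, 0.046875, 0.5)], 2 / 6)
      r2 = classic_score(0.03869132, [(3.5018296, 2.0, 0.109375, 0.5)], 1 / 6)
      print(f"{r1:.9f} {r2:.9f}")  # ≈ 0.008208264 0.006115829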
  3. Li, Z.: ¬A domain specific search engine with explicit document relations (2013) 0.01
    0.00583781 = product of:
      0.03502686 = sum of:
        0.03502686 = weight(_text_:processing in 1210) [ClassicSimilarity], result of:
          0.03502686 = score(doc=1210,freq=2.0), product of:
            0.15662816 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.03869132 = queryNorm
            0.22363065 = fieldWeight in 1210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
      0.16666667 = coord(1/6)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. Similar problems occur at Ericsson. Massive numbers of documents are being created with well-defined structures. Though these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of the data to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and attach these data to documents as annotations in formal markup languages. We propose this project to develop a domain-specific search engine that processes different documents and builds explicit relations between them. This research project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
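    The annotation idea in this abstract is easy to picture in RDF. A minimal sketch with Python's rdflib, using an invented document URI and relation vocabulary (nothing here is taken from the thesis itself):

      from rdflib import Graph, Literal, Namespace

      DOC = Namespace("http://example.org/docs/")       # hypothetical namespaces,
      REL = Namespace("http://example.org/relations#")  # for illustration only

      g = Graph()
      g.add((DOC["spec-001"], REL["title"], Literal("Interface specification")))
      g.add((DOC["spec-001"], REL["extends"], DOC["spec-000"]))  # explicit relation

      # A search engine can follow typed relations instead of bare hyperlinks:
      for subj, obj in g.subject_objects(REL["extends"]):
          print(f"{subj} extends {obj}")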
  4. Birmingham, J.: Internet search engines (1996) 0.01
    0.005242139 = product of:
      0.031452835 = sum of:
        0.031452835 = product of:
          0.06290567 = sum of:
            0.06290567 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.06290567 = score(doc=5664,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    10.11.1996 16:36:22
  5. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.00
    0.0034947598 = product of:
      0.020968558 = sum of:
        0.020968558 = product of:
          0.041937117 = sum of:
            0.041937117 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.041937117 = score(doc=1149,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    17.12.2013 11:02:22
  6. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung (2019) 0.00
    0.0034947598 = product of:
      0.020968558 = sum of:
        0.020968558 = product of:
          0.041937117 = sum of:
            0.041937117 = weight(_text_:22 in 4996) [ClassicSimilarity], result of:
              0.041937117 = score(doc=4996,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.30952093 = fieldWeight in 4996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4996)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    19. 2.2019 17:22:00
  7. Franke-Maier, M.; Rüter, C.: Discover Sacherschließung! : Was machen suchmaschinenbasierte Systeme mit unseren inhaltlichen Metadaten? (2015) 0.00
    0.0030856724 = product of:
      0.018514033 = sum of:
        0.018514033 = product of:
          0.037028067 = sum of:
            0.037028067 = weight(_text_:29 in 1706) [ClassicSimilarity], result of:
              0.037028067 = score(doc=1706,freq=2.0), product of:
                0.13610396 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03869132 = queryNorm
                0.27205724 = fieldWeight in 1706, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1706)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    2. 3.2015 10:29:44
  8. Günther, M.: Vermitteln Suchmaschinen vollständige Bilder aktueller Themen? : Untersuchung der Gewichtung inhaltlicher Aspekte von Suchmaschinenergebnissen in Deutschland und den USA (2016) 0.00
    0.0022040517 = product of:
      0.01322431 = sum of:
        0.01322431 = product of:
          0.02644862 = sum of:
            0.02644862 = weight(_text_:29 in 3068) [ClassicSimilarity], result of:
              0.02644862 = score(doc=3068,freq=2.0), product of:
                0.13610396 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03869132 = queryNorm
                0.19432661 = fieldWeight in 3068, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3068)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Young information scientists. 1(2016), S.13-29
  9. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.00
    0.0021842248 = product of:
      0.013105349 = sum of:
        0.013105349 = product of:
          0.026210697 = sum of:
            0.026210697 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.026210697 = score(doc=2564,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    16. 1.2016 10:22:28
  10. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for linkbased ranking algorithms (2006) 0.00
    0.0021842248 = product of:
      0.013105349 = sum of:
        0.013105349 = product of:
          0.026210697 = sum of:
            0.026210697 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.026210697 = score(doc=2565,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    16. 1.2016 10:22:28
  11. Gillitzer, B.: Yewno (2017) 0.00
    0.0017473799 = product of:
      0.010484279 = sum of:
        0.010484279 = product of:
          0.020968558 = sum of:
            0.020968558 = weight(_text_:22 in 3447) [ClassicSimilarity], result of:
              0.020968558 = score(doc=3447,freq=2.0), product of:
                0.1354904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03869132 = queryNorm
                0.15476047 = fieldWeight in 3447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3447)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 2.2017 10:16:49
  12. Christensen, A.: Wissenschaftliche Literatur entdecken : was bibliothekarische Discovery-Systeme von der Konkurrenz lernen und was sie ihr zeigen können (2022) 0.00
    0.0017302395 = product of:
      0.010381437 = sum of:
        0.010381437 = product of:
          0.020762874 = sum of:
            0.020762874 = weight(_text_:science in 833) [ClassicSimilarity], result of:
              0.020762874 = score(doc=833,freq=2.0), product of:
                0.10191755 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03869132 = queryNorm
                0.20372227 = fieldWeight in 833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=833)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    In recent years the range of academic search engines for finding scholarly literature across all fields has grown considerably, complementing popular commercial services such as Web of Science or Scopus. The article sets out the main differences between library discovery systems and academic search engines such as Base, Dimensions or Open Alex, and discusses how the two can benefit from each other. These development perspectives concern aspects such as the contextualization of knowledge, data modelling, automatic data enrichment, and the scoping of search spaces.
  13. Niemann, J.: "Ich cuil das mal" : Neue Suchmaschine fordert Google heraus (2008) 0.00
    0.0015428362 = product of:
      0.009257017 = sum of:
        0.009257017 = product of:
          0.018514033 = sum of:
            0.018514033 = weight(_text_:29 in 2049) [ClassicSimilarity], result of:
              0.018514033 = score(doc=2049,freq=2.0), product of:
                0.13610396 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03869132 = queryNorm
                0.13602862 = fieldWeight in 2049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2049)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    29. 7.2008 18:17:14
  14. Hogan, A.; Harth, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing Linked Data with SWSE : the Semantic Web Search Engine (2011) 0.00
    0.0012358854 = product of:
      0.0074153123 = sum of:
        0.0074153123 = product of:
          0.014830625 = sum of:
            0.014830625 = weight(_text_:science in 438) [ClassicSimilarity], result of:
              0.014830625 = score(doc=438,freq=2.0), product of:
                0.10191755 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03869132 = queryNorm
                0.1455159 = fieldWeight in 438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=438)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    http://www.sciencedirect.com/science/article/pii/S1570826811000473
  15. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.00
    0.0012358854 = product of:
      0.0074153123 = sum of:
        0.0074153123 = product of:
          0.014830625 = sum of:
            0.014830625 = weight(_text_:science in 1084) [ClassicSimilarity], result of:
              0.014830625 = score(doc=1084,freq=2.0), product of:
                0.10191755 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03869132 = queryNorm
                0.1455159 = fieldWeight in 1084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1084)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
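    The 'Save Page Now' tool studied here can also be driven programmatically. A minimal sketch using Python's requests library, assuming the public https://web.archive.org/save/<url> endpoint (rate limits and the authenticated API are left out):

      import requests

      def save_page_now(url):
          """Ask the Wayback Machine to snapshot a URL via Save Page Now."""
          resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
          resp.raise_for_status()
          # The snapshot location is typically exposed in a response header.
          return resp.headers.get("Content-Location")

      print(save_page_now("https://example.com"))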
  16. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.00
    9.887083E-4 = product of:
      0.0059322496 = sum of:
        0.0059322496 = product of:
          0.011864499 = sum of:
            0.011864499 = weight(_text_:science in 2596) [ClassicSimilarity], result of:
              0.011864499 = score(doc=2596,freq=2.0), product of:
                0.10191755 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03869132 = queryNorm
                0.11641272 = fieldWeight in 2596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2596)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 insights on achieving effective information access
    Session One: Updates and a twelve-month perspective
    - Danny Sullivan (Search Engine Watch, US/England): Portalization and other search trends
    - Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
    - Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
    - Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: the knowledge impact statement
    - Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
    - Gary Culliss (Direct Hit, Wellesley Hills, MA): User-popularity-ranked search engines
    - Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
    - Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human vs. automated categorization and editing
    - Ev Brenner (New York, NY), chairman; James Callan (University of Massachusetts, MA); Marc Krellenstein (Northern Light Technology, Cambridge, MA); Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve-month perspective
    - Steve Arnold (AIT, Harrods Creek, KY): Review: the leading edge in search and retrieval software
    - Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
    - Intelligent agents: John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
    - Text summarization: Therese Firmin (Dept. of Defense, Ft. George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
    - Cross-language searching: Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
    - Video search and retrieval: Arnon Amir (IBM, Almaden, CA): CueVideo: modular system for automatic indexing and browsing of video/audio
    - Speech recognition: Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
    - Visualization: James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: emerging science or passing fashion?
    - Text mining: David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  17. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.00
    9.887083E-4 = product of:
      0.0059322496 = sum of:
        0.0059322496 = product of:
          0.011864499 = sum of:
            0.011864499 = weight(_text_:science in 4709) [ClassicSimilarity], result of:
              0.011864499 = score(doc=4709,freq=2.0), product of:
                0.10191755 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03869132 = queryNorm
                0.11641272 = fieldWeight in 4709, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  18. Dodge, M.: ¬A map of Yahoo! (2000) 0.00
    6.991224E-4 = product of:
      0.004194734 = sum of:
        0.004194734 = product of:
          0.008389468 = sum of:
            0.008389468 = weight(_text_:science in 1555) [ClassicSimilarity], result of:
              0.008389468 = score(doc=1555,freq=4.0), product of:
                0.10191755 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03869132 = queryNorm
                0.08231623 = fieldWeight in 1555, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1555)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Content
    The View From Above. Browsing for a particular piece of information on the Web can often feel like being stuck in an unfamiliar part of town, walking around at street level looking for a particular store. You know the store is around there somewhere, but your viewpoint at ground level is constrained. What you really want is to get above the streets, hovering half a mile or so up in the air, to see the whole neighbourhood. This kind of bird's-eye view function has been memorably described by David D. Clark, Senior Research Scientist at MIT's Laboratory for Computer Science and Chairman of the Invisible Worlds Protocol Advisory Board, as the missing "up button" on the browser [3]. ET-Map is a nice example of a prototype for Clark's "up-button" view of an information space. The goal of information maps like ET-Map is to give the browser a sense of the lie of the information landscape: what is where, the location of clusters and hotspots, and what is related to what. Ideally, this big-picture, all-in-one visual summary needs to fit on a single standard computer screen. ET-Map is one of my favourite examples, but there are many other interesting information maps being developed by other researchers and companies (see the inset at the bottom of this page).

    How does ET-Map work? Here is a sequence of screenshots of a typical browsing session with ET-Map, which ends with access to Web pages on jazz musician Miles Davis. You can also try out ET-Map for yourself, using a fully working demo on the AI Lab's website [4]. We begin with the top-level map showing forty-odd broad entertainment 'subject regions' represented by regularly shaped tiles. Each tile is a visual summary of a group of Web pages with similar content. The tiles are shaded different colours to differentiate them, while labels identify the subject of each tile and a number in brackets tells you how many individual Web page links it contains.

    ET-Map uses two important, but common-sense, spatial concepts in its organisation and representation of the Web. Firstly, a subject region's size is directly related to the number of Web pages in that category. For example, the 'MUSIC' subject area contains over 11,000 pages and so has a much larger area than the neighbouring 'LIVE' area, which has only some 4,300 pages. This is intuitively meaningful, as the largest tiles are visually the most prominent on the map and are likely to be the most significant, since they contain the most links. Secondly, a concept of neighbourhood proximity is applied, so that subject regions closely related in content are plotted close to each other on the map. For example, 'FILM' and 'YEAR'S OSCARS', at the bottom left, are neighbours in both semantic and spatial space. This makes sense, as many things in the real world are ordered this way, with things that are alike being spatially close together (e.g. the layout of goods in a store, or books in a library).

    Importantly, ET-Map is also a multi-layer map, with sub-maps offering greater informational resolution through a finer degree of categorisation. For any subject region that contains more than two hundred Web pages, a second-level map with more detailed categories is generated. This subdivision of information space is repeated down the hierarchy as far as necessary. In the example, the user selected the 'MUSIC' subject region which, not surprisingly, contained many thousands of pages; a second-level map with numerous different music categories is then presented to the user. Delving deeper, the user wants to learn more about jazz music, so clicking on the 'JAZZ' tile leads to a third-level map, a fine-grained map of jazz-related Web pages. Finally, selecting the 'MILES DAVIS' subject region leads to a more conventional-looking ranking of pages, from which the user selects one to download.
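    The first of the two spatial rules (tile area proportional to page count) is plain arithmetic. A toy sketch with the two counts quoted above plus one invented value for 'FILM':

      # Allocate normalized map area in proportion to page counts.
      counts = {"MUSIC": 11000, "LIVE": 4300, "FILM": 3200}  # FILM count invented
      total_pages = sum(counts.values())

      for label, pages in counts.items():
          share = pages / total_pages
          print(f"{label:6s} {pages:6d} pages -> {share:.1%} of the map area")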
    Research Prototypes
    - Visual SiteMap: developed by Xia Lin, based at the College of Library and Information Science, Drexel University.
    - CVG (Cyberspace geography visualization): developed by Luc Girardin at The Graduate Institute of International Studies, Switzerland.
    - WEBSOM: maps the thousands of articles posted on Usenet newsgroups; developed by researchers at the Neural Networks Research Centre, Helsinki University of Technology, Finland.
    - TreeMaps: developed by Brian Johnson, Ben Shneiderman and colleagues in the Human-Computer Interaction Lab at the University of Maryland.
    Commercial Information Maps
    - NewsMaps: provides interactive information landscapes summarizing daily news stories; developed by Cartia, Inc.
    - Web Squirrel: creates maps known as information farms; developed by Eastgate Systems, Inc.
    - Umap: produces interactive maps of Web searches.
    - Map of the Market: an interactive map of the market performance of the stocks of major US corporations, developed by SmartMoney.com.