Search (81 results, page 1 of 5)

  • type_ss:"el"
  • year_i:[1990 TO 2000}
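The two filter chips above are Solr syntax: `_ss`/`_i` are dynamic-field suffixes, and the closing `}` in the year range marks an exclusive upper bound (1990 <= year < 2000). As a sketch only (the host, port, and core name below are placeholders, not taken from this page), the same facets would be sent to Solr as repeated `fq` filter-query parameters:

```python
# Sketch: encode the two active facets as Solr fq parameters.
# Host and core name ("catalog") are assumptions for illustration.
import urllib.parse

params = {
    "q": "*:*",                       # match everything; narrow via fq
    "fq": ['type_ss:"el"',            # facet 1: electronic resources
           "year_i:[1990 TO 2000}"],  # facet 2: '[' inclusive, '}' exclusive
    "rows": 20,
}
query = urllib.parse.urlencode(params, doseq=True)  # doseq repeats fq twice
url = "http://localhost:8983/solr/catalog/select?" + query
print(url)
```

Each `fq` is cached independently by Solr, which is why faceted interfaces send one `fq` per chip rather than folding the filters into `q`.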
  1. Third International World Wide Web Conference, Darmstadt 1995 : [table of contents] (1995) 0.07
    0.07074061 = product of:
      0.21222183 = sum of:
        0.12925258 = weight(_text_:wide in 3458) [ClassicSimilarity], result of:
          0.12925258 = score(doc=3458,freq=10.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.65677917 = fieldWeight in 3458, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
        0.08296924 = weight(_text_:web in 3458) [ClassicSimilarity], result of:
          0.08296924 = score(doc=3458,freq=14.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.57238775 = fieldWeight in 3458, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
      0.33333334 = coord(2/6)
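The explain tree above is standard Lucene ClassicSimilarity output, and its numbers can be recomputed step by step: for each query term, idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = sqrt(termFreq) * idf * fieldNorm, and the document score is the sum of the per-term products, scaled by the coordination factor coord(2/6). A small sketch that reproduces the 0.07074061 score of result 1 (floating-point rounding may differ in the last digits):

```python
# Recompute the explain tree for result 1 (doc 3458) using the
# ClassicSimilarity formulas shown above.
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * i * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.044416238, 0.046875, 44218
wide = term_score(10.0, 1430, MAX_DOCS, QUERY_NORM, FIELD_NORM)
web = term_score(14.0, 4597, MAX_DOCS, QUERY_NORM, FIELD_NORM)
score = (wide + web) * (2 / 6)  # coord(2/6): 2 of 6 query clauses matched
print(round(score, 8))          # ~0.07074061
```

The same recipe applies to every entry on this page; only termFreq, docFreq, and fieldNorm change per document.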
    
    Abstract
    ANDREW, K. and F. KAPPE: Serving information to the Web with Hyper-G
    BARBIERI, K., H.M. DOERR and D. DWYER: Creating a virtual classroom for interactive education on the Web
    CAMPBELL, J.K., S.B. JONES, N.M. STEPHENS and S. HURLEY: Constructing educational courseware using NCSA Mosaic and the World Wide Web
    CATLEDGE, L.L. and J.E. PITKOW: Characterizing browsing strategies in the World-Wide Web
    CLAUSNITZER, A. and P. VOGEL: A WWW interface to the OMNIS/Myriad literature retrieval engine
    FISCHER, R. and L. PERROCHON: IDLE: Unified W3-access to interactive information servers
    FOLEY, J.D.: Visualizing the World-Wide Web with the navigational view builder
    FRANKLIN, S.D. and B. IBRAHIM: Advanced educational uses of the World-Wide Web
    FUHR, N., U. PFEIFER and T. HUYNH: Searching structured documents with the enhanced retrieval functionality of free WAIS-sf and SFgate
    FIORITO, M., J. OKSANEN and D.R. IOIVANE: An educational environment using WWW
    KENT, R.E. and C. NEUSS: Conceptual analysis of resource meta-information
    SHELDON, M.A. and R. WEISS: Discover: a resource discovery system based on content routing
    WINOGRAD, T.: Beyond browsing: shared comments, SOAPs, trails, and on-line communities
  2. Leighton, H.V.: Performance of four World Wide Web (WWW) index services : Infoseek, Lycos, WebCrawler and WWWWorm (1995) 0.06
    
  3. World Wide Web JAVA : die revolutionäre Programmiersprache nicht nur für das Internet (1996) 0.06
    
  4. Wätjen, H.-J.: Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web : das DFG-Projekt GERHARD (1998) 0.05
    
  5. Herwijnen, E. van: SGML tutorial (1993) 0.05
    
    Abstract
    Contains extensive beginner and advanced interactive tutorials and exercises to teach SGML, and uses DynaText software to manage, browse, and search the text, thus demonstrating the features of one of the most widely known programs available for SGML marked-up text
    Issue
    Version 2. Computer file.
  6. Peters, C.; Picchi, E.: Across languages, across cultures : issues in multilinguality and digital libraries (1997) 0.04
    
    Abstract
    With the recent rapid diffusion of world-wide distributed document bases over international computer networks, the question of multilingual access and multilingual information retrieval is becoming increasingly relevant. We briefly discuss some of the issues that must be addressed in order to implement a multilingual interface for a Digital Library system and describe our own approach to this problem.
  7. Powell, J.; Fox, E.A.: Multilingual federated searching across heterogeneous collections (1998) 0.04
    
    Abstract
    This article describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. It details a markup language for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages.
  8. Subramanian, S.; Shafer, K.E.: Clustering (1998) 0.04
    
    Abstract
    This article presents our exploration of computer science clustering algorithms as they relate to the Scorpion system. Scorpion is a research project at OCLC that explores the indexing and cataloging of electronic resources. For a more complete description of Scorpion, please visit the Scorpion Web site at <http://purl.oclc.org/scorpion>
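As context for the clustering algorithms the abstract mentions, here is a minimal k-means sketch: one classic example of the family, not the Scorpion project's actual method.

```python
# Minimal 1-D k-means, a classic clustering algorithm; illustrative only,
# not the method used by the Scorpion project.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

centers = sorted(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.1], k=2))
print(centers)
```

On this toy data the two centers settle on the means of the two obvious groups; real document clustering would operate on term vectors rather than scalars.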
  9. Oehler, A.: Informationssuche im Internet : In welchem Ausmaß entsprechen existierende Suchwerkzeuge für das World Wide Web Anforderungen für die wissenschaftliche Suche (1998) 0.03
    
  10. Brin, S.; Page, L.: The anatomy of a large-scale hypertextual Web search engine (1998) 0.03
    
    Abstract
    In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want
    Source
    Computer networks. 30(1998) no.1-7, S.107-117
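The "structure present in hypertext" that the abstract of item 10 leans on is the link graph; the paper's ranking idea, PageRank, can be sketched as a few lines of power iteration (a toy graph, not the production system):

```python
# Power-iteration PageRank on a toy link graph; a sketch of the paper's
# link-analysis idea, not Google's actual implementation.
def pagerank(links, d=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iters):
        pr = {
            p: (1 - d) / n
               + d * sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return pr

# a links to b and c; b links to c; c links back to a
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

Here "c" ends up ranked highest because it receives links from both other pages, which is exactly the "additional information present in hypertext" the abstract contrasts with purely text-based scoring.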
  11. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 1: frameworks for linking (1999) 0.03
    
    Abstract
    The creation of services linking related information entities is an area that is attracting an ever increasing interest in the ongoing development of the World Wide Web in general, and of research-related information systems in particular. Although most writings on electronic scientific communication have touted other benefits, such as the increase in communication speed, the possibility to exchange multimedia content and the absence of limitations on the length of research papers, currently both practice and theory point at linking services as being a major domain for innovation enabled by digital communication of content. Publishers, subscription agents, researchers and libraries are all looking into ways to create added value by linking related information entities, as such presenting the information within a broader context estimated to be relevant to the users of the information. This is the first of two articles in D-Lib Magazine on this topic. This first part describes the current state-of-the-art and contrasts various approaches to the problem. It identifies static and dynamic linking solutions as well as open and closed linking frameworks. It also includes an extensive bibliography. The second part, SFX, a Generic Linking Solution, describes a system that we have developed for linking in a hybrid working environment.
  12. Rusch-Feja, D.; Becker, H.J.: Global Info : the German digital libraries project (1999) 0.02
    
    Abstract
    The concept for the German Digital Libraries Program is embedded in the Information Infrastructure Program of the German Federal Government for the years 1996-2000, which has been explicated in the Program Paper entitled "Information as Raw Material for Innovation". The Program Paper was published in 1996 by the Federal Ministry for Education, Research, and Technology. The actual grants program "Global Info" was initiated by the Information and Communication Commission of the Joint Learned Societies to further technological advancement in enabling all researchers in Germany direct access to literature, research results, and other relevant information. This Commission was founded by four of the learned societies in 1995, and it has sponsored a series of workshops to increase awareness of leading-edge technology and innovations in accessing electronic information sources. Now, nine of the leading research-level learned societies -- often those with umbrella responsibilities for other learned societies in their field -- are members of the Information and Communication Commission and represent the mathematicians, physicists, computer scientists, chemists, educational researchers, sociologists, psychologists, biologists and information technologists in the German Association of Engineers. (The German professional librarian societies are not members, as such, of this Commission, but are represented through delegates from libraries in the learned societies and, in the future, hopefully also by the German Association of Documentalists or through the cooperation between the documentalist and librarian professional societies.) The Federal Ministry earmarked 60 million German marks for projects within the framework of the German Digital Libraries Program in two phases over the next six years.
The scope for the German Digital Libraries Program was announced in a press release in April 1997, and the first call for preliminary projects and expressions of interest in participation ended in July 1997. The Consortium members were suggested by the Information and Communication Commission of the Learned Societies (IuK Kommission), by key scientific research funding agencies in the German government, and by the publishers themselves. The first official meeting of the participants took place on December 1, 1997, at the Deutsche Bibliothek in Frankfurt, the renowned center of the German book trade, thus documenting the active role and participation of libraries and publishers. In contrast to the Digital Libraries Project of the National Science Foundation in the United States, the German Digital Libraries project is based on furthering cooperation with universities, scientific publishing houses (including various international publishers), book dealers, and special subject information centers, as well as academic and research libraries. The goals of the German Digital Libraries Project are to achieve: 1) efficient access to worldwide information; 2) directly from the scientist's desktop; 3) while providing the organization for and stimulating fundamental structural changes in the information and communication process of the scientific community.
  13. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.02
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 insights on achieving effective information access
    Session One: Updates and a twelve-month perspective
    • Danny Sullivan (Search Engine Watch, US/England): Portalization and other search trends
    • Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
    • Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
    • Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: the knowledge impact statement
    • Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
    • Gary Culliss (Direct Hit, Wellesley Hills, MA): User-popularity-ranked search engines
    • Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
    • Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human vs. automated categorization and editing
    • Ev Brenner (New York, NY), Chairman
    • James Callan (University of Massachusetts, MA)
    • Marc Krellenstein (Northern Light Technology, Cambridge, MA)
    • Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve-month perspective
    • Steve Arnold (AIT, Harrods Creek, KY): Review: the leading edge in search and retrieval software
    • Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
    • Intelligent agents: John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
    • Text summarization: Therese Firmin (Dept. of Defense, Ft. George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
    • Cross-language searching: Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
    • Video search and retrieval: Armon Amir (IBM, Almaden, CA): CueVideo: modular system for automatic indexing and browsing of video/audio
    • Speech recognition: Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
    • Visualization: James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: emerging science or passing fashion?
    • Text mining: David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  14. Schmidt, A.P.: ¬Der Wissensnavigator : Das Lexikon der Zukunft (1999) 0.02
    
    Abstract
    Der Wissensnavigator is a lexicon of the future, on its way to becoming an interactive encyclopedia. In the online electronic version, hyperlinks lead from individual articles to World Wide Web pages with further information on each future-related term. The electronic edition, which is also accessible on the Internet, is a "living" application that evolves through the contributions of its users. Readers are invited to take part in this evolutionary process - for example, by proposing new terms for inclusion, naming experts who could write on new terms, or volunteering as experts themselves. An editorial board consisting of the author and a team of experts at the publisher decides on the inclusion of new terms
  15. Tillman, H.N.: Evaluating quality on the net (1996) 0.02
    
    Abstract
    Wide-ranging article providing background information on the search process. Also includes a considerable amount of information about formulating searches and the difficult process of getting relevant results from a search
  16. CBT-Multimedialexikon : WINDOWS-Hypertextlexikon Multimedia (199?) 0.02
    
    Theme
    Computer Based Training
  17. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.02
    0.017025404 = product of:
      0.05107621 = sum of:
        0.028901752 = weight(_text_:wide in 1253) [ClassicSimilarity], result of:
          0.028901752 = score(doc=1253,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.14686027 = fieldWeight in 1253, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1253)
        0.02217446 = weight(_text_:web in 1253) [ClassicSimilarity], result of:
          0.02217446 = score(doc=1253,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.15297705 = fieldWeight in 1253, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1253)
      0.33333334 = coord(2/6)
    
    Abstract
    Information retrieval over the Internet increasingly requires the filtering of thousands of heterogeneous information sources. Important sources of information include not only traditional databases with structured data and queries, but also increasing numbers of non-traditional, semi- or unstructured collections such as Web sites, FTP archives, etc. As the number and variability of sources increase, new ways of automatically summarizing, discovering, and selecting collections relevant to a user's query are needed. One such method involves the use of classification schemes, such as the Library of Congress Classification (LCC), within which a collection may be represented based on its content, irrespective of the structure of the actual data or documents. For such a system to be useful in a large-scale distributed environment, it must be easy to use for both collection managers and users. As a result, it must be possible to classify documents automatically within a classification scheme. Furthermore, there must be a straightforward and intuitive interface with which the user may use the scheme to assist in information retrieval (IR). Our work with the Alexandria Digital Library (ADL) Project focuses on geo-referenced information, whether text, maps, aerial photographs, or satellite images. As a result, we have emphasized techniques which work with both text and non-text, such as combined textual and graphical queries, multi-dimensional indexing, and IR methods which are not solely dependent on words or phrases. Part of this work involves locating relevant online sources of information. In particular, we have designed and are currently testing aspects of an architecture, Pharos, which we believe will scale up to 1,000,000 heterogeneous sources. Pharos accommodates heterogeneity in content and format, both among multiple sources as well as within a single source.
That is, we consider sources to include Web sites, FTP archives, newsgroups, and full digital libraries; all of these systems can include a wide variety of content and multimedia data formats. Pharos is based on the use of hierarchical classification schemes. These include not only well-known 'subject' (or 'concept') based schemes such as the Dewey Decimal System and the LCC, but also, for example, geographic classifications, which might be constructed as layers of smaller and smaller hierarchical longitude/latitude boxes. Pharos is designed to work with sophisticated queries which utilize subjects, geographical locations, temporal specifications, and other types of information domains. The Pharos architecture requires that hierarchically structured collection metadata be extracted so that it can be partitioned in such a way as to greatly enhance scalability. Automated classification is important to Pharos because it allows the requisite collection metadata that must be distributed to be extracted automatically from information sources.
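As a rough illustration of the hierarchically structured collection metadata this abstract describes, per-document class codes can be rolled up into counts at every ancestor node of the classification tree. The function name and the '/'-separated path encoding below are hypothetical conveniences for the sketch, not taken from the Pharos system:

```python
# Hypothetical sketch of rolling up per-document classifications into
# hierarchical collection metadata: each document classified at a leaf
# contributes a count to every ancestor node, yielding a compact
# histogram a broker could match against a classified query.
from collections import Counter

def rollup(doc_paths):
    """Aggregate documents classified at paths like 'Q/QA/QA76' into
    counts at every ancestor node ('Q', 'Q/QA', 'Q/QA/QA76')."""
    counts = Counter()
    for path in doc_paths:
        parts = path.split('/')
        for i in range(1, len(parts) + 1):
            counts['/'.join(parts[:i])] += 1
    return counts

# Four documents: two on computer science, one on physics, one on maps.
docs = ['Q/QA/QA76', 'Q/QA/QA76', 'Q/QC', 'G/GA']
print(rollup(docs))
```

Because the roll-up is a per-node count, the metadata for a subtree can be partitioned and distributed independently, which is what makes the scheme attractive for scaling to very many sources.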
  18. Sadun, E.: ¬Die JavaScript CD (1996) 0.02
    0.015292614 = product of:
      0.09175568 = sum of:
        0.09175568 = weight(_text_:computer in 3900) [ClassicSimilarity], result of:
          0.09175568 = score(doc=3900,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.56527805 = fieldWeight in 3900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.109375 = fieldNorm(doc=3900)
      0.16666667 = coord(1/6)
    
    Series
    Midas-Computer-Bücher
  19. Koch, T.: Searching the Web : systematic overview over indexes (1995) 0.01
    0.0147829745 = product of:
      0.08869784 = sum of:
        0.08869784 = weight(_text_:web in 3169) [ClassicSimilarity], result of:
          0.08869784 = score(doc=3169,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6119082 = fieldWeight in 3169, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3169)
      0.16666667 = coord(1/6)
    
    Object
    Nordic Web Index
  20. Sullivan, D.: How search engines rank web pages (1998) 0.01
    0.013937522 = product of:
      0.08362513 = sum of:
        0.08362513 = weight(_text_:web in 5808) [ClassicSimilarity], result of:
          0.08362513 = score(doc=5808,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5769126 = fieldWeight in 5808, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.125 = fieldNorm(doc=5808)
      0.16666667 = coord(1/6)
    

Languages

  • e 56
  • d 23
  • nl 1

Types

  • a 27
  • i 6
  • m 5
  • b 2
  • s 2
  • n 1
  • r 1
  • x 1