Search (16 results, page 1 of 1)

  • theme_ss:"Hypertext"
  • theme_ss:"Internet"
  • type_ss:"a"
  1. Spertus, E.: ParaSite : mining structural information on the Web (1997) 0.04
    0.038148586 = product of:
      0.057222877 = sum of:
        0.030176813 = weight(_text_:on in 2740) [ClassicSimilarity], result of:
          0.030176813 = score(doc=2740,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.27492687 = fieldWeight in 2740, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=2740)
        0.027046064 = product of:
          0.054092128 = sum of:
            0.054092128 = weight(_text_:22 in 2740) [ClassicSimilarity], result of:
              0.054092128 = score(doc=2740,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.30952093 = fieldWeight in 2740, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2740)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
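The explain trees in this list all follow Lucene's ClassicSimilarity (tf-idf) formula: each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), with coord factors scaling for partially matched clauses. A minimal sketch reproducing the top score from the constants shown in the tree above:

```python
import math

def classic_score(freq, idf, query_norm, field_norm):
    """One term's contribution: queryWeight * fieldWeight (ClassicSimilarity)."""
    tf = math.sqrt(freq)                    # tf(freq=4.0) = 2.0
    query_weight = idf * query_norm         # idf(t) * queryNorm
    field_weight = tf * idf * field_norm    # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.04990557

# weight(_text_:on in 2740): freq=4, idf=2.199415, fieldNorm=0.0625
s_on = classic_score(4.0, 2.199415, QUERY_NORM, 0.0625)

# weight(_text_:22 in 2740): freq=2, idf=3.5018296, then coord(1/2)
s_22 = classic_score(2.0, 3.5018296, QUERY_NORM, 0.0625) * 0.5

# sum of both clauses, scaled by coord(2/3): two of three query clauses matched
total = (s_on + s_22) * (2.0 / 3.0)
print(total)   # ≈ 0.038148586, the score reported for result 1
```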
    
    Abstract
     Discusses the varieties of link information on the WWW, how the Web differs from conventional hypertext, and how links can be exploited to build useful applications. Specific applications presented as part of the ParaSite system find individuals' homepages, new locations of moved pages, and unindexed information.
    Date
    1. 8.1996 22:08:06
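ParaSite's own code is not shown in this record; purely as an illustration, the raw material for this kind of structural link mining (href targets plus their anchor text) can be gathered with a few lines of stdlib Python:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (href, anchor-text) pairs -- the raw material for link mining."""
    def __init__(self):
        super().__init__()
        self.links = []        # list of (href, text) tuples
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

parser = LinkExtractor()
parser.feed('<p>See <a href="http://example.org/home">my homepage</a>.</p>')
print(parser.links)   # [('http://example.org/home', 'my homepage')]
```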
  2. Milosavljevic, M.; Oberlander, J.: Dynamic catalogues on the WWW (1998) 0.04
    0.038148586 = product of:
      0.057222877 = sum of:
        0.030176813 = weight(_text_:on in 3594) [ClassicSimilarity], result of:
          0.030176813 = score(doc=3594,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.27492687 = fieldWeight in 3594, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=3594)
        0.027046064 = product of:
          0.054092128 = sum of:
            0.054092128 = weight(_text_:22 in 3594) [ClassicSimilarity], result of:
              0.054092128 = score(doc=3594,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.30952093 = fieldWeight in 3594, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3594)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Natural language generation techniques can be used to produce catalogues on the Web dynamically as hypertext, resulting in dynamic hypertext. A dynamic hypertext document can be tailored more precisely to a particular user's needs and background, thus helping the user to search more effectively. Describes the automatic generation of WWW documents and illustrates with two implemented systems.
    Date
    1. 8.1996 22:08:06
  3. Falquet, G.; Guyot, J.; Nerima, L.: Languages and tools to specify hypertext views on databases (1999) 0.03
    0.034861263 = product of:
      0.052291892 = sum of:
        0.032007344 = weight(_text_:on in 3968) [ClassicSimilarity], result of:
          0.032007344 = score(doc=3968,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29160398 = fieldWeight in 3968, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 3968) [ClassicSimilarity], result of:
              0.040569093 = score(doc=3968,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 3968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3968)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     We present a declarative language for the construction of hypertext views on databases. The language is based on an object-oriented data model and a simple hypertext model with reference and inclusion links. A hypertext view specification consists of a collection of parameterized node schemes which specify how to construct node and link instances from the database contents. We show how this language can address different issues in hypertext view design, including: the direct mapping of objects to nodes; the construction of complex nodes based on sets of objects; the representation of polymorphic sets of objects; and the representation of tree and graph structures. We have defined sublanguages corresponding to particular database models (relational, semantic, object-oriented) and implemented tools to generate Web views for these database models.
    Date
    21.10.2000 15:01:22
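The paper's actual specification syntax is not reproduced in this record; a loose, hypothetical sketch of the node-scheme idea (all names invented) shows how a parameterized scheme maps a database object to a node with reference links:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A hypertext node built from one database object."""
    title: str
    content: str
    links: list = field(default_factory=list)   # reference links to other nodes

def book_node(book, author_nodes):
    """A parameterized node scheme (illustrative): one node per 'book' object,
    with reference links to the nodes of its authors."""
    node = Node(title=book["title"], content=book["summary"])
    node.links = [author_nodes[a] for a in book["authors"]]
    return node

# toy 'database' contents, names invented for the example
authors = {"Falquet": Node("Falquet", "author page"),
           "Guyot": Node("Guyot", "author page")}
book = {"title": "Hypertext views", "summary": "a sample record",
        "authors": ["Falquet", "Guyot"]}

n = book_node(book, authors)
print(n.title, [l.title for l in n.links])   # Hypertext views ['Falquet', 'Guyot']
```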
  4. Yang, C.C.; Liu, N.: Web site topic-hierarchy generation based on link structure (2009) 0.03
    0.02905105 = product of:
      0.043576576 = sum of:
        0.026672786 = weight(_text_:on in 2738) [ClassicSimilarity], result of:
          0.026672786 = score(doc=2738,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 2738, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2738)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 2738) [ClassicSimilarity], result of:
              0.03380758 = score(doc=2738,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 2738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2738)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Navigating through hyperlinks within a Web site to look for information from one of its Web pages without the support of a site map can be inefficient and ineffective. Although the content of a Web site is usually organized with an inherent structure like a topic hierarchy, which is a directed tree rooted at a Web site's homepage whose vertices and edges correspond to Web pages and hyperlinks, such a topic hierarchy is not always available to the user. In this work, we studied the problem of automatic generation of Web sites' topic hierarchies. We modeled a Web site's link structure as a weighted directed graph and proposed methods for estimating edge weights based on eight types of features and three learning algorithms, namely decision trees, naïve Bayes classifiers, and logistic regression. Three graph algorithms, namely breadth-first search, shortest-path search, and directed minimum-spanning tree, were adapted to generate the topic hierarchy based on the graph model. We have tested the model and algorithms on real Web sites. It is found that the directed minimum-spanning tree algorithm with the decision tree as the weight learning algorithm achieves the highest performance with an average accuracy of 91.9%.
    Date
    22. 3.2009 12:51:47
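Of the three graph algorithms named in the abstract, breadth-first search is the simplest to sketch. The following toy example (site and URLs invented; the paper's edge-weight learning is omitted) derives a spanning tree, i.e. a candidate topic hierarchy, from a link graph rooted at the homepage:

```python
from collections import deque

def bfs_topic_tree(graph, root):
    """Derive a topic hierarchy (spanning tree) from a site's link graph
    by breadth-first search from the homepage."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for child in graph.get(page, []):
            if child not in parent:         # first (shallowest) path wins
                parent[child] = page
                queue.append(child)
    return parent

site = {"/": ["/about", "/docs"],
        "/docs": ["/docs/api", "/about"],   # cross-link back to /about
        "/about": []}
tree = bfs_topic_tree(site, "/")
print(tree)   # {'/': None, '/about': '/', '/docs': '/', '/docs/api': '/docs'}
```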
  5. Nickerson, G.: World Wide Web : Hypertext from CERN (1992) 0.01
    0.012573673 = product of:
      0.03772102 = sum of:
        0.03772102 = weight(_text_:on in 4535) [ClassicSimilarity], result of:
          0.03772102 = score(doc=4535,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3436586 = fieldWeight in 4535, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=4535)
      0.33333334 = coord(1/3)
    
    Abstract
     Describes WorldWideWeb (WWW) software developed at CERN to provide hypertext links to resources on the Internet. Outlines how to access WWW, its features, and its approach to handling multiple document types on multiplatform servers and serving multiple clients.
  6. Scott, P.: Hypertext ... information at your fingertips (1993) 0.01
    0.010058938 = product of:
      0.030176813 = sum of:
        0.030176813 = weight(_text_:on in 6) [ClassicSimilarity], result of:
          0.030176813 = score(doc=6,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.27492687 = fieldWeight in 6, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=6)
      0.33333334 = coord(1/3)
    
    Abstract
     Hypertext is an alternative to traditional linear text and has been used successfully to create useful indexes on various types of computers. HyperRez, from MaxThink, is discussed in detail, as is the creation of the major Internet index, HYTELNET. Reference is also made to hypertext utilities currently under development that make use of the HyperRez software.
    Source
    Proceedings of the Clinic on Library Applications of Data Processing: held April 5-7 1992 at University of Illinois at Urbana-Champaign. Ed. by L.C. Smith and P.W. Dalrymple
  7. Heffron, J.K.; Dillon, A.; Mostafa, J.: Landmarks in the World Wide Web : a preliminary study (1996) 0.01
    0.010058938 = product of:
      0.030176813 = sum of:
        0.030176813 = weight(_text_:on in 7474) [ClassicSimilarity], result of:
          0.030176813 = score(doc=7474,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.27492687 = fieldWeight in 7474, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=7474)
      0.33333334 = coord(1/3)
    
    Abstract
    Outlines the results of a pilot study designed to consider what constitutes a landmark in hypertext. Tests users' memories for locations visited on the WWW. Reports the results, and outlines a refined methodology for a new study. By understanding more about users' navigation through hypertext information space, the issue of recognition of informative materials on the WWW may be addressed
  8. Capps, M.; Ladd, B.; Stotts, D.: Enhanced graph models in the Web : multi-client, multi-head, multi-tail browsing (1996) 0.01
    0.007888435 = product of:
      0.023665305 = sum of:
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 5860) [ClassicSimilarity], result of:
              0.04733061 = score(doc=5860,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 5860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5860)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 8.1996 22:08:06
  9. Machovec, G.S.: World Wide Web : accessing the Internet (1993) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 4534) [ClassicSimilarity], result of:
          0.021338228 = score(doc=4534,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 4534, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=4534)
      0.33333334 = coord(1/3)
    
    Abstract
    The World Wide Web (WWW) is one of the newest tools available to assist in the navigation of the Internet. As with other client/server network tools such as Gopher and WAIS, developments with the Web are in a dynamic state of change. Basically, WWW is an effort to organize information on the Internet plus local information into a set of hypertext documents; a person navigates the network by moving from one document to another via a set of hypertext links
  10. Bieber, M.: Fourth generation hypermedia : some missing links for the World Wide Web (1997) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 1209) [ClassicSimilarity], result of:
          0.021338228 = score(doc=1209,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 1209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=1209)
      0.33333334 = coord(1/3)
    
    Abstract
     Presents a set of high-level hypermedia features: typed nodes and links, link attributes, structure-based query, transclusion, warm and hot links, private and public links, external link databases, link update mechanisms, overviews, trails, guided tours, backtracking and history-based navigation. Illustrates each feature from existing implementations and a running scenario. Gives suggestions for implementing these features on the WWW and in other information systems.
  11. Amitay, E.: Trends, fashions, patterns, norms, conventions and hypertext too (2001) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 5192) [ClassicSimilarity], result of:
          0.021338228 = score(doc=5192,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 5192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=5192)
      0.33333334 = coord(1/3)
    
    Abstract
    At a finer level, Amitay speculates about the use of language on the Web. The Web may be one large corpus of text, but she suggests that communities will express themselves by the conventions used for writing hypertext. It may be that new information technologies will spawn new communities.
  12. Ihadjadene, M.; Bouché, R.; Zâafrani, R.: The dynamic nature of searching and browsing on Web-OPACs : the CATHIE experience (2000) 0.01
    0.0062868367 = product of:
      0.01886051 = sum of:
        0.01886051 = weight(_text_:on in 118) [ClassicSimilarity], result of:
          0.01886051 = score(doc=118,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 118, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=118)
      0.33333334 = coord(1/3)
    
    Abstract
     The paradigm shift from the old system-centered view to a user-centered approach calls for new tools for accessing library resources that take the user's needs into account. An end-user who has only a little knowledge of classification systems or thesauri understands little about how contents are represented or how authority lists are used, and will have difficulty formulating his question precisely. He needs a better view of what the library offers in order to judge of what use it would be to him. Many studies have been carried out on the use of controlled vocabularies (classifications, authority lists, thesauri) as searching devices; it is surprising how little attention has been given to the role of these tools in filtering and browsing. We have developed a prototype named CATHIE (CATalog Hypertextuel Interactif et Enrichi) that supports such filtering and interactive reformulation features.
  13. Saarela, J.: Logical structure of a hypermedia newspaper (1997) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 1546) [ClassicSimilarity], result of:
          0.01867095 = score(doc=1546,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 1546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1546)
      0.33333334 = coord(1/3)
    
    Abstract
     The OtaOnline project at the Helsinki University of Technology, Finland, has been distributing Finnish newspapers such as Iltalehti, Aamulehti and Kauppalehti on the Internet since 1994. The editors produce the electronic counterpart of these papers by a conversion process from QuarkXpress documents to HTML. The project is about to introduce an approach which provides many new features. Describes an object-oriented approach which implements the logical model of a hypermedia newspaper. It encapsulates the structures of the hypermedia documents as well as their capability to be transformed into different presentation formats. It also provides a semantic rating mechanism to be used with intelligent agents. Presents a distribution scheme which enables efficient use of this model.
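The OtaOnline model itself is not detailed in this record; a hypothetical sketch (class and field names invented) of the core idea, encapsulating an article's logical structure separately from its presentation formats:

```python
class Article:
    """Logical structure of one newspaper article, independent of presentation."""
    def __init__(self, headline, body):
        self.headline = headline
        self.body = body

    def to_html(self):
        """One possible presentation transform, as in the QuarkXpress-to-HTML pipeline."""
        return f"<h1>{self.headline}</h1><p>{self.body}</p>"

    def to_text(self):
        """A second transform, showing the same logical content in plain text."""
        return f"{self.headline}\n\n{self.body}"

a = Article("Otaniemi goes online", "The newspaper is converted automatically.")
print(a.to_html())   # <h1>Otaniemi goes online</h1><p>The newspaper is converted automatically.</p>
```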
  14. Tredinnick, L.: Post-structuralism, hypertext, and the World Wide Web (2007) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 650) [ClassicSimilarity], result of:
          0.01867095 = score(doc=650,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 650, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=650)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to explore the application of post-structuralist theory to understanding hypertext and the World Wide Web, and the challenge posed by digital information technology to the practices of the information profession. Design/methodology/approach - The method adopted is that of a critical study. Findings - The paper argues for the importance of post-structuralism for an understanding of the implications of digital information for the information management profession. Originality/value - Focuses on an epistemological gap between the traditional practices of the information profession, and the structure of the World Wide Web.
  15. Heo, M.; Hirtle, S.C.: An empirical comparison of visualization tools to assist information retrieval on the Web (2001) 0.00
    0.0035563714 = product of:
      0.010669114 = sum of:
        0.010669114 = weight(_text_:on in 5215) [ClassicSimilarity], result of:
          0.010669114 = score(doc=5215,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 5215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=5215)
      0.33333334 = coord(1/3)
    
  16. Jünger, G.: Ein neues Universum (2003) 0.00
    0.0017781857 = product of:
      0.005334557 = sum of:
        0.005334557 = weight(_text_:on in 1553) [ClassicSimilarity], result of:
          0.005334557 = score(doc=1553,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.048600662 = fieldWeight in 1553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.015625 = fieldNorm(doc=1553)
      0.33333334 = coord(1/3)
    
    Content
     A recurring lesson of the sociology and history of technology is that genuinely new concepts which run ahead of their time ultimately fail to prevail. Success goes instead to mediocre imitations of the original idea, which, extended with peripheral functions and decorations, then present themselves as great innovations. Computer science in particular supplies plenty of examples of second-best solutions that everyone knows are mere crutches. The pairing of the programming languages Smalltalk and C++ is one of them, but the World Wide Web as we know it today also falls far short of concepts for a universal, global information system that were developed long before Tim Berners-Lee defined the hypertext protocol. The question of this technical prehistory and its missed opportunities is by no means of merely academic interest. The system called "Xanadu", which for the first time set out to radically democratize the world's knowledge in digital form, can serve very well as a foil for the discussion of the future development of the WWW. The desire to amass as much knowledge as possible is undoubtedly ancient. It drove the builders of the Library of Alexandria, the copying and commenting monks of the Middle Ages, and the encyclopedists of eighteenth-century France. By the twentieth century at the latest, the sheer quantity of what could be known could no longer be mastered in this way. Beyond the physical storage of documents, new organizational principles had to be found to open up the mountain of knowledge and to connect its parts with one another in a usable way. Only then could a scientist still catch up, in a reasonable time, with the current state of knowledge in a field.
     In the epochal year 1945, Vannevar Bush, a scientific adviser to Roosevelt during the Second World War, drafted a first answer to the question of such an organizational principle. He called his system "Memex" (Memory Extender). Knowledge was to be archived in the form of microfilms, and the individual pieces thus produced were to be linked with one another so that references could be looked up immediately. Technically the system failed; it could hardly be realized with microfilm. But the idea had been formulated that large bodies of knowledge need not necessarily be arranged in separate documents and in a predominantly linear fashion (page 2 follows page 1). Through internal links between individual pages they can be joined into something new. The aeronautical engineer Douglas Engelbart read about Bush's idea as early as the 1940s. To him belongs the credit for transferring it to the new technology of digital computers. A session of the Fall Joint Computer Conference in 1968 demonstrated his realization of the Memex concept, called "NLS" (oN Line System), in practice, and for many participants it was the spark for their own attempts in this field. NLS was a huge journal of individual memos and reports from a predecessor project, which allowed the scientists involved to jump via addressed references directly to a neighboring document - a network of nodes and edges that still lacked only a suitable name for its new property: