Search (1307 results, page 1 of 66)

  • Active filter: type_ss:"a"
  • Active filter: theme_ss:"Internet"
  1. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.10
    0.09999726 = product of:
      0.1666621 = sum of:
        0.033088673 = weight(_text_:retrieval in 5300) [ClassicSimilarity], result of:
          0.033088673 = score(doc=5300,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.23632148 = fieldWeight in 5300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.0884113 = weight(_text_:semantic in 5300) [ClassicSimilarity], result of:
          0.0884113 = score(doc=5300,freq=8.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.45938298 = fieldWeight in 5300, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.04516213 = product of:
          0.09032426 = sum of:
            0.09032426 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
              0.09032426 = score(doc=5300,freq=22.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.59793836 = fieldWeight in 5300, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5300)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, relying for its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goal of this comprehensive SW is the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data (the web domain) and the potential for automated inference by "intelligent" systems (the semantic component).
    Theme
    Semantic Web
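    The 0.10 relevance figure for this record is the rounded ClassicSimilarity score whose full breakdown appears above. A minimal Python sketch that recomputes it from the quantities reported there (the square-root tf, the reported idf values, fieldNorm, queryNorm and the two coord factors); it only mirrors the arithmetic of the explain output and is not part of the search application itself.

      from math import sqrt, isclose

      QUERY_NORM = 0.04628742   # queryNorm reported in the breakdown
      FIELD_NORM = 0.0390625    # fieldNorm reported for doc 5300

      def clause(freq, idf):
          """One term clause: queryWeight * fieldWeight, as in the explain tree."""
          tf = sqrt(freq)                        # 2.0 = tf(freq=4.0), etc.
          query_weight = idf * QUERY_NORM        # e.g. 0.14001551 for 'retrieval'
          field_weight = tf * idf * FIELD_NORM   # e.g. 0.23632148 for 'retrieval'
          return query_weight * field_weight

      retrieval = clause(freq=4.0,  idf=3.024915)          # 0.033088673
      semantic  = clause(freq=8.0,  idf=4.1578603)         # 0.0884113
      web       = clause(freq=22.0, idf=3.2635105) * 0.5   # inner coord(1/2)

      score = (retrieval + semantic + web) * 0.6           # outer coord(3/5)
      assert isclose(score, 0.09999726, rel_tol=1e-5)
      print(round(score, 8))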
  2. Heflin, J.; Hendler, J.: A portrait of the Semantic Web in action (2001) 0.09
    0.090193264 = product of:
      0.22548315 = sum of:
        0.17504546 = weight(_text_:semantic in 2547) [ClassicSimilarity], result of:
          0.17504546 = score(doc=2547,freq=16.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.90953195 = fieldWeight in 2547, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2547)
        0.05043768 = product of:
          0.10087536 = sum of:
            0.10087536 = weight(_text_:web in 2547) [ClassicSimilarity], result of:
              0.10087536 = score(doc=2547,freq=14.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.6677857 = fieldWeight in 2547, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2547)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
    Theme
    Semantic Web
  3. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.09
    0.087797076 = product of:
      0.14632845 = sum of:
        0.028076671 = weight(_text_:retrieval in 3090) [ClassicSimilarity], result of:
          0.028076671 = score(doc=3090,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 3090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.075019486 = weight(_text_:semantic in 3090) [ClassicSimilarity], result of:
          0.075019486 = score(doc=3090,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.38979942 = fieldWeight in 3090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.0432323 = product of:
          0.0864646 = sum of:
            0.0864646 = weight(_text_:web in 3090) [ClassicSimilarity], result of:
              0.0864646 = score(doc=3090,freq=14.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.57238775 = fieldWeight in 3090, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
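    The link-content conjecture above says a page tends to be lexically similar to the pages that link to it. A minimal Python sketch of how such similarity could be measured with bag-of-words cosine similarity; the tokenization and the toy texts are illustrative assumptions, not the corpus or measures used in the paper.

      from collections import Counter
      from math import sqrt

      def cosine(a: Counter, b: Counter) -> float:
          """Cosine similarity of two term-frequency vectors."""
          dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      def link_content_similarity(page_text: str, inlink_texts: list) -> float:
          """Average lexical similarity between a page and the pages linking to it."""
          page_vec = Counter(page_text.lower().split())
          sims = [cosine(page_vec, Counter(t.lower().split())) for t in inlink_texts]
          return sum(sims) / len(sims) if sims else 0.0

      # Toy check: a topically related neighbourhood scores higher than an unrelated one.
      print(link_content_similarity("semantic web link clustering",
                                    ["clustering pages on the semantic web"]))
      print(link_content_similarity("semantic web link clustering",
                                    ["recipes for apple pie"]))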
  4. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.08
    0.083867714 = product of:
      0.13977952 = sum of:
        0.026470939 = weight(_text_:retrieval in 2731) [ClassicSimilarity], result of:
          0.026470939 = score(doc=2731,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.18905719 = fieldWeight in 2731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.08662503 = weight(_text_:semantic in 2731) [ClassicSimilarity], result of:
          0.08662503 = score(doc=2731,freq=12.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.45010158 = fieldWeight in 2731, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.026683552 = product of:
          0.053367104 = sum of:
            0.053367104 = weight(_text_:web in 2731) [ClassicSimilarity], result of:
              0.053367104 = score(doc=2731,freq=12.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.35328537 = fieldWeight in 2731, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2731)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    We propose an extension of a mediator architecture. This extension is oriented to ontology-driven data integration. In our architecture ontologies are not managed by an external component or service, but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective. Some of these advantages include the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema and semantic information defined by ontology). 1. Introduction Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; b) synonymy, when two different words have the same meaning. Polysemy causes the retrieval of irrelevant information. Synonymy, on the other hand, produces the loss of useful documents. The lack of a capability to understand the context of words and the relationships among required terms explains many of the missed and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding the previous problems. Various proposals have appeared for meta-data representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for representation and sharing of meta-information. In this environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001). In this context, ontologies can be seen as metadata that represent the semantics of data, providing a standard vocabulary for a knowledge domain, as DTDs and XML Schema do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases. This suggests that techniques developed in the Database area could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
  5. Berners-Lee, T.; Hendler, J.; Lassila, O.: The Semantic Web : a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities (2001) 0.08
    0.08304018 = product of:
      0.20760044 = sum of:
        0.15313287 = weight(_text_:semantic in 376) [ClassicSimilarity], result of:
          0.15313287 = score(doc=376,freq=6.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.7956747 = fieldWeight in 376, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.078125 = fieldNorm(doc=376)
        0.054467574 = product of:
          0.10893515 = sum of:
            0.10893515 = weight(_text_:web in 376) [ClassicSimilarity], result of:
              0.10893515 = score(doc=376,freq=8.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.72114074 = fieldWeight in 376, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.078125 = fieldNorm(doc=376)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Initial contribution on the so-called "Semantic Web". - German translation: Mein Computer versteht mich. In: Spektrum der Wissenschaft, August 2001, pp. 42-49
    Theme
    Semantic Web
  6. Burke, M.: The semantic web and the digital library (2009) 0.08
    0.07828399 = product of:
      0.19570997 = sum of:
        0.14661357 = weight(_text_:semantic in 2962) [ClassicSimilarity], result of:
          0.14661357 = score(doc=2962,freq=22.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.7618005 = fieldWeight in 2962, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2962)
        0.049096406 = product of:
          0.09819281 = sum of:
            0.09819281 = weight(_text_:web in 2962) [ClassicSimilarity], result of:
              0.09819281 = score(doc=2962,freq=26.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.65002745 = fieldWeight in 2962, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2962)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The purpose of this paper is to discuss alternative definitions of and approaches to the semantic web. It aims to clarify the relationship between the semantic web, Web 2.0 and Library 2.0. Design/methodology/approach - The paper is based on a literature review and evaluation of systems with semantic web features. It identifies and describes semantic web projects of relevance to libraries and evaluates the usefulness of JeromeDL and other social semantic digital library systems. It discusses actual and potential applications for libraries and makes recommendations for actions needed by researchers and practitioners. Findings - The paper concludes that the library community has a lot to offer to, and benefit from, the semantic web, but there is limited interest in the library community. It recommends that there be greater collaboration between semantic web researchers and project developers, library management systems providers and the library community. Librarians should get involved in the development of semantic web standards, for example, metadata and taxonomies. Originality/value - The paper clarifies the distinction between semantic web and Web 2.0 in a digital library environment. It evaluates and predicts future developments for operational systems.
    Object
    Web 2.0
    Theme
    Semantic Web
  7. Chen, Z.; Wenyin, L.; Zhang, F.; Li, M.; Zhang, H.: Web mining for Web image retrieval (2001) 0.07
    0.072454646 = product of:
      0.12075774 = sum of:
        0.040525187 = weight(_text_:retrieval in 6521) [ClassicSimilarity], result of:
          0.040525187 = score(doc=6521,freq=6.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.28943354 = fieldWeight in 6521, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6521)
        0.04420565 = weight(_text_:semantic in 6521) [ClassicSimilarity], result of:
          0.04420565 = score(doc=6521,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.22969149 = fieldWeight in 6521, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6521)
        0.036026914 = product of:
          0.07205383 = sum of:
            0.07205383 = weight(_text_:web in 6521) [ClassicSimilarity], result of:
              0.07205383 = score(doc=6521,freq=14.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.47698978 = fieldWeight in 6521, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6521)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The popularity of digital images is rapidly increasing due to improving digital imaging technologies and convenient availability facilitated by the Internet. However, how to find user-intended images from the Internet is nontrivial. The main reason is that Web images are usually not annotated with semantic descriptors. In this article, we present an effective approach to, and a prototype system for, image retrieval from the Internet using Web mining. The system can also serve as a Web image search engine. One of the key ideas in the approach is to extract the text information on the Web pages to semantically describe the images. The text description is then combined with other low-level image features in the image similarity assessment. Another main contribution of this work is that we apply data mining on the log of users' feedback to improve image retrieval performance in three aspects. First, the accuracy of the document space model of image representation obtained from the Web pages is improved by removing clutter and irrelevant text information. Second, the user space model of users' representation of images is constructed and then combined with the document space model to eliminate the mismatch between the page author's expression and the user's understanding and expectation. Third, the relationship between low-level and high-level features is discovered, which is extremely useful for assigning the low-level features' weights in the similarity assessment.
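    The abstract above combines text extracted from the hosting page with low-level image features and mines feedback logs to adjust how the features are weighted. A minimal Python sketch of that general shape; the 0.6/0.4 split, the feature names and the update rule are invented placeholders, not the system's actual parameters.

      def combined_score(text_sim: float, visual_sim: float,
                         w_text: float = 0.6, w_visual: float = 0.4) -> float:
          """Blend a text-based similarity with a low-level visual similarity."""
          return w_text * text_sim + w_visual * visual_sim

      def reweight_from_feedback(weights: dict, clicked_gains: dict, lr: float = 0.1) -> dict:
          """Toy feedback-log mining: features that helped clicked results gain weight,
          then the weights are renormalised to sum to one."""
          updated = {f: w + lr * clicked_gains.get(f, 0.0) for f, w in weights.items()}
          total = sum(updated.values()) or 1.0
          return {f: w / total for f, w in updated.items()}

      print(combined_score(text_sim=0.8, visual_sim=0.3))                   # 0.6
      print(reweight_from_feedback({"text": 0.6, "visual": 0.4}, {"visual": 0.5}))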
  8. Li, W.-S.; Shim, J.: Facilitating complex Web queries through visual user interfaces and query relaxation (1998) 0.07
    0.06873019 = product of:
      0.17182547 = sum of:
        0.06188791 = weight(_text_:semantic in 3602) [ClassicSimilarity], result of:
          0.06188791 = score(doc=3602,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.32156807 = fieldWeight in 3602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3602)
        0.10993756 = sum of:
          0.06603842 = weight(_text_:web in 3602) [ClassicSimilarity], result of:
            0.06603842 = score(doc=3602,freq=6.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.43716836 = fieldWeight in 3602, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3602)
          0.043899145 = weight(_text_:22 in 3602) [ClassicSimilarity], result of:
            0.043899145 = score(doc=3602,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.2708308 = fieldWeight in 3602, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3602)
      0.4 = coord(2/5)
    
    Abstract
    Describes a novel visual user interface, WebIFQ (Web-In-Frame-Query), to assist users in specifying queries and visualising query criteria, including document metadata, structures, and linkage information. WebIFQ automatically generates corresponding query statements for WebDB. As a result, users are not required to be aware of the underlying complex schema design and language syntax. WebDB supports automated query relaxation to include additional terms related by semantic or co-occurrence relationships. WebIFQ helps users reformulate queries repeatedly in an interactive mode.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
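    WebDB's automated query relaxation, described in the abstract above, adds terms related by semantic or co-occurrence relationships. A minimal Python sketch of that general idea over a hand-made related-terms table; the table entries and the bound on extra terms are illustrative assumptions, not WebDB's actual mechanism.

      # Toy table: each term maps to semantically or co-occurrence-related terms.
      RELATED = {
          "car": ["automobile", "vehicle"],
          "image": ["picture", "photo"],
      }

      def relax_query(terms, related=RELATED, max_extra=3):
          """Return the original terms plus a bounded number of related terms."""
          relaxed = list(terms)
          for t in terms:
              for r in related.get(t, []):
                  if r not in relaxed and len(relaxed) < len(terms) + max_extra:
                      relaxed.append(r)
          return relaxed

      print(relax_query(["car", "image"]))
      # ['car', 'image', 'automobile', 'vehicle', 'picture']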
  9. Hocine, A.; Lo, M.; Smadhi, S.: Information retrieval on the Web : an approach using a base of concepts and XML (2000) 0.07
    0.0682824 = product of:
      0.113804 = sum of:
        0.028076671 = weight(_text_:retrieval in 149) [ClassicSimilarity], result of:
          0.028076671 = score(doc=149,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=149)
        0.05304678 = weight(_text_:semantic in 149) [ClassicSimilarity], result of:
          0.05304678 = score(doc=149,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.2756298 = fieldWeight in 149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=149)
        0.03268054 = product of:
          0.06536108 = sum of:
            0.06536108 = weight(_text_:web in 149) [ClassicSimilarity], result of:
              0.06536108 = score(doc=149,freq=8.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.43268442 = fieldWeight in 149, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=149)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The emergence of XML as a new standard for semi-structured documents on the Web opens opportunities to improve the querying of Web sites. HTML's major drawback is that it does not allow one to distinguish the logical and physical aspects of documents. We propose an XML-based data-web model and a concepts base composed of a meta-data base and a domain thesaurus. The meta-data base includes information on the content, the semantic structure, and the organisation of the data of the site. The process of searching for information exploits the elements of the concepts base and allows for an interactive search for relevant documents (or extracts from documents).
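    A minimal Python sketch of the kind of meta-data record and thesaurus lookup the abstract describes: an XML description of a page's content and concepts, matched against a query term via a small domain thesaurus. Element names, the sample record and the thesaurus entries are invented for illustration, not taken from the paper.

      import xml.etree.ElementTree as ET

      RECORD = """
      <page url="http://example.org/sw-intro">
        <title>Introduction to the Semantic Web</title>
        <concept>semantic web</concept>
        <concept>linked data</concept>
      </page>
      """

      # Toy domain thesaurus: preferred term -> alternative entry terms.
      THESAURUS = {"semantic web": ["web of data", "sw"]}

      def matches(record_xml: str, query_term: str) -> bool:
          """True if the query term, or a thesaurus variant of it, is a concept of the record."""
          concepts = {c.text.lower() for c in ET.fromstring(record_xml).findall("concept")}
          variants = {query_term.lower()}
          for preferred, alternatives in THESAURUS.items():
              if query_term.lower() == preferred or query_term.lower() in alternatives:
                  variants |= {preferred, *alternatives}
          return bool(concepts & variants)

      print(matches(RECORD, "web of data"))   # True, via the thesaurus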
  10. Gradmann, S.; Hol, R.; Wesseling, M.G.: Auf dem Weg zum "Semantic Web" : Perspektiven der Verbundarbeit aus der Sicht von Pica (2001) 0.07
    0.065789424 = product of:
      0.16447355 = sum of:
        0.07072904 = weight(_text_:semantic in 1773) [ClassicSimilarity], result of:
          0.07072904 = score(doc=1773,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.36750638 = fieldWeight in 1773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0625 = fieldNorm(doc=1773)
        0.09374451 = sum of:
          0.043574058 = weight(_text_:web in 1773) [ClassicSimilarity], result of:
            0.043574058 = score(doc=1773,freq=2.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.2884563 = fieldWeight in 1773, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0625 = fieldNorm(doc=1773)
          0.05017045 = weight(_text_:22 in 1773) [ClassicSimilarity], result of:
            0.05017045 = score(doc=1773,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.30952093 = fieldWeight in 1773, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1773)
      0.4 = coord(2/5)
    
    Date
    22. 3.2008 13:53:45
  11. Maarek, Y.S.: WebCutter : a system for dynamic and tailorable site mapping (1997) 0.07
    0.06550022 = product of:
      0.16375054 = sum of:
        0.05304678 = weight(_text_:semantic in 2739) [ClassicSimilarity], result of:
          0.05304678 = score(doc=2739,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.2756298 = fieldWeight in 2739, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=2739)
        0.11070376 = sum of:
          0.07307592 = weight(_text_:web in 2739) [ClassicSimilarity], result of:
            0.07307592 = score(doc=2739,freq=10.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.48375595 = fieldWeight in 2739, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=2739)
          0.03762784 = weight(_text_:22 in 2739) [ClassicSimilarity], result of:
            0.03762784 = score(doc=2739,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.23214069 = fieldWeight in 2739, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2739)
      0.4 = coord(2/5)
    
    Abstract
    Presents an approach that integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic, content-based tailoring of Web maps in both the generation and the visualization phases. When search is the primary task, it enables contextualization of the results by augmenting them with the documents' neighbourhoods. This approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser and requiring no installation or external plug-in application. WebCutter is in beta stage and is in the process of being integrated into the Lotus Domino application product line.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
  12. Capps, M.; Ladd, B.; Stotts, D.: Enhanced graph models in the Web : multi-client, multi-head, multi-tail browsing (1996) 0.06
    0.06388288 = product of:
      0.1597072 = sum of:
        0.06188791 = weight(_text_:semantic in 5860) [ClassicSimilarity], result of:
          0.06188791 = score(doc=5860,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.32156807 = fieldWeight in 5860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5860)
        0.09781929 = sum of:
          0.053920146 = weight(_text_:web in 5860) [ClassicSimilarity], result of:
            0.053920146 = score(doc=5860,freq=4.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.35694647 = fieldWeight in 5860, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5860)
          0.043899145 = weight(_text_:22 in 5860) [ClassicSimilarity], result of:
            0.043899145 = score(doc=5860,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.2708308 = fieldWeight in 5860, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5860)
      0.4 = coord(2/5)
    
    Abstract
    Richer graph models permit authors to 'program' the browsing behaviour they want WWW readers to see by turning the hypertext into a hyperprogram with specific semantics. Multiple browsing streams can be started under the author's control and then kept in step through the synchronization mechanisms provided by the graph model. Adds a Semantic Web Graph Layer (SWGL) which allows dynamic interpretation of link and node structures according to graph models. Details the SWGL and its architecture, some sample protocol implementations, and the latest extensions to MHTML
    Date
    1. 8.1996 22:08:06
  13. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.06
    0.06250469 = product of:
      0.15626171 = sum of:
        0.04632414 = weight(_text_:retrieval in 1319) [ClassicSimilarity], result of:
          0.04632414 = score(doc=1319,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.33085006 = fieldWeight in 1319, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.10993756 = sum of:
          0.06603842 = weight(_text_:web in 1319) [ClassicSimilarity], result of:
            0.06603842 = score(doc=1319,freq=6.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.43716836 = fieldWeight in 1319, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
          0.043899145 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
            0.043899145 = score(doc=1319,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.2708308 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
      0.4 = coord(2/5)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve the information a user inquires about. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes integrating two existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
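    The abstract above proposes integrating query expansion with relevance feedback. A minimal Python sketch in the spirit of textbook Rocchio-style feedback plus low-weight concept expansion; the weights and the expansion terms are generic illustrative choices, not necessarily the authors' formulation.

      from collections import Counter

      def expand(query: Counter, concept_terms, weight=0.5) -> Counter:
          """Add conceptually related terms to the query vector with a low weight."""
          expanded = Counter(query)
          for t in concept_terms:
              expanded.setdefault(t, weight)
          return expanded

      def rocchio(query: Counter, relevant, nonrelevant,
                  alpha=1.0, beta=0.75, gamma=0.15) -> Counter:
          """Rocchio-style update of a bag-of-words query vector from user feedback."""
          updated = Counter({t: alpha * w for t, w in query.items()})
          for doc in relevant:
              for t, w in doc.items():
                  updated[t] += beta * w / len(relevant)
          for doc in nonrelevant:
              for t, w in doc.items():
                  updated[t] -= gamma * w / len(nonrelevant)
          return Counter({t: w for t, w in updated.items() if w > 0})

      q = expand(Counter({"web": 1.0, "retrieval": 1.0}), ["information", "search"])
      q = rocchio(q, relevant=[Counter({"web": 2, "search": 1})],
                  nonrelevant=[Counter({"shopping": 3})])
      print(q.most_common(3))   # 'web' and 'search' gain weight, 'shopping' never enters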
  14. O'Kane, K.C.: World Wide Web-based information storage and retrieval (1996) 0.06
    0.06182182 = product of:
      0.15455455 = sum of:
        0.05673526 = weight(_text_:retrieval in 4737) [ClassicSimilarity], result of:
          0.05673526 = score(doc=4737,freq=6.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40520695 = fieldWeight in 4737, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4737)
        0.09781929 = sum of:
          0.053920146 = weight(_text_:web in 4737) [ClassicSimilarity], result of:
            0.053920146 = score(doc=4737,freq=4.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.35694647 = fieldWeight in 4737, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4737)
          0.043899145 = weight(_text_:22 in 4737) [ClassicSimilarity], result of:
            0.043899145 = score(doc=4737,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.2708308 = fieldWeight in 4737, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4737)
      0.4 = coord(2/5)
    
    Abstract
    Describes the design and implementation of a system for computer generation of linked HTML documents to support information retrieval and hypertext applications on the WWW. The system does not require text query input, nor any client or host processing other than hypertext linkage. The goal is to construct a fully automatic system in which original text documents are read and processed by a computer program that generates HTML files, which can be used immediately by Web browsers to search and retrieve the original documents. A user with a large collection of information, for instance newspaper articles, can feed these documents to the program and directly produce the files needed to establish a WWW home page and related pages that support interactive retrieval and distribution of the original documents.
    Date
    1. 8.1996 22:13:07
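    A minimal Python sketch of the core idea in the abstract above: read plain-text documents and emit static HTML pages whose hypertext links alone support retrieval, here as one term-index page linking every term to the documents that contain it. The file layout and naming are invented for illustration.

      import html
      from collections import defaultdict
      from pathlib import Path

      def build_site(docs: dict, out_dir: str = "site") -> Path:
          """Write each document as an HTML page plus a term index page linking to them."""
          out = Path(out_dir)
          out.mkdir(exist_ok=True)
          postings = defaultdict(set)
          for name, text in docs.items():
              (out / f"{name}.html").write_text(
                  f"<html><body><pre>{html.escape(text)}</pre></body></html>")
              for term in set(text.lower().split()):
                  postings[term].add(name)
          lines = ["<html><body><h1>Term index</h1><ul>"]
          for term in sorted(postings):
              links = " ".join(f'<a href="{d}.html">{d}</a>' for d in sorted(postings[term]))
              lines.append(f"<li>{html.escape(term)}: {links}</li>")
          lines.append("</ul></body></html>")
          index = out / "index.html"
          index.write_text("\n".join(lines))
          return index

      print(build_site({"doc1": "web based retrieval", "doc2": "hypertext retrieval"}))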
  15. Bachiochi, D.: Usability studies and designing navigational aids for the World Wide Web (1997) 0.06
    0.059691615 = product of:
      0.14922903 = sum of:
        0.03743556 = weight(_text_:retrieval in 2402) [ClassicSimilarity], result of:
          0.03743556 = score(doc=2402,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.26736724 = fieldWeight in 2402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2402)
        0.11179347 = sum of:
          0.061623022 = weight(_text_:web in 2402) [ClassicSimilarity], result of:
            0.061623022 = score(doc=2402,freq=4.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.4079388 = fieldWeight in 2402, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0625 = fieldNorm(doc=2402)
          0.05017045 = weight(_text_:22 in 2402) [ClassicSimilarity], result of:
            0.05017045 = score(doc=2402,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.30952093 = fieldWeight in 2402, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2402)
      0.4 = coord(2/5)
    
    Abstract
    Describes how usability testing was used to validate design recommendations for WWW navigation aids. The results show a need for navigational aids that are related to the particular Web site and located beneath the browser buttons. Usability criteria were established that limit page changes to four and search times to 60 seconds for information retrieval.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
  16. Nait-Baha, L.; Jackiewicz, A.; Djioua, B.; Laublet, P.: Query reformulation for information retrieval on the Web using the point of view methodology : preliminary results (2001) 0.06
    0.05847824 = product of:
      0.09746373 = sum of:
        0.028076671 = weight(_text_:retrieval in 249) [ClassicSimilarity], result of:
          0.028076671 = score(doc=249,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=249)
        0.05304678 = weight(_text_:semantic in 249) [ClassicSimilarity], result of:
          0.05304678 = score(doc=249,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.2756298 = fieldWeight in 249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=249)
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 249) [ClassicSimilarity], result of:
              0.03268054 = score(doc=249,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The work we are presenting is devoted to information collected on the WWW. By the term collected we mean the whole process of retrieving, extracting and presenting results to the user. This research is part of the RAP (Research, Analyze, Propose) project, in which we propose to combine two methods: (i) query reformulation using linguistic markers according to a given point of view; and (ii) text semantic analysis by means of contextual exploration results (Descles, 1991). The general project architecture describing the interactions between the users, the RAP system and the WWW search engines is presented in Nait-Baha et al. (1998). In this paper we focus on showing how we use linguistic markers to reformulate queries according to a given point of view.
  17. Johnson, E.H.: S R Ranganathan in the Internet age (2019) 0.06
    0.05847824 = product of:
      0.09746373 = sum of:
        0.028076671 = weight(_text_:retrieval in 5406) [ClassicSimilarity], result of:
          0.028076671 = score(doc=5406,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5406)
        0.05304678 = weight(_text_:semantic in 5406) [ClassicSimilarity], result of:
          0.05304678 = score(doc=5406,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.2756298 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=5406)
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 5406) [ClassicSimilarity], result of:
              0.03268054 = score(doc=5406,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 5406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5406)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    S R Ranganathan's ideas have influenced library classification since the inception of his Colon Classification in 1933. His address at Elsinore, "Library Classification Through a Century", was his grand vision of the century of progress in classification from 1876 to 1975, and looked to the future of faceted classification as the means to provide a cohesive system for organizing the world's information. Fifty years later, the internet, with its achievements, social ecology, and consequences, presents a far more complicated picture, in which the library as he knew it is a very small part and the problems that he confronted are greatly exacerbated. The systematic nature of Ranganathan's canons, principles, postulates, and devices suggests that modern semantic algorithms could guide automatic subject tagging. The vision presented here is one of internet-wide faceted classification and retrieval, implemented as open, distributed facets providing unified faceted searching across all web sites.
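    A minimal Python sketch of faceted retrieval in the spirit of the abstract above, with each resource tagged along Ranganathan-style facets (Personality, Matter, Energy, Space, Time) and filtered by facet values. The sample records and tags are invented for illustration and are not Colon Classification notation.

      # Each resource carries facet -> value tags, loosely following PMEST.
      RESOURCES = [
          {"title": "Cataloguing practice in Indian university libraries",
           "facets": {"Personality": "libraries", "Energy": "cataloguing", "Space": "India"}},
          {"title": "Web archiving in European national libraries",
           "facets": {"Personality": "libraries", "Energy": "archiving", "Space": "Europe"}},
      ]

      def faceted_search(resources, **required):
          """Return resources whose tags match every requested facet value."""
          return [r for r in resources
                  if all(r["facets"].get(f) == v for f, v in required.items())]

      for hit in faceted_search(RESOURCES, Personality="libraries", Space="India"):
          print(hit["title"])   # only the first sample record matches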
  18. Krabo, U.; Knitel, M.: Library linked data : Technologien, Projekte, Potentiale (2011) 0.06
    0.058128126 = product of:
      0.14532031 = sum of:
        0.10719301 = weight(_text_:semantic in 4908) [ClassicSimilarity], result of:
          0.10719301 = score(doc=4908,freq=6.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.55697227 = fieldWeight in 4908, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4908)
        0.038127303 = product of:
          0.076254606 = sum of:
            0.076254606 = weight(_text_:web in 4908) [ClassicSimilarity], result of:
              0.076254606 = score(doc=4908,freq=8.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.50479853 = fieldWeight in 4908, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4908)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Semantic Web and its implications for libraries are increasingly becoming a focus of information science research. This article explains basic functional and technical concepts of the Semantic Web and, building on these, introduces the topic of Library Linked Data. To this end, several recently launched projects and initiatives are presented. In addition to the visions and goals of the respective initiators, such as better visibility of bibliographic data and the development of new applications, open technical and legal questions and problems are briefly touched upon. Finally, possible practical Linked Data use cases for the Austrian context are presented.
    Content
    Contents: 1. Introduction 2. The Semantic Web 3. Technologies and standards 4. Linked Data 5. Library Linked Data: projects and expectations 6. Challenges 7. LLD applications in Austria 8. Conclusion
    Object
    Web 2.0
  19. Berners-Lee, T.; Hendler, J.; Lassila, O.: Mein Computer versteht mich (2001) 0.06
    0.057440013 = product of:
      0.14360003 = sum of:
        0.100025974 = weight(_text_:semantic in 4550) [ClassicSimilarity], result of:
          0.100025974 = score(doc=4550,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.51973253 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0625 = fieldNorm(doc=4550)
        0.043574058 = product of:
          0.087148115 = sum of:
            0.087148115 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
              0.087148115 = score(doc=4550,freq=8.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.5769126 = fieldWeight in 4550, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    What if the computer could not only display the content of a World Wide Web page but also grasp its meaning? It could do undreamt-of things for its user - and perhaps quite soon, once the semantic web is established.
    Footnote
    German translation of: The Semantic Web: a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities. In: Scientific American. 284(2001) no.5, pp. 34-43.
    Theme
    Semantic Web
  20. Feigenbaum, L.; Herman, I.; Hongsermeier, T.; Neumann, E.; Stephens, S.: The Semantic Web in action (2007) 0.06
    0.055721242 = product of:
      0.1393031 = sum of:
        0.100025974 = weight(_text_:semantic in 3000) [ClassicSimilarity], result of:
          0.100025974 = score(doc=3000,freq=16.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.51973253 = fieldWeight in 3000, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=3000)
        0.039277125 = product of:
          0.07855425 = sum of:
            0.07855425 = weight(_text_:web in 3000) [ClassicSimilarity], result of:
              0.07855425 = score(doc=3000,freq=26.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.520022 = fieldWeight in 3000, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3000)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Six years ago in this magazine, Tim Berners-Lee, James Hendler and Ora Lassila unveiled a nascent vision of the Semantic Web: a highly interconnected network of data that could be easily accessed and understood by any desktop or handheld machine. They painted a future of intelligent software agents that would head out on the World Wide Web and automatically book flights and hotels for our trips, update our medical records and give us a single, customized answer to a particular question without our having to search for information or pore through results. They also presented the young technologies that would make this vision come true: a common language for representing data that could be understood by all kinds of software agents; ontologies--sets of statements--that translate information from disparate databases into common terms; and rules that allow software agents to reason about the information described in those terms. The data format, ontologies and reasoning software would operate like one big application on the World Wide Web, analyzing all the raw data stored in online databases as well as all the data about the text, images, video and communications the Web contained. Like the Web itself, the Semantic Web would grow in a grassroots fashion, only this time aided by working groups within the World Wide Web Consortium, which helps to advance the global medium. Since then skeptics have said the Semantic Web would be too difficult for people to understand or exploit. Not so. The enabling technologies have come of age. A vibrant community of early adopters has agreed on standards that have steadily made the Semantic Web practical to use. Large companies have major projects under way that will greatly improve the efficiencies of in-house operations and of scientific research. Other firms are using the Semantic Web to enhance business-to-business interactions and to build the hidden data-processing structures, or back ends, behind new consumer services. And like an iceberg, the tip of this large body of work is emerging in direct consumer applications, too.
    Content
    See also: http://thefigtrees.net/lee/sw/sciam/semantic-web-in-action#single-page.
    Theme
    Semantic Web
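    A minimal Python sketch of the data model described in the abstract of this record: statements kept as subject-predicate-object triples, queried by pattern matching, with one toy inference rule applied on top. The triples and the rule are invented for illustration and use plain strings rather than an RDF serialization.

      # A tiny in-memory graph of subject-predicate-object triples.
      TRIPLES = {
          ("LHR", "locatedIn", "London"),
          ("London", "locatedIn", "UK"),
          ("flight42", "departsFrom", "LHR"),
      }

      def match(triples, s=None, p=None, o=None):
          """Return all triples matching the pattern; None acts as a wildcard."""
          return [(ts, tp, to) for ts, tp, to in triples
                  if s in (None, ts) and p in (None, tp) and o in (None, to)]

      def infer_transitive(triples, predicate="locatedIn"):
          """One round of a toy rule: the given predicate is transitive."""
          derived = {(a, predicate, c)
                     for a, _, b in match(triples, p=predicate)
                     for _, _, c in match(triples, s=b, p=predicate)}
          return triples | derived

      graph = infer_transitive(TRIPLES)
      print(match(graph, s="LHR", p="locatedIn"))   # now also ('LHR', 'locatedIn', 'UK')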
