Search (79 results, page 1 of 4)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.13
    0.12916699 = product of:
      0.30138963 = sum of:
        0.04305566 = product of:
          0.12916698 = sum of:
            0.12916698 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.12916698 = score(doc=701,freq=2.0), product of:
                0.34474066 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04066292 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.12916698 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.12916698 = score(doc=701,freq=2.0), product of:
            0.34474066 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04066292 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.12916698 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.12916698 = score(doc=701,freq=2.0), product of:
            0.34474066 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04066292 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.42857143 = coord(3/7)
    
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
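The score explanation shown for result 1 follows Lucene's classic tf-idf similarity: each leaf score is queryWeight × fieldWeight, where idf = 1 + ln(maxDocs / (docFreq + 1)), tf = sqrt(termFreq), queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm. A minimal Python sketch (plugging in the values from the explanation above) reproduces the leaf score:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    i = idf(doc_freq, max_docs)
    tf = math.sqrt(freq)                 # tf = sqrt(term frequency)
    query_weight = i * query_norm        # query-side weight
    field_weight = tf * i * field_norm   # document-side weight
    return query_weight * field_weight

# Values taken from the explanation tree for result 1 (doc 701)
score = term_score(freq=2.0, doc_freq=24, max_docs=44218,
                   query_norm=0.04066292, field_norm=0.03125)
print(score)  # ~0.12916698, the leaf score in the breakdown above
```

The coord(n/m) factors in the tree then scale the summed leaf scores by the fraction of query terms the document matched.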
  2. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    
    Abstract
    Purpose - The growing volume of semantic data available on the web makes it necessary to cope with information overload. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set; the objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed, so that they are automatically generated from semantic data, and evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that users quickly become accustomed to them even though they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other because they address different information search needs. Originality/value - Overviews of semantic data sets cannot be easily obtained with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
    Date
    20. 1.2015 18:30:22
  3. Veltman, K.H.: Syntactic and semantic interoperability : new approaches to knowledge and the Semantic Web (2001) 0.03
    
    Abstract
    At WWW7 (Brisbane, 1997), Tim Berners-Lee outlined his vision of a global reasoning web. At WWW8 (Toronto, May 1998), he developed this into a vision of a semantic web, where one could search not just for isolated words, but for meaning in the form of logically provable claims. In the past four years this vision has spread with amazing speed. The semantic web has been adopted by the European Commission as one of the important goals of the Sixth Framework Programme. In the United States it has become linked with the Defense Advanced Research Projects Agency (DARPA). While this quest to achieve a semantic web is new, the quest for meaning in language has a history that is almost as old as language itself. Accordingly this paper opens with a survey of the historical background. The contributions of the Dublin Core are reviewed briefly. To achieve a semantic web requires both syntactic and semantic interoperability. These challenges are outlined. A basic contention of this paper is that semantic interoperability requires much more than a simple agreement concerning the static meaning of a term. Different levels of agreement (local, regional, national and international) are involved and these levels have their own history. Hence, one of the larger challenges is to create new systems of knowledge organization, which identify and connect these different levels. With respect to meaning or semantics, early twentieth century pioneers such as Wüster were hopeful that it might be sufficient to limit oneself to isolated terms and words without reference to the larger grammatical context: to concept systems rather than to propositional logic. While a fascination with concept systems implicitly dominates many contemporary discussions, this paper suggests why this approach is not sufficient. The final section of this paper explores how an approach using propositional logic could lead to a new approach to universals and particulars. This points to a re-organization of knowledge, and opens the way for a vision of a semantic web with all the historical and cultural richness and complexity of language itself.
  4. Heflin, J.; Hendler, J.: ¬A portrait of the Semantic Web in action (2001) 0.03
    
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
  5. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.03
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  6. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.02
    
    Footnote
    The next chapter introduces the Resource Description Framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, RQL, is introduced. This chapter and the next are two of the most important chapters in the book. Chapter 4 presents another language called the Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, was thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as its starting point and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundations of knowledge representation and inference. Two examples are also given to explain monotonic and nonmonotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think-tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us some real feeling for the Semantic Web.
  7. Knitting the semantic Web (2007) 0.02
    
    Abstract
    The Semantic Web, the extension that goes beyond the current Web, better enables computers and people to effectively work together by giving information well-defined meaning. Knitting the Semantic Web explains the interdisciplinary efforts underway to build a more library-like Web through "semantic knitting." The book examines tagging information with standardized semantic metadata to result in a network able to support computational activities and provide people with services efficiently. Leaders in library and information science, computer science, and information intensive domains provide insight and inspiration to give readers a greater understanding in the development, growth, and maintenance of the Semantic Web. Librarians are uniquely qualified to play a major role in the development and maintenance of the Semantic Web. Knitting the Semantic Web closely examines this crucial relationship in detail. This single source reviews the foundations, standards, and tools of the Semantic Web, as well as discussions on projects and perspectives. Many chapters include figures to illustrate concepts and ideas, and the entire text is extensively referenced. Topics in Knitting the Semantic Web include: - RDF, its expressive power, and its ability to underlie the new Library catalog card for the coming century - the value and application for controlled vocabularies - SKOS (Simple Knowledge Organization System), the newest Semantic Web language - managing scheme versioning in the Semantic Web - Physnet portal service for physics - Semantic Web technologies in biomedicine - developing the United Nations Food and Agriculture ontology - Friend Of A Friend (FOAF) vocabulary specification-with a real world case study at a university - and more Knitting the Semantic Web is a stimulating resource for professionals, researchers, educators, and students in library and information science, computer science, information architecture, Web design, and Web services.
  8. Daconta, M.C.; Oberst, L.J.; Smith, K.T.: ¬The Semantic Web : A guide to the future of XML, Web services and knowledge management (2003) 0.02
    
    Abstract
    "The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation." - Tim Berners-Lee, "Scientific American", May 2001. This authoritative guide shows how the "Semantic Web" works technically and how businesses can utilize it to gain a competitive advantage. It explains what taxonomies and ontologies are as well as their importance in constructing the Semantic Web. The companion web site includes further updates as the framework develops and links to related sites.
    Date
    22. 5.2007 10:37:38
  9. Blanco, L.; Bronzi, M.; Crescenzi, V.; Merialdo, P.; Papotti, P.: Flint: from Web pages to probabilistic semantic data (2012) 0.02
    
    Abstract
    The Web is a surprisingly extensive source of information: it offers a huge number of sites containing data about a disparate range of topics. Although Web pages are built for human consumption, not for automatic processing of the data, we observe that an increasing number of Web sites deliver pages containing structured information about recognizable concepts, relevant to specific application domains, such as movies, finance, sport, products, etc. The development of scalable techniques to discover, extract, and integrate data from fairly structured large corpora available on the Web is a challenging issue, because to face the Web scale, these activities should be accomplished automatically by domain-independent techniques. To cope with the complexity and the heterogeneity of Web data, state-of-the-art approaches focus on information organized according to specific patterns that frequently occur on the Web. Meaningful examples are WebTables, which focuses on data published in HTML tables, and information extraction systems, such as TextRunner, which exploits lexical-syntactic patterns. As noticed by Cafarella et al., even if only a small fraction of the Web is organized according to these patterns, due to the Web scale, the amount of data involved is impressive. In this chapter, we focus on methods and techniques to wring value from the data delivered by large data-intensive Web sites.
  10. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.02
    
    Abstract
    Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. 
Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is to move from appearances in a text to naming authorities to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/) linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations and Geonames (http://geonames.org/) URIs for places are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists. Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  11. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.01
    
    Abstract
    The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The more well structured the Web content is, the easier it is for search engines to understand Web content and provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require too much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thus help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines to understand Web content is Microformats, a technique for marking up structured data about specific types of entities (such as tags, blog posts, people, or reviews) within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets which combine, for instance, data from multiple sites. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
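The embedding idea the abstract above describes (machine-readable properties carried inline in HTML, as in Microdata's itemscope/itemprop/itemtype attributes) can be illustrated with a deliberately minimal, stdlib-only extractor. This is a sketch for flat, non-nested items only; the sample movie markup is invented for illustration, and real pipelines use dedicated extraction libraries rather than hand-rolled parsers:

```python
from html.parser import HTMLParser

class MicrodataParser(HTMLParser):
    """Toy extractor for flat (non-nested) HTML Microdata items."""
    def __init__(self):
        super().__init__()
        self.items = []       # completed items
        self._current = None  # item currently being built
        self._prop = None     # itemprop whose text we are capturing

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemscope" in attrs:
            self._current = {"type": attrs.get("itemtype", ""), "props": {}}
        if self._current is not None and "itemprop" in attrs:
            self._prop = attrs["itemprop"]

    def handle_data(self, data):
        # capture the text content of the current itemprop element
        if self._prop and self._current is not None:
            text = data.strip()
            if text:
                self._current["props"][self._prop] = text
                self._prop = None

    def handle_endtag(self, tag):
        # assumes the item container is a <div> with no nested <div>s
        if tag == "div" and self._current is not None:
            self.items.append(self._current)
            self._current = None

html = """
<div itemscope itemtype="https://schema.org/Movie">
  <span itemprop="name">Blade Runner</span>
  <span itemprop="datePublished">1982</span>
</div>
"""
p = MicrodataParser()
p.feed(html)
# p.items now holds one item with type https://schema.org/Movie
```

A search engine doing rich snippets performs the same kind of extraction at scale, then aggregates the resulting property/value pairs across sites.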
  12. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.01
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
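    The faceted access that Franklin's abstract describes - descriptive facets from a controlled vocabulary assigned to resources, with retrieval by facet value rather than by hierarchy - can be sketched in a few lines. The records and facet names below are invented for illustration.

```python
# Toy faceted retrieval: each record carries facets drawn from a
# controlled vocabulary; filtering intersects the requested values.
records = {
    "doc1": {"topic": "Semantic Web", "form": "article",   "era": "2000s"},
    "doc2": {"topic": "Cataloguing",  "form": "article",   "era": "1990s"},
    "doc3": {"topic": "Semantic Web", "form": "monograph", "era": "2000s"},
}

def facet_filter(records, **facets):
    """Return ids of records whose facets match every requested value."""
    return sorted(
        rid for rid, f in records.items()
        if all(f.get(k) == v for k, v in facets.items())
    )

print(facet_filter(records, topic="Semantic Web"))                  # ['doc1', 'doc3']
print(facet_filter(records, topic="Semantic Web", form="article"))  # ['doc1']
```

    The point of the design is visible even at this scale: adding a facet narrows the result set without requiring the vocabulary to be arranged hierarchically.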
  13. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.01
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
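    The core of the extraction the abstract mentions is turning Wikipedia infobox templates into key/value data. The following hedged sketch parses only the simple `| key = value` line form; the infobox text is a made-up example, and the real DBpedia extraction framework handles far more wikitext syntax than this.

```python
# Sketch of infobox extraction: '| key = value' lines become a dict,
# the kind of structured record DBpedia publishes as RDF.
import re

def extract_infobox(wikitext: str) -> dict:
    """Parse '| key = value' lines of an infobox template into a dict."""
    pairs = re.findall(r"^\|\s*(\w+)\s*=\s*(.+?)\s*$", wikitext, re.MULTILINE)
    return dict(pairs)

infobox = """{{Infobox city
| name = Leipzig
| country = Germany
| population = 601866
}}"""

print(extract_infobox(infobox))
# {'name': 'Leipzig', 'country': 'Germany', 'population': '601866'}
```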
  14. Auer, S.; Lehmann, J.: Making the Web a data washing machine : creating knowledge out of interlinked data (2010) 0.01
    Content
    Cf.: http://www.semantic-web-journal.net/content/new-submission-making-web-data-washing-machine-creating-knowledge-out-interlinked-data http://www.semantic-web-journal.net/sites/default/files/swj24_0.pdf.
  15. Weiand, K.; Hartl, A.; Hausmann, S.; Furche, T.; Bry, F.: Keyword-based search over semantic data (2012) 0.01
    Abstract
    For a long while, the creation of Web content required at least basic knowledge of Web technologies, meaning that for many Web users, the Web was de facto a read-only medium. This changed with the arrival of the "social Web," when Web applications started to allow users to publish Web content without technological expertise. Here, content creation is often an inclusive, iterative, and interactive process. Examples of social Web applications include blogs, social networking sites, as well as many specialized applications, for example, for saving and sharing bookmarks and publishing photos. Social semantic Web applications are social Web applications in which knowledge is expressed not only in the form of text and multimedia but also through informal to formal annotations that describe, reflect, and enhance the content. These annotations often take the shape of RDF graphs backed by ontologies, but less formal annotations such as free-form tags or tags from a controlled vocabulary may also be available. Wikis are one example of social Web applications for collecting and sharing knowledge. They allow users to easily create and edit documents, so-called wiki pages, using a Web browser. The pages in a wiki are often heavily interlinked, which makes it easy to find related information and browse the content.
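    The keyword-based search over semantic annotations that this entry treats can be sketched minimally: keywords are matched against the literal values of (subject, property, value) annotations, and the annotated resources are returned. The triples below are an invented toy dataset, not an API of the cited work.

```python
# Toy keyword search over annotation triples: a subject matches when
# any of its literal values contains the keyword (case-insensitive).
triples = [
    ("page:wiki1", "tag",   "semantic web"),
    ("page:wiki1", "title", "Getting started with RDF"),
    ("page:wiki2", "tag",   "photography"),
    ("page:wiki2", "title", "Sharing photos on the social web"),
]

def keyword_search(triples, keyword):
    """Return subjects having any literal value that contains the keyword."""
    kw = keyword.lower()
    return sorted({s for s, _, v in triples if kw in v.lower()})

print(keyword_search(triples, "web"))  # both wiki pages match
print(keyword_search(triples, "rdf"))  # only page:wiki1 matches
```

    Full keyword search over semantic data additionally exploits the graph structure between annotations; the sketch shows only the literal-matching entry point.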
  16. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.01
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for the assessment of the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and on lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
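    One relation-unification rule of the kind entry 16 describes can be sketched as follows: relations from two ontologies are merged when their predicate labels fall in the same lexical-entailment class. The synonym table and the two toy ontologies are invented for illustration; the paper's actual rules are richer and semi-automatic.

```python
# Hedged sketch of relation-based unification: predicates that share a
# synonym set are rewritten to one canonical label, so semantically
# equivalent triples from two ontologies collapse into a single relation.
SYNONYMS = {
    "causes":  {"causes", "leads to"},
    "part of": {"part of", "belongs to"},
}

def unify_relations(onto_a, onto_b):
    """Merge two sets of (subject, predicate, object) triples via synonym sets."""
    unified = set(onto_a)
    for s, p, o in onto_b:
        for canon, names in SYNONYMS.items():
            if p in names:
                unified.add((s, canon, o))  # rewrite to the canonical label
                break
        else:
            unified.add((s, p, o))          # no rule applies: keep as-is
    return unified

onto_a = {("smoking", "causes", "cancer")}
onto_b = {("smoking", "leads to", "cancer")}
print(unify_relations(onto_a, onto_b))  # the two relations collapse into one
```

    The same table can flag contradictions: if two ontologies assert entailment-equivalent predicates between the same pair of classes with opposing polarity, the rule fires but the merge is rejected.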
  17. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.01
    Date
    22. 9.2007 15:41:14
  18. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.00
    Date
    22. 9.2007 15:41:14
  19. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.00
    Date
    22. 9.2007 15:41:14
  20. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.00
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou

Languages

  • e 71
  • d 8

Types

  • a 46
  • el 18
  • m 17
  • s 7
  • n 1
  • r 1
  • x 1

Subjects

Classifications