Search (48 results, page 1 of 3)

  • theme_ss:"Semantic Web"
  1. Gendt, M. van; Isaac, A.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.07
    0.06506885 = product of:
      0.16267212 = sum of:
        0.11343292 = weight(_text_:objects in 2418) [ClassicSimilarity], result of:
          0.11343292 = score(doc=2418,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.35234275 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.0492392 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
          0.0492392 = score(doc=2418,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.23214069 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
      0.4 = coord(2/5)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
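    The matching workflow sketched in the abstract above (formalise the two controlled vocabularies in a Semantic Web language, then record the candidate alignments produced by an ontology mapping tool) can be illustrated with a minimal Python/rdflib sketch. The namespaces, concept names and the skos:exactMatch link below are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (invented data): two tiny vocabularies expressed in SKOS,
# plus one candidate alignment recorded as skos:exactMatch.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

VOC_A = Namespace("http://example.org/collectionA/")  # placeholder namespaces
VOC_B = Namespace("http://example.org/collectionB/")

g = Graph()
g.bind("skos", SKOS)

# Step 1: formalise the source vocabularies as SKOS concepts.
g.add((VOC_A.illumination, RDF.type, SKOS.Concept))
g.add((VOC_A.illumination, SKOS.prefLabel, Literal("illumination", lang="en")))
g.add((VOC_B.miniature, RDF.type, SKOS.Concept))
g.add((VOC_B.miniature, SKOS.prefLabel, Literal("miniature", lang="en")))

# Step 2: record a mapping such as an ontology matching tool might propose.
g.add((VOC_A.illumination, SKOS.exactMatch, VOC_B.miniature))

print(g.serialize(format="turtle"))
```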
  2. Subirats, I.; Prasad, A.R.D.; Keizer, J.; Bagdanov, A.: Implementation of rich metadata formats and semantic tools using DSpace (2008) 0.04
    0.043379232 = product of:
      0.10844808 = sum of:
        0.07562195 = weight(_text_:objects in 2656) [ClassicSimilarity], result of:
          0.07562195 = score(doc=2656,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.23489517 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=2656)
        0.032826133 = weight(_text_:22 in 2656) [ClassicSimilarity], result of:
          0.032826133 = score(doc=2656,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.15476047 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2656)
      0.4 = coord(2/5)
    
    Abstract
    This poster explores the customization of DSpace to allow the use of the AGRIS Application Profile metadata standard and the AGROVOC thesaurus. The objective is the adaptation of DSpace, through the least invasive code changes either in the form of plug-ins or add-ons, to the specific needs of the Agricultural Sciences and Technology community. Metadata standards such as AGRIS AP, and Knowledge Organization Systems such as the AGROVOC thesaurus, provide mechanisms for sharing information in a standardized manner by recommending the use of common semantics and interoperable syntax (Subirats et al., 2007). AGRIS AP was created to enhance the description, exchange and subsequent retrieval of agricultural Document-like Information Objects (DLIOs). It is a metadata schema which draws from metadata standards such as Dublin Core (DC), the Australian Government Locator Service Metadata (AGLS) and the Agricultural Metadata Element Set (AgMES) namespaces. It allows sharing of information across dispersed bibliographic systems (FAO, 2005). AGROVOC is a multilingual structured thesaurus covering agricultural and related domains. Its main role is to standardize the indexing process in order to make searching simpler and more efficient. AGROVOC is developed by FAO (Lauser et al., 2006). The customization of DSpace is taking place in several phases. First, the AGRIS AP metadata schema was mapped onto the DSpace metadata model, with several enhancements implemented to support AGRIS AP elements. Next, AGROVOC will be integrated as a controlled vocabulary accessed through a local SKOS or OWL file. Eventually the system will be configurable to access AGROVOC through local files or remotely via web services. Finally, spell checking and tooltips will be incorporated in the user interface to support metadata editing. Adapting DSpace to support AGRIS AP and annotation using the semantically rich AGROVOC thesaurus transforms DSpace into a powerful, domain-specific system for annotation and exchange of bibliographic metadata in the agricultural domain.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
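    The integration step described above (AGROVOC read from a local SKOS file and offered as a controlled vocabulary during metadata editing) could look roughly like the following Python/rdflib sketch. The file name, the lookup helper and the search term are placeholders; this is not the actual AGROVOC data nor a DSpace plug-in API.

```python
# Hedged sketch: load a local SKOS file and suggest preferred labels that a
# repository add-on could offer as controlled keywords during metadata entry.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("agrovoc_subset.ttl", format="turtle")  # placeholder local SKOS file

def suggest_labels(substring, lang="en"):
    """Return (concept URI, label) pairs whose prefLabel contains `substring`."""
    hits = []
    for concept, _, label in g.triples((None, SKOS.prefLabel, None)):
        if label.language == lang and substring.lower() in str(label).lower():
            hits.append((str(concept), str(label)))
    return hits

print(suggest_labels("maize"))
```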
  3. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.04
    0.0424392 = product of:
      0.106097996 = sum of:
        0.07737513 = weight(_text_:books in 2665) [ClassicSimilarity], result of:
          0.07737513 = score(doc=2665,freq=4.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.26430926 = fieldWeight in 2665, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.028722866 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
          0.028722866 = score(doc=2665,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.1354154 = fieldWeight in 2665, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
      0.4 = coord(2/5)
    
    Abstract
    Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is to move from appearances in a text to naming authorities to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/) linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations and Geonames (http://geonames.org/) URIs for places are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists.
Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
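    As a rough illustration of the linking strategy described above (mentions of people and places in a text resolved to identifiers from naming authorities and published as standard metadata), the following Python/rdflib sketch attaches a "who" and a "where" reference to a text using authority-style URIs. The item URI, the specific authority URIs and the choice of dcterms properties are illustrative assumptions, not the project's actual data model.

```python
# Sketch only: represent disambiguated "who" and "where" references for a text
# as RDF triples pointing at naming-authority URIs, then publish as Turtle.
from rdflib import Graph, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
g.bind("dcterms", DCTERMS)

text = URIRef("http://example.org/texts/memoir-42")                 # invented item URI
person = URIRef("http://worldcat.org/identities/lccn-n00-000000")   # placeholder authority URI
place = URIRef("http://sws.geonames.org/0000000/")                   # placeholder Geonames URI

g.add((text, DCTERMS.subject, person))  # "who"
g.add((text, DCTERMS.spatial, place))   # "where"

print(g.serialize(format="turtle"))
```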
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.04
    0.038481116 = product of:
      0.19240558 = sum of:
        0.19240558 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
          0.19240558 = score(doc=701,freq=2.0), product of:
            0.51352155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.060570993 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.2 = coord(1/5)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  5. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.03
    0.029591773 = product of:
      0.14795886 = sum of:
        0.14795886 = weight(_text_:objects in 553) [ClassicSimilarity], result of:
          0.14795886 = score(doc=553,freq=10.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.4595864 = fieldWeight in 553, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
      0.2 = coord(1/5)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for the success of these is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus have access to other collections, using one single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed in order to provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using the description vocabulary used in the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the point for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
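    The retrieval mechanism described in the abstract above (a query using concept C also returns objects indexed with an equivalent concept D from the other vocabulary) can be sketched in a few lines of Python. The concept identifiers and object IDs are invented for illustration and only mimic the Mandragore/Iconclass setting.

```python
# Minimal sketch with invented data: expand a query concept through an
# alignment table before matching it against the pooled collection indexes.
alignment = {  # concept C (vocabulary V) -> equivalent concepts D (vocabulary W)
    "mandragore:annunciation": {"iconclass:xx-annunciation"},
}

index = {  # concept -> objects annotated with it (both collections pooled)
    "mandragore:annunciation": {"ms-a/f12"},
    "iconclass:xx-annunciation": {"ms-b/p3", "ms-b/p9"},
}

def search(concept):
    """Return objects indexed with `concept` or with any concept aligned to it."""
    expanded = {concept} | alignment.get(concept, set())
    results = set()
    for c in expanded:
        results |= index.get(c, set())
    return results

print(search("mandragore:annunciation"))  # objects from both collections
```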
  6. Cahier, J.-P.; Ma, X.; Zaher, L'H.: Document and item-based modeling : a hybrid method for a socio-semantic web (2010) 0.03
    0.02646768 = product of:
      0.1323384 = sum of:
        0.1323384 = weight(_text_:objects in 62) [ClassicSimilarity], result of:
          0.1323384 = score(doc=62,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41106653 = fieldWeight in 62, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=62)
      0.2 = coord(1/5)
    
    Abstract
    The paper discusses the challenges of categorising documents and "items of the world" to promote knowledge sharing in large communities of interest. We present the DOCMA method (Document and Item-based Model for Action), dedicated to end-users who have minimal or no knowledge of information science. Community members can elicit, structure and index business items stemming from their query, including projects, actors, products, places of interest, and geo-situated objects. This hybrid method has been applied for the past two years in a collaborative Web portal in the field of sustainability.
  7. Cahier, J.-P.; Zaher, L'H.; Isoard, G.: Document et modèle pour l'action, une méthode pour le web socio-sémantique : application à un web 2.0 en développement durable [Document and model for action, a method for the socio-semantic web : application to a Web 2.0 for sustainable development] (2010) 0.03
    0.02646768 = product of:
      0.1323384 = sum of:
        0.1323384 = weight(_text_:objects in 4836) [ClassicSimilarity], result of:
          0.1323384 = score(doc=4836,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41106653 = fieldWeight in 4836, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4836)
      0.2 = coord(1/5)
    
    Abstract
    We present the DOCMA method (DOCument and Model for Action), focused on Socio-Semantic Web applications in large communities of interest. DOCMA is dedicated to end-users without any knowledge of Information Science. Community members can elicit, structure and index shared business items emerging from their inquiry (such as projects, actors, products, and geographically situated objects of interest). We apply DOCMA to an experiment in the field of Sustainable Development: the Cartodd-Map21 collaborative Web portal.
  8. Bianchini, C.; Willer, M.: ISBD resource and its description in the context of the Semantic Web (2014) 0.03
    0.02646768 = product of:
      0.1323384 = sum of:
        0.1323384 = weight(_text_:objects in 1998) [ClassicSimilarity], result of:
          0.1323384 = score(doc=1998,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41106653 = fieldWeight in 1998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1998)
      0.2 = coord(1/5)
    
    Abstract
    This article explores the question "What is an International Standard for Bibliographic Description (ISBD) resource in the context of the Semantic Web, and what is the relationship of its description to linked data?" The question is discussed against the background of the dichotomy between description and access, using the Semantic Web differentiation of three logical layers: real-world objects, web of data, and special-purpose (bibliographic) data. The representation of bibliographic data as linked data is discussed, distinguishing the description of a resource from the iconic/objective and the informational/subjective viewpoints. In the conclusion, the authors give their views on possible directions for future development of the ISBD.
  9. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    0.022978293 = product of:
      0.11489146 = sum of:
        0.11489146 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
          0.11489146 = score(doc=4643,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.5416616 = fieldWeight in 4643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=4643)
      0.2 = coord(1/5)
    
    Date
    22. 9.2007 15:41:14
  10. Devedzic, V.: Semantic Web and education (2006) 0.02
    0.022686584 = product of:
      0.11343292 = sum of:
        0.11343292 = weight(_text_:objects in 5995) [ClassicSimilarity], result of:
          0.11343292 = score(doc=5995,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.35234275 = fieldWeight in 5995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=5995)
      0.2 = coord(1/5)
    
    Abstract
    The first section of "Semantic Web and Education" surveys the basic aspects and features of the Semantic Web. After this basic review, the book turns its focus to its primary topic of how Semantic Web developments can be used to build attractive and more successful education applications. The book analytically discusses the technical areas of architecture, metadata, learning objects, software engineering trends, and more. Integrated with these technical topics are examinations of learning-oriented topics such as learner modeling, collaborative learning, learning management, learning communities, ontological engineering of web-based learning, and related topics. The result is a thorough and highly useful presentation on the confluence of the technical aspects of the Semantic Web and the field of Education, or the art of teaching. The book will be of considerable interest to researchers and students in the fields of Information Systems, Computer Science, and Education.
  11. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    0.01969568 = product of:
      0.0984784 = sum of:
        0.0984784 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
          0.0984784 = score(doc=6048,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.46428138 = fieldWeight in 6048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=6048)
      0.2 = coord(1/5)
    
    Date
    22. 9.2007 15:41:14
  12. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    0.01969568 = product of:
      0.0984784 = sum of:
        0.0984784 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
          0.0984784 = score(doc=100,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.46428138 = fieldWeight in 100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=100)
      0.2 = coord(1/5)
    
    Date
    22. 9.2007 15:41:14
  13. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.02
    0.018952958 = product of:
      0.09476479 = sum of:
        0.09476479 = weight(_text_:books in 3062) [ClassicSimilarity], result of:
          0.09476479 = score(doc=3062,freq=6.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.32371143 = fieldWeight in 3062, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3062)
      0.2 = coord(1/5)
    
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems
    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions.
    SKOS can be used for subject headings but also for many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud."
    SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
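    A hedged sketch of the broaden/narrow navigation described above: a tiny SKOS fragment built with Python/rdflib and a helper that lists the narrower concepts of a heading. The URIs are placeholders standing in for the published LOC subject-heading identifiers.

```python
# Sketch (placeholder URIs): model "Chinese literature" > "Chinese drama" >
# "Chinese children's plays" in SKOS and list narrower concepts of a heading.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/headings/")  # stand-in for LOC heading URIs
g = Graph()

labels = {
    EX.ChineseLiterature: "Chinese literature",
    EX.ChineseDrama: "Chinese drama",
    EX.ChineseChildrensPlays: "Chinese children's plays",
}
for uri, label in labels.items():
    g.add((uri, RDF.type, SKOS.Concept))
    g.add((uri, SKOS.prefLabel, Literal(label, lang="en")))

g.add((EX.ChineseDrama, SKOS.broader, EX.ChineseLiterature))
g.add((EX.ChineseChildrensPlays, SKOS.broader, EX.ChineseDrama))

def narrower(concept):
    """Labels of concepts whose skos:broader points at `concept` (one level)."""
    return [str(g.value(c, SKOS.prefLabel)) for c in g.subjects(SKOS.broader, concept)]

print(narrower(EX.ChineseLiterature))  # -> ['Chinese drama']
```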
  14. Calì, A.; Gottlob, G.; Pieris, A.: ¬The return of the entity-relationship model : ontological query answering (2012) 0.02
    0.018905489 = product of:
      0.09452744 = sum of:
        0.09452744 = weight(_text_:objects in 434) [ClassicSimilarity], result of:
          0.09452744 = score(doc=434,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.29361898 = fieldWeight in 434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=434)
      0.2 = coord(1/5)
    
    Abstract
    The Entity-Relationship (ER) model is a fundamental formalism for conceptual modeling in database design; it was introduced by Chen in his milestone paper, and it is now widely used, being flexible and easily understood by practitioners. With the rise of the Semantic Web, conceptual modeling formalisms have gained importance again as ontology formalisms, in the Semantic Web parlance. Ontologies and conceptual models are aimed at representing, rather than the structure of data, the domain of interest, that is, the fragment of the real world that is being represented by the data and the schema. A prominent family of formalisms for modeling ontologies are Description Logics (DLs), which are decidable fragments of first-order logic, particularly suitable for ontological modeling and querying. In particular, DL ontologies are sets of assertions describing sets of objects and (usually binary) relations among such sets, exactly in the same fashion as the ER model. Recently, research on DLs has been focusing on the problem of answering queries under ontologies: given a query q, an instance B, and an ontology X, answering q under B and X amounts to computing the answers that are logically entailed from B by using the assertions of X. In this context, where data size is usually large, a central issue is the data complexity of query answering, i.e., the computational complexity with respect to the data set B only, while the ontology X and the query q are considered fixed.
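    To make the query-answering setting described above concrete, here is a small worked example written in the usual certain-answer notation; the predicates and constants are invented for illustration and do not come from the article.

```latex
% Tiny worked example (invented predicates): ontology X, instance B, query q.
% X states that every employee works for some department; B records one employee.
\begin{align*}
  X    &= \{\, \forall x\,\bigl(\mathit{Emp}(x) \rightarrow \exists y\, \mathit{WorksFor}(x,y)\bigr) \,\} \\
  B    &= \{\, \mathit{Emp}(\mathrm{alice}) \,\} \\
  q(x) &\;\leftarrow\; \exists y\, \mathit{WorksFor}(x,y) \\
  \mathit{ans}(q, X, B) &= \{\, t \mid X \cup B \models q(t) \,\} = \{\mathrm{alice}\}
\end{align*}
% alice is a certain answer even though B contains no WorksFor fact, because
% every model of X and B must provide some department that alice works for;
% data complexity is measured in the size of B only, with X and q fixed.
```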
  15. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.02
    0.016413068 = product of:
      0.08206534 = sum of:
        0.08206534 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
          0.08206534 = score(doc=2090,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.38690117 = fieldWeight in 2090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
      0.2 = coord(1/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  16. Bizer, C.; Lehmann, J.; Kobilarov, G.; Auer, S.; Becker, C.; Cyganiak, R.; Hellmann, S.: DBpedia: a crystallization point for the Web of Data. (2009) 0.02
    0.015632138 = product of:
      0.07816069 = sum of:
        0.07816069 = weight(_text_:books in 1643) [ClassicSimilarity], result of:
          0.07816069 = score(doc=1643,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.2669927 = fieldWeight in 1643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1643)
      0.2 = coord(1/5)
    
    Abstract
    The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.
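    The dereferencing behaviour described in the abstract (a DBpedia identifier that can be resolved over the Web into an RDF description) can be tried with a few lines of Python. The resource chosen is arbitrary, and the request naturally depends on the DBpedia service being reachable and still honouring Turtle content negotiation.

```python
# Hedged sketch: dereference a DBpedia resource URI via HTTP content negotiation
# and count the triples in the RDF description that comes back.
import requests
from rdflib import Graph

uri = "http://dbpedia.org/resource/Berlin"  # arbitrary example resource
resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="turtle")
print(f"{uri} is described by {len(g)} triples")
```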
  17. Harth, A.; Hogan, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing linked data with SWSE* (2012) 0.02
    0.015632138 = product of:
      0.07816069 = sum of:
        0.07816069 = weight(_text_:books in 410) [ClassicSimilarity], result of:
          0.07816069 = score(doc=410,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.2669927 = fieldWeight in 410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=410)
      0.2 = coord(1/5)
    
    Abstract
    Web search engines such as Google, Yahoo!, MSN/Bing, and Ask are far from the consummate Web search solution: they do not typically produce direct answers to queries but instead typically recommend a selection of related documents from the Web. We note that in more recent years, search engines have begun to provide direct answers to prose queries matching certain common templates - for example, "population of china" or "12 euro in dollars" - but again, such functionality is limited to a small subset of popular user queries. Furthermore, search engines now provide individual and focused search interfaces over images, videos, locations, news articles, books, research papers, blogs, and real-time social media - although these tools are inarguably powerful, they are limited to their respective domains. In the general case, search engines are not suitable for complex information gathering tasks requiring aggregation from multiple indexed documents: for such tasks, users must manually aggregate tidbits of pertinent information from various pages. In effect, such limitations are predicated on the lack of machine-interpretable structure in HTML documents, which is often limited to generic markup tags mainly concerned with document rendering and linking. Most of the real content is contained in prose text, which is inherently difficult for machines to interpret.
  18. Ilik, V.: Distributed person data : using Semantic Web compliant data in subject name headings (2015) 0.02
    0.015632138 = product of:
      0.07816069 = sum of:
        0.07816069 = weight(_text_:books in 2292) [ClassicSimilarity], result of:
          0.07816069 = score(doc=2292,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.2669927 = fieldWeight in 2292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2292)
      0.2 = coord(1/5)
    
    Abstract
    Providing efficient access to information is a crucial library mission. Subject classification is one of the major pillars that guarantees the accessibility of records in libraries. In this paper we discuss the need to associate person IDs and URIs with subjects when a named person happens to be the subject of the document. This is often the case with biographies, schools of thought in philosophy, politics, art, and literary criticism. Using Semantic Web compliant data in subject name headings enhances the ability to collocate topics about a person. Also, in retrieval, books about a person would be easily linked to works by that same person. In the context of the Semantic Web, it is expected that, as the available information grows, one would be more effective in the task of information retrieval. Information about a person or, as in the case of this paper, about a researcher exists in various databases, which can be discipline-specific or publishers' databases, and in such cases the person has an assigned identifier. Such information also exists in institutional directory databases. We argue that these various databases can be leveraged to support improved discoverability and retrieval of research output for individual authors and institutions, as well as works about those authors.
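    A minimal sketch of the proposal above (attach an identifier/URI, rather than only a literal heading, to a personal-name subject), using Python/rdflib. The record URI, the VIAF-style identifier and the name are placeholders, not data from the article.

```python
# Sketch (placeholder URIs): a record whose subject is a person identified by an
# authority URI, so that works BY and works ABOUT the person can be collocated
# through the same identifier.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

g = Graph()
record = URIRef("http://example.org/records/biography-17")  # invented record URI
person = URIRef("http://viaf.org/viaf/000000000")           # placeholder VIAF URI

g.add((record, DCTERMS.subject, person))        # URI-based subject heading
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Example, Author")))  # placeholder name form

print(g.serialize(format="turtle"))
```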
  19. Veltman, K.H.: Towards a Semantic Web for culture 0.02
    0.01512439 = product of:
      0.07562195 = sum of:
        0.07562195 = weight(_text_:objects in 4040) [ClassicSimilarity], result of:
          0.07562195 = score(doc=4040,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.23489517 = fieldWeight in 4040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=4040)
      0.2 = coord(1/5)
    
    Abstract
    Today's semantic web deals with meaning in a very restricted sense and offers static solutions. This is adequate for many scientific, technical purposes and for business transactions requiring machine-to-machine communication, but does not answer the needs of culture. Science, technology and business are concerned primarily with the latest findings, the state of the art, i.e. the paradigm or dominant world-view of the day. In this context, history is considered non-essential because it deals with things that are out of date. By contrast, culture faces a much larger challenge, namely, to re-present changes in ways of knowing; changing meanings in different places at a given time (synchronically) and over time (diachronically). Culture is about both objects and the commentaries on them; about a cumulative body of knowledge; about collective memory and heritage. Here, history plays a central role and older does not mean less important or less relevant. Hence, a Leonardo painting that is 400 years old, or a Greek statue that is 2500 years old, typically have richer commentaries and are often more valuable than their contemporary equivalents. In this context, the science of meaning (semantics) is necessarily much more complex than semantic primitives. A semantic web in the cultural domain must enable us to trace how meaning and knowledge organisation have evolved historically in different cultures. This paper examines five issues to address this challenge: 1) different world-views (i.e. a shift from substance to function and from ontology to multiple ontologies); 2) developments in definitions and meaning; 3) distinctions between words and concepts; 4) new classes of relations; and 5) dynamic models of knowledge organisation. These issues reveal that historical dimensions of cultural diversity in knowledge organisation are also central to classification of biological diversity. New ways are proposed of visualizing knowledge using a time/space horizon to distinguish between universals and particulars. It is suggested that new visualization methods make possible a history of questions as well as of answers, thus enabling dynamic access to cultural and historical dimensions of knowledge. Unlike earlier media, which were limited to recording factual dimensions of collective memory, digital media enable us to explore theories, ways of perceiving, ways of knowing; to enter into other mindsets and world-views and thus to attain novel insights and new levels of tolerance. Some practical consequences are outlined.
  20. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.013130453 = product of:
      0.065652266 = sum of:
        0.065652266 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
          0.065652266 = score(doc=3376,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.30952093 = fieldWeight in 3376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=3376)
      0.2 = coord(1/5)
    
    Date
    31. 7.2010 16:58:22

Languages

  • e 40
  • d 7
  • f 1

Types

  • a 26
  • el 14
  • m 11
  • s 4
  • n 1
  • x 1