Search (37 results, page 1 of 2)

  • language_ss:"e"
  • theme_ss:"Semantic Web"
  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
  1. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.01
    Abstract
     One vision of the Semantic Web is that it will be much like the Web we know today, except that documents will be enriched by annotations in machine-understandable markup. These annotations will provide metadata about the documents as well as machine-interpretable statements capturing some of the meaning of document content. We discuss how the information retrieval paradigm might be recast in such an environment. We suggest that retrieval can be tightly bound to inference. Doing so makes today's Web search engines useful to Semantic Web inference engines, and causes improvements in either retrieval or inference to lead directly to improvements in the other.
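     For illustration, the proposed coupling could be sketched as below (Python; rdflib and owlrl stand in for the paper's unspecified machinery, and retrieve_and_infer is an invented name): a conventional search engine supplies candidate annotated documents, and inference over their harvested markup answers the structured query.

       # Minimal sketch, assuming hit_urls comes from an ordinary Web search
       # engine and each hit carries machine-readable RDF annotations.
       from rdflib import Graph
       import owlrl

       def retrieve_and_infer(hit_urls, sparql_query):
           g = Graph()
           for url in hit_urls:
               g.parse(url)  # harvest the document's RDF annotations
           # Inference step: expand the harvested statements under RDFS
           # semantics, so the query also matches entailed triples.
           owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
           return list(g.query(sparql_query))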
    Date
    12. 2.2011 17:35:22
  2. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
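     For orientation, the two Web-based measures named above can be written out as follows (a sketch of the standard formulas, not the authors' code; hits is a hypothetical function returning the search engine's page count for one term or a term pair, and n_pages the size of its index).

       from math import log

       def ngd(x, y, hits, n_pages):
           # Normalized Google Distance: 0 when x and y always co-occur,
           # growing as the terms share fewer pages.
           fx, fy, fxy = log(hits(x)), log(hits(y)), log(hits(x, y))
           return (max(fx, fy) - fxy) / (log(n_pages) - min(fx, fy))

       def pmi(x, y, hits, n_pages):
           # Pointwise mutual information under the page-count
           # approximation p(x) = hits(x) / n_pages.
           p_x, p_y, p_xy = hits(x) / n_pages, hits(y) / n_pages, hits(x, y) / n_pages
           return log(p_xy / (p_x * p_y))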
    Date
    26.12.2011 13:40:22
  3. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    Abstract
     Semantic Web knowledge representation standards, in particular RDF and OWL, often come endowed with a formal semantics, which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete, and terminating, i.e., correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated that promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
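     The evaluation regime this implies fits in a few lines: treat the deductively entailed triples as the gold standard and score the learned reasoner's output against them (a schematic sketch, not the paper's code).

       def precision_recall(predicted, entailed):
           # predicted: triples proposed by the trained deep learning system;
           # entailed: triples licensed by the deductive gold standard.
           true_pos = len(predicted & entailed)
           precision = true_pos / len(predicted) if predicted else 0.0
           recall = true_pos / len(entailed) if entailed else 0.0
           return precision, recall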
    Date
    16.11.2018 14:22:01
  4. OWL Web Ontology Language Test Cases (2004) 0.00
    Date
    14. 8.2011 13:33:22
  5. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.00
    Content
     Contains an overview of models, systems, and projects
  6. RDF/XML Syntax Specification (Revised) : W3C Recommendation 10 February 2004 (2004) 0.00
    Abstract
     The Resource Description Framework (RDF) is a general-purpose language for representing information in the Web. This document defines an XML syntax for RDF called RDF/XML in terms of Namespaces in XML, the XML Information Set and XML Base. The formal grammar for the syntax is annotated with actions generating triples of the RDF graph as defined in RDF Concepts and Abstract Syntax. The triples are written using the N-Triples RDF graph serialization format, which enables more precise recording of the mapping in a machine-processable form. The mappings are recorded as test cases, gathered and published in RDF Test Cases.
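     A small illustration of the syntax-to-triples mapping, with the rdflib library standing in for the grammar's actions (the sample document is adapted from the specification's own namespaces; treat it as a sketch):

       from rdflib import Graph

       rdf_xml = """<?xml version="1.0"?>
       <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                xmlns:dc="http://purl.org/dc/elements/1.1/">
         <rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar">
           <dc:title>RDF/XML Syntax Specification (Revised)</dc:title>
         </rdf:Description>
       </rdf:RDF>"""

       g = Graph()
       g.parse(data=rdf_xml, format="xml")  # parse RDF/XML into a graph
       print(g.serialize(format="nt"))      # one N-Triples line per generated triple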
  7. Mirizzi, R.: Exploratory browsing in the Web of Data (2011) 0.00
    Abstract
     Thanks to the recent Linked Data initiative, the foundations of the Semantic Web have been built. Shared, open and linked RDF datasets give us the possibility to exploit both the strong theoretical results and the robust technologies and tools developed since the seminal paper on the Semantic Web appeared in 2001. In a simplistic way, we may think of the Semantic Web as an ultra-large distributed database that we can query to get information coming from different sources. In fact, every dataset exposes a SPARQL endpoint to make the data accessible through exact queries. If we know the URI of the famous actress Nicole Kidman in DBpedia, we may retrieve all the movies she has acted in with a simple SPARQL query. We may then aggregate this information with user ratings and genres from IMDB. Even though these are very exciting results and applications, there is much more behind the curtains. Datasets come with a description of their schema structured in an ontological way. Resources refer to classes, which are in turn organized in well-structured and rich ontologies. Exploiting this further feature, we go beyond the notion of a distributed database and can refer to the Semantic Web as a distributed knowledge base. If our knowledge base states that Paris is located in France (ontological level) and that Moulin Rouge! is set in Paris (data level), we may query the Semantic Web (interpreted as a set of interconnected datasets and related ontologies) to return all the movies starring Nicole Kidman that are set in France, and Moulin Rouge! will be in the final result set. The ontological level makes it possible to infer new relations among the data.
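     Such a query might look like the sketch below (Python with the SPARQLWrapper library against DBpedia's public endpoint; the property URI is from the DBpedia ontology and the query shape is illustrative, not taken from the thesis).

       from SPARQLWrapper import SPARQLWrapper, JSON

       sparql = SPARQLWrapper("http://dbpedia.org/sparql")
       sparql.setQuery("""
           SELECT ?film WHERE {
               ?film <http://dbpedia.org/ontology/starring>
                     <http://dbpedia.org/resource/Nicole_Kidman> .
           }
       """)
       sparql.setReturnFormat(JSON)
       for row in sparql.query().convert()["results"]["bindings"]:
           print(row["film"]["value"])  # film URIs, e.g. ...Moulin_Rouge!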
     The Linked Data initiative and the state of the art in semantic technologies have led to a wave of brand-new search and mash-up applications. The basic idea is to have smarter lookup services for a huge, distributed and social knowledge base. All these applications catch and (re)propose, under a semantic-data perspective, the view of the classical Web as a distributed collection of documents to retrieve. The interlinked nature of the Web, and consequently of the Semantic Web, is exploited (just) to collect and aggregate data coming from different sources. Of course, this is a big step forward in search and Web technologies, but if we limit our investigation to retrieval tasks, we miss another important feature of the current Web: browsing, and in particular exploratory browsing (a.k.a. exploratory search). Thanks to its hyperlinked nature, the Web defined a new way of browsing documents and knowledge: selection by lookup, navigation and trial-and-error tactics were, and still are, exploited by users to search for relevant information satisfying some initial requirements. The basic assumptions behind a lookup search, typical of Information Retrieval (IR) systems, are no longer valid in an exploratory browsing context. An IR system, such as a search engine, assumes that the user has a clear picture of what she is looking for, and that she knows the terminology of the specific knowledge space. On the other side, as argued in the literature, the main challenges in exploratory search can be summarized as: support querying and rapid query refinement; offer facets and metadata-based result filtering; leverage search context; support learning and understanding; offer visualization to support insight/decision making; facilitate collaboration. In Section 3 we will show two applications for exploratory search in the Semantic Web addressing some of the above challenges.
  8. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.00
    Abstract
     Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts, in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web - known as the Semantic Web - which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way."1 This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization."2 In fact, new ontology-based applications and knowledge architectures are being developed for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we don't decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. We analyze the most representative ontology languages created for the Web and compare them using a common framework.
  9. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.00
    Abstract
     In this paper, I have traced a series of evolutionary adaptations of FOL (first-order logic) motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins, as it were, with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  10. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.00
    Abstract
     In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art technologies in the Semantic Web and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Content
     Thesis submitted to the Graduate School of Natural and Applied Sciences of Middle East Technical University in partial fulfilment of the requirements for the degree of Master of Science in Computer Engineering (XII, 57 p.)
  11. RDF Vocabulary Description Language 1.0 : RDF Schema (2004) 0.00
    Abstract
    The Resource Description Framework (RDF) is a general-purpose language for representing information in the Web. This specification describes how to use RDF to describe RDF vocabularies. This specification defines a vocabulary for this purpose and defines other built-in RDF vocabulary initially specified in the RDF Model and Syntax Specification.
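     The core idiom, describing a vocabulary in RDF itself, can be sketched with the rdflib library (the example.org names are invented for illustration):

       from rdflib import Graph, Namespace, RDF, RDFS

       EX = Namespace("http://example.org/vocab#")
       g = Graph()
       g.add((EX.Painting, RDF.type, RDFS.Class))
       g.add((EX.Artwork, RDF.type, RDFS.Class))
       g.add((EX.Painting, RDFS.subClassOf, EX.Artwork))  # class hierarchy
       g.add((EX.creator, RDFS.domain, EX.Artwork))       # property description
       print(g.serialize(format="turtle"))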
  12. SKOS Simple Knowledge Organization System Primer (2009) 0.00
    Abstract
     SKOS (Simple Knowledge Organization System) provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other types of controlled vocabulary. As an application of the Resource Description Framework (RDF), SKOS allows concepts to be documented, linked and merged with other data, while still being composed, integrated and published on the World Wide Web. This document is an implementor's guide for those who would like to represent their concept scheme using SKOS. In basic SKOS, conceptual resources (concepts) can be identified using URIs, labelled with strings in one or more natural languages, documented with various types of notes, semantically related to each other in informal hierarchies and association networks, and aggregated into distinct concept schemes. In advanced SKOS, conceptual resources can be mapped to conceptual resources in other schemes and grouped into labelled or ordered collections. Concept labels can also be related to each other. Finally, the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice.
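     A sketch of the "basic SKOS" features just listed, using the rdflib library (the example.org URIs are invented):

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/scheme/")
       g = Graph()
       g.add((EX.animals, RDF.type, SKOS.ConceptScheme))
       g.add((EX.cat, RDF.type, SKOS.Concept))
       g.add((EX.cat, SKOS.prefLabel, Literal("cat", lang="en")))
       g.add((EX.cat, SKOS.prefLabel, Literal("Katze", lang="de")))  # multilingual labels
       g.add((EX.cat, SKOS.scopeNote, Literal("Domestic cats only.", lang="en")))
       g.add((EX.cat, SKOS.broader, EX.mammal))    # informal hierarchy
       g.add((EX.cat, SKOS.inScheme, EX.animals))  # aggregation into a scheme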
  13. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.00
    Abstract
     As an extension to the current Web, the Semantic Web will not only contain structured data with machine-understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index Semantic Web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current Web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
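     The indexing idea can be illustrated with a toy in-memory version: one inverted index holds both structural ("type") terms and keyword terms, and a hybrid query is evaluated by intersecting posting lists, as an IR engine would (Semplore's actual index layout is more involved; the names here are invented).

       from collections import defaultdict

       index = defaultdict(set)  # term -> posting list (set of resource ids)

       def add_resource(rid, types, text):
           for t in types:
               index["type:" + t].add(rid)  # structural postings
           for w in text.lower().split():
               index["kw:" + w].add(rid)    # keyword postings

       def hybrid_query(types, keywords):
           # Resources of all given types whose text contains all keywords.
           postings = [index["type:" + t] for t in types]
           postings += [index["kw:" + w.lower()] for w in keywords]
           return set.intersection(*postings) if postings else set()

       add_resource("r1", ["Movie"], "a musical set in Paris")
       print(hybrid_query(["Movie"], ["paris"]))  # {'r1'}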
    Series
    Lecture notes in computer science; 4825
  14. Veltman, K.H.: Towards a Semantic Web for culture 0.00
    Abstract
    Today's semantic web deals with meaning in a very restricted sense and offers static solutions. This is adequate for many scientific, technical purposes and for business transactions requiring machine-to-machine communication, but does not answer the needs of culture. Science, technology and business are concerned primarily with the latest findings, the state of the art, i.e. the paradigm or dominant world-view of the day. In this context, history is considered non-essential because it deals with things that are out of date. By contrast, culture faces a much larger challenge, namely, to re-present changes in ways of knowing; changing meanings in different places at a given time (synchronically) and over time (diachronically). Culture is about both objects and the commentaries on them; about a cumulative body of knowledge; about collective memory and heritage. Here, history plays a central role and older does not mean less important or less relevant. Hence, a Leonardo painting that is 400 years old, or a Greek statue that is 2500 years old, typically have richer commentaries and are often more valuable than their contemporary equivalents. In this context, the science of meaning (semantics) is necessarily much more complex than semantic primitives. A semantic web in the cultural domain must enable us to trace how meaning and knowledge organisation have evolved historically in different cultures. This paper examines five issues to address this challenge: 1) different world-views (i.e. a shift from substance to function and from ontology to multiple ontologies); 2) developments in definitions and meaning; 3) distinctions between words and concepts; 4) new classes of relations; and 5) dynamic models of knowledge organisation. These issues reveal that historical dimensions of cultural diversity in knowledge organisation are also central to classification of biological diversity. New ways are proposed of visualizing knowledge using a time/space horizon to distinguish between universals and particulars. It is suggested that new visualization methods make possible a history of questions as well as of answers, thus enabling dynamic access to cultural and historical dimensions of knowledge. Unlike earlier media, which were limited to recording factual dimensions of collective memory, digital media enable us to explore theories, ways of perceiving, ways of knowing; to enter into other mindsets and world-views and thus to attain novel insights and new levels of tolerance. Some practical consequences are outlined.
  15. Hitzler, P.; Janowicz, K.: Ontologies in a data driven world : finding the middle ground (2013) 0.00
  16. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.00
    Abstract
     The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
     Key recommendations of the report are:
     • That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
     • That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
     • That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies;
     • That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
  17. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.00
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
     This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus (AAT)) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory, or the Topic Exchange for weblog topics. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse. The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low-level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata, that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it."
  18. Miles, A.: SKOS: requirements for standardization (2006) 0.00
    Abstract
    This paper poses three questions regarding the planned development of the Simple Knowledge Organisation System (SKOS) towards W3C Recommendation status. Firstly, what is the fundamental purpose and therefore scope of SKOS? Secondly, which key software components depend on SKOS, and how do they interact? Thirdly, what is the wider technological and social context in which SKOS is likely to be applied and how might this influence design goals? Some tentative conclusions are drawn and in particular it is suggested that the scope of SKOS be restricted to the formal representation of controlled structured vocabularies intended for use within retrieval applications. However, the main purpose of this paper is to articulate the assumptions that have motivated the design of SKOS, so that these may be reviewed prior to a rigorous standardization initiative.
    Footnote
    Presented at the International Conference on Dublin Core and Metadata Applications in October 2006
  19. Schmitz-Esser, W.; Sigel, A.: Introducing terminology-based ontologies : Papers and Materials presented by the authors at the workshop "Introducing Terminology-based Ontologies" (Poli/Schmitz-Esser/Sigel) at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006 (2006) 0.00
    Abstract
    This work-in-progress communication contains the papers and materials presented by Winfried Schmitz-Esser and Alexander Sigel in the joint workshop (with Roberto Poli) "Introducing Terminology-based Ontologies" at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006.
    Content
     Contents:
     1. From traditional Knowledge Organization Systems (authority files, classifications, thesauri) towards ontologies on the web (Alexander Sigel) (tutorial; paper with slides interspersed), pp. 3-53
     2. Introduction to Integrative Cross-Language Ontology (ICLO): Formalizing and interrelating textual knowledge to enable intelligent action and knowledge sharing (Winfried Schmitz-Esser), pp. 54-113
     3. First Idea Sketch on Modelling ICLO with Topic Maps (Alexander Sigel) (work-in-progress paper; topic maps available from the author), pp. 114-130
  20. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a core of semantic knowledge unifying WordNet and Wikipedia (2007) 0.00
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
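     To make the data model concrete: facts are entity-relation-entity triples, and the Is-A hierarchy licenses inherited class membership via transitive closure, as in this toy sketch (entity and class names are illustrative; hasWonPrize is the relation named in the abstract).

       facts = {
           ("Albert_Einstein", "hasWonPrize", "Nobel_Prize"),
           ("Albert_Einstein", "isA", "physicist"),
           ("physicist", "subClassOf", "scientist"),
           ("scientist", "subClassOf", "person"),
       }

       def classes_of(entity):
           # All classes of an entity, following subClassOf transitively.
           direct = {o for s, p, o in facts if s == entity and p == "isA"}
           closed, frontier = set(direct), list(direct)
           while frontier:
               c = frontier.pop()
               for s, p, o in facts:
                   if s == c and p == "subClassOf" and o not in closed:
                       closed.add(o)
                       frontier.append(o)
           return closed

       print(classes_of("Albert_Einstein"))  # {'physicist', 'scientist', 'person'}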