Search (60 results, page 1 of 3)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
  1. OWL Web Ontology Language Guide (2004) 0.06
    0.058129102 = product of:
      0.14532275 = sum of:
        0.13287885 = weight(_text_:readable in 4687) [ClassicSimilarity], result of:
          0.13287885 = score(doc=4687,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.47999436 = fieldWeight in 4687, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4687)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 4687) [ClassicSimilarity], result of:
              0.024887787 = score(doc=4687,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 4687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4687)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
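    The breakdown above is Lucene's "explain" output for ClassicSimilarity, a TF-IDF model. As a sanity check, a minimal Python sketch (assuming the standard ClassicSimilarity formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), and coord() scaling by the fraction of query parts matched) reproduces this entry's final score from the constants shown:

```python
import math

def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.04505818   # from the breakdown above
field_norm = 0.0390625

# _text_:readable, freq=4 in doc 4687
idf_r = idf(257, 44218)                                     # ~6.1439276
w_readable = (idf_r * query_norm) * (tf(4.0) * idf_r * field_norm)

# _text_:data, freq=2, halved by coord(1/2) in its subtree
idf_d = idf(5088, 44218)                                    # ~3.1620505
w_data = (idf_d * query_norm) * (tf(2.0) * idf_d * field_norm) * 0.5

score = (w_readable + w_data) * 0.4   # coord(2/5): 2 of 5 query parts matched
print(score)   # ~0.05813, i.e. the listed 0.058129102 up to float rounding
```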
    
    Abstract
    The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web-accessible resources. These descriptions must exist in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes inherent in Web documents and applications, and the relations between them. This document demonstrates the use of the OWL language to: formalize a domain by defining classes and properties of those classes; define individuals and assert properties about them; and reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language. The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
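    The three tasks the Guide demonstrates can be sketched in a few lines of Python with the rdflib library. This is a minimal illustration, not the Guide's own wine ontology; the namespace and all class, property, and individual names are hypothetical:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# 1. Formalize a domain: two classes and a property relating them
g.add((EX.Winery, RDF.type, OWL.Class))
g.add((EX.Wine, RDF.type, OWL.Class))
g.add((EX.hasMaker, RDF.type, OWL.ObjectProperty))
g.add((EX.hasMaker, RDFS.domain, EX.Wine))
g.add((EX.hasMaker, RDFS.range, EX.Winery))

# 2. Define individuals and assert properties about them
g.add((EX.exampleWinery, RDF.type, EX.Winery))
g.add((EX.exampleWine, RDF.type, EX.Wine))
g.add((EX.exampleWine, EX.hasMaker, EX.exampleWinery))

# 3. Reasoning over these axioms is delegated to an OWL reasoner;
#    rdflib itself only stores and serializes the triples.
print(g.serialize(format="turtle"))
```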
  2. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.04
    0.04256137 = product of:
      0.106403425 = sum of:
        0.09395953 = weight(_text_:readable in 3297) [ClassicSimilarity], result of:
          0.09395953 = score(doc=3297,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 3297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 3297) [ClassicSimilarity], result of:
              0.024887787 = score(doc=3297,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 3297, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3297)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They are also used to generate natural language, integrate information intelligently, provide semantic-based access to the Internet, and extract information from texts, in addition to many other applications in which they explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way" [1]. This definition does not differ much from one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization" [2]. In fact, new ontology-based applications and knowledge architectures are being developed for this new Web. A common claim of all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we do not decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
  3. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.03
    0.033142954 = product of:
      0.082857385 = sum of:
        0.030179864 = weight(_text_:bibliographic in 4796) [ClassicSimilarity], result of:
          0.030179864 = score(doc=4796,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.17204987 = fieldWeight in 4796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
        0.05267752 = product of:
          0.10535504 = sum of:
            0.10535504 = weight(_text_:data in 4796) [ClassicSimilarity], result of:
              0.10535504 = score(doc=4796,freq=56.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.7394569 = fieldWeight in 4796, product of:
                  7.483315 = tf(freq=56.0), with freq of:
                    56.0 = termFreq=56.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4796)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are:
    - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
    - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
    - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies (a sketch follows below);
    - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
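    To make the data designers' recommendations concrete (mint URIs for items, re-use existing Linked Data vocabularies), here is a hedged sketch that expresses one bibliographic record with rdflib and DCMI Terms. The item URI and library domain are hypothetical, and the id.loc.gov subject URI stands in for any external authority:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, RDF

# Hypothetical URI minted for one item in a library dataset
book = URIRef("http://example-library.org/id/bib/0001")

g = Graph()
g.bind("dcterms", DCTERMS)

# Re-use an existing vocabulary (DCMI Terms) instead of local properties
g.add((book, RDF.type, DCTERMS.BibliographicResource))
g.add((book, DCTERMS.title, Literal("OWL Web Ontology Language Guide")))
g.add((book, DCTERMS.issued, Literal("2004")))
# Link to an external authority rather than storing a local text string
g.add((book, DCTERMS.subject,
       URIRef("http://id.loc.gov/authorities/subjects/sh2002000569")))

print(g.serialize(format="turtle"))
```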
  4. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.03
    0.029792959 = product of:
      0.074482396 = sum of:
        0.06577167 = weight(_text_:readable in 53) [ClassicSimilarity], result of:
          0.06577167 = score(doc=53,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.23758507 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
        0.008710725 = product of:
          0.01742145 = sum of:
            0.01742145 = weight(_text_:data in 53) [ClassicSimilarity], result of:
              0.01742145 = score(doc=53,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.12227618 = fieldWeight in 53, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=53)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    # Community action on individual ontologies We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version - Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo. The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.
    # How does Archivo work? Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus. # Archivo's mission Archivo's mission is to improve the FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated and machine-readable, and it enforces interoperability with its star rating. - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology. - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentralized web of ontologies.
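    Archivo's actual rating pipeline is not reproduced here; the sketch below only illustrates the kind of automated, machine-actionable check that FAIRness implies, namely whether an ontology URI dereferences via content negotiation to a machine-readable RDF serialization. The function name and the list of accepted media types are assumptions for the example:

```python
import requests

RDF_TYPES = ("text/turtle", "application/rdf+xml", "application/n-triples")

def dereferences_machine_readably(ontology_uri: str) -> bool:
    """Crude stand-in for one Archivo-style check: does the ontology URI
    resolve, via content negotiation, to an RDF serialization?"""
    try:
        r = requests.get(ontology_uri,
                         headers={"Accept": ", ".join(RDF_TYPES)},
                         timeout=10, allow_redirects=True)
    except requests.RequestException:
        return False
    content_type = r.headers.get("Content-Type", "")
    return r.ok and any(t in content_type for t in RDF_TYPES)

# Example: the SKOS core vocabulary is served machine-readably
print(dereferences_machine_readably("http://www.w3.org/2004/02/skos/core"))
```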
  5. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    0.02216464 = product of:
      0.11082319 = sum of:
        0.11082319 = sum of:
          0.049775574 = weight(_text_:data in 5576) [ClassicSimilarity], result of:
            0.049775574 = score(doc=5576,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.34936053 = fieldWeight in 5576, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.078125 = fieldNorm(doc=5576)
          0.061047617 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
            0.061047617 = score(doc=5576,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.38690117 = fieldWeight in 5576, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5576)
      0.2 = coord(1/5)
    
    Abstract
    This publication describes the relationships between knowledge-rich, concept-oriented terminologies, ontologies, Big Data, and artificial intelligence.
    Date
    13.12.2017 14:17:22
  6. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.02
    0.01773171 = product of:
      0.08865855 = sum of:
        0.08865855 = sum of:
          0.03982046 = weight(_text_:data in 318) [ClassicSimilarity], result of:
            0.03982046 = score(doc=318,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2794884 = fieldWeight in 318, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0625 = fieldNorm(doc=318)
          0.04883809 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
            0.04883809 = score(doc=318,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.30952093 = fieldWeight in 318, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=318)
      0.2 = coord(1/5)
    
    Date
    22. 5.2021 12:43:05
    Source
    Open Password. 2021, Nr.925 vom 21.05.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzI5NSwiZDdlZGY4MTk0NWJhIiwwLDAsMjY1LDFd]
  7. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.015772909 = product of:
      0.078864545 = sum of:
        0.078864545 = sum of:
          0.042235978 = weight(_text_:data in 4649) [ClassicSimilarity], result of:
            0.042235978 = score(doc=4649,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.29644224 = fieldWeight in 4649, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.036628567 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.036628567 = score(doc=4649,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.2 = coord(1/5)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
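    The two Web-based measures named in the abstract have compact standard formulas: the Normalized Google Distance of Cilibrasi and Vitányi, and pointwise mutual information (PMI). A minimal sketch, with made-up counts purely for illustration:

```python
from math import log

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance from page-hit counts:
    (max(log fx, log fy) - log fxy) / (log n - min(log fx, log fy))"""
    return ((max(log(fx), log(fy)) - log(fxy)) /
            (log(n) - min(log(fx), log(fy))))

def pmi(px, py, pxy):
    """Pointwise mutual information: log(p(x,y) / (p(x) * p(y)))"""
    return log(pxy / (px * py))

print(ngd(fx=10_000, fy=8_000, fxy=3_000, n=10_000_000))  # ~0.17: closely related
print(pmi(px=0.001, py=0.0008, pxy=0.0003))               # >0: co-occur above chance
```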
  8. Priss, U.: Description logic and faceted knowledge representation (1999) 0.01
    0.0132987825 = product of:
      0.06649391 = sum of:
        0.06649391 = sum of:
          0.029865343 = weight(_text_:data in 2655) [ClassicSimilarity], result of:
            0.029865343 = score(doc=2655,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2096163 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.036628567 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.036628567 = score(doc=2655,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.2 = coord(1/5)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  9. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.013144091 = product of:
      0.065720454 = sum of:
        0.065720454 = sum of:
          0.035196647 = weight(_text_:data in 4553) [ClassicSimilarity], result of:
            0.035196647 = score(doc=4553,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.24703519 = fieldWeight in 4553, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.030523809 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.030523809 = score(doc=4553,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.2 = coord(1/5)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  10. Wright, H.: Semantic Web and ontologies (2018) 0.01
    0.009218565 = product of:
      0.046092827 = sum of:
        0.046092827 = product of:
          0.092185654 = sum of:
            0.092185654 = weight(_text_:data in 80) [ClassicSimilarity], result of:
              0.092185654 = score(doc=80,freq=14.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.64702475 = fieldWeight in 80, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=80)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The Semantic Web and ontologies can help archaeologists combine and share data, making those data more open and useful. Archaeologists create diverse types of data, using a wide variety of technologies and methodologies. As in all research domains, these data are increasingly digital. The creation of data that are now openly and persistently available from disparate sources has also inspired efforts to bring archaeological resources together and make them more interoperable. This allows functionality such as federated cross-search across different datasets, and the mapping of heterogeneous data to authoritative structures to build a single data source. Ontologies provide the structure and relationships for Semantic Web data, and have been developed for use in cultural heritage applications generally, and archaeology specifically. A variety of online resources for archaeology now incorporate Semantic Web principles and technologies.
  11. Smith, D.A.; Shadbolt, N.R.: FacetOntology : expressive descriptions of facets in the Semantic Web (2012) 0.01
    0.007870209 = product of:
      0.039351046 = sum of:
        0.039351046 = product of:
          0.07870209 = sum of:
            0.07870209 = weight(_text_:data in 2208) [ClassicSimilarity], result of:
              0.07870209 = score(doc=2208,freq=20.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5523875 = fieldWeight in 2208, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2208)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The formal structure of the information on the Semantic Web lends itself to faceted browsing, an information retrieval method where users can filter results based on the values of properties ("facets"). Numerous faceted browsers have been created to browse RDF and Linked Data, but these systems use their own ontologies for defining how data is queried to populate their facets. Since the source data is the same format across these systems (specifically, RDF), we can unify the different methods of describing how to query the underlying data, to enable compatibility across systems, and provide an extensible base ontology for future systems. To this end, we present FacetOntology, an ontology that defines how to query data to form a faceted browser, and a number of transformations and filters that can be applied to data before it is shown to users. FacetOntology overcomes limitations in the expressivity of existing work, by enabling the full expressivity of SPARQL when selecting data for facets. By applying a FacetOntology definition to data, a set of facets is specified, each with queries and filters to source RDF data, which enables faceted browsing systems to be created using that RDF data.
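    FacetOntology itself is not reproduced here, but the idea it builds on (one facet is populated by one SPARQL query over the source RDF) can be sketched with rdflib. The data file and the choice of dcterms:subject as the faceted property are assumptions:

```python
from rdflib import Graph

g = Graph()
g.parse("catalogue.ttl")   # hypothetical RDF data source

# One facet = one query: enumerate the values of the faceted property
# together with the number of results each value would filter to.
FACET_QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?value (COUNT(?item) AS ?hits)
WHERE { ?item dcterms:subject ?value }
GROUP BY ?value
ORDER BY DESC(?hits)
"""

for value, hits in g.query(FACET_QUERY):
    print(value, hits)
```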
  12. Glimm, B.; Hogan, A.; Krötzsch, M.; Polleres, A.: OWL: Yet to arrive on the Web of Data? (2012) 0.01
    0.007315486 = product of:
      0.03657743 = sum of:
        0.03657743 = product of:
          0.07315486 = sum of:
            0.07315486 = weight(_text_:data in 4798) [ClassicSimilarity], result of:
              0.07315486 = score(doc=4798,freq=12.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.513453 = fieldWeight in 4798, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4798)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Seven years on from OWL becoming a W3C recommendation, and two years on from the more recent OWL 2 W3C recommendation, OWL has still experienced only patchy uptake on the Web. Although certain OWL features (like owl:sameAs) are very popular, other features of OWL are largely neglected by publishers in the Linked Data world. This may suggest that despite the promise of easy implementations and the proposal of tractable profiles suggested in OWL's second version, there is still no "right" standard fragment for the Linked Data community. In this paper, we (1) analyse uptake of OWL on the Web of Data, (2) gain insights into the OWL fragment that is actually used/usable on the Web, where we arrive at the conclusion that this fragment is likely to be a simplified profile based on OWL RL, (3) propose and discuss such a new fragment, which we call OWL LD (for Linked Data).
    Content
    Workshop paper: Linked Data on the Web (LDOW2012), April 16, 2012, Lyon, France; cf. http://events.linkeddata.org/ldow2012/.
  13. Halpin, H.; Hayes, P.J.: When owl:sameAs isn't the same : an analysis of identity links on the Semantic Web (2010) 0.01
    0.007315486 = product of:
      0.03657743 = sum of:
        0.03657743 = product of:
          0.07315486 = sum of:
            0.07315486 = weight(_text_:data in 4834) [ClassicSimilarity], result of:
              0.07315486 = score(doc=4834,freq=12.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.513453 = fieldWeight in 4834, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4834)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link', a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
    Source
    Linked Data on the Web (LDOW2010). Proceedings of the WWW2010 Workshop on Linked Data on the Web. Raleigh, USA, April 27, 2010. Edited by Christian Bizer et al.
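    The paper's core concern can be made concrete with a few lines of rdflib: once owl:sameAs links two resources, naive inference ("smushing") transfers every assertion from one to the other, whether or not the identity was meant that strongly. All URIs below are hypothetical:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, OWL

EX = Namespace("http://example.org/")
g = Graph()

# Two descriptions that are "the same" only in a loose sense:
# a person and a document about that person
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.aliceArticle, EX.lastEdited, Literal("2010-04-27")))
g.add((EX.alice, OWL.sameAs, EX.aliceArticle))

# Naive one-directional smushing: copy every assertion about the
# object of owl:sameAs onto its subject
for s, _, o in list(g.triples((None, OWL.sameAs, None))):
    for _, p, v in list(g.triples((o, None, None))):
        g.add((s, p, v))

# The person has now acquired the document's editing date
print((EX.alice, EX.lastEdited, Literal("2010-04-27")) in g)   # True
```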
  14. Dextre Clarke, S.G.; Will, L.D.; Cochard, N.: The BS8723 thesaurus data model and exchange format, and its relationship to SKOS (2008) 0.01
    0.00696858 = product of:
      0.0348429 = sum of:
        0.0348429 = product of:
          0.0696858 = sum of:
            0.0696858 = weight(_text_:data in 6051) [ClassicSimilarity], result of:
              0.0696858 = score(doc=6051,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.48910472 = fieldWeight in 6051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6051)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  15. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.01
    0.0068971063 = product of:
      0.03448553 = sum of:
        0.03448553 = product of:
          0.06897106 = sum of:
            0.06897106 = weight(_text_:data in 2568) [ClassicSimilarity], result of:
              0.06897106 = score(doc=2568,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.48408815 = fieldWeight in 2568, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2568)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In the wake of the ever greater availability of data (Big Data) and rapid advances in data-driven machine learning, we have witnessed breakthroughs in artificial intelligence in recent years. This talk examines these developments, particularly with regard to the automatic analysis of text data. Using simple examples, we illustrate how modern text analysis works and show, again by example, which practical applications arise today in industries such as publishing, finance, and consulting.
    Source
    https://login.mailingwork.de/public/a_5668_LVrTK/file/data/1125_Textanalyse_Christian-Bauckhage.pdf
    Theme
    Data Mining
  16. Aitken, S.; Reid, S.: Evaluation of an ontology-based information retrieval tool (2000) 0.01
    0.0068971063 = product of:
      0.03448553 = sum of:
        0.03448553 = product of:
          0.06897106 = sum of:
            0.06897106 = weight(_text_:data in 2862) [ClassicSimilarity], result of:
              0.06897106 = score(doc=2862,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.48408815 = fieldWeight in 2862, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2862)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This paper evaluates the use of an explicit domain ontology in an information retrieval tool. The evaluation compares the performance of ontology-enhanced retrieval with keyword retrieval for a fixed set of queries across several data sets. The robustness of the IR approach is assessed by comparing the performance of the tool on the original data set with that on previously unseen data.
  17. Zeng, M.L.; Zumer, M.: Introducing FRSAD and mapping it with SKOS and other models (2009) 0.01
    0.006678094 = product of:
      0.03339047 = sum of:
        0.03339047 = product of:
          0.06678094 = sum of:
            0.06678094 = weight(_text_:data in 3150) [ClassicSimilarity], result of:
              0.06678094 = score(doc=3150,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46871632 = fieldWeight in 3150, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3150)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The Functional Requirements for Subject Authority Records (FRSAR) Working Group was formed in 2005 as the third IFLA working group of the FRBR family to address subject authority data issues and to investigate the direct and indirect uses of subject authority data by a wide range of users. This paper introduces the Functional Requirements for Subject Authority Data (FRSAD), the model developed by the FRSAR Working Group, and discusses it in the context of other related conceptual models defined in the specifications during recent years, including the British Standard BS8723-5: Structured vocabularies for information retrieval - Guide Part 5: Exchange formats and protocols for interoperability, W3C's SKOS Simple Knowledge Organization System Reference, and OWL Web Ontology Language Reference. These models enable the consideration of the functions of subject authority data and concept schemes at a higher level that is independent of any implementation, system, or specific context, while allowing us to focus on the semantics, structures, and interoperability of subject authority data.
  18. Halpin, H.; Hayes, P.J.; McCusker, J.P.; McGuinness, D.L.; Thompson, H.S.: When owl:sameAs isn't the same : an analysis of identity in linked data (2010) 0.01
    0.006678094 = product of:
      0.03339047 = sum of:
        0.03339047 = product of:
          0.06678094 = sum of:
            0.06678094 = weight(_text_:data in 4703) [ClassicSimilarity], result of:
              0.06678094 = score(doc=4703,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46871632 = fieldWeight in 4703, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4703)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is, however, ongoing discussion about its use, and potential misuse, particularly with regards to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially-opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.
  19. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.01
    0.0061047617 = product of:
      0.030523809 = sum of:
        0.030523809 = product of:
          0.061047617 = sum of:
            0.061047617 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.061047617 = score(doc=539,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    26.12.2011 13:22:07
  20. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.01
    0.006034968 = product of:
      0.03017484 = sum of:
        0.03017484 = product of:
          0.06034968 = sum of:
            0.06034968 = weight(_text_:data in 3061) [ClassicSimilarity], result of:
              0.06034968 = score(doc=3061,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.42357713 = fieldWeight in 3061, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3061)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by Semantic Web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
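    A hedged sketch of what the guide describes, using rdflib and the SKOS vocabulary: metadata about the scheme as a skos:ConceptScheme, classes as skos:Concepts with notations, and the hierarchy as skos:broader links. The scheme URI and the two sample classes are illustrative only:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/classification/")   # hypothetical scheme
g = Graph()
g.bind("skos", SKOS)

# Metadata about the scheme itself
g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
g.add((EX.scheme, SKOS.prefLabel, Literal("Example Classification", lang="en")))

# Content and structure: classes become skos:Concepts, the class
# hierarchy becomes skos:broader links
for notation, label, parent in [("600", "Technology", None),
                                ("620", "Engineering", "600")]:
    concept = EX[notation]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.notation, Literal(notation)))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((concept, SKOS.inScheme, EX.scheme))
    if parent:
        g.add((concept, SKOS.broader, EX[parent]))

print(g.serialize(format="turtle"))
```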

Languages

  • e 55
  • d 4

Types

  • a 29
  • n 5
  • r 2
  • p 1
  • s 1