Search (77 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
  1. Zeng, M.L.; Zumer, M.: Introducing FRSAD and mapping it with SKOS and other models (2009) 0.04
    0.043205686 = product of:
      0.10801421 = sum of:
        0.068473496 = weight(_text_:context in 3150) [ClassicSimilarity], result of:
          0.068473496 = score(doc=3150,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.38856095 = fieldWeight in 3150, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=3150)
        0.03954072 = weight(_text_:system in 3150) [ClassicSimilarity], result of:
          0.03954072 = score(doc=3150,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 3150, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3150)
      0.4 = coord(2/5)
    
    Abstract
    The Functional Requirements for Subject Authority Records (FRSAR) Working Group was formed in 2005 as the third IFLA working group of the FRBR family to address subject authority data issues and to investigate the direct and indirect uses of subject authority data by a wide range of users. This paper introduces the Functional Requirements for Subject Authority Data (FRSAD), the model developed by the FRSAR Working Group, and discusses it in the context of other related conceptual models defined in the specifications during recent years, including the British Standard BS8723-5: Structured vocabularies for information retrieval - Guide Part 5: Exchange formats and protocols for interoperability, W3C's SKOS Simple Knowledge Organization System Reference, and OWL Web Ontology Language Reference. These models enable the consideration of the functions of subject authority data and concept schemes at a higher level that is independent of any implementation, system, or specific context, while allowing us to focus on the semantics, structures, and interoperability of subject authority data.
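    Note on the scores: the figure printed after each result and the indented breakdown beneath it are Lucene ClassicSimilarity "explain" trees from the underlying index. For every matching query term, queryWeight (idf × queryNorm) is multiplied by fieldWeight (tf × idf × fieldNorm); the per-term products are summed and scaled by the coordination factor coord (matching terms / query terms). The short Python sketch below reproduces the score of result 1 from the printed statistics; the function and variable names are mine, not part of the retrieval system.

      import math

      MAX_DOCS = 44218          # maxDocs from the explain tree above
      QUERY_NORM = 0.04251826   # queryNorm
      FIELD_NORM = 0.046875     # fieldNorm(doc=3150)

      def idf(doc_freq, max_docs=MAX_DOCS):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
          query_weight = idf(doc_freq) * QUERY_NORM
          field_weight = math.sqrt(freq) * idf(doc_freq) * FIELD_NORM
          return query_weight * field_weight

      # "context": freq=4, docFreq=1904; "system": freq=4, docFreq=5152; coord = 2/5
      score = (2 / 5) * (term_score(4, 1904) + term_score(4, 5152))
      print(round(score, 7))    # ~0.0432057, displayed above as 0.04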
  2. Davies, J.; Weeks, R.: QuizRDF: search technology for the Semantic Web (2004) 0.03
    0.0340826 = product of:
      0.085206494 = sum of:
        0.044850416 = weight(_text_:index in 4320) [ClassicSimilarity], result of:
          0.044850416 = score(doc=4320,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.24139762 = fieldWeight in 4320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4320)
        0.04035608 = weight(_text_:system in 4320) [ClassicSimilarity], result of:
          0.04035608 = score(doc=4320,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.30135927 = fieldWeight in 4320, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4320)
      0.4 = coord(2/5)
    
    Abstract
    An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the Semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as "low threshold, high ceiling" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available.
    Source
    Hawaii International Conference on System Sciences: Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04) - Track 4, Big Island, Hawaii, January 05-January 08, 2004
  3. Davies, J.; Weeks, R.; Krohn, U.: QuizRDF: search technology for the Semantic Web (2004) 0.03
    0.032712005 = product of:
      0.08178001 = sum of:
        0.0538205 = weight(_text_:index in 4316) [ClassicSimilarity], result of:
          0.0538205 = score(doc=4316,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.28967714 = fieldWeight in 4316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=4316)
        0.027959513 = weight(_text_:system in 4316) [ClassicSimilarity], result of:
          0.027959513 = score(doc=4316,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 4316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4316)
      0.4 = coord(2/5)
    
    Abstract
    An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the Semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as "low threshold, high ceiling" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available.
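    The central idea of QuizRDF, as the two abstracts above describe it, is a single index over both the full text of documents and the literal values of their RDF annotations, so that plain keyword queries and annotation-aware queries run against the same structure. The following sketch of such a combined index is purely illustrative; the class, property and document names are assumptions, not the authors' implementation.

      from collections import defaultdict

      def tokenize(text):
          return [t.lower() for t in text.split()]

      class CombinedIndex:
          """Toy index: full text is always indexed ("low threshold"); where RDF
          annotations exist, their literals are indexed as property-qualified
          terms as well ("high ceiling")."""
          def __init__(self):
              self.postings = defaultdict(set)   # term -> set of document ids

          def add(self, doc_id, full_text, annotations=None):
              for term in tokenize(full_text):
                  self.postings[term].add(doc_id)
              for prop, literal in (annotations or {}).items():
                  for term in tokenize(literal):
                      self.postings[f"{prop}={term}"].add(doc_id)

          def search(self, *terms):
              hits = [self.postings.get(t, set()) for t in terms]
              return set.intersection(*hits) if hits else set()

      idx = CombinedIndex()
      idx.add("d1", "QuizRDF combines keyword search with RDF browsing",
              {"dc:creator": "Davies"})
      idx.add("d2", "plain keyword page without annotations")
      print(idx.search("keyword"))                      # {'d1', 'd2'}: full-text only
      print(idx.search("keyword", "dc:creator=davies"))  # {'d1'}: annotation-aware query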
  4. Miles, A.: SKOS: requirements for standardization (2006) 0.03
    0.030551035 = product of:
      0.076377586 = sum of:
        0.04841807 = weight(_text_:context in 5703) [ClassicSimilarity], result of:
          0.04841807 = score(doc=5703,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 5703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=5703)
        0.027959513 = weight(_text_:system in 5703) [ClassicSimilarity], result of:
          0.027959513 = score(doc=5703,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 5703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=5703)
      0.4 = coord(2/5)
    
    Abstract
    This paper poses three questions regarding the planned development of the Simple Knowledge Organisation System (SKOS) towards W3C Recommendation status. Firstly, what is the fundamental purpose and therefore scope of SKOS? Secondly, which key software components depend on SKOS, and how do they interact? Thirdly, what is the wider technological and social context in which SKOS is likely to be applied and how might this influence design goals? Some tentative conclusions are drawn and in particular it is suggested that the scope of SKOS be restricted to the formal representation of controlled structured vocabularies intended for use within retrieval applications. However, the main purpose of this paper is to articulate the assumptions that have motivated the design of SKOS, so that these may be reviewed prior to a rigorous standardization initiative.
  5. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.03
    0.029319597 = product of:
      0.07329899 = sum of:
        0.040348392 = weight(_text_:context in 2861) [ClassicSimilarity], result of:
          0.040348392 = score(doc=2861,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 2861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
        0.032950602 = weight(_text_:system in 2861) [ClassicSimilarity], result of:
          0.032950602 = score(doc=2861,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 2861, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
      0.4 = coord(2/5)
    
    Abstract
    Today's conventional search engines hardly provide the essential content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search (SWS) arises, an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work described here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it works one layer above what Google or any other search engine retrieves by analyzing just the keywords, since the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained will be accurate enough to satisfy the user's request, and the level of accuracy will be enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm has been applied which fetches more apt results for the user query.
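    The step the abstract credits with lifting SIEU above plain keyword search is keyword expansion against a university-domain ontology. A toy sketch of that kind of ontology-based query expansion follows; the ontology fragment and the function are hypothetical illustrations, not the SIEU implementation.

      # Hypothetical fragment of a university-domain ontology: term -> related terms.
      UNIVERSITY_ONTOLOGY = {
          "professor": ["faculty", "lecturer"],
          "course": ["subject", "module"],
          "exam": ["assessment", "test"],
      }

      def expand_query(query, ontology):
          """Expand each query keyword with its ontology neighbours before retrieval."""
          expanded = []
          for term in query.lower().split():
              expanded.append(term)
              expanded.extend(ontology.get(term, []))
          return expanded

      print(expand_query("professor course", UNIVERSITY_ONTOLOGY))
      # ['professor', 'faculty', 'lecturer', 'course', 'subject', 'module']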
  6. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.02
    0.02397943 = product of:
      0.05994857 = sum of:
        0.048427295 = weight(_text_:system in 3261) [ClassicSimilarity], result of:
          0.048427295 = score(doc=3261,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.36163113 = fieldWeight in 3261, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3261)
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
              0.03456382 = score(doc=3261,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 3261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3261)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Ontologies improve IR systems' retrieval and presentation of information, making the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without knowledge of the IR field can build an IR system with interactive components. The resulting system provides support for users not only to find the information they need but also to extend their state of knowledge. This way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
  7. Mirizzi, R.: Exploratory browsing in the Web of Data (2011) 0.02
    0.022501035 = product of:
      0.056252588 = sum of:
        0.03994287 = weight(_text_:context in 4803) [ClassicSimilarity], result of:
          0.03994287 = score(doc=4803,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22666055 = fieldWeight in 4803, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4803)
        0.016309716 = weight(_text_:system in 4803) [ClassicSimilarity], result of:
          0.016309716 = score(doc=4803,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.1217929 = fieldWeight in 4803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4803)
      0.4 = coord(2/5)
    
    Abstract
    The Linked Data initiative and the state of the art in semantic technologies have set off a wave of brand new search and mash-up applications. The basic idea is to have smarter lookup services for a huge, distributed and social knowledge base. All these applications catch and (re)propose, under a semantic data perspective, the view of the classical Web as a distributed collection of documents to retrieve. The interlinked nature of the Web, and consequently of the Semantic Web, is exploited (just) to collect and aggregate data coming from different sources. Of course, this is a big step forward in search and Web technologies, but if we limit our investigation to retrieval tasks, we miss another important feature of the current Web: browsing, and in particular exploratory browsing (a.k.a. exploratory search). Thanks to its hyperlinked nature, the Web defined a new way of browsing documents and knowledge: selection by lookup, navigation and trial-and-error tactics were, and still are, exploited by users to search for relevant information satisfying some initial requirements. The basic assumptions behind a lookup search, typical of Information Retrieval (IR) systems, are no longer valid in an exploratory browsing context. An IR system, such as a search engine, assumes that the user has a clear picture of what she is looking for and that she knows the terminology of the specific knowledge space. On the other side, as argued in the literature, the main challenges in exploratory search can be summarized as: support querying and rapid query refinement; offer facets and metadata-based result filtering; leverage search context; support learning and understanding; offer visualization to support insight/decision making; facilitate collaboration. In Section 3 we will show two applications for exploratory search in the Semantic Web addressing some of the above challenges.
  8. Jansen, B.; Browne, G.M.: Navigating information spaces : index / mind map / topic map? (2021) 0.02
    0.020296982 = product of:
      0.10148491 = sum of:
        0.10148491 = weight(_text_:index in 436) [ClassicSimilarity], result of:
          0.10148491 = score(doc=436,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.5462205 = fieldWeight in 436, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=436)
      0.2 = coord(1/5)
    
    Abstract
    This paper discusses the use of wiki technology to provide a navigation structure for a collection of newspaper clippings. We overview the architecture of the wiki, discuss the navigation structure and pose the question: is the navigation structure an index, and if so, of what type, or is it just a linkage structure or a topic map? Does such a distinction really matter? Are these definitions in reality function-based?
  9. Priss, U.: Faceted knowledge representation (1999) 0.02
    0.018424368 = product of:
      0.04606092 = sum of:
        0.03261943 = weight(_text_:system in 2654) [ClassicSimilarity], result of:
          0.03261943 = score(doc=2654,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.04032446 = score(doc=2654,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
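    Priss's four basic notions (unit, relation, facet and interpretation) can be made concrete in a few lines: units are atomic labels, a relation is a binary matrix over them, a facet bundles units with relations, and an interpretation maps between representations. The sketch below is an illustrative rendering under names of my own choosing, not the paper's formalism.

      import numpy as np

      # Units: atomic elements of the knowledge system (labels are illustrative).
      units = ["ontology", "thesaurus", "skos", "owl"]

      # A relation is a binary matrix over the units: broader_than[i, j] = 1
      # means unit i stands in the relation to unit j.
      broader_than = np.array([
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0],
      ])

      # A facet combines units and relations and represents one viewpoint.
      facet = {"units": units, "relations": {"broader_than": broader_than}}

      # An interpretation maps one representation onto another (here a relabelling).
      interpretation = {"ontology": "Ontologie", "thesaurus": "Thesaurus",
                        "skos": "SKOS", "owl": "OWL"}

      def related(facet, relation_name, unit):
          """Units reachable from `unit` in one step of the named relation."""
          matrix = facet["relations"][relation_name]
          i = facet["units"].index(unit)
          return [facet["units"][j] for j in np.flatnonzero(matrix[i])]

      print(related(facet, "broader_than", "ontology"))   # ['thesaurus']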
  10. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.02
    0.016011713 = product of:
      0.040029284 = sum of:
        0.032278713 = weight(_text_:context in 4639) [ClassicSimilarity], result of:
          0.032278713 = score(doc=4639,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.18316938 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0077505717 = product of:
          0.023251714 = sum of:
            0.023251714 = weight(_text_:29 in 4639) [ClassicSimilarity], result of:
              0.023251714 = score(doc=4639,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.15546128 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: how can existing vocabularies be made available to Semantic Web applications?
    Date
    29. 7.2011 14:44:56
  11. Hoekstra, R.: BestMap: context-aware SKOS vocabulary mappings in OWL 2 (2009) 0.02
    0.01597715 = product of:
      0.07988574 = sum of:
        0.07988574 = weight(_text_:context in 1574) [ClassicSimilarity], result of:
          0.07988574 = score(doc=1574,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.4533211 = fieldWeight in 1574, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1574)
      0.2 = coord(1/5)
    
    Abstract
    This paper describes an approach to SKOS vocabulary mapping that takes into account the context in which vocabulary terms are used in annotations. The standard vocabulary mapping properties in SKOS only allow for binary mappings between concepts. In the BestMap ontology, annotated resources are the contexts in which annotations coincide and allow for more fine-grained control over when mappings hold. A mapping between two vocabularies is defined as a class that groups descriptions of a resource. We use OWL 2 features for property chains, disjoint properties, union, intersection and negation together with careful use of equivalence and subsumption to specify these mappings.
  12. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.02
    0.015536643 = product of:
      0.07768321 = sum of:
        0.07768321 = weight(_text_:index in 231) [ClassicSimilarity], result of:
          0.07768321 = score(doc=231,freq=6.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.418113 = fieldWeight in 231, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=231)
      0.2 = coord(1/5)
    
    Abstract
    As an extension to the current Web, the Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
  13. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.02
    0.0152755175 = product of:
      0.038188793 = sum of:
        0.024209036 = weight(_text_:context in 3391) [ClassicSimilarity], result of:
          0.024209036 = score(doc=3391,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.13737704 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3391)
        0.013979756 = weight(_text_:system in 3391) [ClassicSimilarity], result of:
          0.013979756 = score(doc=3391,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.104393914 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3391)
      0.4 = coord(2/5)
    
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
    Short Papers * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm. * Unifying SysML and OWL, Henson Graves. * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens. * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm. * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez. * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik. * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin. * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer. * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure. * Temporal Classes and OWL, Natalya Keberle. * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler. * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan. * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier. * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das. * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das. * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov. * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez. * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
  14. Arenas, M.; Cuenca Grau, B.; Kharlamov, E.; Marciuska, S.; Zheleznyakov, D.: Faceted search over ontology-enhanced RDF data (2014) 0.01
    0.013694699 = product of:
      0.068473496 = sum of:
        0.068473496 = weight(_text_:context in 2207) [ClassicSimilarity], result of:
          0.068473496 = score(doc=2207,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.38856095 = fieldWeight in 2207, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2207)
      0.2 = coord(1/5)
    
    Abstract
    An increasing number of applications rely on RDF, OWL2, and SPARQL for storing and querying data. SPARQL, however, is not targeted towards end-users, and suitable query interfaces are needed. Faceted search is a prominent approach for end-user data access, and several RDF-based faceted search systems have been developed. There is, however, a lack of rigorous theoretical underpinning for faceted search in the context of RDF and OWL2. In this paper, we provide such solid foundations. We formalise faceted interfaces for this context, identify a fragment of first-order logic capturing the underlying queries, and study the complexity of answering such queries for RDF and OWL2 profiles. We then study interface generation and update, and devise efficiently implementable algorithms. Finally, we have implemented and tested our faceted search algorithms for scalability, with encouraging results.
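    Stripped of the formal treatment, a faceted interface over RDF data does two things: it filters the current result set by (property, value) selections and it shows, for every facet, how many results carry each value. The sketch below illustrates that query pattern over a toy triple set; it is an assumed illustration, not the interface-generation algorithms developed in the paper.

      from collections import Counter

      # A tiny illustrative set of (subject, predicate, object) triples.
      triples = [
          ("paper1", "dc:type", "article"), ("paper1", "dc:language", "en"),
          ("paper2", "dc:type", "article"), ("paper2", "dc:language", "de"),
          ("paper3", "dc:type", "thesis"),  ("paper3", "dc:language", "en"),
      ]

      def select(triples, predicate, value):
          """Apply one facet selection: subjects having (predicate, value)."""
          return {s for s, p, o in triples if p == predicate and o == value}

      def facet_values(triples, facet_predicate, selection=None):
          """Count facet values over the (optionally pre-filtered) result set."""
          docs = selection if selection is not None else {s for s, _, _ in triples}
          return Counter(o for s, p, o in triples if p == facet_predicate and s in docs)

      print(facet_values(triples, "dc:type"))
      # Counter({'article': 2, 'thesis': 1})
      print(facet_values(triples, "dc:language", select(triples, "dc:type", "article")))
      # Counter({'en': 1, 'de': 1})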
  15. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.013160261 = product of:
      0.032900654 = sum of:
        0.023299592 = weight(_text_:system in 4553) [ClassicSimilarity], result of:
          0.023299592 = score(doc=4553,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 4553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.028803186 = score(doc=4553,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  16. Gödert, W.: An ontology-based model for indexing and retrieval (2013) 0.01
    0.0129114855 = product of:
      0.064557426 = sum of:
        0.064557426 = weight(_text_:context in 1510) [ClassicSimilarity], result of:
          0.064557426 = score(doc=1510,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.36633876 = fieldWeight in 1510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0625 = fieldNorm(doc=1510)
      0.2 = coord(1/5)
    
    Abstract
    Starting from an unsolved problem of information retrieval, this paper presents an ontology-based model for indexing and retrieval. The model combines the methods and experiences of indexing languages that have to be interpreted cognitively with the strengths and possibilities of formal knowledge representation. The core component of the model uses inferences along the paths of typed relations between the entities of a knowledge representation to enable the determination of hit quantities in the context of retrieval processes. The entities are arranged in aspect-oriented facets to ensure a consistent hierarchical structure. The possible consequences for indexing and retrieval are discussed.
  17. Lacasta, J.; Nogueras-Iso, J.; López-Pellicer, F.J.; Muro-Medrano, P.R.; Zarazaga-Soria, F.J.: ThManager : an open source tool for creating and visualizing SKOS (2007) 0.01
    0.01129755 = product of:
      0.05648775 = sum of:
        0.05648775 = weight(_text_:context in 2349) [ClassicSimilarity], result of:
          0.05648775 = score(doc=2349,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 2349, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2349)
      0.2 = coord(1/5)
    
    Abstract
    Knowledge Organization Systems denote formally represented knowledge that is used within the context of Digital Libraries to improve data sharing and information retrieval. To increase their use, and to reuse them when possible, it is vital to manage them adequately and to provide them in a standard interchange format. The Simple Knowledge Organization System (SKOS) seems to be the most promising representation for the type of knowledge models used in digital libraries, but there is a lack of tools that are able to properly manage it. This work presents a tool that fills this gap, facilitating their use in different environments and using SKOS as an interchange format.
  18. Voß, J.: Das Simple Knowledge Organisation System (SKOS) als Kodierungs- und Austauschformat der DDC für Anwendungen im Semantischen Web [The Simple Knowledge Organisation System (SKOS) as an encoding and exchange format for the DDC in Semantic Web applications] (2007) 0.01
    0.011183805 = product of:
      0.055919025 = sum of:
        0.055919025 = weight(_text_:system in 243) [ClassicSimilarity], result of:
          0.055919025 = score(doc=243,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.41757566 = fieldWeight in 243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.09375 = fieldNorm(doc=243)
      0.2 = coord(1/5)
    
  19. Ulrich, W.: Simple Knowledge Organisation System (2007) 0.01
    0.011183805 = product of:
      0.055919025 = sum of:
        0.055919025 = weight(_text_:system in 105) [ClassicSimilarity], result of:
          0.055919025 = score(doc=105,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.41757566 = fieldWeight in 105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.09375 = fieldNorm(doc=105)
      0.2 = coord(1/5)
    
  20. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.01
    0.011183805 = product of:
      0.055919025 = sum of:
        0.055919025 = weight(_text_:system in 3829) [ClassicSimilarity], result of:
          0.055919025 = score(doc=3829,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.41757566 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
      0.2 = coord(1/5)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.

Languages

  • e 65
  • d 11

Types

  • a 32
  • x 3
  • n 2
  • p 1
  • r 1
  • s 1