Search (52 results, page 1 of 3)

  • Active filter: theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    0.06391277 = product of:
      0.19173831 = sum of:
        0.15331133 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
          0.15331133 = score(doc=701,freq=2.0), product of:
            0.4091808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04826377 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.038426977 = weight(_text_:problem in 701) [ClassicSimilarity], result of:
          0.038426977 = score(doc=701,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.1875815 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.33333334 = coord(2/6)
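    Example
    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output: per matching term, tf = sqrt(freq), idf = ln((maxDocs+1)/(docFreq+1)) + 1, fieldWeight = tf x idf x fieldNorm, queryWeight = idf x queryNorm, and the per-term score is queryWeight x fieldWeight; the total is the sum of the term scores scaled by the coordination factor coord. A minimal sketch recomputing result 1's 0.06391277 from the numbers shown (all constants are copied from the tree; the class and method names are invented for illustration):

      // Recompute the 0.06391277 score of result 1 from the explain tree above.
      // Formulas follow Lucene's ClassicSimilarity:
      //   tf = sqrt(freq); idf = ln((maxDocs+1)/(docFreq+1)) + 1,
      //   e.g. ln((44218+1)/(24+1)) + 1 = 8.478011 for the "3a" term.
      public class ExplainCheck {

          static double termScore(double freq, double idf, double queryNorm, double fieldNorm) {
              double tf = Math.sqrt(freq);             // 1.4142135 for freq=2
              double queryWeight = idf * queryNorm;    // e.g. 8.478011 * 0.04826377 = 0.4091808
              double fieldWeight = tf * idf * fieldNorm;
              return queryWeight * fieldWeight;        // per-term contribution
          }

          public static void main(String[] args) {
              double queryNorm = 0.04826377;
              // _text_:3a      -> idf(docFreq=24,   maxDocs=44218) = 8.478011
              double s1 = termScore(2.0, 8.478011, queryNorm, 0.03125);
              // _text_:problem -> idf(docFreq=1723, maxDocs=44218) = 4.244485
              double s2 = termScore(2.0, 4.244485, queryNorm, 0.03125);
              double coord = 2.0 / 6.0;                // 2 of 6 query terms matched
              System.out.printf("%.8f%n", coord * (s1 + s2));  // ~0.06391277
          }
      }

    The explain trees of the remaining hits follow the same pattern; only freq, idf, fieldNorm and coord vary.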
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that traditional modelling approaches can no longer manage. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning rather than its representation). This leads to results of very low usefulness for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the inherently ambiguous nature of the retrieval process, in which a user unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query, makes it necessary to involve the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query-evaluation process into a highly interactive cooperation between user and retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.05
    0.046958487 = product of:
      0.14087546 = sum of:
        0.09510193 = weight(_text_:problem in 759) [ClassicSimilarity], result of:
          0.09510193 = score(doc=759,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.46424055 = fieldWeight in 759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.04577352 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
          0.04577352 = score(doc=759,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.2708308 = fieldWeight in 759, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
      0.33333334 = coord(2/6)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry- and domain-specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML and why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
  3. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.017443962 = product of:
      0.052331887 = sum of:
        0.02401686 = weight(_text_:problem in 150) [ClassicSimilarity], result of:
          0.02401686 = score(doc=150,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.11723843 = fieldWeight in 150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.028315026 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
          0.028315026 = score(doc=150,freq=6.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.16753313 = fieldWeight in 150, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
      0.33333334 = coord(2/6)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, p.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective, and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. It will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms, multimedia content and semantic web. The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in a bias against ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on general multimedia description and extraction could be provided.
  4. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    0.015257841 = product of:
      0.09154704 = sum of:
        0.09154704 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
          0.09154704 = score(doc=4643,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.5416616 = fieldWeight in 4643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=4643)
      0.16666667 = coord(1/6)
    
    Date
    22. 9.2007 15:41:14
  5. Eiter, T.; Kaminski, T.; Redl, C.; Schüller, P.; Weinzierl, A.: Answer set programming with external source access (2017) 0.01
    0.013866143 = product of:
      0.083196856 = sum of:
        0.083196856 = weight(_text_:problem in 3938) [ClassicSimilarity], result of:
          0.083196856 = score(doc=3938,freq=6.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.4061259 = fieldWeight in 3938, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3938)
      0.16666667 = coord(1/6)
    
    Abstract
    Access to external information is an important need for Answer Set Programming (ASP), a declarative problem-solving approach that is currently booming. External access includes not only data in different formats but, more generally, also the results of computations, possibly in a two-way information exchange. Providing such access is a major challenge, in particular if it is to be supported at a generic level, regarding both the semantics and efficient computation. In this article, we consider problem solving with ASP under external information access using the dlvhex system. The latter facilitates this access through special external atoms, which are two-way, API-style interfaces between the rules of the program and an external source. The dlvhex system has a flexible plugin architecture that allows one to use multiple predefined and user-defined external atoms, which can be implemented, e.g., in Python or C++. We consider how to solve problems using the ASP paradigm, and specifically discuss how to use external atoms in this context, illustrated by examples. As a showcase, we demonstrate the development of a HEX program for a concrete real-world problem using Semantic Web technologies, and discuss specifics of the implementation process.
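    Example
    As a schematic illustration of such an interface (following the &name[inputs](outputs) notation used for external atoms in the dlvhex literature; the rule itself is an invented toy, not a program from the article), a rule importing all triples from an RDF source into the program:

      % An external atom feeding RDF triples into the logic program:
      \mathit{triple}(S, P, O) \leftarrow \&\mathit{rdf}[\mathit{url}](S, P, O)

    Here &rdf is the external atom: a plugin dereferences url, and the returned triples become ordinary facts that the remaining ASP rules can reason over.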
  6. Cali, A.: Ontology querying : datalog strikes back (2017) 0.01
    0.01358599 = product of:
      0.08151594 = sum of:
        0.08151594 = weight(_text_:problem in 3928) [ClassicSimilarity], result of:
          0.08151594 = score(doc=3928,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.39792046 = fieldWeight in 3928, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=3928)
      0.16666667 = coord(1/6)
    
    Abstract
    In this tutorial we address the problem of ontology querying, that is, the problem of answering queries against a theory constituted by facts (the data) and inference rules (the ontology). A varied landscape of ontology languages exists in the scientific literature, with varying degrees of complexity of query processing. We argue that Datalog±, a family of languages derived from Datalog, is a powerful tool for ontology querying. To illustrate the impact of this comeback of Datalog, we present the basic paradigms behind the main Datalog± languages as well as some recent extensions. We also present some efficient query processing techniques for some of these cases.
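    Example
    To give a flavour of what Datalog± adds over plain Datalog (a textbook-style illustration, not an example from the tutorial), tuple-generating dependencies allow existential quantification in the rule head:

      % Every person has a father, who is himself a person:
      \mathit{person}(X) \rightarrow \exists Y\, \mathit{father}(Y, X)
      \mathit{father}(Y, X) \rightarrow \mathit{person}(Y)

    From the single fact person(alice), the query "does alice have a father?" is answered positively even though no father is stored - an inference plain Datalog cannot express.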
  7. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
          0.0784689 = score(doc=6048,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 6048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=6048)
      0.16666667 = coord(1/6)
    
    Date
    22. 9.2007 15:41:14
  8. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
          0.0784689 = score(doc=100,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=100)
      0.16666667 = coord(1/6)
    
    Date
    22. 9.2007 15:41:14
  9. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 6061) [ClassicSimilarity], result of:
          0.067929946 = score(doc=6061,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 6061, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.16666667 = coord(1/6)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least with regard to scientific information - to a more differentiated assessment of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological infrastructure that enables worldwide indexing and searching of unimaginable amounts of data within seconds, new assessment processes for ranking search results are being developed which exploit the link structure of the Web. These are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, commercial search engines have applied the link structures of Web pages in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a hitherto unknown increase in available information, but this also made the quality of the Web pages searched a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
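    Example
    The best-known instance of such link-topology ranking is PageRank; as an illustrative sketch (PageRank is named here only as the canonical example, not as the specific method Krause discusses, and the four-page graph is invented), a minimal power iteration:

      import java.util.Arrays;

      // Minimal PageRank power iteration over a tiny link graph -
      // an illustrative sketch of link-based authority scoring.
      public class TinyPageRank {
          public static void main(String[] args) {
              // adj[i] lists the pages that page i links to (pages 0..3).
              int[][] adj = { {1, 2}, {2}, {0}, {2} };
              int n = adj.length;
              double d = 0.85;                       // damping factor
              double[] rank = new double[n];
              Arrays.fill(rank, 1.0 / n);
              for (int iter = 0; iter < 50; iter++) {
                  double[] next = new double[n];
                  Arrays.fill(next, (1 - d) / n);    // "teleport" share
                  for (int i = 0; i < n; i++)
                      for (int j : adj[i])
                          next[j] += d * rank[i] / adj[i].length;  // spread i's rank over its out-links
                  rank = next;
              }
              System.out.println(Arrays.toString(rank)); // page 2 ends up most "authoritative"
          }
      }

    The fixed point concentrates rank on the most-linked-to page (page 2 here), which is the sense in which link structure substitutes for the missing gatekeeper.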
  10. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 3936) [ClassicSimilarity], result of:
          0.067929946 = score(doc=3936,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 3936, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
      0.16666667 = coord(1/6)
    
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links, and properties of those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists in equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, but also domain-specific rules with practical relevance for users in many domains of interest. The expressivity and genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows us to model the problem of data linkage in Linked Data as a reasoning problem over possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
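    Example
    As a generic sketch of the OWL constraints mentioned (written over a triple(S, P, O) relation; not code from the survey), transitivity and symmetry of a property p become two Datalog rules:

      % Transitivity and symmetry of property p:
      \mathit{triple}(X, p, Z) \leftarrow \mathit{triple}(X, p, Y) \wedge \mathit{triple}(Y, p, Z)
      \mathit{triple}(Y, p, X) \leftarrow \mathit{triple}(X, p, Y)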
  11. Rajabi, E.; Sanchez-Alonso, S.; Sicilia, M.-A.: Analyzing broken links on the web of data : An experiment with DBpedia (2014) 0.01
    0.011207869 = product of:
      0.06724721 = sum of:
        0.06724721 = weight(_text_:problem in 1330) [ClassicSimilarity], result of:
          0.06724721 = score(doc=1330,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 1330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1330)
      0.16666667 = coord(1/6)
    
    Abstract
    Linked open data allow interlinking and integrating any kind of data on the web. Links between various data sources play a key role insofar as they allow software applications (e.g., browsers, search engines) to operate over the aggregated data space as if it were a unique local database. In this new data space, where DBpedia, a data set including structured information from Wikipedia, seems to be the central hub, we analyzed and highlighted outgoing links from this hub in an effort to discover broken links. The paper reports on an experiment to examine the causes of broken links and proposes some treatments for solving this problem.
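    Example
    A minimal sketch of the kind of check such an experiment rests on (hypothetical code, not the authors' pipeline; a real crawler would also follow redirects and distinguish 404s from timeouts): dereference each outgoing link and flag non-2xx responses as broken.

      import java.net.HttpURLConnection;
      import java.net.URL;

      // Hypothetical link checker: dereference a URL and report whether
      // it resolves with a 2xx status.
      public class LinkCheck {
          static boolean isBroken(String url) {
              try {
                  HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
                  con.setRequestMethod("HEAD");     // headers suffice; skip the body
                  con.setConnectTimeout(5000);
                  con.setReadTimeout(5000);
                  int code = con.getResponseCode();
                  return code < 200 || code >= 300; // non-2xx -> treat as broken
              } catch (Exception e) {
                  return true;                      // unreachable host, timeout, bad URL
              }
          }

          public static void main(String[] args) {
              System.out.println(isBroken("http://dbpedia.org/resource/Berlin"));
          }
      }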
  12. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    0.010898459 = product of:
      0.06539075 = sum of:
        0.06539075 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
          0.06539075 = score(doc=2090,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.38690117 = fieldWeight in 2090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
      0.16666667 = coord(1/6)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  13. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.01
    0.009706301 = product of:
      0.058237802 = sum of:
        0.058237802 = weight(_text_:problem in 553) [ClassicSimilarity], result of:
          0.058237802 = score(doc=553,freq=6.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28428814 = fieldWeight in 553, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
      0.16666667 = coord(1/6)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus gain access to other collections using one single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools (see footnote 1). Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects, and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using the description vocabulary of the first collection, Mandragore [6], or that of the second, Iconclass [7]. In our talk, we will also make the case for unified representations of the semantic and lexical information of vocabularies. Besides easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
    References: [1] http://www.theeuropeanlibrary.org [2] http://www.geheugenvannederland.nl [3] http://macs.cenl.org [4] Day, M., Koch, T., Neuroth, H.: Searching and browsing multiple subject gateways in the Renardus service. In: Proceedings of the RC33 Sixth International Conference on Social Science Methodology, Amsterdam, 2005. [5] http://stitch.cs.vu.nl [6] http://mandragore.bnf.fr [7] http://www.iconclass.nl [8] http://www.w3.org/2004/02/skos/ - Footnote 1: The Semantic Web vision presupposes sharing data under different conceptualizations (ontologies), and therefore implies tackling the semantic interoperability problem.
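    Example
    The query-rewriting idea in the abstract - a query using concept C also retrieves objects indexed against an equivalent concept D - can be sketched as follows; the mapping pair and object identifier are invented for illustration:

      import java.util.*;

      // Toy query expansion over a vocabulary alignment: a query concept is
      // expanded with all concepts mapped as equivalent, so objects indexed
      // against either vocabulary are retrieved. All data here is invented.
      public class AlignedSearch {
          public static void main(String[] args) {
              // C (Mandragore) -> equivalent D (Iconclass); invented sample pair.
              Map<String, Set<String>> equiv = new HashMap<>();
              equiv.put("mandragore:ange", Set.of("iconclass:11G"));

              Map<String, Set<String>> index = new HashMap<>(); // concept -> object ids
              index.put("iconclass:11G", Set.of("ms-42-f17r"));

              String query = "mandragore:ange";
              Set<String> hits = new TreeSet<>(index.getOrDefault(query, Set.of()));
              for (String d : equiv.getOrDefault(query, Set.of()))
                  hits.addAll(index.getOrDefault(d, Set.of())); // add objects indexed with D
              System.out.println(hits);                         // [ms-42-f17r]
          }
      }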
  14. Urro, R.; Winiwarter, W.: Specifying ontologies : Linguistic aspects in problem-driven knowledge engineering (2001) 0.01
    0.009606745 = product of:
      0.05764047 = sum of:
        0.05764047 = weight(_text_:problem in 3263) [ClassicSimilarity], result of:
          0.05764047 = score(doc=3263,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28137225 = fieldWeight in 3263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=3263)
      0.16666667 = coord(1/6)
    
  15. Liang, A.; Salokhe, G.; Sini, M.; Keizer, J.: Towards an infrastructure for semantic applications : methodologies for semantic integration of heterogeneous resources (2006) 0.01
    0.009606745 = product of:
      0.05764047 = sum of:
        0.05764047 = weight(_text_:problem in 241) [ClassicSimilarity], result of:
          0.05764047 = score(doc=241,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28137225 = fieldWeight in 241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=241)
      0.16666667 = coord(1/6)
    
    Abstract
    The semantic heterogeneity presented by Web information in the agricultural domain poses tremendous information retrieval challenges. This article presents work taking place at the Food and Agriculture Organization (FAO) which addresses this challenge. Based on an analysis of resources in the domain of agriculture, this paper proposes (a) an application profile (AP) for dealing with the problem of heterogeneity originating from differences in terminologies, domain coverage, and domain modelling, and (b) a root application ontology (AAO) based on the application profile, which can serve as a basis for extending knowledge of the domain. The paper explains how even a small investment in the enhancement of relations between vocabularies, both metadata and domain-specific, yields a relatively large return on investment.
  16. Fluit, C.; Horst, H. ter; Meer, J. van der; Sabou, M.; Mika, P.: Spectacle (2004) 0.01
    0.009606745 = product of:
      0.05764047 = sum of:
        0.05764047 = weight(_text_:problem in 4337) [ClassicSimilarity], result of:
          0.05764047 = score(doc=4337,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28137225 = fieldWeight in 4337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=4337)
      0.16666667 = coord(1/6)
    
    Abstract
    Many Semantic Web initiatives improve the capabilities of machines to exchange the meaning of information with other machines. These efforts lead to an increased quality of the application's results, but their user interfaces take little or no advantage of the semantic richness. For example, an ontology-based search engine will use its ontology when evaluating the user's query (e.g. for query formulation, disambiguation or evaluation), but fails to use it to significantly enrich the presentation of the results to a human user. One could imagine, for instance, replacing the endless list of hits with a structured presentation based on the semantic properties of the hits. Another problem is that the modelling of a domain is done from a single perspective (most often that of the information provider). Therefore, a presentation based on the resulting ontology is unlikely to satisfy the needs of all the different types of users of the information. So even assuming an ontology for the domain is in place, mapping that ontology to the needs of individual users - based on their tasks, expertise and personal preferences - is not trivial.
  17. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.01
    0.009057326 = product of:
      0.054343957 = sum of:
        0.054343957 = weight(_text_:problem in 4404) [ClassicSimilarity], result of:
          0.054343957 = score(doc=4404,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.2652803 = fieldWeight in 4404, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=4404)
      0.16666667 = coord(1/6)
    
    Abstract
    Significant progress has been made in technologies for publishing and distributing knowledge and information on the web. However, much of the published information is not organized, and it is hard to find answers to questions that require more than a keyword search. In general, one can say that the web is organizing itself. Information is often published in relatively ad hoc fashion. Typically, concern about the presentation of content has been limited to purely layout issues. This, combined with the fact that the representation language used on the World Wide Web (HTML) is mainly format-oriented, makes publishing on the WWW easy and gives it enormous expressiveness. People add private, educational or organizational content of an immensely diverse nature to the web. Content on the web is growing closer to a real universal knowledge base, with one problem left relatively unaddressed: the interpretation of its contents. Although the web is widely acknowledged for its general and universal advantages, its increasing popularity also shows us some major drawbacks. The development of the information content on the web during the last year alone clearly indicates the need for some changes. Perhaps one of the most significant problems with the web as a distributed information system is the difficulty of finding and comparing information.
  18. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.008718766 = product of:
      0.052312598 = sum of:
        0.052312598 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
          0.052312598 = score(doc=3376,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 3376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=3376)
      0.16666667 = coord(1/6)
    
    Date
    31. 7.2010 16:58:22
  19. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.01
    0.008718766 = product of:
      0.052312598 = sum of:
        0.052312598 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
          0.052312598 = score(doc=4331,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 4331, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
      0.16666667 = coord(1/6)
    
    Date
    15. 3.2011 19:21:22
  20. OWL Web Ontology Language Test Cases (2004) 0.01
    0.008718766 = product of:
      0.052312598 = sum of:
        0.052312598 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
          0.052312598 = score(doc=4685,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 4685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4685)
      0.16666667 = coord(1/6)
    
    Date
    14. 8.2011 13:33:22

Languages

  • e 45
  • d 7

Types

  • a 34
  • el 11
  • m 8
  • s 3
  • n 1
  • x 1