Search (52 results, page 1 of 3)

  • × theme_ss:"Wissensrepräsentation"
  • × theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.26
    0.26491702 = product of:
      0.46360478 = sum of:
        0.118422605 = product of:
          0.197371 = sum of:
            0.115060724 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.115060724 = score(doc=701,freq=2.0), product of:
                0.3070917 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03622214 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
            0.054806177 = weight(_text_:retrieval in 701) [ClassicSimilarity], result of:
              0.054806177 = score(doc=701,freq=28.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.5001983 = fieldWeight in 701, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
            0.027504109 = weight(_text_:system in 701) [ClassicSimilarity], result of:
              0.027504109 = score(doc=701,freq=6.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.24108742 = fieldWeight in 701, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.6 = coord(3/5)
        0.115060724 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.115060724 = score(doc=701,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.115060724 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.115060724 = score(doc=701,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.115060724 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.115060724 = score(doc=701,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5714286 = coord(4/7)
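    Each leaf of the explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scheme: the reported score is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), and coord() scales each sum by the fraction of query terms matched. Below is a minimal sketch in Python reproducing the "retrieval" leaf for doc 701 from the figures shown above; the formulas idf = 1 + ln(maxDocs/(docFreq+1)) and tf = sqrt(freq) are the ClassicSimilarity definitions, everything else is copied from the tree.

      import math

      # Figures reported in the explain tree for term "retrieval" in doc 701.
      freq = 28.0            # termFreq: occurrences of the term in the field
      doc_freq = 5836        # docFreq: documents containing the term
      max_docs = 44218       # maxDocs: documents in the index
      query_norm = 0.03622214
      field_norm = 0.03125   # length normalisation stored for this field

      tf = math.sqrt(freq)                             # 5.2915025
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.024915

      query_weight = idf * query_norm                  # 0.109568894
      field_weight = tf * idf * field_norm             # 0.5001983
      print(query_weight * field_weight)               # 0.054806177, as reported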
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning rather than its representation), which makes the results of a retrieval process of little use for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query. It is therefore necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from mere query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly, and to interpret the retrieval results accordingly, is key to realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.01
    
    Abstract
    One vision of the Semantic Web is that it will be much like the Web we know today, except that documents will be enriched by annotations in machine-understandable markup. These annotations will provide metadata about the documents as well as machine-interpretable statements capturing some of the meaning of document content. We discuss how the information retrieval paradigm might be recast in such an environment. We suggest that retrieval can be tightly bound to inference. Doing so makes today's Web search engines useful to Semantic Web inference engines, and causes improvements in either retrieval or inference to lead directly to improvements in the other.
    Date
    12. 2.2011 17:35:22
  3. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.01
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably by using domain-specific information extraction, inference and rules. Scalability is achieved by adopting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
  4. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.00
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics, which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of available Semantic Web data and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which hold high promise for scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs with high precision and recall relative to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  5. Semantic applications (2018) 0.00
    
    LCSH
    Information storage and retrieval
    Information Storage and Retrieval
    RSWK
    Wissensbasiertes System
    Information Retrieval
    Subject
    Wissensbasiertes System
    Information Retrieval
    Information storage and retrieval
    Information Storage and Retrieval
  6. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.00
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations.
    Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Evidently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies.
    Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
    Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research.
    Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
    Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  7. Kiryakov, A.; Popov, B.; Terziev, I.; Manov, D.; Ognyanoff, D.: Semantic annotation, indexing, and retrieval (2004) 0.00
    
    Abstract
    The Semantic Web realization depends on the availability of a critical mass of metadata for the web content, associated with the respective formal knowledge about the world. We claim that the Semantic Web, at its current stage of development, is in critical need of metadata generation and usage schemata that are specific, well-defined and easy to understand. This paper introduces our vision for a holistic architecture for semantic annotation, indexing, and retrieval of documents with regard to extensive semantic repositories. A system (called KIM) implementing this concept is presented in brief and is used for the purposes of evaluation and demonstration. A particular schema for semantic annotation with respect to real-world entities is proposed. The underlying philosophy is that practical semantic annotation is impossible without some particular knowledge-modelling commitments. Our understanding is that a system for such semantic annotation should be based upon a simple model of real-world entity classes, complemented with extensive instance knowledge. To ensure the efficiency, ease of sharing, and reusability of the metadata, we introduce an upper-level ontology (of about 250 classes and 100 properties), which starts with some basic philosophical distinctions and then goes down to the most common entity types (people, companies, cities, etc.). Thus it encodes many of the domain-independent commonsense concepts and allows straightforward domain-specific extensions. On the basis of the ontology, a large-scale knowledge base of entity descriptions is bootstrapped, and further extended and maintained. Currently, the knowledge base usually scales between 10^5 and 10^6 descriptions. Finally, this paper presents a semantically enhanced information extraction system, which provides automatic semantic annotation with references to classes in the ontology and to instances. The system has been running over a continuously growing document collection (currently about 0.5 million news articles), so it has been under constant testing and evaluation for some time now. On the basis of these semantic annotations, we perform semantic-based indexing and retrieval where users can mix traditional information retrieval (IR) queries and ontology-based ones. We argue that such large-scale, fully automatic methods are essential for the transformation of the current largely textual web into a Semantic Web.
  8. Miles, A.; Pérez-Agüera, J.R.: SKOS: Simple Knowledge Organisation for the Web (2006) 0.00
    
    Abstract
    This article introduces the Simple Knowledge Organisation System (SKOS), a Semantic Web language for representing controlled structured vocabularies, including thesauri, classification schemes, subject heading systems and taxonomies. SKOS provides a framework for publishing thesauri, classification schemes, and subject indexes on the Web, and for applying these systems to resource collections that are part of the Semantic Web. Semantic Web applications may harvest and merge SKOS data to integrate and enhance retrieval services across multiple collections (e.g. libraries). This article also describes some alternatives for integrating Semantic Web services based on the Resource Description Framework (RDF) and SKOS into a distributed enterprise architecture.
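    As a concrete illustration of the kind of vocabulary data the article describes, the following minimal sketch builds a two-concept SKOS fragment with Python's rdflib; the ex: namespace, concept names and labels are invented for the example, while the SKOS classes and properties are the real terms from the W3C vocabulary.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/vocab/")  # hypothetical namespace

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      # A concept scheme and two concepts linked by a broader/narrower hierarchy.
      g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
      g.add((EX.animals, RDF.type, SKOS.Concept))
      g.add((EX.animals, SKOS.prefLabel, Literal("Animals", lang="en")))
      g.add((EX.animals, SKOS.inScheme, EX.scheme))
      g.add((EX.cats, RDF.type, SKOS.Concept))
      g.add((EX.cats, SKOS.prefLabel, Literal("Cats", lang="en")))
      g.add((EX.cats, SKOS.altLabel, Literal("Felines", lang="en")))
      g.add((EX.cats, SKOS.broader, EX.animals))
      g.add((EX.cats, SKOS.inScheme, EX.scheme))

      print(g.serialize(format="turtle"))  # emits the fragment as Turtle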
  9. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.00
    
    Date
    31. 7.2010 16:58:22
  10. OWL Web Ontology Language Test Cases (2004) 0.00
    
    Date
    14. 8.2011 13:33:22
  11. Miles, A.: SKOS: requirements for standardization (2006) 0.00
    
    Abstract
    This paper poses three questions regarding the planned development of the Simple Knowledge Organisation System (SKOS) towards W3C Recommendation status. Firstly, what is the fundamental purpose and therefore scope of SKOS? Secondly, which key software components depend on SKOS, and how do they interact? Thirdly, what is the wider technological and social context in which SKOS is likely to be applied, and how might this influence design goals? Some tentative conclusions are drawn, and in particular it is suggested that the scope of SKOS be restricted to the formal representation of controlled structured vocabularies intended for use within retrieval applications. However, the main purpose of this paper is to articulate the assumptions that have motivated the design of SKOS, so that these may be reviewed prior to a rigorous standardization initiative.
  12. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.00
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, allowing discourse and semantic information to be searched and extracted according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built from two engines. The first, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the previously produced semantically annotated texts to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are provided as supplementary information.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, et al.
  13. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.00
    
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  14. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.00
    
    Date
    26.12.2011 13:40:22
  15. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.00
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  16. Fernández, M.; Cantador, I.; López, V.; Vallet, D.; Castells, P.; Motta, E.: Semantically enhanced Information Retrieval : an ontology-based approach (2011) 0.00
    
    Abstract
    Currently, techniques for content description and query processing in Information Retrieval (IR) are based on keywords, and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. Aiming to overcome the limitations of keyword-based models, the idea of conceptual search, understood as searching by meanings rather than literal strings, has been the focus of a wide body of research in the IR field. More recently, it has been used as a prototypical scenario (or even envisioned as a potential "killer app") in the Semantic Web (SW) vision since its emergence in the late nineties. However, current approaches to semantic search developed in the SW area have not yet taken full advantage of the acquired knowledge, accumulated experience, and technological sophistication achieved through several decades of work in the IR field. Starting from this position, this work investigates the definition of an ontology-based IR model oriented to the exploitation of domain knowledge bases to support semantic search capabilities in large document repositories, stressing on the one hand the use of fully fledged ontologies in the semantic-based perspective, and on the other hand the consideration of unstructured content as the target search space. The major contribution of this work is an innovative, comprehensive semantic search model which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword-based and semantic-based search. Additional contributions include an innovative rank fusion technique that minimizes the undesired effects of knowledge sparseness on the yet juvenile SW, and the creation of a large-scale evaluation benchmark, based on TREC IR evaluation standards, which allows a rigorous comparison between IR and SW approaches. The experiments conducted show that our semantic search model obtains performance results (in terms of MAP and P@10 values) comparable to, and better than, those of the best TREC automatic system.
  17. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.00
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collaborative effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC), 4th edition, and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of combining two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure, and a classification scheme that is basically enumerative but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by the differing granularities of the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
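    To make the encoding task concrete, here is a minimal, hypothetical sketch of how a classification class and a thesaurus term might each be represented as a SKOS concept and linked through the SKOS mapping vocabulary; the namespaces, identifiers and labels below are invented for illustration, not actual CCT data.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      CLC = Namespace("http://example.org/clc/")  # hypothetical: classification side
      CT = Namespace("http://example.org/ct/")    # hypothetical: thesaurus side

      g = Graph()
      g.bind("skos", SKOS)

      # A classification class carries its notation alongside a label.
      g.add((CLC.TP391, RDF.type, SKOS.Concept))
      g.add((CLC.TP391, SKOS.notation, Literal("TP391")))
      g.add((CLC.TP391, SKOS.prefLabel, Literal("Computer applications", lang="en")))

      # A thesaurus term is a concept with a preferred label.
      g.add((CT.infoRetrieval, RDF.type, SKOS.Concept))
      g.add((CT.infoRetrieval, SKOS.prefLabel, Literal("Information retrieval", lang="en")))

      # The manually created class-to-term mapping becomes a SKOS mapping relation.
      g.add((CLC.TP391, SKOS.exactMatch, CT.infoRetrieval))

      print(g.serialize(format="turtle"))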
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  18. Mirizzi, R.: Exploratory browsing in the Web of Data (2011) 0.00
    
    Abstract
    The Linked Data initiative and the state of the art in semantic technologies have led to a wave of brand-new search and mash-up applications. The basic idea is to have smarter lookup services for a huge, distributed and social knowledge base. All these applications catch and (re)propose, under a semantic data perspective, the view of the classical Web as a distributed collection of documents to retrieve. The interlinked nature of the Web, and consequently of the Semantic Web, is exploited (just) to collect and aggregate data coming from different sources. Of course, this is a big step forward in search and Web technologies, but if we limit our investigation to retrieval tasks, we miss another important feature of the current Web: browsing, and in particular exploratory browsing (a.k.a. exploratory search). Thanks to its hyperlinked nature, the Web defined a new way of browsing documents and knowledge: selection by lookup, navigation and trial-and-error tactics were, and still are, exploited by users to search for relevant information satisfying some initial requirements. The basic assumptions behind a lookup search, typical of Information Retrieval (IR) systems, are no longer valid in an exploratory browsing context. An IR system, such as a search engine, assumes that: the user has a clear picture of what she is looking for; she knows the terminology of the specific knowledge space. On the other side, as argued in the literature, the main challenges in exploratory search can be summarized as: support querying and rapid query refinement; offer facets and metadata-based result filtering; leverage search context; support learning and understanding; offer visualization to support insight/decision making; facilitate collaboration. In Section 3 we will show two applications for exploratory search in the Semantic Web addressing some of the above challenges.
  19. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.00
    
    Abstract
    Significant progress has been made in technologies for publishing and distributing knowledge and information on the web. However, much of the published information is not organized, and it is hard to find answers to questions that require more than a keyword search. In general, one can say that the web is organizing itself. Information is often published in relatively ad hoc fashion. Typically, concern about the presentation of content has been limited to purely layout issues. This, combined with the fact that the representation language used on the World Wide Web (HTML) is mainly format-oriented, makes publishing on the WWW easy, giving it an enormous expressiveness. People add private, educational or organizational content to the web that is of an immensely diverse nature. Content on the web is growing closer to a real universal knowledge base, with one problem remaining largely unaddressed: the interpretation of its contents. Although widely acknowledged for its general and universal advantages, the increasing popularity of the web also shows us some major drawbacks. The development of the information content on the web during the last year alone clearly indicates the need for some changes. Perhaps one of the most significant problems with the web as a distributed information system is the difficulty of finding and comparing information.
    Thus, there is a clear need for the web to become more semantic. The aim of introducing semantics into the web is to enhance the precision of search, but also to enable the use of logical reasoning on web contents in order to answer queries. The CORPORUM OntoBuilder toolset is developed specifically for this task. It consists of a set of applications that can fulfil a variety of tasks, either as stand-alone tools or by augmenting each other. Important tasks that are dealt with by CORPORUM are related to document and information retrieval (finding relevant documents, or supporting the user in finding them), as well as information extraction (building a knowledge base from web documents to answer queries), information dissemination (summarizing strategies and information visualization), and automated document classification strategies. First versions of the toolset are encouraging in that they show large potential as a supportive technology for building up the Semantic Web. In this chapter, methods for transforming the current web into a semantic web are discussed, as well as a technical solution that can perform this task: the CORPORUM toolset. First, the toolset is introduced; this is followed by some pragmatic issues relating to the approach, a short overview of the theory in relation to CognIT's vision, and finally a discussion of some of the applications that arose from the project.
  20. Studer, R.; Studer, H.-P.; Studer, A.: Semantisches Knowledge Retrieval (2001) 0.00
    
    Abstract
    This white paper deals with the integration of semantic technologies into existing information retrieval approaches and the associated far-reaching effects on the efficiency and effectiveness of search and navigation in documents. After situating the topic within the problem area of knowledge management from an information technology perspective, an overview of information retrieval methods follows. The semantic technologies "modelling knowledge - ontology" and "deriving new knowledge - inference" are then introduced. An integration approach is subsequently discussed and the resulting added value presented. In particular, this yields extensions towards refined search support and context-sensitive navigation, as well as the possibility of evaluating rule-based relationships and easily integrating structured information sources. The white paper closes with an outlook on the future development of the WWW towards a Semantic Web and the associated implications for semantic technologies.
    Content
    Contents: 1. Introduction - 2. Knowledge management - 3. Information retrieval - 3.1. Methods and techniques - 3.2. Information retrieval in practice - 4. Semantic approaches - 4.1. Modelling knowledge - ontology - 4.2. Inferring new knowledge - 5. Knowledge retrieval in practice - 6. Future prospects - 7. Conclusion

Languages

  • e 45
  • d 7

Types

  • a 25
  • el 23
  • m 9
  • s 5
  • x 3
  • n 2
  • r 1
