Search (112 results, page 1 of 6)

  • × theme_ss:"Wissensrepräsentation"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
  1. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.14
    0.13841115 = product of:
      0.1845482 = sum of:
        0.05817665 = weight(_text_:web in 2861) [ClassicSimilarity], result of:
          0.05817665 = score(doc=2861,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.36057037 = fieldWeight in 2861, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
        0.08729243 = weight(_text_:search in 2861) [ClassicSimilarity], result of:
          0.08729243 = score(doc=2861,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.5079997 = fieldWeight in 2861, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 2861) [ClassicSimilarity], result of:
              0.07815824 = score(doc=2861,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 2861, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2861)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
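     The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a sanity check, the per-term figures can be reproduced from the quantities shown, assuming the usual definitions tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm and clause score = queryWeight * fieldWeight; a minimal sketch:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Reproduce one weight(_text_:term) clause of a Lucene ClassicSimilarity
    explain tree: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                                # 2.828427 for freq=8
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # 3.2635105 for docFreq=4597
    query_weight = idf * query_norm                     # 0.16134618
    field_weight = tf * idf * field_norm                # 0.36057037
    return query_weight * field_weight

# Figures from the 'web' clause of result no. 1 (doc 2861) above:
print(classic_term_score(freq=8.0, doc_freq=4597, max_docs=44218,
                         field_norm=0.0390625, query_norm=0.049439456))
# -> ~0.05817665
```

     The document score is then the sum of the clause scores multiplied by the coord factor (here 3 of 4 query clauses matched, hence 0.75).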
    
    Abstract
     Today's conventional search engines hardly provide content that is truly relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. This is where the need for semantic web search (SWS) arises, an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work presented here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as the knowledge base for the information retrieval process. It is not a mere keyword search: it works one layer above what Google or other search engines retrieve by analyzing keywords alone. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained are accurate enough to satisfy the user's request, and the level of accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the most relevant links; for ranking, an algorithm is applied that fetches more apt results for the user query.
  2. Allocca, C.; Aquin, M.d'; Motta, E.: Impact of using relationships between ontologies to enhance the ontology search results (2012) 0.12
    0.12242785 = product of:
      0.16323714 = sum of:
        0.050382458 = weight(_text_:web in 264) [ClassicSimilarity], result of:
          0.050382458 = score(doc=264,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 264, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=264)
        0.07377557 = weight(_text_:search in 264) [ClassicSimilarity], result of:
          0.07377557 = score(doc=264,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 264, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=264)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 264) [ClassicSimilarity], result of:
              0.07815824 = score(doc=264,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 264, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=264)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Using semantic web search engines, such as Watson, Swoogle or Sindice, to find ontologies is a complex exploratory activity. It generally requires formulating multiple queries, browsing pages of results, and assessing the returned ontologies against each other to obtain a relevant and adequate subset of ontologies for the intended use. Our hypothesis is that at least some of the difficulties related to searching ontologies stem from the lack of structure in the search results, where ontologies that are implicitly related to each other are presented as disconnected and shown on different result pages. In earlier publications we presented a software framework, Kannel, which is able to automatically detect and make explicit relationships between ontologies in large ontology repositories. In this paper, we present a study that compares the use of the Watson ontology search engine with an extension, Watson+Kannel, which provides information regarding the various relationships occurring between the result ontologies. We evaluate Watson+Kannel by demonstrating through various indicators that explicit relationships between ontologies improve users' efficiency in ontology search, thus validating our hypothesis.
    Source
    9th Extended Semantic Web Conference (ESWC), 2012-05-27/2012-05-31 in Hersonissos, Crete, Greece. Eds.: Elena Simperl et al
    Theme
    Semantic Web
  3. Mäkelä, E.; Hyvönen, E.; Saarela, S.; Viljanen, K.: Application of ontology techniques to view-based semantic search and browsing (2012) 0.12
    0.12073888 = product of:
      0.16098517 = sum of:
        0.03490599 = weight(_text_:web in 3264) [ClassicSimilarity], result of:
          0.03490599 = score(doc=3264,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 3264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3264)
        0.07918424 = weight(_text_:search in 3264) [ClassicSimilarity], result of:
          0.07918424 = score(doc=3264,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.460814 = fieldWeight in 3264, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=3264)
        0.04689494 = product of:
          0.09378988 = sum of:
            0.09378988 = weight(_text_:engine in 3264) [ClassicSimilarity], result of:
              0.09378988 = score(doc=3264,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.35462496 = fieldWeight in 3264, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3264)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     We show how the benefits of the view-based search method, developed within the information retrieval community, can be extended with ontology-based search, developed within the Semantic Web community, and with semantic recommendations. As a proof of concept, we have implemented an ontology- and view-based search engine and recommender system, Ontogator, for RDF(S) repositories. Ontogator is innovative in two ways. Firstly, the RDFS-based ontologies used for annotating metadata are used in the user interface to facilitate view-based information retrieval. The views provide the user with an overview of the repository's contents and a vocabulary for expressing search queries. Secondly, a semantic browsing function is provided by a recommender system. This system enriches instance-level metadata by ontologies and provides the user with links to semantically related relevant resources. The semantic linkage is specified in terms of logical rules. To illustrate and discuss the ideas, a deployed application of Ontogator to a photo repository of the Helsinki University Museum is presented.
  4. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.09
    0.08612041 = product of:
      0.114827216 = sum of:
        0.029088326 = weight(_text_:web in 99) [ClassicSimilarity], result of:
          0.029088326 = score(doc=99,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 99, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=99)
        0.046659768 = weight(_text_:search in 99) [ClassicSimilarity], result of:
          0.046659768 = score(doc=99,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 99, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=99)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 99) [ClassicSimilarity], result of:
              0.07815824 = score(doc=99,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 99, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=99)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built up from two engines: the first one, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation, related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on the Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the semantically annotated texts produced previously to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are provided as supplementary information.
    Footnote
     Cf.: http://www.igi-global.com/book/next-generation-search-engines/64423.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
    Theme
    Semantic Web
  5. Branch, F.; Arias, T.; Kennah, J.; Phillips, R.; Windleharth, T.; Lee, J.H.: Representing transmedia fictional worlds through ontology (2017) 0.09
    0.08612041 = product of:
      0.114827216 = sum of:
        0.029088326 = weight(_text_:web in 3958) [ClassicSimilarity], result of:
          0.029088326 = score(doc=3958,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 3958, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3958)
        0.046659768 = weight(_text_:search in 3958) [ClassicSimilarity], result of:
          0.046659768 = score(doc=3958,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 3958, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3958)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 3958) [ClassicSimilarity], result of:
              0.07815824 = score(doc=3958,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 3958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3958)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Currently, there is no structured data standard for representing elements commonly found in transmedia fictional worlds. Although there are websites dedicated to individual universes, the information found on these sites separates out the various formats, concentrates on only the bibliographic aspects of the material, and is only searchable with full text. We have created an ontological model that will allow various user groups interested in transmedia to search for and retrieve the information contained in these worlds based upon their structure. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial intelligence reasoning engine. This model can infer connections between transmedia properties such as characters, elements of power, items, places, events, and so on. This model will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology reflecting real user needs based upon user research, which is intuitive for users and can be used by artificial intelligence systems.
  6. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.08
    0.075091355 = product of:
      0.1001218 = sum of:
        0.050382458 = weight(_text_:web in 1434) [ClassicSimilarity], result of:
          0.050382458 = score(doc=1434,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 1434, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1434)
        0.032993436 = weight(_text_:search in 1434) [ClassicSimilarity], result of:
          0.032993436 = score(doc=1434,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 1434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1434)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 1434) [ClassicSimilarity], result of:
              0.03349182 = score(doc=1434,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 1434, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1434)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Knowledge organization of large-scale content on the Web requires substantial amounts of semantic metadata that is expensive to generate manually. Recent developments in Web technologies have enabled any user to tag documents and other forms of content thereby generating metadata that could help organize knowledge. However, merely adding one or more tags to a document is highly inadequate to capture the aboutness of the document and thereby to support powerful semantic functions such as automatic classification, question answering or true semantic search and retrieval. This is true even when the tags used are labels from a well-designed classification system such as a thesaurus or taxonomy. There is a strong need to develop a semantic tagging mechanism with sufficient expressive power to capture the aboutness of each part of a document or dataset or multimedia content in order to enable applications that can benefit from knowledge organization on the Web. This article proposes a highly expressive mechanism of using ontology snippets as semantic tags that map portions of a document or a part of a dataset or a segment of a multimedia content to concepts and relations in an ontology of the domain(s) of interest.
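     To make the proposal concrete, an ontology-snippet tag can be pictured as a small set of triples anchored to a span of the document, rather than a single label. The sketch below is illustrative only; the namespace and concepts are hypothetical and not taken from the article:

```python
# Illustrative only: a semantic tag as an ontology snippet (a few triples
# from a hypothetical domain ontology) anchored to a span of the document.
semantic_tag = {
    "span": (120, 198),                             # character offsets in the text
    "snippet": [                                    # (subject, relation, object)
        ("ex:AspirinTrial", "rdf:type",     "ex:ClinicalTrial"),
        ("ex:AspirinTrial", "ex:evaluates", "ex:Aspirin"),
        ("ex:Aspirin",      "ex:treats",    "ex:Headache"),
    ],
}

def aboutness(tags):
    """Aggregate the concepts and relations a document is about from its tags."""
    concepts, relations = set(), set()
    for tag in tags:
        for s, p, o in tag["snippet"]:
            concepts.update((s, o))
            relations.add(p)
    return concepts, relations

print(aboutness([semantic_tag]))
```

     Unlike a bare keyword tag, such a snippet keeps the relations as well as the labels available to applications such as classification, question answering or semantic search.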
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  7. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.07
    0.07432193 = product of:
      0.09909591 = sum of:
        0.032909684 = weight(_text_:web in 1634) [ClassicSimilarity], result of:
          0.032909684 = score(doc=1634,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2039694 = fieldWeight in 1634, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.052789498 = weight(_text_:search in 1634) [ClassicSimilarity], result of:
          0.052789498 = score(doc=1634,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 1634, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.013396727 = product of:
          0.026793454 = sum of:
            0.026793454 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
              0.026793454 = score(doc=1634,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.15476047 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1634)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with the initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Series
    Special issue: Semantic search
    Theme
    Semantic Web
  8. Bast, H.; Bäurle, F.; Buchhold, B.; Haussmann, E.: Broccoli: semantic full-text search at your fingertips (2012) 0.07
    0.066199325 = product of:
      0.13239865 = sum of:
        0.093319535 = weight(_text_:search in 704) [ClassicSimilarity], result of:
          0.093319535 = score(doc=704,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.54307455 = fieldWeight in 704, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=704)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 704) [ClassicSimilarity], result of:
              0.07815824 = score(doc=704,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 704, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=704)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (32 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas, see: http://broccoli.informatik.uni-freiburg.de.
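     The abstract's own examples suggest how such a tree query might be represented; a minimal sketch follows (the data-structure names are ours for illustration, not Broccoli's actual query language):

```python
from dataclasses import dataclass, field
from typing import List

# A Broccoli-style query is a tree: nodes are bags of words, classes and
# instances; arcs are relations. Class names here are ours, for illustration.
@dataclass
class QueryArc:
    relation: str            # e.g. "occurs-with", "native-to"
    target: "QueryNode"

@dataclass
class QueryNode:
    items: List[str]                                   # words/classes/instances
    arcs: List[QueryArc] = field(default_factory=list)

# "edible plants that occur with Broccoli and are native to Europe"
query = QueryNode(["plants", "edible"], [
    QueryArc("occurs-with", QueryNode(["Broccoli"])),
    QueryArc("native-to", QueryNode(["Europe"])),
])
print(query.items, [arc.relation for arc in query.arcs])
```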
  9. Fernández, M.; Cantador, I.; López, V.; Vallet, D.; Castells, P.; Motta, E.: Semantically enhanced Information Retrieval : an ontology-based approach (2011) 0.06
    0.0633452 = product of:
      0.1266904 = sum of:
        0.05203478 = weight(_text_:web in 230) [ClassicSimilarity], result of:
          0.05203478 = score(doc=230,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.32250395 = fieldWeight in 230, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=230)
        0.07465562 = weight(_text_:search in 230) [ClassicSimilarity], result of:
          0.07465562 = score(doc=230,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.43445963 = fieldWeight in 230, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=230)
      0.5 = coord(2/4)
    
    Abstract
    Currently, techniques for content description and query processing in Information Retrieval (IR) are based on keywords, and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. Aiming to solve the limitations of keyword-based models, the idea of conceptual search, understood as searching by meanings rather than literal strings, has been the focus of a wide body of research in the IR field. More recently, it has been used as a prototypical scenario (or even envisioned as a potential "killer app") in the Semantic Web (SW) vision, since its emergence in the late nineties. However, current approaches to semantic search developed in the SW area have not yet taken full advantage of the acquired knowledge, accumulated experience, and technological sophistication achieved through several decades of work in the IR field. Starting from this position, this work investigates the definition of an ontology-based IR model, oriented to the exploitation of domain Knowledge Bases to support semantic search capabilities in large document repositories, stressing on the one hand the use of fully fledged ontologies in the semantic-based perspective, and on the other hand the consideration of unstructured content as the target search space. The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search. Additional contributions include: an innovative rank fusion technique that minimizes the undesired effects of knowledge sparseness on the yet juvenile SW, and the creation of a large-scale evaluation benchmark, based on TREC IR evaluation standards, which allows a rigorous comparison between IR and SW approaches. Conducted experiments show that our semantic search model obtained comparable and better performance results (in terms of MAP and P@10 values) than the best TREC automatic system.
    Series
    JWS special issue on Semantic Search
    Source
    Web semantics: science, services and agents on the World Wide Web. 9(2011) no.4, S.434-452
    Theme
    Semantic Web
  10. Lee, J.; Min, J.-K.; Oh, A.; Chung, C.-W.: Effective ranking and search techniques for Web resources considering semantic relationships (2014) 0.05
    0.05356199 = product of:
      0.10712398 = sum of:
        0.041137107 = weight(_text_:web in 2670) [ClassicSimilarity], result of:
          0.041137107 = score(doc=2670,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 2670, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2670)
        0.06598687 = weight(_text_:search in 2670) [ClassicSimilarity], result of:
          0.06598687 = score(doc=2670,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3840117 = fieldWeight in 2670, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2670)
      0.5 = coord(2/4)
    
    Abstract
     On the Semantic Web, the types of resources and the semantic relationships between resources are defined in an ontology. By using that information, the accuracy of information retrieval can be improved. In this paper, we present effective ranking and search techniques considering the semantic relationships in an ontology. Our technique retrieves the top-k resources which are the most relevant to the query keywords through the semantic relationships. To do this, we propose a weighting measure for the semantic relationship. Based on this measure, we propose a novel ranking method which considers the number of meaningful semantic relationships between a resource and keywords as well as the coverage and discriminating power of keywords. In order to improve the efficiency of the search, we prune the unnecessary search space using the length and weight thresholds of the semantic relationship path. In addition, we exploit the Threshold Algorithm based on an extended inverted index to answer top-k results efficiently. The experimental results using real data sets demonstrate that our retrieval method using the semantic information generates accurate results efficiently compared to the traditional methods.
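     As an illustration of the kind of path-based scoring described here, a toy sketch follows; the graph, weights and thresholds are invented for illustration and do not reproduce the authors' actual weighting measure:

```python
# Toy graph: resource -> [(relation weight, neighbour)], weights in (0, 1].
GRAPH = {
    "paperA": [(0.9, "OntologyMatching"), (0.6, "SemanticWeb")],
    "OntologyMatching": [(0.8, "Alignment")],
    "SemanticWeb": [(0.7, "RDF")],
}

def path_weight(weights):
    w = 1.0
    for x in weights:
        w *= x
    return w

def resource_score(start, keyword_nodes, max_len=2, min_weight=0.4):
    """Score a resource by the weights of the semantic-relationship paths that
    reach keyword-matching nodes, pruning long or low-weight paths."""
    score, stack = 0.0, [(start, [], {start})]
    while stack:
        node, weights, seen = stack.pop()
        if node in keyword_nodes and weights:
            score += path_weight(weights)
        if len(weights) == max_len:
            continue
        for w, nxt in GRAPH.get(node, []):
            if nxt in seen or path_weight(weights + [w]) < min_weight:
                continue                 # length / weight threshold pruning
            stack.append((nxt, weights + [w], seen | {nxt}))
    return score

print(resource_score("paperA", {"Alignment", "RDF"}))   # 0.72 + 0.42
```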
  11. Atanassova, I.; Bertin, M.: Semantic facets for scientific information retrieval (2014) 0.05
    0.053023666 = product of:
      0.10604733 = sum of:
        0.04072366 = weight(_text_:web in 4471) [ClassicSimilarity], result of:
          0.04072366 = score(doc=4471,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 4471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4471)
        0.06532367 = weight(_text_:search in 4471) [ClassicSimilarity], result of:
          0.06532367 = score(doc=4471,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.38015217 = fieldWeight in 4471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4471)
      0.5 = coord(2/4)
    
    Abstract
    We present an Information Retrieval System for scientific publications that provides the possibility to filter results according to semantic facets. We use sentence-level semantic annotations that identify specific semantic relations in texts, such as methods, definitions, hypotheses, that correspond to common information needs related to scientific literature. The semantic annotations are obtained using a rule-based method that identifies linguistic clues organized into a linguistic ontology. The system is implemented using Solr Search Server and offers efficient search and navigation in scientific papers.
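     Since the system is built on Solr, the semantic facets can be surfaced through ordinary Solr faceting; a minimal sketch of such a request is shown below (the core name, field names and URL are assumptions for illustration, not taken from the paper):

```python
import requests

# Hypothetical core, field names and URL; the paper's actual schema is not given.
SOLR = "http://localhost:8983/solr/papers/select"

params = {
    "q": "text:ontology",                    # full-text part of the query
    "fq": 'semantic_class:"definition"',     # drill down on one semantic facet
    "facet": "true",
    "facet.field": "semantic_class",         # e.g. definition, method, hypothesis
    "rows": 10,
    "wt": "json",
}

resp = requests.get(SOLR, params=params).json()
for doc in resp["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
# Facet counts drive the navigation sidebar:
print(resp["facet_counts"]["facet_fields"]["semantic_class"])
```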
    Source
    Semantic Web Evaluation Challenge. SemWebEval 2014 at ESWC 2014, Anissaras, Crete, Greece, May 25-29, 2014, Revised Selected Papers. Eds.: V. Presutti et al
  12. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.05
    0.048521113 = product of:
      0.097042225 = sum of:
        0.050382458 = weight(_text_:web in 5864) [ClassicSimilarity], result of:
          0.050382458 = score(doc=5864,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 5864, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.046659768 = weight(_text_:search in 5864) [ClassicSimilarity], result of:
          0.046659768 = score(doc=5864,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 5864, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
      0.5 = coord(2/4)
    
    Abstract
     Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
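     A minimal sketch of the kind of SPARQL request such an approach relies on, issued here with rdflib over a local file; the dataset, the seed resource and the use of dcterms:subject for SKOS concept annotations are assumptions, and the query simply ranks candidates by the number of shared concepts rather than reproducing the authors' similarity measures:

```python
from rdflib import Graph

# Assumptions: items.ttl is a local copy of the annotated items, and resources
# carry their SKOS concepts via dcterms:subject; the seed URI is made up.
g = Graph()
g.parse("items.ttl", format="turtle")

QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?other (COUNT(?concept) AS ?shared)
WHERE {
  <http://example.org/item/seed> dcterms:subject ?concept .
  ?other dcterms:subject ?concept .
  FILTER (?other != <http://example.org/item/seed>)
}
GROUP BY ?other
ORDER BY DESC(?shared)
LIMIT 10
"""

for other, shared in g.query(QUERY):
    print(other, shared)   # candidate items ranked by shared SKOS concepts
```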
    Content
     Cf.: https://www.researchgate.net/publication/333358714_Similarity-based_knowledge_graph_queries_for_recommendation_retrieval. See also: http://semantic-web-journal.net/content/similarity-based-knowledge-graph-queries-recommendation-retrieval-1.
    Source
    Semantic Web. 10(2019) 6, S.1007-1037
  13. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.04
    0.044160463 = product of:
      0.088320926 = sum of:
        0.07659879 = weight(_text_:search in 1633) [ClassicSimilarity], result of:
          0.07659879 = score(doc=1633,freq=22.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.44576794 = fieldWeight in 1633, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1633)
        0.011722136 = product of:
          0.023444273 = sum of:
            0.023444273 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
              0.023444273 = score(doc=1633,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.1354154 = fieldWeight in 1633, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1633)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Purpose - The purpose of this paper is to improve conceptual-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and the performance of semantic information retrieval is evaluated through the standard measures of precision and recall. Higher precision indicates that the documents obtained are (meaningfully) relevant, and lower recall indicates less coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, in our approach, we focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of the ontology-based search. The aforementioned tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of concepts that are being populated in the index. Here, we introduce a new measure, called the index enhancement measure, to estimate the coverage of ontological concepts being indexed. We have evaluated the ontology-based search for the tourism domain with the tourism documents and tourism-specific ontology. The comparison of search results based on the use of the ontology "with and without query expansion" is examined to estimate the efficiency of the proposed query expansion task. The ranking is compared with the ORank system to evaluate the performance of our ontology-based search. From these analyses, the ontology-based search results show better recall when compared to the other concept-based search systems. The mean average precision of the ontology-based search is found to be 0.79 and the recall is found to be 0.65, the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, the concept cannot be indexed. When the given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, we incorporate the concepts present at the same level (siblings) into the ontological index. The structural information from the ontology is determined for the query expansion. The ranking of the documents depends on the type of the query (single concept query, multiple concept queries and concept with relation queries) and the ontological relations that exist in the query and the documents. With this ontological structural information, the search results showed better coverage of concepts with respect to the query.
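     A toy sketch of the expansion step described above (a made-up hierarchy, not the authors' tourism ontology): each query term is expanded with its super-, sub- and sibling concepts from the index.

```python
# Toy hierarchy: concept -> direct super-concept (None for the root).
PARENT = {
    "accommodation": None,
    "hotel": "accommodation",
    "hostel": "accommodation",
    "resort": "accommodation",
}

def expand(term):
    """Expand a query term with its super-, sub- and sibling concepts."""
    supers = {PARENT[term]} - {None} if term in PARENT else set()
    subs = {c for c, p in PARENT.items() if p == term}
    siblings = {c for c, p in PARENT.items()
                if p is not None and p == PARENT.get(term) and c != term}
    return {term} | supers | subs | siblings

print(expand("hotel"))          # {'hotel', 'accommodation', 'hostel', 'resort'}
print(expand("accommodation"))  # {'accommodation', 'hotel', 'hostel', 'resort'}
```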
    Date
    20. 1.2015 18:30:22
  14. Koopman, B.; Zuccon, G.; Bruza, P.; Sitbon, L.; Lawley, M.: Information retrieval as semantic inference : a graph Inference model applied to medical search (2016) 0.04
    0.043962162 = product of:
      0.087924324 = sum of:
        0.023270661 = weight(_text_:web in 3260) [ClassicSimilarity], result of:
          0.023270661 = score(doc=3260,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.14422815 = fieldWeight in 3260, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3260)
        0.064653665 = weight(_text_:search in 3260) [ClassicSimilarity], result of:
          0.064653665 = score(doc=3260,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.37625307 = fieldWeight in 3260, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=3260)
      0.5 = coord(2/4)
    
    Abstract
    This paper presents a Graph Inference retrieval model that integrates structured knowledge resources, statistical information retrieval methods and inference in a unified framework. Key components of the model are a graph-based representation of the corpus and retrieval driven by an inference mechanism achieved as a traversal over the graph. The model is proposed to tackle the semantic gap problem-the mismatch between the raw data and the way a human being interprets it. We break down the semantic gap problem into five core issues, each requiring a specific type of inference in order to be overcome. Our model and evaluation is applied to the medical domain because search within this domain is particularly challenging and, as we show, often requires inference. In addition, this domain features both structured knowledge resources as well as unstructured text. Our evaluation shows that inference can be effective, retrieving many new relevant documents that are not retrieved by state-of-the-art information retrieval models. We show that many retrieved documents were not pooled by keyword-based search methods, prompting us to perform additional relevance assessment on these new documents. A third of the newly retrieved documents judged were found to be relevant. Our analysis provides a thorough understanding of when and how to apply inference for retrieval, including a categorisation of queries according to the effect of inference. The inference mechanism promoted recall by retrieving new relevant documents not found by previous keyword-based approaches. In addition, it promoted precision by an effective reranking of documents. When inference is used, performance gains can generally be expected on hard queries. However, inference should not be applied universally: for easy, unambiguous queries and queries with few relevant documents, inference did adversely affect effectiveness. These conclusions reflect the fact that for retrieval as inference to be effective, a careful balancing act is involved. Finally, although the Graph Inference model is developed and applied to medical search, it is a general retrieval model applicable to other areas such as web search, where an emerging research trend is to utilise structured knowledge resources for more effective semantic search.
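     A toy sketch of retrieval as traversal over a corpus graph, in the spirit described above; the graph, edge weights and decay factor are invented for illustration and are not the paper's model:

```python
from collections import defaultdict

# Toy corpus graph: term and document nodes linked by weighted edges
# (term-in-document, term-related-to-term). Invented for illustration.
EDGES = {
    "myocardial infarction": [("heart attack", 0.9), ("doc1", 1.0)],
    "heart attack": [("doc2", 1.0)],
}

def traverse(query_terms, decay=0.5, max_hops=2):
    """Score documents by walking outward from the query terms,
    discounting every additional inference hop by a decay factor."""
    scores = defaultdict(float)
    frontier = [(term, 1.0, 0) for term in query_terms]
    while frontier:
        node, weight, hops = frontier.pop()
        if node.startswith("doc"):
            scores[node] += weight        # reached a document node
            continue
        if hops == max_hops:
            continue
        for nxt, w in EDGES.get(node, []):
            hop_decay = 1.0 if nxt.startswith("doc") else decay
            frontier.append((nxt, weight * w * hop_decay, hops + 1))
    return dict(scores)

# doc1 is matched directly; doc2 only via the inferred 'heart attack' hop.
print(traverse(["myocardial infarction"]))
```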
  15. Saruladha, K.; Aghila, G.; Penchala, S.K.: Design of new indexing techniques based on ontology for information retrieval systems (2010) 0.04
    0.043898437 = product of:
      0.087796874 = sum of:
        0.041137107 = weight(_text_:web in 4317) [ClassicSimilarity], result of:
          0.041137107 = score(doc=4317,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 4317, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4317)
        0.046659768 = weight(_text_:search in 4317) [ClassicSimilarity], result of:
          0.046659768 = score(doc=4317,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 4317, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4317)
      0.5 = coord(2/4)
    
    Abstract
     Information Retrieval [IR] is the science of searching for documents, for information within documents, and for metadata about documents, as well as that of searching relational databases and the World Wide Web. This paper describes a document representation method that uses ontological descriptors instead of keywords. The purpose of this paper is to propose a system for content-based querying of texts based on the availability of an ontology for the concepts in the text domain, and to develop new indexing methods to improve the retrieval status value (RSV). There is a need for querying ontologies at various granularities to retrieve information from various sources, to suit the requirements of the Semantic Web and to eradicate the mismatch between user requests and the responses from the information retrieval system. Most search engines use indexes that are built at the syntactical level and return hits based on simple string comparisons. The indexes do not contain synonyms, cannot differentiate between homonyms, and users receive different search results when they use different conjugation forms of the same word.
  16. Netto, C.M.; Borém de Oliveira Lima, G.A.; Pierozzi Júnior, I.: ¬An application of facet analysis theory and concept maps for faceted search in a domain ontology : preliminary studies (2016) 0.04
    0.043898437 = product of:
      0.087796874 = sum of:
        0.041137107 = weight(_text_:web in 2966) [ClassicSimilarity], result of:
          0.041137107 = score(doc=2966,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 2966, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2966)
        0.046659768 = weight(_text_:search in 2966) [ClassicSimilarity], result of:
          0.046659768 = score(doc=2966,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 2966, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2966)
      0.5 = coord(2/4)
    
    Abstract
     This paper presents partial results of a research project, still in development, that aims to study facet analysis theory and concept maps for faceted search in a domain ontology. The research shows a solution that provides abstraction levels for users in order to retrieve information from the domain area represented in the ontology. The problem is considered a challenge because the formal computational structure of the semantic description of an ontology is not feasible from the point of view of its users' cognition. This paper presents results obtained using a web tool prototype for faceted navigation in a sample ontology that was created for the organization of domain knowledge regarding the impact of agriculture and climatic changes on water resources. The results show the feasibility of navigation and information retrieval in the ontology using the faceted web prototype. It is believed that, through this study, a computational solution can be developed that facilitates the creation of a conceptual model for faceted and concept-map navigation of the area represented by the ontology, so that human learning in the domain can be assessed, as well as the recovery and sharing of information among user groups.
  17. Almeida, M.B.: Revisiting ontologies : a necessary clarification (2013) 0.04
    0.04324353 = product of:
      0.08648706 = sum of:
        0.03959212 = weight(_text_:search in 1010) [ClassicSimilarity], result of:
          0.03959212 = score(doc=1010,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 1010, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=1010)
        0.04689494 = product of:
          0.09378988 = sum of:
            0.09378988 = weight(_text_:engine in 1010) [ClassicSimilarity], result of:
              0.09378988 = score(doc=1010,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.35462496 = fieldWeight in 1010, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1010)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Looking for ontology in a search engine, one can find so many different approaches that it can be difficult to understand which field of research the subject belongs to and how it can be useful. The term ontology is employed within philosophy, computer science, and information science with different meanings. To take advantage of what ontology theories have to offer, one should understand what they address and where they come from. In information science, except for a few papers, there is no initiative toward clarifying what ontology really is and the connections that it fosters among different research fields. This article provides such a clarification. We begin by revisiting the meaning of the term in its original field, philosophy, to reach its current use in other research fields. We advocate that ontology is a genuine and relevant subject of research in information science. Finally, we conclude by offering our view of the opportunities for interdisciplinary research.
  18. Gray, A.J.G.; Gray, N.; Hall, C.W.; Ounis, I.: Finding the right term : retrieving and exploring semantic concepts in astronomical vocabularies (2010) 0.04
    0.043117315 = product of:
      0.08623463 = sum of:
        0.029088326 = weight(_text_:web in 4235) [ClassicSimilarity], result of:
          0.029088326 = score(doc=4235,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 4235, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4235)
        0.057146307 = weight(_text_:search in 4235) [ClassicSimilarity], result of:
          0.057146307 = score(doc=4235,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.33256388 = fieldWeight in 4235, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4235)
      0.5 = coord(2/4)
    
    Abstract
    Astronomy, like many domains, already has several sets of terminology in general use, referred to as controlled vocabularies. For example, the keywords for tagging journal articles, or the taxonomy of terms used to label image files. These existing vocabularies can be encoded into skos, a W3C proposed recommendation for representing vocabularies on the Semantic Web, so that computer systems can help users to search for and discover resources tagged with vocabulary concepts. However, this requires a search mechanism to go from a user-supplied string to a vocabulary concept. In this paper, we present our experiences in implementing the Vocabulary Explorer, a vocabulary search service based on the Terrier Information Retrieval Platform. We investigate the capabilities of existing document weighting models for identifying the correct vocabulary concept for a query. Due to the highly structured nature of a skos encoded vocabulary, we investigate the effects of term weighting (boosting the score of concepts that match on particular fields of a vocabulary concept), and query expansion. We found that the existing document weighting models provided very high quality results, but these could be improved further with the use of term weighting that makes use of the semantic evidence.
  19. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014) 0.04
    0.043117315 = product of:
      0.08623463 = sum of:
        0.029088326 = weight(_text_:web in 1369) [ClassicSimilarity], result of:
          0.029088326 = score(doc=1369,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 1369, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
        0.057146307 = weight(_text_:search in 1369) [ClassicSimilarity], result of:
          0.057146307 = score(doc=1369,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.33256388 = fieldWeight in 1369, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
      0.5 = coord(2/4)
    
    Abstract
     So far, within the library and information science (LIS) community, knowledge organization (KO) has developed its own very successful solutions to document search, allowing for the classification, indexing and search of millions of books. However, current KO solutions are limited in expressivity as they only support queries by document properties, e.g., by title, author and subject. In parallel, within the artificial intelligence and semantic web communities, knowledge representation (KR) has developed very powerful and expressive techniques, which, via the use of ontologies, support queries by any entity property (e.g., the properties of the entities described in a document). However, KR has not yet scaled to the level of KO, mainly because of the lack of a precise and scalable entity specification methodology. In this paper we present DERA, a new methodology inspired by the faceted approach, as introduced in KO, that retains all the advantages of KR and compensates for the limitations of KO. DERA guarantees at the same time quality, extensibility, scalability and effectiveness in search.
  20. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.04
    0.041697998 = product of:
      0.083395995 = sum of:
        0.057001244 = weight(_text_:web in 4639) [ClassicSimilarity], result of:
          0.057001244 = score(doc=4639,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35328537 = fieldWeight in 4639, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.026394749 = weight(_text_:search in 4639) [ClassicSimilarity], result of:
          0.026394749 = score(doc=4639,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.15360467 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
      0.5 = coord(2/4)
    
    Abstract
     This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that the applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
     We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.

Languages

  • e 98
  • d 13

Types

  • el 22
  • x 1