Search (56 results, page 1 of 3)

  • × theme_ss:"Wissensrepräsentation"
  • × type_ss:"a"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.14
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Priss, U.: Description logic and faceted knowledge representation (1999) 0.04
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
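
As a rough illustration of the broader facet notion described in this abstract (a sketch added for this listing, not Priss's formalism), the following Python fragment describes a resource by independent facets, each with its own small hierarchy of abstraction levels; all facet names and hierarchies are invented.

```python
# Minimal sketch of a faceted description: each facet is an independent viewpoint,
# and a value may sit at different levels of abstraction in a small hierarchy.
# Facet names and hierarchies below are invented for illustration only.

FACET_HIERARCHIES = {
    "discipline": {"computer science": None, "knowledge representation": "computer science"},
    "method":     {"logic": None, "description logic": "logic"},
}

def broader_chain(facet, value):
    """Return the value and all of its broader terms within one facet."""
    chain = []
    while value is not None:
        chain.append(value)
        value = FACET_HIERARCHIES[facet].get(value)
    return chain

# A resource is synthesized from one value per facet (modular description).
resource = {"discipline": "knowledge representation", "method": "description logic"}

def matches(resource, query):
    # A query matches if, for every facet it constrains, the resource's value
    # equals the query value or is a narrower term of it.
    return all(q in broader_chain(f, resource[f]) for f, q in query.items())

print(matches(resource, {"discipline": "computer science"}))                      # True
print(matches(resource, {"method": "logic", "discipline": "computer science"}))   # True
```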
    Date
    22. 1.2016 17:30:31
  3. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.03
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object-oriented programming and then tested with two textbooks from different domains: astronomy and molecular biology.
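
The following is a hedged sketch of the kind of unsupervised term extraction this abstract alludes to, not the actual LiTeWi pipeline: candidate words from a textbook are ranked by how much more frequent they are there than in a generic reference corpus standing in for Wikipedia; the texts and the simple frequency-ratio measure are toy assumptions.

```python
# Toy "termhood" ranking: compare relative frequency of candidate words in a
# domain text (a textbook) against a generic reference corpus (standing in for
# Wikipedia). Illustrative sketch only, not the LiTeWi algorithm.
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

textbook = "A class encapsulates state and behaviour. Objects are instances of a class."
reference = "The weather was fine. People walked in the park and talked about the news."

dom = Counter(tokens(textbook))
ref = Counter(tokens(reference))
dom_total, ref_total = sum(dom.values()), sum(ref.values())

def termhood(word, smoothing=1e-6):
    # Ratio of relative frequencies; high values suggest domain-specific terms.
    return (dom[word] / dom_total) / (ref[word] / ref_total + smoothing)

candidates = sorted(dom, key=termhood, reverse=True)
print(candidates[:5])   # the words most distinctive of the toy textbook
```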
    Date
    22. 1.2016 12:38:14
  4. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.03
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts that allows searching and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built up from two engines: the first one, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation, related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on the Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the previously produced semantically annotated texts to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are expressed as supplementary information.
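
A minimal sketch of the second engine's idea, a semantic inverted index keyed by (discourse category, term) pairs; the categories, documents and functions below are invented for illustration and are not the EXCOM implementation.

```python
# Sketch of a semantic inverted index: postings are keyed by a (category, term)
# pair produced by an annotation step, so a query can ask for documents where a
# term occurs inside a given discourse category (definition, causality, ...).
# Categories and documents are invented.
from collections import defaultdict

# Output of a hypothetical annotation engine: (doc_id, category, sentence)
annotated = [
    (1, "definition", "an ontology is a formal specification of a conceptualization"),
    (2, "causality",  "ambiguity causes poor retrieval precision"),
    (2, "definition", "retrieval precision is the fraction of retrieved documents that are relevant"),
]

index = defaultdict(set)
for doc_id, category, sentence in annotated:
    for term in sentence.split():
        index[(category, term)].add(doc_id)

def search(category, term):
    return sorted(index.get((category, term), set()))

print(search("definition", "ontology"))    # [1]
print(search("causality", "precision"))    # [2]
```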
    Footnote
    Cf.: http://www.igi-global.com/book/next-generation-search-engines/64423.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  5. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.02
    Footnote
    Cf.: http://www.igi-global.com/book/next-generation-search-engines/64424.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  6. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.02
    Abstract
    As an extension to the current Web, Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
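
A rough sketch of the hybrid-query idea, intersecting a structured posting list (entities of a given type) with a keyword posting list; the toy data and the plain set intersection stand in for Semplore's actual ranked-list merging and are assumptions made for this listing.

```python
# Hybrid query sketch: intersect a "structured" posting list (entities of a given
# type) with a "keyword" posting list (entities whose text mentions a word).
# Data and scoring are invented; real systems merge ranked lists, not plain sets.

type_index = {          # structured side: type -> entity ids
    "City": {1, 2},
    "Person": {3},
}
text_index = {          # keyword side: term -> entity ids
    "river": {1, 3},
    "harbour": {2},
}

def hybrid_query(entity_type, keyword):
    return sorted(type_index.get(entity_type, set()) & text_index.get(keyword, set()))

print(hybrid_query("City", "river"))   # [1]: cities whose description mentions "river"
```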
  7. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.02
    Abstract
    Today's conventional search engines hardly provide content that is truly relevant to the user's search query, because the context and semantics of the user's request are not analyzed to their full extent. Hence the need for semantic web search (SWS), an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work presented here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it operates one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results that are more relevant to the user query through keyword expansion. The results obtained are accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm is applied that fetches more apt results for the user query.
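
To make the keyword-expansion step concrete, a hedged sketch (not the SIEU code): each query term is expanded with related terms from a small hand-made ontology before being handed to a conventional engine; the ontology content is invented.

```python
# Sketch of ontology-driven query expansion: each query term is replaced by the
# set of itself plus synonyms/narrower terms taken from a toy university ontology.
toy_ontology = {
    "lecturer": {"professor", "faculty", "instructor"},
    "course":   {"module", "class", "subject"},
}

def expand_query(query):
    expanded = []
    for term in query.lower().split():
        expanded.append(" OR ".join(sorted({term} | toy_ontology.get(term, set()))))
    return " AND ".join(f"({part})" for part in expanded)

print(expand_query("lecturer course"))
# (faculty OR instructor OR lecturer OR professor) AND (class OR course OR module OR subject)
```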
  8. Wang, H.; Liu, Q.; Penin, T.; Fu, L.; Zhang, L.; Tran, T.; Yu, Y.; Pan, Y.: Semplore: a scalable IR approach to search the Web of Data (2009) 0.02
    Abstract
    The Web of Data keeps growing rapidly. However, the full exploitation of this large amount of structured data faces numerous challenges like usability, scalability, imprecise information needs and data change. We present Semplore, an IR-based system that aims at addressing these issues. Semplore supports intuitive faceted search and complex queries both on text and structured data. It combines imprecise keyword search and precise structured query in a unified ranking scheme. Scalable query processing is supported by leveraging inverted indexes traditionally used in IR systems. This is combined with a novel block-based index structure to support efficient index update when data changes. The experimental results show that Semplore is an efficient and effective system for searching the Web of Data and can be used as a basic infrastructure for Web-scale Semantic Web search engines.
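
A hedged sketch of the block-based update idea mentioned above: new documents go into a small fresh block instead of forcing a rewrite of the whole index, and queries merge postings across blocks; block size and merge policy here are invented, not Semplore's.

```python
# Block-based inverted index sketch: updates append to the newest block;
# queries union postings over all blocks, so existing blocks are never rewritten.
# Sizes and policies are illustrative only.
class BlockIndex:
    def __init__(self, block_capacity=2):
        self.blocks = [{}]                 # list of term -> set(doc_id) maps
        self.block_capacity = block_capacity
        self.docs_in_current = 0

    def add(self, doc_id, text):
        if self.docs_in_current >= self.block_capacity:
            self.blocks.append({})         # start a new block instead of rewriting old ones
            self.docs_in_current = 0
        block = self.blocks[-1]
        for term in text.lower().split():
            block.setdefault(term, set()).add(doc_id)
        self.docs_in_current += 1

    def search(self, term):
        hits = set()
        for block in self.blocks:
            hits |= block.get(term, set())
        return sorted(hits)

idx = BlockIndex()
for i, doc in enumerate(["semantic web data", "faceted search", "web of data"], start=1):
    idx.add(i, doc)
print(idx.search("web"), len(idx.blocks))   # [1, 3] and 2 blocks
```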
  9. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.02
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may be easily adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
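
A small, hedged illustration of the SKOS-XL data that a tool like iQvoc manages (not iQvoc code, which is Ruby on Rails): one concept with a reified extended label, built with the Python library rdflib, chosen here only for the sketch; the example concept is invented.

```python
# Sketch: a SKOS-XL reified label for one concept, built with rdflib.
# The SKOS/SKOS-XL namespace URIs are the standard ones; the concept is invented.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
SKOSXL = Namespace("http://www.w3.org/2008/05/skos-xl#")
EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("skos", SKOS); g.bind("skosxl", SKOSXL); g.bind("ex", EX)

concept, label = EX["waterPollution"], EX["waterPollution_label_en"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((label, RDF.type, SKOSXL.Label))                                      # the label is a resource of its own,
g.add((label, SKOSXL.literalForm, Literal("water pollution", lang="en")))   # so it can carry its own metadata
g.add((concept, SKOSXL.prefLabel, label))

print(g.serialize(format="turtle"))
```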
  10. Köhler, J.; Philippi, S.; Specht, M.; Rüegg, A.: Ontology based text indexing and querying for the semantic web (2006) 0.02
    Abstract
    This publication shows how the gap between the HTML based internet and the RDF based vision of the semantic web might be bridged, by linking words in texts to concepts of ontologies. Most current search engines use indexes that are built at the syntactical level and return hits based on simple string comparisons. However, the indexes do not contain synonyms, cannot differentiate between homonyms ('mouse' as a pointing device vs. 'mouse' as an animal) and users receive different search results when they use different conjugation forms of the same word. In this publication, we present a system that uses ontologies and Natural Language Processing techniques to index texts, and thus supports word sense disambiguation and the retrieval of texts that contain equivalent words, by indexing them to concepts of ontologies. For this purpose, we developed fully automated methods for mapping equivalent concepts of imported RDF ontologies (for this prototype WordNet, SUMO and OpenCyc). These methods will thus allow the seamless integration of domain specific ontologies for concept based information retrieval in different domains. To demonstrate the practical workability of this approach, a set of web pages that contain synonyms and homonyms were indexed and can be queried via a search-engine-like query frontend. However, the ontology based indexing approach can also be used for other data mining applications such as text clustering, relation mining and for searching free text fields in biological databases. The ontology alignment methods and some of the text mining principles described in this publication are now incorporated into the ONDEX system http://ondex.sourceforge.net/.
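
A hedged sketch of concept-level indexing in the spirit of this abstract, using NLTK's WordNet interface rather than the imported RDF ontologies (WordNet, SUMO, OpenCyc) the paper works with; taking the first synset is a deliberate stand-in for real word-sense disambiguation.

```python
# Sketch: index documents by WordNet synset identifiers instead of raw strings,
# so synonyms map to the same key and homonyms can in principle be told apart.
# Requires: pip install nltk; nltk.download('wordnet').
from collections import defaultdict
from nltk.corpus import wordnet as wn

docs = {
    1: "the mouse ran across the field",
    2: "click the left mouse button",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        synsets = wn.synsets(word)
        # Taking synsets[0] is a placeholder for proper word-sense disambiguation.
        key = synsets[0].name() if synsets else word   # concept id, e.g. 'mouse.n.01'
        index[key].add(doc_id)

print(sorted(index.items())[:5])
```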
  11. Saruladha, K.; Aghila, G.; Penchala, S.K.: Design of new indexing techniques based on ontology for information retrieval systems (2010) 0.02
    Abstract
    Information Retrieval (IR) is the science of searching for documents, for information within documents, and for metadata about documents, as well as that of searching relational databases and the World Wide Web. This paper describes a document representation method that uses ontological descriptors instead of keywords. The purpose of this paper is to propose a system for content-based querying of texts based on the availability of an ontology for the concepts in the text domain, and to develop new indexing methods to improve the RSV (retrieval status value). There is a need for querying ontologies at various granularities to retrieve information from various sources to suit the requirements of the Semantic Web, and to eradicate the mismatch between user request and response from the information retrieval system. Most search engines use indexes that are built at the syntactical level and return hits based on simple string comparisons. The indexes do not contain synonyms, cannot differentiate between homonyms, and users receive different search results when they use different conjugation forms of the same word.
  12. Allocca, C.; Aquin, M.d'; Motta, E.: Impact of using relationships between ontologies to enhance the ontology search results (2012) 0.02
    Abstract
    Using semantic web search engines, such as Watson, Swoogle or Sindice, to find ontologies is a complex exploratory activity. It generally requires formulating multiple queries, browsing pages of results, and assessing the returned ontologies against each other to obtain a relevant and adequate subset of ontologies for the intended use. Our hypothesis is that at least some of the difficulties related to searching ontologies stem from the lack of structure in the search results, where ontologies that are implicitly related to each other are presented as disconnected and shown on different result pages. In earlier publications we presented a software framework, Kannel, which is able to automatically detect and make explicit relationships between ontologies in large ontology repositories. In this paper, we present a study that compares the use of the Watson ontology search engine with an extension, Watson+Kannel, which provides information regarding the various relationships occurring between the result ontologies. We evaluate Watson+Kannel by demonstrating through various indicators that explicit relationships between ontologies improve users' efficiency in ontology search, thus validating our hypothesis.
  13. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.02
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
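
A hedged sketch of one similarity-based retrieval step in the spirit of this abstract (not the authors' SPARQL-based query language): candidate items are ranked by Jaccard overlap of their SKOS concept annotations with a seed item; the annotation sets are invented.

```python
# Sketch: content-based recommendation over concept annotations.
# Each item is annotated with a set of SKOS concept identifiers (invented here);
# candidates are ranked by Jaccard similarity to a seed item.
annotations = {
    "movie:A": {"genre:roadmovie", "theme:friendship", "place:usa"},
    "movie:B": {"genre:roadmovie", "theme:friendship", "place:mexico"},
    "movie:C": {"genre:horror", "place:usa"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(seed, k=2):
    scores = {item: jaccard(annotations[seed], concepts)
              for item, concepts in annotations.items() if item != seed}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(recommend("movie:A"))   # movie:B (0.5) ranks above movie:C (0.25)
```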
  14. Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016) 0.01
    Abstract
    In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
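
The abstract mentions reading classes and properties of the OWL ontology through the Apache Jena API (Java); as a hedged sketch of that step, the fragment below lists classes and object properties of a tiny invented OWL snippet with the Python library rdflib, substituted here for Jena, and is not the authors' code.

```python
# Sketch: enumerate classes and object properties of a tiny OWL ontology with rdflib,
# analogous to the Jena-based extraction step described in the abstract.
from rdflib import Graph
from rdflib.namespace import RDF, OWL

ttl = """
@prefix :    <http://example.org/cmro#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
:Recording a owl:Class .
:Work      a owl:Class .
:performs  a owl:ObjectProperty .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

classes = sorted(str(s) for s in g.subjects(RDF.type, OWL.Class))
properties = sorted(str(s) for s in g.subjects(RDF.type, OWL.ObjectProperty))
print("classes:", classes)
print("object properties:", properties)
```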
  15. Andreas, H.: On frames and theory-elements of structuralism (2014) 0.01
    Abstract
    There are quite a few success stories illustrating philosophy's relevance to information science. One can cite, for example, Leibniz's work on a characteristica universalis and a corresponding calculus ratiocinator through which he aspired to reduce reasoning to calculating. It goes without saying that formal logic initiated research on decidability and computational complexity. But even beyond the realm of formal logic, philosophy has served as a source of inspiration for developments in information and computer science. At the end of the twentieth century, formal ontology emerged from a quest for a semantic foundation of information systems having a higher reusability than systems being available at the time. A success story that is less well documented is the advent of frame systems in computer science. Minsky is credited with having laid out the foundational ideas of such systems. There, the logic programming approach to knowledge representation is criticized by arguing that one should be more careful about the way human beings recognize objects and situations. Notably, the paper draws heavily on the writings of Kuhn and the Gestalt-theorists. It is not our intent, however, to document the traces of the frame idea in the works of philosophers. What follows is, rather, an exposition of a methodology for representing scientific knowledge that is essentially frame-like. This methodology is labelled as structuralist theory of science or, in short, as structuralism. The frame-like character of its basic meta-theoretical concepts makes structuralism likely to be useful in knowledge representation.
  16. Hauer, M.: Mehrsprachige semantische Netze leichter entwickeln (2002) 0.01
    Abstract
    For 16 years now, AGI - Information Management Consultants have supplied software for developing thesauri and classifications, formerly called INDEX and redeveloped over the past two and a half years as IC INDEX. Such terminologies are often also referred to as glossaries, lexicons, Topic Maps, RDF, semantic networks, classification schemes, filing plans or nomenclatures. The software has always allowed such terminological works to be set up multilingually, but there were no dedicated tools to ease translation. Globalization increasingly makes specialized terminologies multilingual, as ongoing projects show. IC INDEX 5.08 therefore implements a dedicated translation workflow that processes word fields and, largely automatically but under the translator's control, creates the correct links between the terms in the other languages. This workflow alone speeds up translation work considerably. But it can be even faster: Linguatec's eTranslation Server automatically generates translation suggestions for German/English and German/French, with German/Spanish and German/Italian to follow. Especially for multi-word terms, class labels and compounds, automatic translation shows its strength over dictionary lookup. Dictionary lookup is of course also implemented, both against the Linguatec dictionary and against any dictionary addressable via a URL. Every translation suggestion must be confirmed by the terminology developer. As part of quality control we tested against existing multilingual thesauri, with the result that the automatic suggestions were often identical to and almost always very close to the desired translation. Words that are no longer intelligible to averagely educated people also cause problems for machine translation, e.g. technical terms from medicine, chemistry and other sciences; a human translator without the relevant specialist training would be equally out of their depth here. So it does not work without subject and language competence, but with both it goes quite quickly. IC INDEX is based on Lotus Notes & Domino 5.08. Arbitrary relations between terms are allowed; the ANSI standards are implemented and supplemented with additional relations, and 26 relations ship with the product. Output conforming to Topic Maps or RDF (two closely related standards) will be developed on request. Output is available in HTML, XML, an attractive print version under MS Word 2000, and for various search engines. AGI - Information Management Consultants, Neustadt an der Weinstraße, have advised companies and organizations since 1983 in the field now known as knowledge management. Since 1994 they have supplied a comprehensive, highly integrative solution, "Information Center", within which IC INDEX is a standalone module supporting multilingual indexing and multilingual semantic retrieval. Linguatec, Munich, once emerged from IBM's linguistic research labs and is widely known for its Personal Translator.
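
A very rough sketch of the confirm-every-suggestion workflow described above (nothing here is IC INDEX or Linguatec code; the suggestion function and data are placeholders): each term in a word field gets a machine suggestion, which becomes a cross-language link only after the terminology developer confirms it.

```python
# Sketch of a translation workflow for a word field: propose, review, link.
# suggest_translation() is a placeholder for a machine-translation or dictionary call.
def suggest_translation(term, target_lang):
    toy_mt = {("Wasserverschmutzung", "en"): "water pollution"}   # invented lookup
    return toy_mt.get((term, target_lang), f"<no suggestion for {term}>")

def translate_word_field(terms, target_lang, confirm):
    links = {}
    for term in terms:
        suggestion = suggest_translation(term, target_lang)
        accepted = confirm(term, suggestion)      # every suggestion needs human approval
        if accepted is not None:
            links[term] = accepted                # link source term to confirmed target term
    return links

# Non-interactive stand-in for the terminology developer: accept suggestions as-is.
print(translate_word_field(["Wasserverschmutzung"], "en", lambda t, s: s))
```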
  17. Davies, J.; Weeks, R.; Krohn, U.: QuizRDF: search technology for the Semantic Web (2004) 0.01
    Abstract
    Important information is often scattered across Web and/or intranet resources. Traditional search engines return ranked retrieval lists that offer little or no information on the semantic relationships among documents. Knowledge workers spend a substantial amount of their time browsing and reading to find out how documents are related to one another and where each falls into the overall structure of the problem domain. Yet only when knowledge workers begin to locate the similarities and differences among pieces of information do they move into an essential part of their work: building relationships to create new knowledge. Information retrieval traditionally focuses on the relationship between a given query (or user profile) and the information store. On the other hand, exploitation of interrelationships between selected pieces of information (which can be facilitated by the use of ontologies) can put otherwise isolated information into a meaningful context. The implicit structures so revealed help users use and manage information more efficiently. Knowledge management tools are needed that integrate the resources dispersed across Web resources into a coherent corpus of interrelated information. Previous research in information integration has largely focused on integrating heterogeneous databases and knowledge bases, which represent information in a highly structured way, often by means of formal languages. In contrast, the Web consists to a large extent of unstructured or semi-structured natural language texts. As we have seen, ontologies offer an alternative way to cope with heterogeneous representations of Web resources. The domain model implicit in an ontology can be taken as a unifying structure for giving information a common representation and semantics. Once such a unifying structure exists, it can be exploited to improve browsing and retrieval performance in information access tools. QuizRDF is an example of such a tool.
  18. Sy, M.-F.; Ranwez, S.; Montmain, J.; Ragnault, A.; Crampes, M.; Ranwez, V.: User centered and ontology based information retrieval system for life sciences (2012) 0.01
    Abstract
    Background: Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings is the basis of biomedical publications indexation and the information retrieval process proposed by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources, and no explanation for their adequacy to the query is provided. Users may thus be confused by the selection and have no idea how to adapt their queries so that the results match their expectations. Results: This paper describes an information retrieval system that relies on domain ontology to widen the set of relevant documents that is retrieved and that uses a graphical rendering of query results to favor user interactions. Semantic proximities between ontology concepts and aggregating models are used to assess documents' adequacy with respect to a query. The selection of documents is displayed in a semantic map that provides graphical indications of the extent to which they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus by facilitating query concept weighting and visual explanation. We illustrate the benefit of using this information retrieval system on two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway. Conclusions: The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user-centred application in which the system highlights relevant information to provide decision help.
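
A hedged sketch of the adequacy-scoring idea described in the abstract (not the OBIRS implementation): each weighted query concept contributes the best semantic proximity it reaches among a document's annotation concepts, and the weighted contributions are averaged; the proximity table and weights are invented.

```python
# Sketch: score a document against a weighted concept query by aggregating
# pairwise concept proximities. Proximities and weights are invented numbers.
proximity = {                       # symmetric concept-to-concept proximity in [0, 1]
    ("transcription factor", "gene regulation"): 0.7,
    ("hemopoiesis", "blood cell formation"): 0.9,
}

def prox(a, b):
    return proximity.get((a, b)) or proximity.get((b, a)) or (1.0 if a == b else 0.0)

def adequacy(query_weights, doc_concepts):
    # Weighted average of, for each query concept, its best match among the document's concepts.
    total_weight = sum(query_weights.values())
    score = sum(w * max(prox(q, d) for d in doc_concepts)
                for q, w in query_weights.items())
    return score / total_weight

query = {"transcription factor": 2.0, "hemopoiesis": 1.0}
doc = {"gene regulation", "blood cell formation"}
print(round(adequacy(query, doc), 3))   # (2*0.7 + 1*0.9) / 3 = 0.767
```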
  19. Silva, S.E.; Reis, L.P.; Fernandes, J.M.; Sester Pereira, A.D.: ¬A multi-layer framework for semantic modeling (2020) 0.01
    Abstract
    Purpose: The purpose of this paper is to introduce a multi-level framework for semantic modeling (MFSM) based on four signification levels: objects, classes of entities, instances and domains. In addition, four fundamental propositions of the signification process underpin these levels, namely, classification, decomposition, instantiation and contextualization. Design/methodology/approach: The deductive approach guided the design of this modeling framework. The authors empirically validated the MFSM in two ways. First, the authors identified the signification processes used in articles that deal with semantic modeling. The authors then applied the MFSM to model the semantic context of the literature about lean manufacturing, a field of management science. Findings: The MFSM presents a highly consistent approach to the signification process, integrates the semantic modeling literature into a new and comprehensive view, and permits the modeling of any semantic context, thus facilitating the development of knowledge organization systems based on semantic search. Research limitations/implications: The use of the MFSM is manual and thus requires considerable effort from the team that decides to model a semantic context. In this paper the modeling was carried out by specialists; in the future it should be made applicable to lay users. Practical implications: The MFSM opens up avenues for a new form of document classification, for the development of tools based on semantic search, and for investigating how users conduct their searches. Social implications: The MFSM can be used to model archives semantically in public or private settings. In the future it can be incorporated into search engines for more efficient user searches. Originality/value: The MFSM provides a new and comprehensive approach to the elementary levels and activities in the process of signification. In addition, this new framework presents a new way to semantically model any context by classifying its objects.
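
To make the four signification levels a little more tangible, a minimal data-structure sketch, offered as one possible reading rather than the authors' formalization: a domain contextualizes classes of entities, classes are instantiated by instances, and instances decompose into objects; the lean-manufacturing example values are invented.

```python
# Minimal sketch of the four MFSM levels as linked records (one possible reading,
# for illustration only): objects, instances, classes of entities, domains.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Object:            # lowest level: an elementary object
    name: str

@dataclass
class Instance:          # a concrete instance, decomposed into objects
    name: str
    objects: List[Object] = field(default_factory=list)

@dataclass
class EntityClass:       # a class of entities, instantiated by instances
    name: str
    instances: List[Instance] = field(default_factory=list)

@dataclass
class Domain:            # a domain contextualizes classes of entities
    name: str
    classes: List[EntityClass] = field(default_factory=list)

kanban = Instance("kanban board", [Object("card"), Object("column")])
lean = Domain("lean manufacturing", [EntityClass("visual management tool", [kanban])])
print(lean.classes[0].instances[0].objects[0].name)   # card
```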
  20. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    Pages
    pp. 11-22

Languages

  • English 47
  • German 8