Search (210 results, page 1 of 11)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.07
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; the faceted relations, however, are parent-to-child links, whereas the hypernym relation is multi-hop, i.e., an ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy (a toy sketch of such an acyclicity check follows this record).
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
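    An illustrative sketch (not the authors' code) of the acyclicity check a hierarchy growth step needs: a candidate parent-to-child edge inferred from synonym, sibling or ancestor-descendant evidence is rejected if the parent is already reachable from the child. All names and edges below are invented.

    # Hypothetical sketch: accept parent->child links only if the graph stays acyclic.
    from collections import defaultdict

    def reaches(graph, start, target):
        """Depth-first check whether `target` is reachable from `start`."""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(graph.get(node, ()))
        return False

    def grow_hierarchy(candidate_edges):
        """candidate_edges: iterable of (parent, child) pairs, e.g. inferred
        from synonym, sibling and ancestor-descendant relations."""
        children = defaultdict(list)
        for parent, child in candidate_edges:
            # Adding parent->child would close a cycle if parent is already
            # reachable from child; such conflicting candidates are skipped.
            if reaches(children, child, parent):
                continue
            children[parent].append(child)
        return children

    edges = [("classification", "metric"), ("metric", "precision"),
             ("precision", "classification")]   # the last edge would close a cycle
    print(dict(grow_hierarchy(edges)))          # {'classification': ['metric'], 'metric': ['precision']}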
  2. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.07
    Content
    One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Expanded by: Börner, Katy: Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Science / Atlases
    Science / Study and teaching / Graphic methods
    Communication in science / Data processing
    Subject
    Science / Atlases
    Science / Study and teaching / Graphic methods
    Communication in science / Data processing
  3. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.07
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index the information in such a way as to make it easier for end users to find what they want efficiently and accurately. Keyword-based indexing uses words as indexing terms. It is not capable of capturing the implicit relations among terms or the semantics of the words in the document. To overcome this limitation, ontology-based indexing came into existence, which allows semantic-based indexing to solve complex and indirect user queries. Ontologies are used for document indexing, which allows semantic-based information retrieval. Either existing ontologies or ones constructed from scratch are presently used for indexing. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge, whereas use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (their developers change them over time), and ontology heterogeneity also arises because the ontologies are constructed by different ontology developers. One possible solution to managing multiple ontologies and building from scratch is to use modular ontologies for indexing (a small sketch of concept-based indexing follows this record).
    Content
    Submitted to the Faculty of the Computer Science and Engineering Department of the University of Engineering and Technology Lahore in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science (2009 - 009-PhD-CS-04). Cf.: http://prr.hec.gov.pk/jspui/bitstream/123456789/8375/1/Taybah_Kiren_Computer_Science_HSR_2017_UET_Lahore_14.12.2017.pdf.
    Date
    20. 1.2015 18:30:22
    Imprint
    Lahore : University of Engineering and Technology / Department of Computer Science and Engineering
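    An illustrative sketch (not from the thesis) of ontology-based, concept-level indexing as described above: document terms are mapped to ontology concepts via a synonym table, and the inverted index is built over concepts rather than raw keywords. The synonym table and documents are invented.

    # Hypothetical sketch: build a concept -> documents index instead of a keyword index.
    from collections import defaultdict

    synonyms = {"svm": "support_vector_machine",
                "support vector machine": "support_vector_machine",
                "knn": "k_nearest_neighbours"}

    def to_concept(term):
        return synonyms.get(term.lower(), term.lower())

    def build_concept_index(docs):
        """docs: dict doc_id -> list of terms; returns concept -> set of doc_ids."""
        index = defaultdict(set)
        for doc_id, terms in docs.items():
            for term in terms:
                index[to_concept(term)].add(doc_id)
        return index

    docs = {"d1": ["SVM", "classifier"], "d2": ["support vector machine"]}
    index = build_concept_index(docs)
    print(index["support_vector_machine"])   # {'d1', 'd2'} - both documents, despite different wording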
  4. Wang, H.; Liu, Q.; Penin, T.; Fu, L.; Zhang, L.; Tran, T.; Yu, Y.; Pan, Y.: Semplore: a scalable IR approach to search the Web of Data (2009) 0.05
    Abstract
    The Web of Data keeps growing rapidly. However, the full exploitation of this large amount of structured data faces numerous challenges like usability, scalability, imprecise information needs and data change. We present Semplore, an IR-based system that aims at addressing these issues. Semplore supports intuitive faceted search and complex queries both on text and structured data. It combines imprecise keyword search and precise structured query in a unified ranking scheme. Scalable query processing is supported by leveraging inverted indexes traditionally used in IR systems. This is combined with a novel block-based index structure to support efficient index update when data changes. The experimental results show that Semplore is an efficient and effective system for searching the Web of Data and can be used as a basic infrastructure for Web-scale Semantic Web search engines.
    Content
    Cf.: http://www.sciencedirect.com/science/article/pii/S1570826809000262.
    Source
    Web semantics: science, services and agents on the World Wide Web. 7(2009) no.3, S.177-188
  5. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.05
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  6. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.04
    Abstract
    As an extension of the current Web, the Semantic Web will contain not only structured data with machine-understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to obtain a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index Semantic Web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner (a minimal sketch of the hybrid-query idea follows this record). We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
    Series
    Lecture notes in computer science; 4825
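    A minimal sketch, assuming nothing beyond the abstract, of the hybrid-query idea: structured facts (here a "type:" constraint) and keywords are both stored as posting lists in one inverted index, so a hybrid query reduces to intersecting postings. The entities, fields and data are invented; this is not Semplore's actual index layout.

    # Hypothetical sketch: one inverted index for structured and keyword postings.
    from collections import defaultdict

    index = defaultdict(set)

    def add_entity(entity, types, text):
        for t in types:
            index["type:" + t].add(entity)        # structured posting
        for word in text.lower().split():
            index["kw:" + word].add(entity)       # keyword posting

    def hybrid_query(type_constraint, keywords):
        result = set(index["type:" + type_constraint])
        for word in keywords:
            result &= index["kw:" + word.lower()]  # intersect posting lists
        return result

    add_entity("Berlin", ["City"], "capital of Germany on the river Spree")
    add_entity("Hamburg", ["City"], "port city on the river Elbe")
    print(hybrid_query("City", ["river", "Spree"]))   # {'Berlin'}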
  7. Frické, M.: Logic and the organization of information (2012) 0.04
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects-books, ebooks, journals, articles, web pages, images, emails, podcasts and more-in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
    Footnote
    Review in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic are a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius' book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
    LCSH
    Computer science
    Subject
    Computer science
  8. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.04
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, the design needs to consider different approaches and workflows. The paper is based on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, this approach takes more time due to interview planning and analysis. Practical implications The implication of the paper is, in the long run, to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design, using mainly interviews, in creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Footnote
    Contribution to a special issue: Showcasing Doctoral Research in Information Science.
  9. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.04
    Abstract
    Purpose - The purpose of this paper is to improve conceptual-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and the performance of semantic information retrieval is assessed through the standard measures of precision and recall. Higher precision means that the (meaningfully) relevant documents are obtained, and lower recall means less coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, in our approach, we focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of the ontology-based search. The aforementioned tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query (a minimal sketch of this sibling-aware expansion follows this record). Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of concepts that are populated in the index. Here, we introduce a new measure, called the index enhancement measure, to estimate the coverage of ontological concepts being indexed. We have evaluated the ontology-based search for the tourism domain with tourism documents and a tourism-specific ontology. The comparison of search results based on the use of the ontology "with and without query expansion" is examined to estimate the efficiency of the proposed query expansion task. The ranking is compared with the ORank system to evaluate the performance of our ontology-based search. From these analyses, the ontology-based search results show better recall when compared to the other concept-based search systems. The mean average precision of the ontology-based search is found to be 0.79 and the recall 0.65; the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, it cannot be indexed. When the given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, we incorporate the concepts present at the same level (siblings) into the ontological index. The structural information from the ontology is used for the query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and the ontological relations that exist in the query and the documents. With this ontological structural information, the search results showed better coverage of concepts with respect to the query.
    Date
    20. 1.2015 18:30:22
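    A minimal sketch, assuming a toy child-to-parent ontology, of the sibling-aware query expansion described above: a query concept is expanded with its super-concept, its sub-concepts and its siblings (other children of the same parent) before retrieval. The concept names are invented.

    # Hypothetical sketch: expand a query concept with super-, sub- and sibling concepts.
    ontology = {                      # child -> parent
        "accommodation": None,
        "hotel": "accommodation",
        "hostel": "accommodation",
        "guesthouse": "accommodation",
    }

    def expand(concept):
        parent = ontology.get(concept)
        subs = [c for c, p in ontology.items() if p == concept]
        siblings = [c for c, p in ontology.items()
                    if p is not None and p == parent and c != concept]
        expansion = {concept, *subs, *siblings}
        if parent:
            expansion.add(parent)
        return expansion

    print(expand("hotel"))   # {'hotel', 'hostel', 'guesthouse', 'accommodation'}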
  10. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.03
    Abstract
    The purpose of this work is to develop an ontology-based framework for building an information retrieval system that caters to specific queries of users. To create such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility. This becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model concerning the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was discovered that a faceted approach is really enduring, as it helps achieve properties such as navigation, exploration and faceted browsing. A computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
  11. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.03
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
    Source
    Philosophy, computing and information science. Eds.: R. Hagengruber u. U.V. Riss
  12. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.03
    Series
    Lecture notes in computer science; vol.4172
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  13. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.03
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  14. Green, R.: Relationships in the Dewey Decimal Classification (DDC) : plan of study (2008) 0.02
    Abstract
    EPC Exhibit 129-36.1 presented intermediate results of a project to connect Relative Index terms to topics associated with classes and to determine if those Relative Index terms approximated the whole of the corresponding class or were in standing room in the class. The Relative Index project constitutes the first stage of a long(er)-term project to instill a more systematic treatment of relationships within the DDC. The present exhibit sets out a plan of study for that long-term project.
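    An illustrative sketch of the relationship the study examines: each Relative Index term maps to a DDC class together with a flag saying whether the term approximates the whole class or only has standing room in it. The terms, class numbers and flags below are invented for illustration, not taken from the DDC.

    # Hypothetical sketch: Relative Index term -> class, with an "approximates whole" flag.
    from dataclasses import dataclass

    @dataclass
    class RelativeIndexEntry:
        term: str
        ddc_class: str
        approximates_whole: bool   # False = standing room only

    entries = [
        RelativeIndexEntry("Ontologies (Information retrieval)", "025.04", False),
        RelativeIndexEntry("Knowledge representation", "003.54", True),
    ]

    def terms_for_class(ddc_class, whole_only=False):
        return [e.term for e in entries
                if e.ddc_class == ddc_class and (e.approximates_whole or not whole_only)]

    print(terms_for_class("003.54", whole_only=True))   # ['Knowledge representation']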
  15. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.02
    Date
    1. 6.2010 9:55:22
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1144-1165
  16. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.02
    Series
    Lecture notes in computer science: Lecture notes in artificial intelligence ; 4604
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  17. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.02
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of applying topic map technology to automate the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures (a minimal sketch of the merge step follows this record).
    Date
    22. 2.2013 11:39:23
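    A minimal sketch, under invented data, of the topic-map merge step referred to above: topics that share a subject identifier are unified, and their names and associations are pooled, which is how two faceted structures become interlinked. This is an illustration of the general merge idea, not the MHTX implementation.

    # Hypothetical sketch: merge two topic maps on shared subject identifiers.
    def merge_topic_maps(map_a, map_b):
        """Each map: dict subject_identifier -> {"names": set, "associations": set}."""
        merged = {}
        for source in (map_a, map_b):
            for sid, topic in source.items():
                slot = merged.setdefault(sid, {"names": set(), "associations": set()})
                slot["names"] |= topic["names"]
                slot["associations"] |= topic["associations"]
        return merged

    map_a = {"facet:indexing": {"names": {"Indexing"},
                                "associations": {("narrower", "facet:subject-indexing")}}}
    map_b = {"facet:indexing": {"names": {"Indexação"},
                                "associations": {("used-in", "dissertation-2")}}}
    print(merge_topic_maps(map_a, map_b)["facet:indexing"]["names"])   # {'Indexing', 'Indexação'}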
  18. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: litewi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.02
    Date
    22. 1.2016 12:38:14
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.380-399
  19. Jansen, B.; Browne, G.M.: Navigating information spaces : index / mind map / topic map? (2021) 0.02
    Abstract
    This paper discusses the use of wiki technology to provide a navigation structure for a collection of newspaper clippings. We give an overview of the architecture of the wiki, discuss the navigation structure and pose the question: is the navigation structure an index, and if so, of what type, or is it just a linkage structure or topic map? Does such a distinction really matter? Are these definitions in reality function-based?
  20. Vickery, B.C.: Ontologies (1997) 0.02
    Abstract
    Discusses the emergence of the term 'ontology' in knowledge engineering (and now in information science), with a definition of the term as currently used. Ontology is the study of what exists and what must be assumed to exist in order to achieve a cogent description of reality. The term has seen extensive application in artificial intelligence. Describes the process of building an ontology and the uses of such tools in knowledge engineering. Concludes by comparing ontologies with similar tools used in information science.
    Source
    Journal of information science. 23(1997) no.4, S.277-286

Languages

  • e 186
  • d 18
  • pt 2
  • f 1

Types

  • a 163
  • el 38
  • m 19
  • x 14
  • s 9
  • p 4
  • n 2
  • r 1
