Search (474 results, page 1 of 24)

  • Filter: theme_ss:"Wissensrepräsentation" (knowledge representation)
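
    The decimal value following each entry is its relevance score. These scores follow Lucene's ClassicSimilarity (tf-idf) ranking: each matching query term contributes a tf-idf weight scaled by query- and field-normalisation factors, and the contributions are summed (with a coord factor when only some query terms match). A minimal Python sketch of how a single term's contribution is computed, assuming the standard ClassicSimilarity formula; the sample figures (a term occurring twice, idf 8.478011) are illustrative:

    import math

    def classic_similarity_term_score(tf, idf, field_norm, query_norm, coord=1.0):
        # Lucene ClassicSimilarity, per matching term:
        #   queryWeight = idf * queryNorm            (query-side normalisation)
        #   fieldWeight = sqrt(tf) * idf * fieldNorm (document-side weight)
        #   score contribution = queryWeight * fieldWeight * coord
        query_weight = idf * query_norm
        field_weight = math.sqrt(tf) * idf * field_norm
        return query_weight * field_weight * coord

    # Illustrative figures: tf=2, idf=8.478011, fieldNorm=0.046875, queryNorm=0.05218836
    print(classic_similarity_term_score(2.0, 8.478011, 0.046875, 0.05218836))  # ~0.2487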
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.12
    
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim to build faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas the hypernym relation is multi-hop, i.e., an ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
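    The acyclicity constraint mentioned above can be made concrete with a short sketch. The following Python fragment is illustrative only (the function and variable names are ours, not the authors' implementation): before a candidate parent-child link is accepted, the hierarchy is checked so that the new edge cannot close a cycle.

    def creates_cycle(parents, child, parent):
        # Walk upward from the candidate parent; if we can reach the child,
        # recording `parent` as a parent of `child` would close a cycle.
        stack, seen = [parent], set()
        while stack:
            node = stack.pop()
            if node == child:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(parents.get(node, ()))
        return False

    def add_faceted_link(parents, child, parent):
        # Accept an inferred parent-to-child link only if the structure stays acyclic.
        if creates_cycle(parents, child, parent):
            return False  # conflict: reject the candidate link
        parents.setdefault(child, set()).add(parent)
        return True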
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  2. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.10
    
    Content
    One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Extended by: Börner, Katy: Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Science / Atlases
    Science / Study and teaching / Graphic methods
    Communication in science / Data processing
    Subject
    Science / Atlases
    Science / Study and teaching / Graphic methods
    Communication in science / Data processing
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.09
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words offers only a shallow understanding of text; the word space carries a limited amount of information for document ranking. This dissertation goes beyond words and builds knowledge-based text representations, which embed external, carefully curated information from knowledge bases and provide richer, structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. We then present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations and ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
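    The bag-of-entities idea translates directly into code. A minimal sketch, assuming entity annotations are simply lists of entity IDs per document (our simplification, not the dissertation's full ranking model; the entity IDs are hypothetical):

    from collections import Counter

    def bag_of_entities(entity_annotations):
        # A document is represented by the frequencies of its linked entities,
        # mirroring bag-of-words but in the entity space.
        return Counter(entity_annotations)

    def entity_match_score(query_entities, doc_entities):
        # A simple dot product in entity space; real systems combine this
        # with word-based features and learned weights.
        return sum(freq * doc_entities.get(ent, 0)
                   for ent, freq in query_entities.items())

    # Example: a query linked to two entities, scored against one document.
    q = bag_of_entities(["Q11660", "Q2539"])             # hypothetical entity IDs
    d = bag_of_entities(["Q11660", "Q11660", "Q937857"])
    print(entity_match_score(q, d))  # 2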
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  4. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.06
    
    Abstract
    Purpose: The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations.
    Design/methodology/approach: This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing both the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews.
    Findings: The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, the approach takes more time because of interview planning and analysis.
    Practical implications: The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels.
    Originality/value: In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Footnote
    Contribution to a special issue: Showcasing Doctoral Research in Information Science.
    Source
    Aslib journal of information management. 72(2020) no.4, S.671-685
  5. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.06
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improved information retrieval during navigation. The main objective was to assess whether topic map technology can automate the compatibilization of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples; both had been duly analyzed and structured in the MHTX (Hypertextual Map) prototype database. The faceted structures of the two dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources themselves. The merge results were analyzed in the light of theories on the compatibilization of languages developed within information technology and librarianship from the 1960s on. The main goals accomplished were: (a) a detailed conceptualization of the merge process of topic maps, considering the possible compatibilization levels and the applicability of this technology to the integration of faceted structures; and (b) a detailed sequence of steps that may be used to implement topic maps based on faceted structures.
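    The merge step relies on the topic-map rule that topics denoting the same subject are unified. A minimal sketch in Python, assuming topics carry a subject identifier and a set of names (a simplification of the full Topic Maps data model; the identifiers shown are hypothetical):

    def merge_topic_maps(*maps):
        # Unify topics that share a subject identifier; names (and, in a full
        # implementation, occurrences and associations) are accumulated.
        merged = {}
        for topic_map in maps:
            for topic in topic_map:
                entry = merged.setdefault(topic["subject"], {"names": set()})
                entry["names"].update(topic["names"])
        return merged

    map_a = [{"subject": "http://example.org/facet/indexing", "names": {"Indexing"}}]
    map_b = [{"subject": "http://example.org/facet/indexing", "names": {"Indexação"}}]
    print(merge_topic_maps(map_a, map_b))  # one topic carrying both names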
    Date
    22. 2.2013 11:39:23
  6. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.06
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for an information retrieval system that caters to specific user queries. To create such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base for our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves steps such as identification of the terminology, analysis, synthesis, standardization and ordering. The vast majority of ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for developing a robust and flexible model for the domain of brain tumours. Our attempt has been to bridge library and information science with computer science, an effort that itself involved an experimental approach. We found the faceted approach to be enduring, as it supports properties like navigation, exploration and faceted browsing. The computer-based brain tumour ontology supports the work of researchers gathering information on brain tumour research and allows users across the world to access new scientific information quickly, efficiently and intelligently.
    Date
    12. 3.2016 13:21:22
  7. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.06
    
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  8. Kiren, T.: A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.05
    0.054149657 = product of:
      0.081224486 = sum of:
        0.025222747 = weight(_text_:of in 4399) [ClassicSimilarity], result of:
          0.025222747 = score(doc=4399,freq=40.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.3090647 = fieldWeight in 4399, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=4399)
        0.056001738 = sum of:
          0.027718531 = weight(_text_:science in 4399) [ClassicSimilarity], result of:
            0.027718531 = score(doc=4399,freq=6.0), product of:
              0.13747036 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.05218836 = queryNorm
              0.20163277 = fieldWeight in 4399, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.03125 = fieldNorm(doc=4399)
          0.028283209 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
            0.028283209 = score(doc=4399,freq=2.0), product of:
              0.18275474 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05218836 = queryNorm
              0.15476047 = fieldWeight in 4399, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=4399)
      0.6666667 = coord(2/3)
    
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index information in a way that makes it easier for end users to find what they want efficiently and accurately. Keyword-based indexing uses words as indexing terms and cannot capture the implicit relations among terms or the semantics of the words in a document. To eliminate this limitation, ontology-based indexing came into existence, which allows semantics-based indexing that can resolve complex and indirect user queries. Either existing ontologies or ones constructed from scratch are used for indexing today. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge, whereas use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missed concepts to a great extent, but it is difficult to manage multiple ontologies (whose developers change them over time), and ontology heterogeneity also arises because the ontologies are built by different developers. One possible solution to both managing multiple ontologies and avoiding construction from scratch is to use modular ontologies for indexing.
    Modular ontologies are built by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are dealt with during this process, so the ontologies need to be aligned beforehand. Existing approaches to ontology alignment compare all the concepts of each ontology to be aligned and are therefore not optimized in terms of time and search-space utilization. A new indexing technique based on a modular ontology is proposed, together with an efficient ontology alignment technique that solves the heterogeneity problem during modular ontology construction. The results are satisfactory: precision and recall improve by 8% and 10%, respectively, and the values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision and recall are close to 1, which shows that the results are significant. Further research could apply modular-ontology-based indexing to multimedia information retrieval and biomedical information retrieval.
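    The reported figures rest on standard definitions. A minimal Python sketch of the evaluation measures mentioned above (precision, recall and Pearson's correlation coefficient), independent of the thesis's own implementation:

    def precision_recall(retrieved, relevant):
        # Precision: fraction of retrieved items that are relevant.
        # Recall: fraction of relevant items that were retrieved.
        retrieved, relevant = set(retrieved), set(relevant)
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    def pearson_r(xs, ys):
        # Pearson's correlation coefficient between two equal-length samples.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        if sx == 0 or sy == 0:
            return 0.0  # undefined for constant series; reported as 0 here
        return cov / (sx * sy)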
    Content
    Submitted to the Faculty of the Computer Science and Engineering Department of the University of Engineering and Technology Lahore in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science (2009 - 009-PhD-CS-04). Cf.: http://prr.hec.gov.pk/jspui/bitstream/123456789/8375/1/Taybah_Kiren_Computer_Science_HSR_2017_UET_Lahore_14.12.2017.pdf.
    Date
    20. 1.2015 18:30:22
    Imprint
    Lahore : University of Engineering and Technology / Department of Computer Science and Engineering
  9. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.05
    
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
    Series
    History and philosophy of technoscience; 3
    Source
    Philosophy, computing and information science. Eds.: R. Hagengruber u. U.V. Riss
  10. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.05
    
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions therefore becomes a highly attractive text mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is: how well do existing ontologies support semantic annotation and automated key generation? With the intention of either selecting an existing ontology or developing a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries: more concepts need to be added, and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can also be used to propose new candidate concepts from the domain literature and to suggest appropriate definitions.
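    One plausible reading of the coverage check, sketched in Python (the paper's exact metric may differ): coverage is the fraction of distinct character terms used in the authoritative floras that the ontology or glossary defines.

    def term_coverage(ontology_terms, literature_terms):
        # Fraction of distinct terms in the domain literature that the
        # ontology/glossary covers, after simple case folding.
        ontology = {t.lower() for t in ontology_terms}
        used = {t.lower() for t in literature_terms}
        return len(used & ontology) / len(used) if used else 0.0

    print(term_coverage({"pubescent", "glabrous"},
                        ["glabrous", "ovate", "Glabrous"]))  # 0.5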
    Date
    1. 6.2010 9:55:22
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1144-1165
  11. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.05
    
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, owing to the lack of proper metadata annotation, which cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Series
    Lecture notes in computer science: Lecture notes in artificial intelligence ; 4604
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  12. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.05
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, not its representation), which leads to very low usefulness of the retrieval results for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is highly ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. This implies a necessity to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue for realizing much more meaningful information retrieval systems.
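    The refinement loop can be sketched in a few lines of Python. Assuming the domain ontology provides, for each concept matched in the query, a set of narrower concepts (our simplification of the Librarian Agent's far richer model; names and data are illustrative), the system can propose those as refinement candidates:

    def suggest_refinements(query_concepts, narrower):
        # Offer narrower ontology concepts as candidate query refinements,
        # letting the user disambiguate an underspecified information need.
        suggestions = set()
        for concept in query_concepts:
            suggestions.update(narrower.get(concept, ()))
        return suggestions - set(query_concepts)

    ontology = {"retrieval": {"image retrieval", "text retrieval"}}
    print(suggest_refinements({"retrieval"}, ontology))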
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  13. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.05
    
    Abstract
    Major efforts have been devoted to ontology learning, that is, semiautomatic processes for constructing domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. Identifying the terminology is crucial to building ontologies, and term extraction techniques allow the identification of domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches to create educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction; furthermore, because its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned using a textbook on object-oriented programming and then tested with two textbooks from different domains: astronomy and molecular biology.
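    The Wikipedia filter at the core of the method can be illustrated with a sketch. Assuming candidate terms have already been produced by the unsupervised extractors, Wikipedia article titles act as a domain- and language-independent reference vocabulary (the function name and frequency threshold are our illustration, not LiTeWi's actual pipeline):

    from collections import Counter

    def filter_candidates(candidate_terms, wikipedia_titles, min_freq=2):
        # Keep textbook terms that occur often enough and match a Wikipedia
        # article title, which serves as external evidence of termhood.
        counts = Counter(t.lower() for t in candidate_terms)
        titles = {t.lower() for t in wikipedia_titles}
        return {term for term, freq in counts.items()
                if freq >= min_freq and term in titles}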
    Date
    22. 1.2016 12:38:14
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.380-399
  14. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.05
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
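    The vocabulary-alignment step can be approximated with a deliberately naive Python sketch: match concepts whose preferred labels coincide after case folding. Real ontology-mapping tools, such as those used in the experiment, exploit much richer lexical and structural evidence; the data shown is hypothetical.

    def align_by_label(vocab_a, vocab_b):
        # vocab_a, vocab_b: mappings from concept ID to preferred label.
        index = {label.lower(): cid for cid, label in vocab_b.items()}
        return {cid_a: index[label.lower()]
                for cid_a, label in vocab_a.items()
                if label.lower() in index}

    print(align_by_label({"a1": "Watercolour"}, {"b7": "watercolour"}))  # {'a1': 'b7'}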
    Series
    Lecture notes in computer science; vol.4172
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  15. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.04
    
    Abstract
    Describes the types of representations used in different theories of abstraction. Shows how the type of mapping between these representations has been increasingly generalised. Discusses desirable properties preserved by such mappings and identifies how these properties are influenced by the mappings and the representations defined. Surveys progress made in understanding the complexity reduction associated with abstraction, focusing on formal models of how abstraction reduces the search space. Presents some of the systems that implement abstraction and shows how efforts in this area have focused on the mechanisation of languages for the declarative representation of abstraction.
    Date
    1.10.2018 14:13:22
  16. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.04
    
    Pages
    S.11-22
    Source
    Compatibility and integration of order systems: Research Seminar Proceedings of the TIP/ISKO Meeting, Warsaw, 13-15 September 1995
  17. Vickery, B.C.: Ontologies (1997) 0.04
    
    Abstract
    Discusses the emergence of the term 'ontology' in knowledge engineering (and now in information science), with a definition of the term as currently used. Ontology is the study of what exists and what must be assumed to exist in order to achieve a cogent description of reality. The term has seen extensive application in artificial intelligence. Describes the process of building an ontology and the uses of such tools in knowledge engineering, and concludes by comparing ontologies with similar tools used in information science.
    Source
    Journal of information science. 23(1997) no.4, S.277-286
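    [Editor's note: Vickery's distinction between Ontology as a field of study and an ontology as an engineering artifact can be made concrete with a toy example. The Python sketch below is invented for illustration; all concept and relation names are hypothetical. The artifact is just an explicit vocabulary plus typed relations, and one typical knowledge-engineering use is walking the is_a links.]

        # Hedged sketch of an ontology as an engineering artifact: a
        # vocabulary of concepts and a set of typed relations over them.
        ONTOLOGY = {
            "concepts": {"Document", "Journal", "Article", "Author"},
            "relations": [
                ("Article", "is_a", "Document"),
                ("Article", "published_in", "Journal"),
                ("Article", "written_by", "Author"),
            ],
        }

        def ancestors(concept, relations):
            """Transitive closure of is_a links, one use a knowledge
            engineer would make of such a tool."""
            out, stack = set(), [concept]
            while stack:
                c = stack.pop()
                for s, r, o in relations:
                    if s == c and r == "is_a" and o not in out:
                        out.add(o)
                        stack.append(o)
            return out

        print(ancestors("Article", ONTOLOGY["relations"]))  # {'Document'}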
  18. Fonseca, F.: ¬The double role of ontologies in information science research (2007) 0.04
    0.035727408 = product of:
      0.05359111 = sum of:
        0.02675276 = weight(_text_:of in 277) [ClassicSimilarity], result of:
          0.02675276 = score(doc=277,freq=20.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.32781258 = fieldWeight in 277, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=277)
        0.026838351 = product of:
          0.053676702 = sum of:
            0.053676702 = weight(_text_:science in 277) [ClassicSimilarity], result of:
              0.053676702 = score(doc=277,freq=10.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.39046016 = fieldWeight in 277, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=277)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In philosophy, Ontology is the basic description of things in the world. In information science, an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality. Ontologies have been proposed for validating both conceptual models and conceptual schemas; however, these roles are quite dissimilar. In this article, we show that ontologies can be better understood if we classify the different uses of the term as it appears in the literature. First, we explain Ontology (upper case O) as used in Philosophy. Then, we propose a differentiation between ontologies of information systems and ontologies for information systems. All three concepts have an important role in information science. We clarify the different meanings and uses of Ontology and ontologies through a comparison of research by Wand and Weber and by Guarino in ontology-driven information systems. The contributions of this article are twofold: (a) It provides a better understanding of what ontologies are, and (b) it explains the double role of ontologies in information science research.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.786-793
  19. OWL Web Ontology Language Test Cases (2004) 0.03
    0.033895414 = product of:
      0.05084312 = sum of:
        0.02255991 = weight(_text_:of in 4685) [ClassicSimilarity], result of:
          0.02255991 = score(doc=4685,freq=8.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.27643585 = fieldWeight in 4685, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4685)
        0.028283209 = product of:
          0.056566417 = sum of:
            0.056566417 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.056566417 = score(doc=4685,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This document contains and presents test cases for the Web Ontology Language (OWL) approved by the Web Ontology Working Group. Many of the test cases illustrate the correct usage of the Web Ontology Language (OWL), and the formal meaning of its constructs. Other test cases illustrate the resolution of issues considered by the Working Group. Conformance for OWL documents and OWL document checkers is specified.
    Date
    14. 8.2011 13:33:22
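    [Editor's note: as a rough illustration of the minimum a conformance checker for such test cases must do, here is a hedged Python sketch. The tiny OWL document is invented (the W3C test cases themselves are published in RDF/XML rather than Turtle), and the rdflib package is assumed to be installed.]

        # Hedged sketch: parse an OWL document and inspect one construct.
        from rdflib import Graph
        from rdflib.namespace import OWL

        OWL_DOC = """
        @prefix owl: <http://www.w3.org/2002/07/owl#> .
        @prefix ex:  <http://example.org/#> .
        ex:Person  a owl:Class .
        ex:Pupil   a owl:Class .
        ex:Student a owl:Class ;
            owl:equivalentClass ex:Pupil .
        """

        g = Graph()
        g.parse(data=OWL_DOC, format="turtle")  # a checker first verifies
                                                # that the document parses
        for s, o in g.subject_objects(OWL.equivalentClass):
            print(s, "owl:equivalentClass", o)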
  20. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.03
    0.033279322 = product of:
      0.049918983 = sum of:
        0.025419034 = weight(_text_:of in 3437) [ClassicSimilarity], result of:
          0.025419034 = score(doc=3437,freq=26.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.31146988 = fieldWeight in 3437, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3437)
        0.024499949 = product of:
          0.048999898 = sum of:
            0.048999898 = weight(_text_:science in 3437) [ClassicSimilarity], result of:
              0.048999898 = score(doc=3437,freq=12.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.3564397 = fieldWeight in 3437, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3437)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination, a triangulation, of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.724-738
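    [Editor's note: the triangulation step itself is easy to sketch. In the invented Python example below (not the authors' pipeline; the edge weights are made up), each of the three approaches named in the abstract is reduced to a weighted relatedness graph over the same units, and the combined map keeps, for every edge, the mean weight and the number of methods that support it.]

        # Hedged sketch of triangulating three bibliometric maps.
        from collections import Counter

        # Invented outputs of the three mapping approaches.
        jjcr = {("hydrology", "water policy"): 0.8, ("hydrology", "ecology"): 0.3}
        sak  = {("hydrology", "water policy"): 0.6, ("ecology", "water policy"): 0.4}
        twrc = {("hydrology", "water policy"): 0.7, ("hydrology", "ecology"): 0.5}

        def triangulate(*maps):
            """Mean edge weight plus a count of supporting methods."""
            weight, support = Counter(), Counter()
            for m in maps:
                for edge, w in m.items():
                    weight[edge] += w
                    support[edge] += 1
            return {e: (weight[e] / support[e], support[e]) for e in weight}

        for edge, (w, n) in sorted(triangulate(jjcr, sak, twrc).items()):
            print(edge, f"mean {w:.2f}", f"support {n}/3")

    [Edges supported by all three methods, here hydrology-water policy, are the ones a combined map would trust most; that is the intuition behind the paper's claim that triangulation is more comprehensive than any single method.]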

Years

Languages

  • e 441
  • d 20
  • pt 5
  • f 1
  • sp 1

Types

  • a 349
  • el 138
  • m 28
  • x 19
  • n 13
  • s 13
  • p 6
  • r 3
  • A 1
  • EL 1

Subjects

Classifications