Search (141 results, page 1 of 8)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"a"
  • year_i:[2010 TO 2020}
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.23
    0.23403114 = product of:
      0.31204152 = sum of:
        0.075580016 = product of:
          0.22674005 = sum of:
            0.22674005 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.22674005 = score(doc=400,freq=2.0), product of:
                0.4034391 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047586527 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.22674005 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.22674005 = score(doc=400,freq=2.0), product of:
            0.4034391 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047586527 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.00972145 = weight(_text_:information in 400) [ClassicSimilarity], result of:
          0.00972145 = score(doc=400,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.116372846 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.75 = coord(3/4)
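The score breakdown above is Lucene "explain" output for ClassicSimilarity: each leaf term weight is the product of queryWeight (idf × queryNorm) and fieldWeight (tf × idf × fieldNorm), and the document score then combines the leaves with sum() and coord() factors. A minimal sketch reproducing the first leaf from the values shown:

```python
import math

def classic_leaf_weight(freq, idf, query_norm, field_norm):
    """One ClassicSimilarity term weight: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # 0.4034391 above
    field_weight = tf * idf * field_norm  # 0.56201804 above
    return query_weight * field_weight

# Values taken from the "_text_:3a in 400" leaf of the tree above:
w = classic_leaf_weight(freq=2.0, idf=8.478011,
                        query_norm=0.047586527, field_norm=0.046875)
# w ≈ 0.22674, matching the reported leaf score
```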
    
    Abstract
    In a scientific concept hierarchy, a parent concept may have several attributes, each of which takes multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships, resolving conflicts by maintaining the acyclic structure of the hierarchy.
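The conflict-resolution step the abstract mentions (keeping the hierarchy acyclic) can be sketched as follows; the names and the link-acceptance policy here are illustrative assumptions, not the authors' actual algorithm, which also weighs synonym and sibling evidence:

```python
def reachable(graph, src, dst):
    """DFS: is dst reachable from src via parent->child links?"""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

def add_link(graph, parent, child):
    """Accept a candidate parent->child link only if the hierarchy stays acyclic."""
    if parent == child or reachable(graph, child, parent):
        return False  # would close a cycle: reject the candidate link
    graph.setdefault(parent, set()).add(child)
    return True

h = {}
add_link(h, "classification", "svm")   # accepted
add_link(h, "svm", "classification")   # rejected: would create a cycle
```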
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.04
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
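The merge property described above can be illustrated with a minimal, name-based sketch. Real topic maps merge on subject identifiers and carry typed associations; the flat name-to-resources structure below is a simplifying assumption for illustration only:

```python
def merge_topic_maps(map_a, map_b):
    """Name-based merge: topics sharing a name are unified and their
    occurrences (here: sets of resource references) are combined."""
    merged = {name: set(occ) for name, occ in map_a.items()}
    for name, occ in map_b.items():
        merged.setdefault(name, set()).update(occ)
    return merged

# Hypothetical fragments of the two faceted dissertation structures:
tm1 = {"Faceted classification": {"diss1#chap3"}}
tm2 = {"Faceted classification": {"diss2#chap1"}, "Hypertext": {"diss2#chap2"}}
merged = merge_topic_maps(tm1, tm2)
# "Faceted classification" now interrelates resources from both maps
```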
    Date
    22. 2.2013 11:39:23
  3. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.04
    
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. Identifying the terminology is crucial for building ontologies. Term extraction techniques allow the identification of domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned using a textbook on object-oriented programming and then tested with two textbooks from different domains: astronomy and molecular biology.
    Date
    22. 1.2016 12:38:14
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.380-399
  4. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.04
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  5. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.03
    
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions becomes a highly attractive text mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is: How well do existing ontologies support semantic annotation and automated key generation? With the intention to either select an existing ontology or develop a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries. More concepts need to be added to the ontology/glossaries, and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can be used to propose new candidate concepts from the domain literature and suggest appropriate definitions.
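The coverage check described in this abstract can be sketched as simple set arithmetic over term inventories. The function and term lists below are illustrative assumptions; the paper's actual evaluation also checks semantic consistency of definitions and inter-ontology agreement:

```python
def coverage(ontology_terms, literature_terms):
    """Fraction of terms used in the domain literature that the
    ontology/glossary covers (case-insensitive), plus the gaps."""
    onto = {t.lower() for t in ontology_terms}
    lit = {t.lower() for t in literature_terms}
    ratio = len(lit & onto) / len(lit)
    missing = sorted(lit - onto)  # candidate concepts to add
    return ratio, missing

ratio, missing = coverage(
    ["leaf", "petiole", "glabrous"],
    ["Leaf", "petiole", "pubescent", "glabrous"])
# ratio = 0.75; "pubescent" is a candidate concept to propose
```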
    Date
    1. 6.2010 9:55:22
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1144-1165
  6. Padmavathi, T.; Krishnamurthy, M.: Ontological representation of knowledge for developing information services in food science and technology (2012) 0.02
    
    Abstract
    Knowledge explosion in various fields during recent years has resulted in the creation of vast amounts of on-line scientific literature. Food Science & Technology (FST) is also an important subject domain where rapid developments are taking place due to diverse research and development activities. As a result, information storage and retrieval has become very complex, and current information retrieval systems (IRs) are being challenged in terms of both adequate precision and response time. To overcome these limitations, as well as to provide effective natural-language-based retrieval, a suitable knowledge engineering framework needs to be applied to represent, share and discover information. Semantic web technologies provide mechanisms for creating knowledge bases, ontologies and rules for handling data that promise to improve the quality of information retrieval. Ontologies are the backbone of such knowledge systems. This paper presents a framework for semantic representation of a large repository of content in the domain of FST.
  7. Zhang, L.: Linking information through function (2014) 0.02
    
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.11, S.2293-2305
  8. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.02
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for developing an information retrieval system to cater to specific queries of users. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility. This becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model concerning the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was discovered that a faceted approach is really enduring, as it helps in the achievement of properties like navigation, exploration and faceted browsing. Computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
  9. Almeida, M.B.; Farinelli, F.: Ontologies for the representation of electronic medical records : the obstetric and neonatal ontology (2017) 0.02
    
    Abstract
    Ontology is an interdisciplinary field that involves both the use of philosophical principles and the development of computational artifacts. As artifacts, ontologies can have diverse applications in knowledge management, information retrieval, and information systems, to mention a few. They have been largely applied to organize information in complex fields like Biomedicine. In this article, we present the OntoNeo Ontology, an initiative to build a formal ontology in the obstetrics and neonatal domain. OntoNeo is a resource that has been designed to serve as a comprehensive infrastructure providing scientific research and healthcare professionals with access to relevant information. The goal of OntoNeo is twofold: (a) to organize specialized medical knowledge, and (b) to provide a potential consensual representation of the medical information found in electronic health records and medical information systems. To describe our initiative, we first provide background information about distinct theories underlying ontology, top-level computational ontologies and their applications in Biomedicine. Then, we present the methodology employed in the development of OntoNeo and the results obtained to date. Finally, we discuss the applicability of OntoNeo by presenting a proof of concept that illustrates its potential usefulness in the realm of healthcare information systems.
    Footnote
    Beitrag in einem Special issue on biomedical information retrieval.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.11, S.2529-2542
  10. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.02
    
    Abstract
    The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of the projects. In the case of European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items and artifacts that need to be generated. The specifications of the characteristics of these information items are usually incorporated as annexes to the different ECSS standards, and they provide the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should not be considered as independent items, but as the results of packaging different information artifacts for their delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types. They also require the definition of methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and defining such data schemas would improve collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for software development. The ECSS set of standards is the main reference in aerospace projects in Europe, and in addition to engineering and managerial requirements it provides a set of DRDs (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables.
    Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. throughout the life cycle of the products. The proposed ontology provides the basis for building advanced information systems where the information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework to enable the development of interfaces and gateways between the different tools and information systems used by the different players in aerospace projects.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  11. Rosemblat, G.; Resnick, M.P.; Auston, I.; Shin, D.; Sneiderman, C.; Fiszman, M.; Rindflesch, T.C.: Extending SemRep to the public health domain (2013) 0.02
    
    Abstract
    We describe the use of a domain-independent method to extend a natural language processing (NLP) application, SemRep (Rindflesch, Fiszman, & Libbus, 2005), based on the knowledge sources afforded by the Unified Medical Language System (UMLS®; Humphreys, Lindberg, Schoolman, & Barnett, 1998) to support the area of health promotion within the public health domain. Public health professionals require good information about successful health promotion policies and programs that might be considered for application within their own communities. Our effort seeks to improve access to relevant information for the public health profession, to help those in the field remain an information-savvy workforce. Natural language processing and semantic techniques hold promise to help public health professionals navigate the growing ocean of information by organizing and structuring this knowledge into a focused public health framework paired with a user-friendly visualization application as a way to summarize results of PubMed® searches in this field of knowledge.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.1963-1974
  12. Fischer, W.; Bauer, B.: Combining ontologies and natural language (2010) 0.02
    Abstract
    Ontologies are a popular means of capturing semantic knowledge about the world in a computer-understandable way. Today's ontological standards, however, were designed primarily with logical formalisms in mind, leaving linguistic information aside. Yet knowledge is rarely just about the semantic information itself: in order to create and modify existing ontologies, users must be able to understand the information they represent. Other problem domains (e.g., Natural Language Processing, NLP) can build on ontological information, but a bridge to syntactic information is missing. In this paper we therefore argue that the possibilities of today's standards such as OWL and RDF are not sufficient to provide a sound combination of syntax and semantics, and we present an approach for the linguistic enrichment of ontologies inspired by cognitive linguistics. The goal is a generic, language-independent approach to modelling semantics that can be annotated with arbitrary linguistic information. This knowledge can then be used for better documentation of ontologies as well as for NLP and other Information Extraction (IE) tasks.
    Footnote
    Preprint. To be published as Vol 122 in the Conferences in Research and Practice in Information Technology Series by the Australian Computer Society Inc. http://crpit.com/.
  13. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.02
    Abstract
    Modern information retrieval methods demand expressive documentation languages with detailed relational structures. The selective transfer of individual modelling strategies from the field of semantic technologies to the design and relational structuring of existing documentation languages is discussed. A hierarchically structured inventory of relations is defined in the form of a taxonomy; it contains both sufficiently general and numerous specific relation types, which enable a detailed and therefore expressive relational structuring of the vocabulary. This brings a gain in clarity and functionality. In contrast to other approaches and proposals for creating relation inventories, the present proposal develops the inventory of relations out of the set of concepts of an existing subject domain.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
  14. Almeida, M.B.: Revisiting ontologies : a necessary clarification (2013) 0.02
    Abstract
    Looking for ontology in a search engine, one can find so many different approaches that it can be difficult to understand which field of research the subject belongs to and how it can be useful. The term ontology is employed within philosophy, computer science, and information science with different meanings. To take advantage of what ontology theories have to offer, one should understand what they address and where they come from. In information science, except for a few papers, there is no initiative toward clarifying what ontology really is and the connections that it fosters among different research fields. This article provides such a clarification. We begin by revisiting the meaning of the term in its original field, philosophy, to reach its current use in other research fields. We advocate that ontology is a genuine and relevant subject of research in information science. Finally, we conclude by offering our view of the opportunities for interdisciplinary research.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.8, S.1682-1693
  15. Aker, A.; Plaza, L.; Lloret, E.; Gaizauskas, R.: Do humans have conceptual models about geographic objects? : a user study (2013) 0.02
    Abstract
    In this article, we investigate what sorts of information humans request about geographical objects of the same type. For example, Edinburgh Castle and Bodiam Castle are two objects of the same type: "castle." The question is whether specific information is requested for the object type "castle" and how this information differs for objects of other types (e.g., church, museum, or lake). We aim to answer this question using an online survey. In the survey, we showed 184 participants 200 images pertaining to urban and rural objects and asked them to write questions for which they would like to know the answers when seeing those objects. Our analysis of the 6,169 questions collected in the survey shows that humans have shared ideas of what to ask about geographical objects. When the object types resemble each other (e.g., church and temple), the requested information is similar for the objects of these types. Otherwise, the information is specific to an object type. Our results may be very useful in guiding Natural Language Processing tasks involving automatic generation of templates for image descriptions and their assessment, as well as image indexing and organization.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.689-700
  16. Maheswari, J.U.; Karpagam, G.R.: ¬A conceptual framework for ontology based information retrieval (2010) 0.02
    Abstract
    Improving information retrieval by employing ontologies to overcome the limitations of syntactic search has been one of its inspirations since its emergence. This paper proposes a conceptual framework for ontology-based information retrieval consisting of five phases: query parsing, word stemming, ontology matching, weight assignment, and ranking and retrieval. In the first phase, the user query is parsed into a sequence of words. In the stemming phase, the parsed content is reduced to the significant words by ignoring superfluous terms such as "to", "is", "ed", "about", and the like. The objective of the stemming phase is to reduce feature descriptors to root words, which in turn increases efficiency by avoiding the time spent searching superfluous terms that do not significantly influence the effectiveness of the retrieval process. In the third phase, ontology matching is carried out by matching the parsed words against the relevant terms in an existing ontology; if no ontology exists, generating the required ontology is recommended. In the fourth phase, weights are assigned based on the distance between the stemmed words and the terms in the ontology, using an improved matchmaking algorithm; the weights range from 0 to 1 according to the level of distance in the ontology (superclass-subclass). Aggregate weights are calculated for all combinations of stemmed words, the combination with the highest score is ranked best, and the corresponding information is retrieved. The conceptual workflow is illustrated with an e-governance case study, an Academic Information System.
    Source
    International Journal of Engineering Science and Technology. 2(2010), no.10, S.5679-5688
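    The five-phase workflow described in this abstract can be sketched in a few lines of Python. This is a minimal illustration only: the stop-word list, the toy ontology, and the exact distance-based weighting formula are assumptions for demonstration, not the authors' actual matchmaking algorithm.

    ```python
    # Minimal sketch of the five-phase ontology-based retrieval pipeline.
    # STOPWORDS, ONTOLOGY_DISTANCE, and the 1/(1+d) weighting are
    # illustrative assumptions, not the authors' implementation.

    STOPWORDS = {"to", "is", "about", "the", "a", "an", "of", "for"}

    # Toy ontology fragment: term -> superclass-subclass distance from the
    # matched concept (0 = exact match, larger = farther in the hierarchy).
    ONTOLOGY_DISTANCE = {
        "student": 0,
        "course": 1,
        "grade": 2,
    }

    def parse(query):
        """Phase 1: parse the user query into a sequence of words."""
        return query.lower().split()

    def stem(words):
        """Phase 2: drop superfluous terms and crudely strip an '-ed' suffix."""
        kept = [w for w in words if w not in STOPWORDS]
        return [w[:-2] if w.endswith("ed") and len(w) > 4 else w for w in kept]

    def assign_weights(words):
        """Phases 3-4: match words against the ontology and assign weights
        in [0, 1] that decrease with distance in the hierarchy."""
        return {w: 1.0 / (1 + ONTOLOGY_DISTANCE[w])
                for w in words if w in ONTOLOGY_DISTANCE}

    def score(query):
        """Phase 5: aggregate the weights; the highest-scoring combination
        determines which information is retrieved."""
        return sum(assign_weights(stem(parse(query))).values())
    ```

    For example, `score("information about the student grade")` matches "student" (weight 1.0, exact) and "grade" (weight 1/3, two levels away) under this toy ontology; words absent from the ontology contribute nothing.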
  17. Mengle, S.S.R.; Goharian, N.: Detecting relationships among categories using text classification (2010) 0.01
    Abstract
    Discovering relationships among concepts and categories is crucial in various information systems. The authors' objective was to discover such relationships among document categories. Traditionally, such relationships are represented in the form of a concept hierarchy, grouping some categories under the same parent category. Although the hierarchy supports the identification of categories that share the same parent, these sibling categories need not be related to each other beyond sharing that parent; conversely, some non-sibling categories that are related to each other are not identified as such. The authors identify and build a relationship network (relationship-net) with categories as the vertices and relationships as the edges, and demonstrate that using a relationship-net, some non-obvious category relationships are detected. Their approach capitalizes on the misclassification information generated during text classification to identify potential relationships among categories and automatically generate relationship-nets. Their results demonstrate a statistically significant improvement over the current approach of up to 73% on 20 Newsgroups (20NG), up to 68% on 17 categories from the Open Directory Project (ODP17), and more than twofold on the ODP46 and Special Interest Group on Information Retrieval (SIGIR) data sets. The results also indicate that using misclassification information stemming from passage classification, as opposed to document classification, yields statistically significant improvements in F1 on 20NG (8%), ODP17 (5%), ODP46 (73%), and SIGIR (117%). By assigning weights to relationships and performing feature selection, results are further optimized.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.5, S.1046-1061
  18. Giri, K.; Gokhale, P.: Developing a banking service ontology using Protégé, an open source software (2015) 0.01
    Abstract
    Computers have been transformed from single, isolated devices into entry points to a worldwide network of information exchange. Consequently, support for the exchange of data, information, and knowledge is becoming a key issue in computer technology today. The increasing volume of data available on the Web makes information retrieval a tedious and difficult task. Researchers are therefore exploring the possibility of creating a semantic web, in which meaning is made explicit, allowing machines to process and integrate web resources intelligently. The vision of the semantic web introduces the next generation of the Web by establishing a layer of machine-understandable data. Its success depends on the easy creation, integration, and use of semantic data, which in turn depends on web ontologies. The faceted approach to analyzing and representing knowledge given by S. R. Ranganathan would be useful in this regard, and ontology development in different fields is one area where it could be applied. This paper presents a case study of developing an ontology for the field of banking.
    Source
    Annals of library and information studies. 62(2015) no.4, S.281-285
  19. Vlachidis, A.; Tudhope, D.: ¬A knowledge-based approach to information extraction for semantic interoperability in the archaeology domain (2016) 0.01
    Abstract
    The article presents a method for automatic semantic indexing of archaeological grey-literature reports using empirical (rule-based) Information Extraction techniques in combination with domain-specific knowledge organization systems. The semantic annotation system (OPTIMA) performs the tasks of Named Entity Recognition, Relation Extraction, Negation Detection, and Word-Sense Disambiguation using hand-crafted rules and terminological resources for associating contextual abstractions with classes of the standard ontology CIDOC Conceptual Reference Model (CRM) for cultural heritage and its archaeological extension, CRM-EH. Relation Extraction (RE) performance benefits from a syntax-based definition of RE patterns derived from domain-oriented corpus analysis. The evaluation also shows clear benefit in the use of assistive natural language processing (NLP) modules relating to Word-Sense Disambiguation, Negation Detection, and Noun Phrase Validation, together with controlled thesaurus expansion. The semantic indexing results demonstrate the capacity of rule-based Information Extraction techniques to deliver interoperable semantic abstractions (semantic annotations) with respect to the CIDOC CRM and archaeological thesauri. Major contributions include recognition of relevant entities using shallow parsing NLP techniques driven by a complementary use of ontological and terminological domain resources and empirical derivation of context-driven RE rules for the recognition of semantic relationships from phrases of unstructured text.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.5, S.1138-1152
  20. Branch, F.; Arias, T.; Kennah, J.; Phillips, R.; Windleharth, T.; Lee, J.H.: Representing transmedia fictional worlds through ontology (2017) 0.01
    Abstract
    Currently, there is no structured data standard for representing elements commonly found in transmedia fictional worlds. Although there are websites dedicated to individual universes, the information found on these sites separates out the various formats, concentrates only on the bibliographic aspects of the material, and is searchable only via full text. We have created an ontological model that allows various user groups interested in transmedia to search for and retrieve the information contained in these worlds based upon their structure. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial-intelligence reasoning engine. This model can infer connections between transmedia properties such as characters, elements of power, items, places, events, and so on, and will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology that reflects real user needs based upon user research, is intuitive for users, and can be used by artificial intelligence systems.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.12, S.2771-2782

Authors

Languages

  • e 125
  • d 14

Types

  • el 21
  • x 1