Search (545 results, page 2 of 28)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.02
    0.02047082 = product of:
      0.04094164 = sum of:
        0.04094164 = sum of:
          0.007295696 = weight(_text_:a in 3261) [ClassicSimilarity], result of:
            0.007295696 = score(doc=3261,freq=8.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.15287387 = fieldWeight in 3261, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3261)
          0.033645947 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
            0.033645947 = score(doc=3261,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 3261, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3261)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies improve IR systems' retrieval and presentation of information, making the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without knowledge of the IR field can build an IR system with interactive components. The resulting system supports users not only in meeting their information needs but also in extending their state of knowledge. This way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and also the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
    Type
    a
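    The score printed after each title (0.02 here) is the Lucene relevance score, and the indented tree above is Lucene's ClassicSimilarity explanation of it: for each matching term, a query weight (idf * queryNorm) is multiplied by a field weight (sqrt(tf) * idf * fieldNorm), the per-term products are summed, and the sum is scaled by the coordination factor coord. A minimal sketch that recomputes entry 1's score from the values shown in the tree (the helper name and the rounding are ours, not part of the database output):

        import math

        QUERY_NORM = 0.041389145   # queryNorm, taken from the explain tree above
        COORD = 0.5                # coord(1/2): one of two query clauses matched

        def term_score(freq, idf, field_norm):
            query_weight = idf * QUERY_NORM                      # queryWeight = idf * queryNorm
            field_weight = math.sqrt(freq) * idf * field_norm    # fieldWeight = tf * idf * fieldNorm
            return query_weight * field_weight

        score = COORD * (
            term_score(freq=8.0, idf=1.153047, field_norm=0.046875)     # weight(_text_:a in 3261)
            + term_score(freq=2.0, idf=3.5018296, field_norm=0.046875)  # weight(_text_:22 in 3261)
        )
        print(round(score, 8))  # ~0.02047082, matching the total reported for entry 1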
  2. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.02
    0.019982103 = product of:
      0.039964207 = sum of:
        0.039964207 = sum of:
          0.006318258 = weight(_text_:a in 2024) [ClassicSimilarity], result of:
            0.006318258 = score(doc=2024,freq=6.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.13239266 = fieldWeight in 2024, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2024)
          0.033645947 = weight(_text_:22 in 2024) [ClassicSimilarity], result of:
            0.033645947 = score(doc=2024,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 2024, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2024)
      0.5 = coord(1/2)
    
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
    Type
    a
  3. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.02
    0.0195087 = product of:
      0.0390174 = sum of:
        0.0390174 = sum of:
          0.007295696 = weight(_text_:a in 2654) [ClassicSimilarity], result of:
            0.007295696 = score(doc=2654,freq=18.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.15287387 = fieldWeight in 2654, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=2654)
          0.031721704 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.031721704 = score(doc=2654,freq=4.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.21886435 = fieldWeight in 2654, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2654)
      0.5 = coord(1/2)
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standard structure and a classification scheme that is basically enumerative but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
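    As an illustration of the kind of SKOS encoding described above, a minimal rdflib sketch of a single concept follows; the namespace, notation and labels are invented placeholders rather than actual CCT data, and the real CLC/CT mapping is far richer than this.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        CCT = Namespace("http://example.org/cct/")   # placeholder namespace, not the official one
        g = Graph()
        g.bind("skos", SKOS)
        g.bind("cct", CCT)

        c = CCT["TP391"]                              # hypothetical class notation
        g.add((c, RDF.type, SKOS.Concept))
        g.add((c, SKOS.notation, Literal("TP391")))
        g.add((c, SKOS.prefLabel, Literal("信息处理", lang="zh")))
        g.add((c, SKOS.altLabel, Literal("Information processing", lang="en")))
        g.add((c, SKOS.broader, CCT["TP39"]))         # link to a (hypothetical) parent class

        print(g.serialize(format="turtle"))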
  4. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.02
    0.019499356 = product of:
      0.03899871 = sum of:
        0.03899871 = sum of:
          0.01096042 = weight(_text_:a in 1434) [ClassicSimilarity], result of:
            0.01096042 = score(doc=1434,freq=26.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.22966442 = fieldWeight in 1434, product of:
                5.0990195 = tf(freq=26.0), with freq of:
                  26.0 = termFreq=26.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
          0.028038291 = weight(_text_:22 in 1434) [ClassicSimilarity], result of:
            0.028038291 = score(doc=1434,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 1434, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization of large-scale content on the Web requires substantial amounts of semantic metadata that is expensive to generate manually. Recent developments in Web technologies have enabled any user to tag documents and other forms of content, thereby generating metadata that could help organize knowledge. However, merely adding one or more tags to a document is highly inadequate to capture the aboutness of the document and thereby to support powerful semantic functions such as automatic classification, question answering or true semantic search and retrieval. This is true even when the tags used are labels from a well-designed classification system such as a thesaurus or taxonomy. There is a strong need to develop a semantic tagging mechanism with sufficient expressive power to capture the aboutness of each part of a document or dataset or multimedia content in order to enable applications that can benefit from knowledge organization on the Web. This article proposes a highly expressive mechanism of using ontology snippets as semantic tags that map portions of a document or a part of a dataset or a segment of multimedia content to concepts and relations in an ontology of the domain(s) of interest.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  5. Definition of the CIDOC Conceptual Reference Model (2003) 0.02
    0.019402392 = product of:
      0.038804784 = sum of:
        0.038804784 = sum of:
          0.005158836 = weight(_text_:a in 1652) [ClassicSimilarity], result of:
            0.005158836 = score(doc=1652,freq=4.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.10809815 = fieldWeight in 1652, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
          0.033645947 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
            0.033645947 = score(doc=1652,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 1652, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
      0.5 = coord(1/2)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
  6. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.019402392 = product of:
      0.038804784 = sum of:
        0.038804784 = sum of:
          0.005158836 = weight(_text_:a in 4649) [ClassicSimilarity], result of:
            0.005158836 = score(doc=4649,freq=4.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.10809815 = fieldWeight in 4649, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.033645947 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.033645947 = score(doc=4649,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.5 = coord(1/2)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
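    For reference, the two Web-based measures named in the abstract, Normalized Google Distance (NGD) and pointwise mutual information (PMI), are usually computed from term hit counts roughly as sketched below; the counts here are made up, and the paper's own experimental setup may differ.

        import math

        def ngd(fx, fy, fxy, n):
            """Normalized Google Distance from single and joint hit counts (n = index size)."""
            lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
            return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

        def pmi(fx, fy, fxy, n):
            """Pointwise mutual information of two terms, from the same counts."""
            return math.log((fxy / n) / ((fx / n) * (fy / n)))

        # Made-up counts, for illustration only:
        print(ngd(10_000, 8_000, 3_000, 10**9), pmi(10_000, 8_000, 3_000, 10**9))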
  7. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.02
    0.019402392 = product of:
      0.038804784 = sum of:
        0.038804784 = sum of:
          0.005158836 = weight(_text_:a in 987) [ClassicSimilarity], result of:
            0.005158836 = score(doc=987,freq=4.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.10809815 = fieldWeight in 987, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
          0.033645947 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
            0.033645947 = score(doc=987,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 987, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
      0.5 = coord(1/2)
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improve languages as a tool for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Date
    23. 7.2017 13:49:22
  8. Almeida Campos, M.L. de; Machado Campos, M.L.; Dávila, A.M.R.; Espanha Gomes, H.; Campos, L.M.; Lira e Oliveira, L. de: Information sciences methodological aspects applied to ontology reuse tools : a study based on genomic annotations in the domain of trypanosomatides (2013) 0.02
    0.01928436 = product of:
      0.03856872 = sum of:
        0.03856872 = sum of:
          0.010530431 = weight(_text_:a in 635) [ClassicSimilarity], result of:
            0.010530431 = score(doc=635,freq=24.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.22065444 = fieldWeight in 635, product of:
                4.8989797 = tf(freq=24.0), with freq of:
                  24.0 = termFreq=24.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=635)
          0.028038291 = weight(_text_:22 in 635) [ClassicSimilarity], result of:
            0.028038291 = score(doc=635,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=635)
      0.5 = coord(1/2)
    
    Abstract
    Despite the dissemination of modeling languages and tools for the representation and construction of ontologies, their underlying methodologies can still be improved. As a consequence, ontology tools can be enhanced accordingly, in order to support users through the ontology construction process. This paper proposes suggestions for improving ontology tools based on a case study within the domain of bioinformatics, applying a reuse methodology. Quantitative and qualitative analyses were carried out on a subset of 28 terms of the Gene Ontology in a semi-automatic alignment with other biomedical ontologies. As a result, a report is presented containing suggestions for enhancing ontology reuse tools, a product derived from the difficulties that we had in reusing a set of OBO ontologies. For the reuse process, a set of steps closely related to those of Pinto and Martin's methodology was used. In each step, it was observed that the experiment would have been significantly improved if ontology manipulation tools had provided certain features. Accordingly, problematic aspects of ontology tools are presented and suggestions are made aimed at achieving better results in ontology reuse.
    Date
    22. 2.2013 12:03:53
    Type
    a
  9. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.02
    0.018825607 = product of:
      0.037651215 = sum of:
        0.037651215 = sum of:
          0.0096129235 = weight(_text_:a in 3403) [ClassicSimilarity], result of:
            0.0096129235 = score(doc=3403,freq=20.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.20142901 = fieldWeight in 3403, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3403)
          0.028038291 = weight(_text_:22 in 3403) [ClassicSimilarity], result of:
            0.028038291 = score(doc=3403,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 3403, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3403)
      0.5 = coord(1/2)
    
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
    Type
    a
  10. Hohmann, G.: Die Anwendung des CIDOC-CRM für die semantische Wissensrepräsentation in den Kulturwissenschaften (2010) 0.02
    0.018646898 = product of:
      0.037293795 = sum of:
        0.037293795 = sum of:
          0.003647848 = weight(_text_:a in 4011) [ClassicSimilarity], result of:
            0.003647848 = score(doc=4011,freq=2.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.07643694 = fieldWeight in 4011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4011)
          0.033645947 = weight(_text_:22 in 4011) [ClassicSimilarity], result of:
            0.033645947 = score(doc=4011,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 4011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4011)
      0.5 = coord(1/2)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Type
    a
  11. Semenova, E.: Ontologie als Begriffssystem : Theoretische Überlegungen und ihre praktische Umsetzung bei der Entwicklung einer Ontologie der Wissenschaftsdisziplinen (2010) 0.02
    0.018646898 = product of:
      0.037293795 = sum of:
        0.037293795 = sum of:
          0.003647848 = weight(_text_:a in 4095) [ClassicSimilarity], result of:
            0.003647848 = score(doc=4095,freq=2.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.07643694 = fieldWeight in 4095, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4095)
          0.033645947 = weight(_text_:22 in 4095) [ClassicSimilarity], result of:
            0.033645947 = score(doc=4095,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 4095, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4095)
      0.5 = coord(1/2)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Type
    a
  12. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.02
    0.018646898 = product of:
      0.037293795 = sum of:
        0.037293795 = sum of:
          0.003647848 = weight(_text_:a in 3387) [ClassicSimilarity], result of:
            0.003647848 = score(doc=3387,freq=2.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.07643694 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
          0.033645947 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
            0.033645947 = score(doc=3387,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
      0.5 = coord(1/2)
    
    Date
    1. 8.2010 12:35:22
    Type
    a
  13. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    0.018646898 = product of:
      0.037293795 = sum of:
        0.037293795 = sum of:
          0.003647848 = weight(_text_:a in 4820) [ClassicSimilarity], result of:
            0.003647848 = score(doc=4820,freq=2.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.07643694 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
          0.033645947 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
            0.033645947 = score(doc=4820,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.23214069 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
      0.5 = coord(1/2)
    
    Date
    3.12.2016 18:39:22
    Type
    a
  14. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.02
    0.018578956 = product of:
      0.03715791 = sum of:
        0.03715791 = sum of:
          0.00911962 = weight(_text_:a in 2831) [ClassicSimilarity], result of:
            0.00911962 = score(doc=2831,freq=18.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.19109234 = fieldWeight in 2831, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
          0.028038291 = weight(_text_:22 in 2831) [ClassicSimilarity], result of:
            0.028038291 = score(doc=2831,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 2831, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for building an information retrieval system that caters to users' specific queries. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility. This becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model concerning the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was discovered that a faceted approach is really enduring, as it helps in the achievement of properties like navigation, exploration and faceted browsing. Computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
    Type
    a
  15. Kiren, T.; Shoaib, M.: A novel ontology matching approach using key concepts (2016) 0.02
    0.01804052 = product of:
      0.03608104 = sum of:
        0.03608104 = sum of:
          0.008042749 = weight(_text_:a in 2589) [ClassicSimilarity], result of:
            0.008042749 = score(doc=2589,freq=14.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.1685276 = fieldWeight in 2589, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2589)
          0.028038291 = weight(_text_:22 in 2589) [ClassicSimilarity], result of:
            0.028038291 = score(doc=2589,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 2589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2589)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities, which makes the ontology matching process very complex in terms of the search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched.
    Design/methodology/approach: Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure.
    Findings: A positive correlation between the computed degree of similarity and the degree of similarity in the reference alignment, together with the computed values of precision, recall and F-measure, showed that if only the key concepts of ontologies are compared, a time- and search-space-efficient ontology matching system can be developed.
    Originality/value: On the basis of the present novel approach to ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.
    Date
    20. 1.2015 18:30:22
    Type
    a
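    The abstract names two generic ingredients, WordNet-based synonym matching of concept labels and evaluation with precision, recall and F-measure against a reference alignment. The sketch below shows plain versions of both; it is not the authors' algorithm, and the helper names are ours.

        from nltk.corpus import wordnet as wn   # requires NLTK and its WordNet corpus

        def labels_match(a, b):
            """Treat two concept labels as equivalent if identical or sharing a WordNet synset."""
            if a.lower() == b.lower():
                return True
            return bool(set(wn.synsets(a)) & set(wn.synsets(b)))

        def precision_recall_f1(found, reference):
            """Score a set of found correspondences against a reference alignment."""
            tp = len(found & reference)
            p = tp / len(found) if found else 0.0
            r = tp / len(reference) if reference else 0.0
            f1 = 2 * p * r / (p + r) if p + r else 0.0
            return p, r, f1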
  16. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.02
    0.01804052 = product of:
      0.03608104 = sum of:
        0.03608104 = sum of:
          0.008042749 = weight(_text_:a in 2645) [ClassicSimilarity], result of:
            0.008042749 = score(doc=2645,freq=14.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.1685276 = fieldWeight in 2645, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2645)
          0.028038291 = weight(_text_:22 in 2645) [ClassicSimilarity], result of:
            0.028038291 = score(doc=2645,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 2645, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2645)
      0.5 = coord(1/2)
    
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object-oriented programming and then tested with two textbooks from different domains: astronomy and molecular biology.
    Date
    22. 1.2016 12:38:14
    Type
    a
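    As a rough illustration of using Wikipedia as a generic corpus for vetting candidate terms, one of the roles the abstract assigns to it, the sketch below checks whether a term has a matching article via the public MediaWiki API; LiTeWi's actual extraction pipeline is more involved and is not reproduced here.

        import requests

        def has_wikipedia_article(term, lang="en"):
            """Return True if the language-specific Wikipedia has a page titled `term`."""
            resp = requests.get(
                f"https://{lang}.wikipedia.org/w/api.php",
                params={"action": "query", "titles": term, "format": "json"},
                timeout=10,
            )
            pages = resp.json()["query"]["pages"]
            return "-1" not in pages   # the API reports missing pages under the key "-1"

        print(has_wikipedia_article("Polymorphism (computer science)"))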
  17. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.02
    0.017742215 = product of:
      0.03548443 = sum of:
        0.03548443 = sum of:
          0.0074461387 = weight(_text_:a in 3466) [ClassicSimilarity], result of:
            0.0074461387 = score(doc=3466,freq=12.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.15602624 = fieldWeight in 3466, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3466)
          0.028038291 = weight(_text_:22 in 3466) [ClassicSimilarity], result of:
            0.028038291 = score(doc=3466,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 3466, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3466)
      0.5 = coord(1/2)
    
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions becomes a highly attractive text mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is: how well do existing ontologies support semantic annotation and automated key generation? With the intention to either select an existing ontology or develop a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries. More concepts need to be added to the ontology/glossaries and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can be used to propose new candidate concepts from the domain literature and suggest appropriate definitions.
    Date
    1. 6.2010 9:55:22
    Type
    a
  18. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.02
    0.017417828 = product of:
      0.034835655 = sum of:
        0.034835655 = sum of:
          0.0067973635 = weight(_text_:a in 4607) [ClassicSimilarity], result of:
            0.0067973635 = score(doc=4607,freq=10.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.14243183 = fieldWeight in 4607, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4607)
          0.028038291 = weight(_text_:22 in 4607) [ClassicSimilarity], result of:
            0.028038291 = score(doc=4607,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 4607, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4607)
      0.5 = coord(1/2)
    
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotations cannot be assigned fully automatically. So the paper presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007; proceedings. Eds.: U. Priss et al.
    Type
    a
  19. Marcondes, C.H.; Costa, L.C. da: A model to represent and process scientific knowledge in biomedical articles with semantic Web technologies (2016) 0.02
    0.017059019 = product of:
      0.034118038 = sum of:
        0.034118038 = sum of:
          0.006079746 = weight(_text_:a in 2829) [ClassicSimilarity], result of:
            0.006079746 = score(doc=2829,freq=8.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.12739488 = fieldWeight in 2829, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2829)
          0.028038291 = weight(_text_:22 in 2829) [ClassicSimilarity], result of:
            0.028038291 = score(doc=2829,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 2829, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2829)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization faces the challenge of managing the amount of knowledge available on the Web. Published literature in the biomedical sciences is a huge source of knowledge, which can be managed efficiently only through automatic methods. The conventional channel for reporting scientific results is Web electronic publishing. Despite its advances, scientific articles are still published in print-oriented formats such as the portable document format (PDF). Semantic Web and Linked Data technologies provide new opportunities for communicating, sharing, and integrating scientific knowledge that can overcome the limitations of the current print format. Here a semantic model of scholarly electronic articles in the biomedical sciences is proposed that can overcome the limitations of traditional flat record formats. Scientific knowledge consists of claims made throughout article texts, especially when semantic elements such as questions, hypotheses and conclusions are stated. These elements, although having different roles, express relationships between phenomena. Once such knowledge units are extracted and represented with technologies such as RDF (Resource Description Framework) and Linked Data, they may be integrated into reasoning chains. Thereby, the results of scientific research can be published and shared in structured formats, enabling crawling by software agents, semantic retrieval, knowledge reuse, validation of scientific results, and identification of traces of scientific discoveries.
    Date
    12. 3.2016 13:17:22
    Type
    a
  20. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.02
    0.017059019 = product of:
      0.034118038 = sum of:
        0.034118038 = sum of:
          0.006079746 = weight(_text_:a in 4553) [ClassicSimilarity], result of:
            0.006079746 = score(doc=4553,freq=8.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.12739488 = fieldWeight in 4553, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.028038291 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.028038291 = score(doc=4553,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.5 = coord(1/2)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
    Type
    a

Languages

  • e 438
  • d 91
  • pt 5
  • el 1
  • f 1
  • sp 1

Types

  • a 419
  • el 143
  • m 23
  • x 22
  • n 13
  • s 11
  • p 5
  • r 5
  • A 1
  • EL 1
