Search (62 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation"
  1. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.06
    0.059919223 = product of:
      0.11983845 = sum of:
        0.11983845 = sum of:
          0.08545842 = weight(_text_:assessment in 4607) [ClassicSimilarity], result of:
            0.08545842 = score(doc=4607,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.30499613 = fieldWeight in 4607, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4607)
          0.03438003 = weight(_text_:22 in 4607) [ClassicSimilarity], result of:
            0.03438003 = score(doc=4607,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.19345059 = fieldWeight in 4607, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4607)
      0.5 = coord(1/2)
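The indented breakdown above is Lucene's "explain" output for its classic TF-IDF similarity: tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1, fieldWeight = tf · idf · fieldNorm, queryWeight = idf · queryNorm, and a coord() factor for partially matched queries. As a minimal sketch (constants copied straight from the tree above), the hit's score can be recomputed:

```java
// Minimal sketch: recomputing the ClassicSimilarity explain tree for hit 1
// ("assessment" and "22" in doc 4607). All constants are taken from the
// explain output above.
public class ClassicSimilarityCheck {

    // idf as used by ClassicSimilarity: ln(maxDocs / (docFreq + 1)) + 1
    static double idf(long docFreq, long maxDocs) {
        return Math.log((double) maxDocs / (docFreq + 1)) + 1.0;
    }

    // term score = queryWeight * fieldWeight, where
    //   queryWeight = idf * queryNorm
    //   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    static double termScore(double freq, long docFreq, long maxDocs,
                            double queryNorm, double fieldNorm) {
        double idf = idf(docFreq, maxDocs);
        double queryWeight = idf * queryNorm;
        double fieldWeight = Math.sqrt(freq) * idf * fieldNorm;
        return queryWeight * fieldWeight;
    }

    public static void main(String[] args) {
        double queryNorm = 0.050750602;
        double fieldNorm = 0.0390625;   // fieldNorm(doc=4607)
        long maxDocs = 44218;

        double assessment = termScore(2.0, 480, maxDocs, queryNorm, fieldNorm);
        double t22        = termScore(2.0, 3622, maxDocs, queryNorm, fieldNorm);

        // coord(1/2): only one of two top-level query clauses matched
        double score = (assessment + t22) * 0.5;
        System.out.printf("%.9f%n", score);  // ~0.059919223, as reported above
    }
}
```

The remaining hits on this page follow the same pattern, differing only in term statistics, field norms, and coord factors.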
    
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotations cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
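The mediator component lends itself to a compact illustration. Below is a minimal, hypothetical sketch (class and field names are mine, not the paper's): imported items receive placeholder annotations so the host environment can process them, which mirrors the paper's observation that functionality depending on richer metadata is lost.

```java
import java.util.Map;

// Hypothetical sketch of the mediator described above: imported content
// receives dummy metadata so the host environment can process it without
// a real domain-model update. All names are illustrative, not the paper's.
record ContentItem(String id, String payload, Map<String, String> metadata) {}

class Mediator {
    ContentItem importItem(String id, String payload) {
        // Placeholder annotations: enough for the system to handle the item,
        // but functionality that depends on richer metadata is lost.
        return new ContentItem(id, payload, Map.of(
            "concept", "UNMAPPED",
            "origin", "external-import"));
    }

    public static void main(String[] args) {
        ContentItem item = new Mediator().importItem("doc-1", "new eLearning unit");
        System.out.println(item); // metadata marks it as only partially usable
    }
}
```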
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22-27, 2007; proceedings. Eds.: U. Priss et al.
  2. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.05
    0.04793538 = product of:
      0.09587076 = sum of:
        0.09587076 = sum of:
          0.068366736 = weight(_text_:assessment in 1634) [ClassicSimilarity], result of:
            0.068366736 = score(doc=1634,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.2439969 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
          0.027504025 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.027504025 = score(doc=1634,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed over the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Matching individual entities alone, however, cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). Yet semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology has yet to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
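As a rough illustration of the unification idea, here is a hypothetical sketch in which a toy entailment table stands in for the paper's unification rules; everything else (names, structure) is illustrative only. Ontology B's triples are merged into ontology A when the entailment test judges their relations equivalent for the same concept pair.

```java
import java.util.*;

// Hypothetical sketch of relation-based unification. The entailment table
// is a stand-in for the paper's rules based on lexical/textual entailment.
record Triple(String subject, String relation, String object) {}

class RelationUnifier {
    // Stand-in for lexical/textual entailment between relation labels.
    static boolean relationsMatch(String r1, String r2) {
        Map<String, String> entails = Map.of("causes", "leads to");
        return r1.equals(r2) || r2.equals(entails.get(r1)) || r1.equals(entails.get(r2));
    }

    // Extend ontology A with triples from B whose relation matches a
    // relation already asserted in A between the same concept pair.
    static Set<Triple> unify(Set<Triple> a, Set<Triple> b) {
        Set<Triple> unified = new HashSet<>(a);
        for (Triple tb : b)
            for (Triple ta : a)
                if (ta.subject().equals(tb.subject())
                        && ta.object().equals(tb.object())
                        && relationsMatch(ta.relation(), tb.relation()))
                    unified.add(tb); // matching pair of relations found
        return unified;
    }

    public static void main(String[] args) {
        Set<Triple> a = Set.of(new Triple("sugar", "causes", "obesity"));
        Set<Triple> b = Set.of(new Triple("sugar", "leads to", "obesity"));
        System.out.println(unify(a, b)); // both triples, unified via the rule
    }
}
```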
    Date
    20. 1.2015 18:30:22
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    0.040302705 = product of:
      0.08060541 = sum of:
        0.08060541 = product of:
          0.24181622 = sum of:
            0.24181622 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.24181622 = score(doc=400,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.02686847 = product of:
      0.05373694 = sum of:
        0.05373694 = product of:
          0.16121082 = sum of:
            0.16121082 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.16121082 = score(doc=701,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    0.02686847 = product of:
      0.05373694 = sum of:
        0.05373694 = product of:
          0.16121082 = sum of:
            0.16121082 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.16121082 = score(doc=5820,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  6. Wright, L.W.; Nardini, H.K.G.; Aronson, A.R.; Rindflesch, T.C.: Hierarchical concept indexing of full-text documents in the Unified Medical Language System Information Sources Map (1999) 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 2111) [ClassicSimilarity], result of:
              0.1025501 = score(doc=2111,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 2111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Full-text documents are a vital and rapidly growing part of online biomedical information. A single large document can contain as much information as a small database, but normally lacks the tight structure and consistent indexing of a database. Retrieval systems will often miss highly relevant parts of a document if the document as a whole appears irrelevant. Access to full-text information is further complicated by the need to search separately many disparate information resources. This research explores how these problems can be addressed by the combined use of two techniques: (1) natural language processing for automatic concept-based indexing of full text, and (2) methods for exploiting the structure and hierarchy of full-text documents. We describe methods for applying these techniques to a large collection of full-text documents drawn from the Health Services/Technology Assessment Text (HSTAT) database at the NLM and examine how this hierarchical concept indexing can assist both document- and source-level retrieval in the context of NLM's Information Source Map project.
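A hypothetical sketch of the second technique, assuming concepts have already been extracted per section: indexing them under their position in the document tree lets retrieval return a relevant chapter or section instead of discarding the whole document. All names are illustrative.

```java
import java.util.*;

// Hypothetical sketch of hierarchical concept indexing: each section of a
// full-text document is indexed under its path in the document tree, so
// retrieval can target document parts. Concept extraction is stubbed out.
class HierarchicalConceptIndex {
    // path like "doc42/ch3/sec2" -> extracted concept identifiers
    private final Map<String, Set<String>> index = new HashMap<>();

    void addSection(String path, Set<String> concepts) {
        index.put(path, concepts);
    }

    // Return the sections whose extracted concepts contain the query concept.
    List<String> findSections(String concept) {
        return index.entrySet().stream()
            .filter(e -> e.getValue().contains(concept))
            .map(Map.Entry::getKey)
            .sorted()
            .toList();
    }

    public static void main(String[] args) {
        HierarchicalConceptIndex idx = new HierarchicalConceptIndex();
        idx.addSection("doc42/ch3/sec2", Set.of("C0011849")); // UMLS-style id (placeholder)
        System.out.println(idx.findSections("C0011849"));     // [doc42/ch3/sec2]
    }
}
```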
  7. Sugimoto, C.R.; Weingart, S.: ¬The kaleidoscope of disciplinarity (2015) 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 2141) [ClassicSimilarity], result of:
              0.1025501 = score(doc=2141,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 2141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to identify criteria for and definitions of disciplinarity, and how they differ between different types of literature. Design/methodology/approach - This synthesis is achieved through a purposive review of three types of literature: explicit conceptualizations of disciplinarity; narrative histories of disciplines; and operationalizations of disciplinarity. Findings - Each angle of discussing disciplinarity presents distinct criteria. However, there are a few common axes upon which conceptualizations, disciplinary narratives, and measurements revolve: communication, social features, topical coherence, and institutions. Originality/value - There is considerable ambiguity in the concept of a discipline. This is of particular concern in a heightened assessment culture, where decisions about funding and resource allocation are often discipline-dependent (or focussed exclusively on interdisciplinary endeavors). This work explores the varied nature of disciplinarity and, through synthesis of the literature, presents a framework of criteria that can be used to guide science policy makers, scientometricians, administrators, and others interested in defining, constructing, and evaluating disciplines.
  8. Green, R.; Panzer, M.: Relations in the notational hierarchy of the Dewey Decimal Classification (2011) 0.02
    0.021364605 = product of:
      0.04272921 = sum of:
        0.04272921 = product of:
          0.08545842 = sum of:
            0.08545842 = weight(_text_:assessment in 4823) [ClassicSimilarity], result of:
              0.08545842 = score(doc=4823,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.30499613 = fieldWeight in 4823, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4823)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As part of a larger assessment of relationships in the Dewey Decimal Classification (DDC) system, this study investigates the semantic nature of relationships in the DDC notational hierarchy. The semantic relationship between each of a set of randomly selected classes and its parent class in the notational hierarchy is examined against a set of relationship types (specialization, class-instance, several flavours of whole-part). The analysis addresses the prevalence of specific relationship types, their lexical expression, difficulties encountered in assigning relationship types, compatibility of relationships found in the DDC with those found in other knowledge organization systems (KOS), and compatibility of relationships found in the DDC with those in a shared formalism like the Web Ontology Language (OWL). Since notational hierarchy is an organizational mechanism shared across most classification schemes and is often considered to provide an easy solution for ontological transformation of a classification system, the findings of the study are likely to generalize across classification schemes with respect to difficulties that might be encountered in such a transformation process.
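A small, hypothetical sketch of the typing exercise: each parent-child link gets one of the study's relationship types, which can then be mapped onto a shared formalism. The property choices below are illustrative, not the study's findings.

```java
import java.util.Map;

// Hypothetical sketch: typing DDC parent-child links and mapping each type
// to a candidate construct in a shared formalism. The property choices are
// illustrative placeholders, not conclusions of the study.
enum HierarchyRelation { SPECIALIZATION, CLASS_INSTANCE, WHOLE_PART }

class DdcRelationMapper {
    static final Map<HierarchyRelation, String> TO_SHARED_FORMALISM = Map.of(
        HierarchyRelation.SPECIALIZATION, "rdfs:subClassOf",
        HierarchyRelation.CLASS_INSTANCE, "rdf:type",
        HierarchyRelation.WHOLE_PART, "dcterms:isPartOf");

    static String map(HierarchyRelation r) {
        return TO_SHARED_FORMALISM.get(r);
    }

    public static void main(String[] args) {
        System.out.println(map(HierarchyRelation.WHOLE_PART)); // dcterms:isPartOf
    }
}
```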
  9. Aker, A.; Plaza, L.; Lloret, E.; Gaizauskas, R.: Do humans have conceptual models about geographic objects? : a user study (2013) 0.02
    0.021364605 = product of:
      0.04272921 = sum of:
        0.04272921 = product of:
          0.08545842 = sum of:
            0.08545842 = weight(_text_:assessment in 680) [ClassicSimilarity], result of:
              0.08545842 = score(doc=680,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.30499613 = fieldWeight in 680, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=680)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article, we investigate what sorts of information humans request about geographical objects of the same type. For example, Edinburgh Castle and Bodiam Castle are two objects of the same type: "castle." The question is whether specific information is requested for the object type "castle" and how this information differs for objects of other types (e.g., church, museum, or lake). We aim to answer this question using an online survey. In the survey, we showed 184 participants 200 images pertaining to urban and rural objects and asked them to write questions for which they would like to know the answers when seeing those objects. Our analysis of the 6,169 questions collected in the survey shows that humans have shared ideas of what to ask about geographical objects. When the object types resemble each other (e.g., church and temple), the requested information is similar for the objects of these types. Otherwise, the information is specific to an object type. Our results may be very useful in guiding Natural Language Processing tasks involving automatic generation of templates for image descriptions and their assessment, as well as image indexing and organization.
  10. Zhitomirsky-Geffet, M.; Erez, E.S.; Bar-Ilan, J.: Toward multiviewpoint ontology construction by collaboration of non-experts and crowdsourcing : the case of the effect of diet on health (2017) 0.02
    0.021364605 = product of:
      0.04272921 = sum of:
        0.04272921 = product of:
          0.08545842 = sum of:
            0.08545842 = weight(_text_:assessment in 3439) [ClassicSimilarity], result of:
              0.08545842 = score(doc=3439,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.30499613 = fieldWeight in 3439, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3439)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Domain experts are skilled in building a narrow ontology that reflects their subfield of expertise based on their work experience and personal beliefs. We call this type of ontology a single-viewpoint ontology. There can be a variety of such single-viewpoint ontologies that represent a wide spectrum of subfields and expert opinions on the domain. However, to form a complete formal vocabulary for the domain, they need to be linked and unified into a multiviewpoint model in which the subjective viewpoint statements are marked and distinguished from the objectively true statements. In this study, we propose and implement a two-phase methodology for multiviewpoint ontology construction by nonexpert users. The proposed methodology was implemented for the domain of the effect of diet on health. A large-scale crowdsourcing experiment was conducted with about 750 ontological statements to determine whether each of these statements is objectively true, a viewpoint, or erroneous. Typically, in crowdsourcing experiments the workers are asked for their personal opinions on the given subject. In our case, however, their ability to objectively assess others' opinions was examined as well. Our results show substantially higher classification accuracy for the objective assessment approach compared to the results based on personal opinions.
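One step of such an experiment is easy to sketch: aggregating the workers' judgments on a statement into one of the study's three labels. Majority voting below is my assumption; the paper does not prescribe an aggregation rule.

```java
import java.util.*;

// Hypothetical sketch: majority-vote aggregation of crowd judgments on one
// ontological statement into the three labels used in the study.
enum Label { OBJECTIVELY_TRUE, VIEWPOINT, ERRONEOUS }

class CrowdAggregator {
    static Label aggregate(List<Label> judgments) {
        EnumMap<Label, Integer> counts = new EnumMap<>(Label.class);
        for (Label l : judgments) counts.merge(l, 1, Integer::sum);
        return counts.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElse(Label.ERRONEOUS); // no judgments: treat as unusable
    }

    public static void main(String[] args) {
        System.out.println(aggregate(List.of(
            Label.VIEWPOINT, Label.VIEWPOINT, Label.OBJECTIVELY_TRUE))); // VIEWPOINT
    }
}
```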
  11. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    0.017190015 = product of:
      0.03438003 = sum of:
        0.03438003 = product of:
          0.06876006 = sum of:
            0.06876006 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.06876006 = score(doc=6089,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.11-22
  12. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    0.017190015 = product of:
      0.03438003 = sum of:
        0.03438003 = product of:
          0.06876006 = sum of:
            0.06876006 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.06876006 = score(doc=5576,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13.12.2017 14:17:22
  13. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    0.017190015 = product of:
      0.03438003 = sum of:
        0.03438003 = product of:
          0.06876006 = sum of:
            0.06876006 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.06876006 = score(doc=539,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 13:22:07
  14. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.02
    0.017190015 = product of:
      0.03438003 = sum of:
        0.03438003 = product of:
          0.06876006 = sum of:
            0.06876006 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.06876006 = score(doc=3406,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 5.2010 16:22:35
  15. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    0.017190015 = product of:
      0.03438003 = sum of:
        0.03438003 = product of:
          0.06876006 = sum of:
            0.06876006 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.06876006 = score(doc=4523,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  16. Koopman, B.; Zuccon, G.; Bruza, P.; Sitbon, L.; Lawley, M.: Information retrieval as semantic inference : a graph inference model applied to medical search (2016) 0.02
    0.017091684 = product of:
      0.034183368 = sum of:
        0.034183368 = product of:
          0.068366736 = sum of:
            0.068366736 = weight(_text_:assessment in 3260) [ClassicSimilarity], result of:
              0.068366736 = score(doc=3260,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.2439969 = fieldWeight in 3260, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3260)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a Graph Inference retrieval model that integrates structured knowledge resources, statistical information retrieval methods and inference in a unified framework. Key components of the model are a graph-based representation of the corpus and retrieval driven by an inference mechanism achieved as a traversal over the graph. The model is proposed to tackle the semantic gap problem - the mismatch between the raw data and the way a human being interprets it. We break down the semantic gap problem into five core issues, each requiring a specific type of inference in order to be overcome. Our model and evaluation are applied to the medical domain because search within this domain is particularly challenging and, as we show, often requires inference. In addition, this domain features both structured knowledge resources and unstructured text. Our evaluation shows that inference can be effective, retrieving many new relevant documents that are not retrieved by state-of-the-art information retrieval models. We show that many retrieved documents were not pooled by keyword-based search methods, prompting us to perform additional relevance assessment on these new documents. A third of the newly retrieved documents judged were found to be relevant. Our analysis provides a thorough understanding of when and how to apply inference for retrieval, including a categorisation of queries according to the effect of inference. The inference mechanism promoted recall by retrieving new relevant documents not found by previous keyword-based approaches. In addition, it promoted precision by an effective reranking of documents. When inference is used, performance gains can generally be expected on hard queries. However, inference should not be applied universally: for easy, unambiguous queries and queries with few relevant documents, inference did adversely affect effectiveness. These conclusions reflect the fact that for retrieval as inference to be effective, a careful balancing act is involved. Finally, although the Graph Inference model is developed and applied to medical search, it is a general retrieval model applicable to other areas such as web search, where an emerging research trend is to utilise structured knowledge resources for more effective semantic search.
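The traversal idea can be sketched as score diffusion over a concept graph; the decay constant and depth limit below are illustrative assumptions, not the model's actual inference mechanism.

```java
import java.util.*;

// Hypothetical sketch of retrieval as graph traversal: starting from query
// concepts, scores diffuse over a concept graph with a per-hop decay, and
// documents attached to reached concepts accumulate relevance.
class GraphInferenceRetrieval {
    private final Map<String, List<String>> edges; // concept -> related concepts
    private final Map<String, List<String>> docs;  // concept -> document ids

    GraphInferenceRetrieval(Map<String, List<String>> edges,
                            Map<String, List<String>> docs) {
        this.edges = edges;
        this.docs = docs;
    }

    Map<String, Double> retrieve(Set<String> queryConcepts, int maxDepth) {
        Map<String, Double> scores = new HashMap<>();
        Deque<Map.Entry<String, Integer>> frontier = new ArrayDeque<>();
        Set<String> visited = new HashSet<>(queryConcepts);
        for (String c : queryConcepts) frontier.add(Map.entry(c, 0));

        while (!frontier.isEmpty()) {
            var cur = frontier.poll();
            double weight = Math.pow(0.5, cur.getValue()); // decay per hop (assumed)
            for (String d : docs.getOrDefault(cur.getKey(), List.of()))
                scores.merge(d, weight, Double::sum);
            if (cur.getValue() < maxDepth)
                for (String next : edges.getOrDefault(cur.getKey(), List.of()))
                    if (visited.add(next))
                        frontier.add(Map.entry(next, cur.getValue() + 1));
        }
        return scores; // document id -> inferred relevance
    }

    public static void main(String[] args) {
        var g = new GraphInferenceRetrieval(
            Map.of("heart attack", List.of("myocardial infarction")),
            Map.of("myocardial infarction", List.of("doc7")));
        System.out.println(g.retrieve(Set.of("heart attack"), 2)); // {doc7=0.5}
    }
}
```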
  17. Baroncini, S.; Sartini, B.; Erp, M. Van; Tomasi, F.; Gangemi, A.: Is dc:subject enough? : A landscape on iconography and iconology statements of knowledge graphs in the semantic web (2023) 0.02
    0.017091684 = product of:
      0.034183368 = sum of:
        0.034183368 = product of:
          0.068366736 = sum of:
            0.068366736 = weight(_text_:assessment in 1030) [ClassicSimilarity], result of:
              0.068366736 = score(doc=1030,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.2439969 = fieldWeight in 1030, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1030)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on the icon aspects. Design/methodology/approach - This study's analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians' theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures' suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness. Findings - This study's results reveal several issues in the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity. Originality/value - The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.
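One completeness check of the kind described can be sketched as the share of artworks carrying at least one icon statement; property and variable names below are placeholders, not the study's metrics.

```java
import java.util.*;

// Hypothetical sketch of a completeness check: the fraction of artworks
// that carry at least one iconographical/iconological statement.
class IconCompleteness {
    static double completeness(Map<String, List<String>> iconStatements,
                               Set<String> artworks) {
        long covered = artworks.stream()
            .filter(a -> !iconStatements.getOrDefault(a, List.of()).isEmpty())
            .count();
        return artworks.isEmpty() ? 0.0 : (double) covered / artworks.size();
    }

    public static void main(String[] args) {
        var icons = Map.of("art1", List.of("Annunciation")); // subject label (placeholder)
        System.out.println(completeness(icons, Set.of("art1", "art2"))); // 0.5
    }
}
```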
  18. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.01
    0.014955224 = product of:
      0.029910447 = sum of:
        0.029910447 = product of:
          0.059820894 = sum of:
            0.059820894 = weight(_text_:assessment in 2362) [ClassicSimilarity], result of:
              0.059820894 = score(doc=2362,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.2134973 = fieldWeight in 2362, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2362)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    According to my findings, several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature" where there is not yet one that deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: what decisions were made, and are they exemplary; can they be recommended as "the best way"? I raise strong doubts in that respect, and I miss more profound discussions of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style, more or less addressing students, but as I am myself a learner, especially in the field of medical knowledge representation, I do not speak "ex cathedra".
  19. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.014586212 = product of:
      0.029172424 = sum of:
        0.029172424 = product of:
          0.05834485 = sum of:
            0.05834485 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.05834485 = score(doc=3355,freq=4.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  20. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.0137520125 = product of:
      0.027504025 = sum of:
        0.027504025 = product of:
          0.05500805 = sum of:
            0.05500805 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.05500805 = score(doc=3376,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.2010 16:58:22

Languages

  • e 51
  • d 11

Types

  • a 47
  • el 14
  • x 5
  • m 2
  • n 1
  • r 1