Search (142 results, page 1 of 8)

  • theme_ss:"Wissensrepräsentation"
  1. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.06
    0.059782628 = sum of:
      0.011891785 = product of:
        0.07135071 = sum of:
          0.07135071 = weight(_text_:authors in 179) [ClassicSimilarity], result of:
            0.07135071 = score(doc=179,freq=6.0), product of:
              0.20446584 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.044850662 = queryNorm
              0.34896153 = fieldWeight in 179, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
        0.16666667 = coord(1/6)
      0.047890842 = sum of:
        0.02358426 = weight(_text_:c in 179) [ClassicSimilarity], result of:
          0.02358426 = score(doc=179,freq=2.0), product of:
            0.1547081 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.044850662 = queryNorm
            0.1524436 = fieldWeight in 179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
        0.024306582 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
          0.024306582 = score(doc=179,freq=2.0), product of:
            0.15705937 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.044850662 = queryNorm
            0.15476047 = fieldWeight in 179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
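     The indented tree above (like the similar trees under the other hits) is Lucene "explain" output for the classic tf-idf similarity: each leaf score is queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm, and coord(m/n) down-weights a clause that matched only m of n subclauses. A minimal Python sketch reproducing the first leaf above (the function name is mine; the constants are taken from the tree):

       import math

       def classic_similarity_leaf(freq, idf, query_norm, field_norm):
           """One leaf of a Lucene ClassicSimilarity explain tree:
           score = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
           tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
           query_weight = idf * query_norm       # queryWeight
           field_weight = tf * idf * field_norm  # fieldWeight
           return query_weight * field_weight

       # weight(_text_:authors in 179) from the tree above:
       s = classic_similarity_leaf(freq=6.0, idf=4.558814,
                                   query_norm=0.044850662, field_norm=0.03125)
       print(s)  # ~0.07135071, matching the explain output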
    
    Abstract
     Purpose: The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations.
     Design/methodology/approach: This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science: it makes qualitative research more transparent and enhances both the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews.
     Findings: The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers from the interviewees, which need to be balanced; second, the approach takes more time owing to interview planning and analysis.
     Practical implications: The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels.
     Originality/value: Several ontology design methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design based mainly on interviews for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.04
    0.035537045 = sum of:
      0.023744915 = product of:
        0.14246948 = sum of:
          0.14246948 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.14246948 = score(doc=5820,freq=2.0), product of:
              0.3802444 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.044850662 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.16666667 = coord(1/6)
      0.01179213 = product of:
        0.02358426 = sum of:
          0.02358426 = weight(_text_:c in 5820) [ClassicSimilarity], result of:
            0.02358426 = score(doc=5820,freq=2.0), product of:
              0.1547081 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.044850662 = queryNorm
              0.1524436 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.5 = coord(1/2)
    
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Pepper, S.; Groenmo, G.O.: Towards a general theory of scope (2002) 0.03
    0.029651677 = product of:
      0.059303354 = sum of:
        0.059303354 = product of:
          0.17791006 = sum of:
            0.051492937 = weight(_text_:authors in 539) [ClassicSimilarity], result of:
              0.051492937 = score(doc=539,freq=2.0), product of:
                0.20446584 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.044850662 = queryNorm
                0.25184128 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=539)
            0.12641712 = weight(_text_:back in 539) [ClassicSimilarity], result of:
              0.12641712 = score(doc=539,freq=4.0), product of:
                0.26939675 = queryWeight, product of:
                  6.006528 = idf(docFreq=295, maxDocs=44218)
                  0.044850662 = queryNorm
                0.46925998 = fieldWeight in 539, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.006528 = idf(docFreq=295, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=539)
          0.33333334 = coord(2/6)
      0.5 = coord(1/2)
    
    Abstract
     This paper is concerned with the issue of scope in topic maps. Topic maps are a form of knowledge representation suitable for solving a number of complex problems in the area of information management, ranging from findability (navigation and querying) to knowledge management and enterprise application integration (EAI). The topic map paradigm has its roots in efforts to understand the essential semantics of back-of-book indexes in order that they might be captured in a form suitable for computer processing. Once understood, the model of a back-of-book index was generalised in order to cover the needs of digital information, and extended to encompass glossaries and thesauri, as well as indexes. The resulting core model, of typed topics, associations, and occurrences, has many similarities with the semantic networks developed by the artificial intelligence community for representing knowledge structures. One key requirement of topic maps from the earliest days was to be able to merge indexes from disparate origins. This requirement accounts for two further concepts that greatly enhance the power of topic maps: subject identity and scope. This paper concentrates on scope, but also includes a brief discussion of the closely related feature known as the topic naming constraint. It is based on the authors' experience in creating topic maps (in particular, the Italian Opera Topic Map) and in implementing processing systems for topic maps (in particular, the Ontopia Topic Map Engine and Navigator).
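     To make the core model named in this abstract concrete, here is a small illustrative sketch of topics whose names are qualified by scope; the class and the opera example are mine, not the paper's formal model:

       from dataclasses import dataclass, field

       @dataclass
       class Topic:
           """A topic map topic: a subject whose names can be scope-qualified."""
           identifier: str
           names: list = field(default_factory=list)  # (name, scope) pairs

           def add_name(self, name, scope=()):
               # scope: the set of topics within which this name is valid;
               # an empty scope means the name holds unconditionally
               self.names.append((name, frozenset(scope)))

           def names_in_scope(self, context):
               """Names whose scope is satisfied by the given context."""
               return [n for n, s in self.names if s <= frozenset(context)]

       tosca = Topic("tosca")
       tosca.add_name("Tosca")                       # unconstrained name
       tosca.add_name("La Tosca", scope={"french"})  # valid in French contexts
       print(tosca.names_in_scope({"french"}))       # ['Tosca', 'La Tosca']

     Merging two topic maps would then amount to combining the name lists of topics with the same subject identity, with scope keeping origin-specific names from clashing.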
  4. Jiang, Y.-C.; Li, H.: ¬The theoretical basis and basic principles of knowledge network construction in digital library (2023) 0.03
    0.027986784 = sum of:
      0.010298587 = product of:
        0.061791524 = sum of:
          0.061791524 = weight(_text_:authors in 1130) [ClassicSimilarity], result of:
            0.061791524 = score(doc=1130,freq=2.0), product of:
              0.20446584 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.044850662 = queryNorm
              0.30220953 = fieldWeight in 1130, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=1130)
        0.16666667 = coord(1/6)
      0.017688196 = product of:
        0.035376392 = sum of:
          0.035376392 = weight(_text_:c in 1130) [ClassicSimilarity], result of:
            0.035376392 = score(doc=1130,freq=2.0), product of:
              0.1547081 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.044850662 = queryNorm
              0.22866541 = fieldWeight in 1130, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.046875 = fieldNorm(doc=1130)
        0.5 = coord(1/2)
    
    Abstract
     Knowledge network construction (KNC) is the essence of dynamic knowledge architecture and helps realize ubiquitous knowledge services in digital libraries (DLs). The authors explore its theoretical foundations and basic rules to elucidate the basic principles of KNC in DLs. The results indicate that the general connectedness of the world, the small-world phenomenon, relevance theory, and the unity and continuity of scientific development constitute the production tool, architectural aim and scientific foundation of KNC in DLs. By analyzing the characteristics of KNC based on different types of knowledge linking, as well as the relationships between different forms of knowledge and the appropriate ways of linking them, the basic principle of KNC is summarized as follows: let each form of knowledge linking play to its strengths and each form of knowledge manifestation serve its intended purpose in practice, so that subjective and objective knowledge networks are organically combined. This lays a solid theoretical foundation and provides an action guide for DLs constructing knowledge networks.
  5. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    0.024045076 = sum of:
      0.011891785 = product of:
        0.07135071 = sum of:
          0.07135071 = weight(_text_:authors in 1634) [ClassicSimilarity], result of:
            0.07135071 = score(doc=1634,freq=6.0), product of:
              0.20446584 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.044850662 = queryNorm
              0.34896153 = fieldWeight in 1634, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.16666667 = coord(1/6)
      0.012153291 = product of:
        0.024306582 = sum of:
          0.024306582 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.024306582 = score(doc=1634,freq=2.0), product of:
              0.15705937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044850662 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.5 = coord(1/2)
    
    Abstract
     Purpose: Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations.
     Design/methodology/approach: Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Evidently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules and on lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. The ontologies were then unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies.
     Findings: To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's good potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
     Research limitations/implications: This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology has yet to be fully automatically implemented and tested on a larger dataset in future research.
     Practical implications: This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
     Originality/value: To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
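     The paper's actual unification rules are not given in this abstract; purely as a toy illustration of relation-based matching in its spirit, the sketch below unifies triples whose relation labels mutually entail under a hand-made synonym table (all names and data are mine):

       # Two relations are unified when their labels entail each other under
       # a small entailment table; the paper's rules (ontological reference
       # rules, lexical/textual entailment) are considerably richer.
       ENTAILS = {
           "is part of": {"belongs to"},
           "causes": {"leads to", "results in"},
       }

       def labels_entail(a: str, b: str) -> bool:
           return a == b or b in ENTAILS.get(a, set()) or a in ENTAILS.get(b, set())

       def unify_relations(onto_a, onto_b):
           """Pair up triples (subject, relation, object) whose subjects and
           objects match exactly and whose relation labels mutually entail."""
           matches = []
           for (s1, r1, o1) in onto_a:
               for (s2, r2, o2) in onto_b:
                   if s1 == s2 and o1 == o2 and labels_entail(r1, r2):
                       matches.append(((s1, r1, o1), (s2, r2, o2)))
           return matches

       a = [("smoking", "causes", "cancer")]
       b = [("smoking", "leads to", "cancer")]
       print(unify_relations(a, b))  # one unified pair despite different labels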
    Date
    20. 1.2015 18:30:22
  6. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.02
    0.023322318 = sum of:
      0.008582156 = product of:
        0.051492937 = sum of:
          0.051492937 = weight(_text_:authors in 97) [ClassicSimilarity], result of:
            0.051492937 = score(doc=97,freq=2.0), product of:
              0.20446584 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.044850662 = queryNorm
              0.25184128 = fieldWeight in 97, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=97)
        0.16666667 = coord(1/6)
      0.0147401625 = product of:
        0.029480325 = sum of:
          0.029480325 = weight(_text_:c in 97) [ClassicSimilarity], result of:
            0.029480325 = score(doc=97,freq=2.0), product of:
              0.1547081 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.044850662 = queryNorm
              0.1905545 = fieldWeight in 97, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0390625 = fieldNorm(doc=97)
        0.5 = coord(1/2)
    
    Abstract
     Robertson and Spärck Jones pioneered experimental probabilistic models (the Binary Independence Model), with a typology generalizing the Boolean model, frequency counts for calculating elementary weightings, and a combination of these into a global probabilistic estimate. However, this model did not consider dependencies between indexing terms. An extension to mixture models (e.g., using a 2-Poisson law) made it possible to take these dependencies into account from a macroscopic point of view (BM25), along with shallow linguistic processing of co-references. Newer approaches (language models, for example "bag of words" models, probabilistic dependencies between queries and documents, and consequently Bayesian inference using a conjugate Dirichlet prior) furnished new solutions for document structuring (categorization) and for index smoothing. To date, the main issues in these probabilistic models have been addressed from a formal point of view only; linguistic properties are thus neglected in the indexing language. The authors examine how linguistic and semantic modeling can be integrated into indexing languages, and set up a hybrid model that makes it possible to deal with different information retrieval problems in a unified way.
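     One of the language-model techniques this abstract mentions, index smoothing with a Dirichlet prior, can be sketched in a few lines; the smoothing formula is the standard one, while the toy corpus and the mu value below are mine:

       import math
       from collections import Counter

       def dirichlet_score(query, doc, collection, mu=2000.0):
           """Query log-likelihood with Dirichlet-prior smoothing:
           p(w|d) = (tf(w,d) + mu * p(w|C)) / (|d| + mu)."""
           tf_d = Counter(doc)
           tf_c = Counter(collection)
           score = 0.0
           for w in query:
               p_wc = tf_c[w] / len(collection)   # collection language model
               p_wd = (tf_d[w] + mu * p_wc) / (len(doc) + mu)
               if p_wd == 0.0:                    # unseen in doc and collection
                   return float("-inf")
               score += math.log(p_wd)
           return score

       collection = "semantic models in information retrieval use language models".split()
       doc = "semantic indexing with language models".split()
       print(dirichlet_score(["semantic", "models"], doc, collection))

     The prior pulls every document model toward the collection model, so terms absent from a short document are not assigned zero probability.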
    Source
     Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  7. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.02
    0.023322318 = sum of:
      0.008582156 = product of:
        0.051492937 = sum of:
          0.051492937 = weight(_text_:authors in 2257) [ClassicSimilarity], result of:
            0.051492937 = score(doc=2257,freq=2.0), product of:
              0.20446584 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.044850662 = queryNorm
              0.25184128 = fieldWeight in 2257, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2257)
        0.16666667 = coord(1/6)
      0.0147401625 = product of:
        0.029480325 = sum of:
          0.029480325 = weight(_text_:c in 2257) [ClassicSimilarity], result of:
            0.029480325 = score(doc=2257,freq=2.0), product of:
              0.1547081 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.044850662 = queryNorm
              0.1905545 = fieldWeight in 2257, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2257)
        0.5 = coord(1/2)
    
    Abstract
     The Future of Information Architecture examines issues surrounding why information is processed, stored and applied in the way that it has been since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist some deeper categories and principles behind these categories of information, with enormous implications for our understanding of reality in general. To understand this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information networks from the main perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely: (a) the first thesis on the simpleness-complicatedness principle, (b) the second thesis on the exactness-vagueness principle, (c) the third thesis on the slowness-quickness principle, (d) the fourth thesis on the order-chaos principle, (e) the fifth thesis on the symmetry-asymmetry principle, and (f) the sixth thesis on the post-human stage.
  8. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.02
    0.017808685 = product of:
      0.03561737 = sum of:
        0.03561737 = product of:
          0.21370421 = sum of:
            0.21370421 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.21370421 = score(doc=400,freq=2.0), product of:
                0.3802444 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044850662 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.16666667 = coord(1/6)
      0.5 = coord(1/2)
    
    Content
     Cf.: https://aclanthology.org/D19-5317.pdf.
  9. Kottmann, N.; Studer, T.: Improving semantic query answering (2006) 0.02
    0.016676592 = product of:
      0.033353183 = sum of:
        0.033353183 = product of:
          0.06670637 = sum of:
            0.06670637 = weight(_text_:c in 3979) [ClassicSimilarity], result of:
              0.06670637 = score(doc=3979,freq=4.0), product of:
                0.1547081 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044850662 = queryNorm
                0.43117565 = fieldWeight in 3979, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3979)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The retrieval problem is one of the main reasoning tasks for knowledge base systems. Given a knowledge base K and a concept C, the retrieval problem consists of finding all individuals a for which K logically entails C(a). We present an approach to answering retrieval queries over (a restriction of) OWL ontologies. Our solution is based on reducing the retrieval problem to the problem of evaluating an SQL query over a database constructed from the original knowledge base. We provide complete answers to retrieval problems, and our system nevertheless performs very well, as shown by a standard benchmark.
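     A minimal sketch of the reduction this abstract describes, assuming a toy schema of asserted instances and subclass axioms (the paper handles a richer OWL fragment; the table and data names are mine):

       import sqlite3

       # Instances and subclass axioms become tables; a recursive SQL query
       # walks the subclass hierarchy below the queried concept C, so that
       # retrieve(C) returns every a with K |= C(a) in this fragment.
       con = sqlite3.connect(":memory:")
       con.executescript("""
           CREATE TABLE instance_of(individual TEXT, concept TEXT);
           CREATE TABLE subclass_of(sub TEXT, sup TEXT);
           INSERT INTO instance_of VALUES ('rex', 'Dog'), ('tom', 'Cat');
           INSERT INTO subclass_of VALUES ('Dog', 'Animal'), ('Cat', 'Animal');
       """)

       def retrieve(concept):
           rows = con.execute("""
               WITH RECURSIVE subs(c) AS (
                   SELECT ?
                   UNION
                   SELECT sub FROM subclass_of JOIN subs ON sup = c
               )
               SELECT DISTINCT individual
               FROM instance_of WHERE concept IN (SELECT c FROM subs)
           """, (concept,))
           return [r[0] for r in rows]

       print(retrieve("Animal"))  # ['rex', 'tom'], found via subclass axioms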
  10. Schubert, C.; Kinkeldey, C.; Reich, H.: Handbuch Datenbankanwendung zur Wissensrepräsentation im Verbundprojekt DeCOVER (2006) 0.02
    0.016676592 = product of:
      0.033353183 = sum of:
        0.033353183 = product of:
          0.06670637 = sum of:
            0.06670637 = weight(_text_:c in 4256) [ClassicSimilarity], result of:
              0.06670637 = score(doc=4256,freq=4.0), product of:
                0.1547081 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044850662 = queryNorm
                0.43117565 = fieldWeight in 4256, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4256)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.02
    0.01664164 = sum of:
      0.0060075093 = product of:
        0.036045056 = sum of:
          0.036045056 = weight(_text_:authors in 1633) [ClassicSimilarity], result of:
            0.036045056 = score(doc=1633,freq=2.0), product of:
              0.20446584 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.044850662 = queryNorm
              0.17628889 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
        0.16666667 = coord(1/6)
      0.010634129 = product of:
        0.021268258 = sum of:
          0.021268258 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
            0.021268258 = score(doc=1633,freq=2.0), product of:
              0.15705937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044850662 = queryNorm
              0.1354154 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
        0.5 = coord(1/2)
    
    Abstract
     Purpose: The purpose of this paper is to improve conceptual search by incorporating structural ontological information such as concepts and relations. Semantic information retrieval generally aims to identify relevant information based on the meanings of the query terms or on their context, and its performance is assessed with the standard measures of precision and recall: higher precision means more (meaningfully) relevant documents are obtained, while lower recall means poorer coverage of the concepts.
     Design/methodology/approach: The authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index; the index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, the approach focuses on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of ontology-based search. Both tasks make use of ontological concepts and the relations existing between those concepts, so as to obtain semantically more relevant search results for a given query.
     Findings: The proposed ontology-based indexing technique is investigated by analysing the coverage of the concepts populated in the index. A new measure, the index enhancement measure, is introduced to estimate the coverage of the ontological concepts being indexed. The ontology-based search is evaluated for the tourism domain with tourism documents and a tourism-specific ontology. Search results with and without query expansion are compared to estimate the efficiency of the proposed query expansion task, and the ranking is compared with the ORank system. The ontology-based search shows better recall than the other concept-based search systems: its mean average precision is 0.79 with a recall of 0.65, against 0.62 and 0.51 for the ORank system and 0.56 and 0.42 for concept-based search.
     Practical implications: When a concept is not present in the domain-specific ontology, it cannot be indexed; when a given query term is not available in the ontology, term-based results are retrieved.
     Originality/value: In addition to super- and sub-concepts, the concepts on the same level (siblings) are incorporated into the ontological index. The structural information from the ontology is used for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and on the ontological relations occurring in the query and the documents. With this structural information, the search results showed better coverage of concepts with respect to the query.
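     The sibling-aware expansion step can be illustrated with a toy taxonomy; the concepts and the unweighted expansion below are mine, not the paper's tourism ontology:

       # Expand a query concept with its super-, sub- and sibling concepts.
       PARENT = {
           "hotel": "accommodation",
           "hostel": "accommodation",
           "campsite": "accommodation",
           "accommodation": "tourism",
       }

       def children(concept):
           return [c for c, p in PARENT.items() if p == concept]

       def expand(concept):
           expansion = {concept}
           parent = PARENT.get(concept)
           if parent:
               expansion.add(parent)               # super-concept
               expansion.update(children(parent))  # siblings (incl. concept)
           expansion.update(children(concept))     # sub-concepts
           return expansion

       print(sorted(expand("hotel")))
       # ['accommodation', 'campsite', 'hostel', 'hotel']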
    Date
    20. 1.2015 18:30:22
  12. Onofri, A.: Concepts in context (2013) 0.02
    0.016325623 = sum of:
      0.0060075093 = product of:
        0.036045056 = sum of:
          0.036045056 = weight(_text_:authors in 1077) [ClassicSimilarity], result of:
            0.036045056 = score(doc=1077,freq=2.0), product of:
              0.20446584 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.044850662 = queryNorm
              0.17628889 = fieldWeight in 1077, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1077)
        0.16666667 = coord(1/6)
      0.0103181135 = product of:
        0.020636227 = sum of:
          0.020636227 = weight(_text_:c in 1077) [ClassicSimilarity], result of:
            0.020636227 = score(doc=1077,freq=2.0), product of:
              0.1547081 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.044850662 = queryNorm
              0.13338815 = fieldWeight in 1077, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1077)
        0.5 = coord(1/2)
    
    Abstract
     My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept C1 the same concept as a concept C2? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus). First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.
  13. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    0.015191614 = product of:
      0.030383227 = sum of:
        0.030383227 = product of:
          0.060766455 = sum of:
            0.060766455 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.060766455 = score(doc=6089,freq=2.0), product of:
                0.15705937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044850662 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.11-22
  14. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    0.015191614 = product of:
      0.030383227 = sum of:
        0.030383227 = product of:
          0.060766455 = sum of:
            0.060766455 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.060766455 = score(doc=5576,freq=2.0), product of:
                0.15705937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044850662 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13.12.2017 14:17:22
  15. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    0.015191614 = product of:
      0.030383227 = sum of:
        0.030383227 = product of:
          0.060766455 = sum of:
            0.060766455 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.060766455 = score(doc=539,freq=2.0), product of:
                0.15705937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044850662 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 13:22:07
  16. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.02
    0.015191614 = product of:
      0.030383227 = sum of:
        0.030383227 = product of:
          0.060766455 = sum of:
            0.060766455 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.060766455 = score(doc=3406,freq=2.0), product of:
                0.15705937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044850662 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 5.2010 16:22:35
  17. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    0.015191614 = product of:
      0.030383227 = sum of:
        0.030383227 = product of:
          0.060766455 = sum of:
            0.060766455 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.060766455 = score(doc=4523,freq=2.0), product of:
                0.15705937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044850662 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  18. Angerer, C.: Neuronale Netze : Revolution für die Wissenschaft? (2018) 0.01
    0.0147401625 = product of:
      0.029480325 = sum of:
        0.029480325 = product of:
          0.05896065 = sum of:
            0.05896065 = weight(_text_:c in 4023) [ClassicSimilarity], result of:
              0.05896065 = score(doc=4023,freq=2.0), product of:
                0.1547081 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044850662 = queryNorm
                0.381109 = fieldWeight in 4023, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4023)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Stuart, D.: Practical ontologies for information professionals (2016) 0.01
    0.013649584 = product of:
      0.027299168 = sum of:
        0.027299168 = product of:
          0.054598335 = sum of:
            0.054598335 = weight(_text_:c in 5152) [ClassicSimilarity], result of:
              0.054598335 = score(doc=5152,freq=14.0), product of:
                0.1547081 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044850662 = queryNorm
                0.35291192 = fieldWeight in 5152, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5152)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
     Chapter 1. What is an ontology?: Introduction; The data deluge and information overload; Defining terms; Knowledge organization systems and ontologies; Ontologies, metadata and linked data; What can an ontology do?; Ontologies and information professionals; Alternatives to ontologies; The aims of this book; The structure of this book
     Chapter 2. Ontologies and the semantic web: Introduction; The semantic web and linked data; Resource Description Framework (RDF); Classes, subclasses and properties; The semantic web stack; Embedded RDF; Alternative semantic visions; Libraries and the semantic web; Other cultural heritage institutions and the semantic web; Other organizations and the semantic web; Conclusion
     Chapter 3. Existing ontologies: Introduction; Ontology documentation; Ontologies for representing ontologies; Ontologies for libraries; Upper ontologies; Cultural heritage data models; Ontologies for the web; Conclusion
     Chapter 4. Adopting ontologies: Introduction; Reusing ontologies: application profiles and data models; Identifying ontologies; The ideal ontology discovery tool; Selection criteria; Conclusion
     Chapter 5. Building ontologies: Introduction; Approaches to building an ontology; The twelve steps; Ontology development example: Bibliometric Metrics Ontology element set; Conclusion
     Chapter 6. Interrogating ontologies: Introduction; Interrogating ontologies for reuse; Interrogating a knowledge base; Understanding ontology use; Conclusion
     Chapter 7. The future of ontologies and the information professional: Introduction; The future of ontologies for knowledge discovery; The future role of library and information professionals; The practical development of ontologies
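     As a taste of the RDF notions listed for Chapter 2 (classes, subclasses and properties), a few lines using the rdflib library, assuming it is installed; the example namespace and triples are mine, not ones discussed in the book:

       from rdflib import Graph, Literal, Namespace, RDF, RDFS

       EX = Namespace("http://example.org/")
       g = Graph()
       g.bind("ex", EX)

       # a class hierarchy, an instance, and a property value
       g.add((EX.Ontology, RDFS.subClassOf, EX.KnowledgeOrganizationSystem))
       g.add((EX.MyVocabulary, RDF.type, EX.Ontology))
       g.add((EX.MyVocabulary, RDFS.label, Literal("My vocabulary")))

       print(g.serialize(format="turtle"))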
  20. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.012890512 = product of:
      0.025781024 = sum of:
        0.025781024 = product of:
          0.05156205 = sum of:
            0.05156205 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.05156205 = score(doc=3355,freq=4.0), product of:
                0.15705937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044850662 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56

Languages

  • e 108
  • d 30
  • pt 1
  • sp 1

Types

  • a 103
  • el 36
  • m 10
  • x 9
  • r 4
  • s 4
  • n 2
  • p 1
