Search (130 results, page 1 of 7)

  • theme_ss:"Wissensrepräsentation"
  1. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie (2005) 0.06
    0.06114716 = product of:
      0.12229432 = sum of:
        0.12229432 = sum of:
          0.07277499 = weight(_text_:k in 1852) [ClassicSimilarity], result of:
            0.07277499 = score(doc=1852,freq=4.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.39044446 = fieldWeight in 1852, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1852)
          0.049519327 = weight(_text_:22 in 1852) [ClassicSimilarity], result of:
            0.049519327 = score(doc=1852,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.2708308 = fieldWeight in 1852, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1852)
      0.5 = coord(1/2)
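Every score explanation in this list follows Lucene's ClassicSimilarity (TF-IDF) arithmetic: for each matching term, score = queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = sqrt(tf) × idf × fieldNorm, and idf = 1 + ln(maxDocs / (docFreq + 1)); the per-document sum is then scaled by the coord factor (matching clauses / total clauses). A minimal Python sketch reproducing the top score from the tree above (helper names are my own, not a Lucene API):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm                     # idf * queryNorm
    field_weight = math.sqrt(freq) * term_idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

# Values taken from the explain tree for result 1 (doc 1852)
QUERY_NORM, FIELD_NORM = 0.052213363, 0.0546875
score_k  = term_score(4.0, 3384, 44218, QUERY_NORM, FIELD_NORM)   # _text_:k
score_22 = term_score(2.0, 3622, 44218, QUERY_NORM, FIELD_NORM)   # _text_:22
total = 0.5 * (score_k + score_22)   # coord(1/2): half the query clauses matched
```

Running this reproduces the document score of 0.06114716 shown above.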
    
    Abstract
    Ontologies are used to provide, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art offers. This paper presents an ontology, developed and deployed at the FH Darmstadt, that is meant to cover the subject area of higher education broadly while at the same time describing it in a semantically differentiated way. The challenge of semantic search is that it must be as easy for information seekers to use as common search engines and at the same time deliver high-quality results on the basis of the elaborate information model. The paper describes the capabilities provided by the K-Infinity software and the concept by which these capabilities are employed for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:58
    Object
    K-Infinity
  2. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.06
    0.06114716 = product of:
      0.12229432 = sum of:
        0.12229432 = sum of:
          0.07277499 = weight(_text_:k in 4324) [ClassicSimilarity], result of:
            0.07277499 = score(doc=4324,freq=4.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.39044446 = fieldWeight in 4324, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
          0.049519327 = weight(_text_:22 in 4324) [ClassicSimilarity], result of:
            0.049519327 = score(doc=4324,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.2708308 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are used to provide, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art offers. This paper presents an ontology, developed and deployed at the FH Darmstadt, that is meant to cover the subject area of higher education broadly while at the same time describing it in a semantically differentiated way. The challenge of semantic search is that it must be as easy for information seekers to use as common search engines and at the same time deliver high-quality results on the basis of the elaborate information model. The paper describes the capabilities provided by the K-Infinity software and the concept by which these capabilities are employed for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
    Object
    K-Infinity
  3. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.05
    0.052411847 = product of:
      0.10482369 = sum of:
        0.10482369 = sum of:
          0.06237856 = weight(_text_:k in 987) [ClassicSimilarity], result of:
            0.06237856 = score(doc=987,freq=4.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.33466667 = fieldWeight in 987, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
          0.042445138 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
            0.042445138 = score(doc=987,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.23214069 = fieldWeight in 987, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
      0.5 = coord(1/2)
    
    Classification
    BCA (FH K)
    Date
    23. 7.2017 13:49:22
    GHBS
    BCA (FH K)
  4. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.05
    0.0520674 = product of:
      0.1041348 = sum of:
        0.1041348 = sum of:
          0.044108305 = weight(_text_:k in 3355) [ClassicSimilarity], result of:
            0.044108305 = score(doc=3355,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 3355, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
          0.060026493 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
            0.060026493 = score(doc=3355,freq=4.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.32829654 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
      0.5 = coord(1/2)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  5. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.04
    0.04327672 = product of:
      0.08655344 = sum of:
        0.08655344 = sum of:
          0.044108305 = weight(_text_:k in 3387) [ClassicSimilarity], result of:
            0.044108305 = score(doc=3387,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
          0.042445138 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
            0.042445138 = score(doc=3387,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.23214069 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
      0.5 = coord(1/2)
    
    Date
    1. 8.2010 12:35:22
  6. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.04
    0.04327672 = product of:
      0.08655344 = sum of:
        0.08655344 = sum of:
          0.044108305 = weight(_text_:k in 2230) [ClassicSimilarity], result of:
            0.044108305 = score(doc=2230,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 2230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=2230)
          0.042445138 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
            0.042445138 = score(doc=2230,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.23214069 = fieldWeight in 2230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2230)
      0.5 = coord(1/2)
    
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
  7. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.04
    0.04097758 = sum of:
      0.0149865085 = product of:
        0.059946034 = sum of:
          0.059946034 = weight(_text_:authors in 2257) [ClassicSimilarity], result of:
            0.059946034 = score(doc=2257,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.25184128 = fieldWeight in 2257, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2257)
        0.25 = coord(1/4)
      0.02599107 = product of:
        0.05198214 = sum of:
          0.05198214 = weight(_text_:k in 2257) [ClassicSimilarity], result of:
            0.05198214 = score(doc=2257,freq=4.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.2788889 = fieldWeight in 2257, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2257)
        0.5 = coord(1/2)
    
    Abstract
    The Future of Information Architecture examines issues surrounding why information is processed, stored and applied in the way that it has been, since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist some deeper categories and principles behind these categories of information, with enormous implications for our understanding of reality in general. To understand this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information networks from the main perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely: (a) the first thesis on the simpleness-complicatedness principle; (b) the second thesis on the exactness-vagueness principle; (c) the third thesis on the slowness-quickness principle; (d) the fourth thesis on the order-chaos principle; (e) the fifth thesis on the symmetry-asymmetry principle; and (f) the sixth thesis on the post-human stage.
    Classification
    BBV (FH K)
    GHBS
    BBV (FH K)
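Result 7 shows the other shape these explanations take: there is no single outer coord factor; instead each clause group carries its own (here coord(1/4) for the `authors` clause and coord(1/2) for the `k` clause), and the outer node is a plain sum. A sketch checking that total, again assuming Lucene's ClassicSimilarity formulas (the helper name is my own):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1)),
    # term score = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

QUERY_NORM = 0.052213363   # queryNorm, shared by every clause of the query
FIELD_NORM = 0.0390625     # fieldNorm(doc=2257)

authors = classic_term_score(2.0, 1258, 44218, QUERY_NORM, FIELD_NORM)
k       = classic_term_score(4.0, 3384, 44218, QUERY_NORM, FIELD_NORM)

# Each clause group is scaled by its own coord factor before the outer sum:
total = authors * 0.25 + k * 0.5   # coord(1/4) and coord(1/2)
```

The rarer a term (lower docFreq), the higher its idf; that is why `authors` (docFreq=1258) outweighs `k` (docFreq=3384) per occurrence despite its smaller coord factor.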
  8. Starostenko, O.; Rodríguez-Asomoza, J.; Sánchez-López, S.E.; Chávez-Aragón, J.A.: Shape indexing and retrieval : a hybrid approach using ontological description (2008) 0.04
    0.040037964 = sum of:
      0.017983811 = product of:
        0.071935244 = sum of:
          0.071935244 = weight(_text_:authors in 4318) [ClassicSimilarity], result of:
            0.071935244 = score(doc=4318,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.30220953 = fieldWeight in 4318, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=4318)
        0.25 = coord(1/4)
      0.022054153 = product of:
        0.044108305 = sum of:
          0.044108305 = weight(_text_:k in 4318) [ClassicSimilarity], result of:
            0.044108305 = score(doc=4318,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 4318, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=4318)
        0.5 = coord(1/2)
    
    Abstract
    This paper presents a novel hybrid approach for visual information retrieval (VIR) that combines shape analysis of objects in an image with their indexing by textual descriptions. The principal goal of the presented technique is to apply the Two Segments Turning Function (2STF), proposed by the authors, for efficient shape processing that is invariant to spatial variations, and to implement semantic Web approaches for ontology-based, user-oriented annotation of multimedia information. In the proposed approach the user's textual queries are converted to image features, which are used for image searching, indexing, interpretation, and retrieval. A decision about the similarity between a retrieved image and the user's query is made by computing the shape's convergence to the 2STF and combining it with matching of the ontological annotations of objects in the image, thereby providing an automatic definition of machine-understandable semantics. In order to evaluate the proposed approach, the Image Retrieval by Ontological Description of Shapes system has been designed and tested using some standard image domains.
    Source
    Innovations and advanced techniques in systems, computing sciences and software engineering. Ed.: K. Elleithy
  9. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.04
    0.036063936 = product of:
      0.07212787 = sum of:
        0.07212787 = sum of:
          0.03675692 = weight(_text_:k in 1434) [ClassicSimilarity], result of:
            0.03675692 = score(doc=1434,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.19720423 = fieldWeight in 1434, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
          0.03537095 = weight(_text_:22 in 1434) [ClassicSimilarity], result of:
            0.03537095 = score(doc=1434,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.19345059 = fieldWeight in 1434, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  10. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    0.034914296 = sum of:
      0.020765917 = product of:
        0.08306367 = sum of:
          0.08306367 = weight(_text_:authors in 1634) [ClassicSimilarity], result of:
            0.08306367 = score(doc=1634,freq=6.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.34896153 = fieldWeight in 1634, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.25 = coord(1/4)
      0.01414838 = product of:
        0.02829676 = sum of:
          0.02829676 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.02829676 = score(doc=1634,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  11. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.03
    0.034914296 = sum of:
      0.020765917 = product of:
        0.08306367 = sum of:
          0.08306367 = weight(_text_:authors in 179) [ClassicSimilarity], result of:
            0.08306367 = score(doc=179,freq=6.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.34896153 = fieldWeight in 179, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
        0.25 = coord(1/4)
      0.01414838 = product of:
        0.02829676 = sum of:
          0.02829676 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
            0.02829676 = score(doc=179,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.15476047 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
        0.5 = coord(1/2)
    
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. Infrastructures to support these practices in qualitative research are lacking, and their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirements analysis using interviews and an observation; a design phase accompanied by interviews; and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes due to participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers from the interviewees, which need to be balanced; second, the approach takes more time due to interview planning and analysis. Practical implications The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve the parties affected on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design, using mainly interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  12. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.03
    0.031098248 = product of:
      0.062196497 = sum of:
        0.062196497 = product of:
          0.24878599 = sum of:
            0.24878599 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.24878599 = score(doc=400,freq=2.0), product of:
                0.4426655 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052213363 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
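The PDF link in this record was extracted from a Google redirect, which percent-encodes the target URL and appends a `usg` tracking token (this is also why the explain tree above scores a `_text_:3a` term, the encoded colon). A minimal sketch of recovering the clean URL from such a link:

```python
from urllib.parse import unquote

# A redirect-mangled link as it appears in the raw record
raw = "https%3A%2F%2Faclanthology.org%2FD19-5317.pdf&usg=AOvVaw0ZZFyq5wWTtNTvNkrvjlGA"

decoded = unquote(raw)           # percent-decoding restores "://" and "/"
url = decoded.split("&usg=")[0]  # strip the Google redirect tracking parameter
```

After decoding, `url` is the direct ACL Anthology link cited in the record.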
  13. Hinkelmann, K.: Ontopia Omnigator : ein Werkzeug zur Einführung in Topic Maps (20xx) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 3162) [ClassicSimilarity], result of:
              0.102919385 = score(doc=3162,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 3162, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3162)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Haenelt, K.: Semantik im Wiki : am Beispiel des MediaWiki und Semantic MediaWiki (2011) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 3166) [ClassicSimilarity], result of:
              0.102919385 = score(doc=3166,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 3166, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3166)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Weller, K.: Kooperativer Ontologieaufbau (2006) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 1270) [ClassicSimilarity], result of:
              0.102919385 = score(doc=1270,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Weller, K.: Kooperativer Ontologieaufbau (2006) 0.03
    0.025729846 = product of:
      0.051459692 = sum of:
        0.051459692 = product of:
          0.102919385 = sum of:
            0.102919385 = weight(_text_:k in 2397) [ClassicSimilarity], result of:
              0.102919385 = score(doc=2397,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.5521719 = fieldWeight in 2397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2397)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.02
    0.022870388 = sum of:
      0.010490555 = product of:
        0.04196222 = sum of:
          0.04196222 = weight(_text_:authors in 1633) [ClassicSimilarity], result of:
            0.04196222 = score(doc=1633,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.17628889 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
        0.25 = coord(1/4)
      0.012379832 = product of:
        0.024759663 = sum of:
          0.024759663 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
            0.024759663 = score(doc=1633,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.1354154 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to improve conceptual-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and the performance of semantic information retrieval is assessed through standard measures: precision and recall. Higher precision means that the documents obtained are (meaningfully) relevant, while lower recall means poorer coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, in our approach, we focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of ontology-based search. The aforementioned tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of concepts that are populated in the index. Here, we introduce a new measure, called the index enhancement measure, to estimate the coverage of ontological concepts being indexed. We have evaluated the ontology-based search for the tourism domain with tourism documents and a tourism-specific ontology. The comparison of search results based on the use of the ontology with and without query expansion is examined to estimate the efficiency of the proposed query expansion task. The ranking is compared with the ORank system to evaluate the performance of our ontology-based search. From these analyses, the ontology-based search results show better recall when compared to the other concept-based search systems. The mean average precision of the ontology-based search is found to be 0.79 and the recall 0.65; the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, the concept cannot be indexed. When a given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, we incorporate the concepts present at the same level (siblings) into the ontological index. The structural information from the ontology is determined for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and on the ontological relations that exist in the query and the documents. With this ontological structural information, the search results showed better coverage of concepts with respect to the query.
    Date
    20. 1.2015 18:30:22
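    The sibling-enhanced indexing and query expansion described in the abstract above can be sketched roughly as follows. The toy tourism ontology, the concept names, and the expansion function are illustrative assumptions for this sketch, not the authors' actual implementation.

    ```python
    # Hedged sketch: query expansion over a toy tourism is-a hierarchy.
    # Kohler et al.'s index uses super- and sub-concepts; the enhancement
    # adds siblings (concepts sharing a parent). All data here is invented.

    # parent -> children edges of a small is-a hierarchy (assumed example)
    ONTOLOGY = {
        "Accommodation": ["Hotel", "Hostel", "Campsite"],
        "Attraction": ["Museum", "Beach"],
    }

    def parents(concept):
        # direct super-concepts
        return [p for p, cs in ONTOLOGY.items() if concept in cs]

    def children(concept):
        # direct sub-concepts
        return ONTOLOGY.get(concept, [])

    def siblings(concept):
        # concepts sharing a parent, excluding the concept itself
        sibs = set()
        for p in parents(concept):
            sibs.update(c for c in children(p) if c != concept)
        return sorted(sibs)

    def expand_query(concept):
        # super- and sub-concepts plus the sibling enhancement
        return sorted({concept, *parents(concept), *children(concept),
                       *siblings(concept)})

    print(expand_query("Hotel"))
    # -> ['Accommodation', 'Campsite', 'Hostel', 'Hotel']
    ```

    The expanded concept set would then be matched against the ontological index; the paper's ranking step, which weights single-concept, multiple-concept, and concept-with-relation queries differently, is not modelled here.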
  18. Hitzler, P.; Janowicz, K.: Ontologies in a data driven world : finding the middle ground (2013) 0.02
    0.022054153 = product of:
      0.044108305 = sum of:
        0.044108305 = product of:
          0.08821661 = sum of:
            0.08821661 = weight(_text_:k in 803) [ClassicSimilarity], result of:
              0.08821661 = score(doc=803,freq=2.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.47329018 = fieldWeight in 803, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.09375 = fieldNorm(doc=803)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Kottmann, N.; Studer, T.: Improving semantic query answering (2006) 0.02
    0.020792855 = product of:
      0.04158571 = sum of:
        0.04158571 = product of:
          0.08317142 = sum of:
            0.08317142 = weight(_text_:k in 3979) [ClassicSimilarity], result of:
              0.08317142 = score(doc=3979,freq=4.0), product of:
                0.18639012 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.052213363 = queryNorm
                0.44622225 = fieldWeight in 3979, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3979)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The retrieval problem is one of the main reasoning tasks for knowledge base systems. Given a knowledge base K and a concept C, the retrieval problem consists of finding all individuals a for which K logically entails C(a). We present an approach to answer retrieval queries over (a restriction of) OWL ontologies. Our solution is based on reducing the retrieval problem to a problem of evaluating an SQL query over a database constructed from the original knowledge base. We provide complete answers to retrieval problems. Still, our system performs very well as is shown by a standard benchmark.
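    The reduction described above, answering the retrieval query "all individuals a with K entailing C(a)" by evaluating an SQL query over a database built from the knowledge base, can be sketched minimally as follows. The schema, the toy knowledge base, and the table names are assumptions for illustration, not the authors' system.

    ```python
    # Hedged sketch: retrieval reduced to SQL over tables derived from a
    # knowledge base. A real system would compute the subsumption closure
    # from the ontology; here it is entered by hand for a toy example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE asserts(individual TEXT, concept TEXT);  -- C(a) facts
        CREATE TABLE subsumes(sup TEXT, sub TEXT);            -- reflexive-transitive sub-concept closure
    """)
    # toy knowledge base: Hotel and Hostel are sub-concepts of Accommodation
    conn.executemany("INSERT INTO asserts VALUES (?, ?)",
                     [("ritz", "Hotel"), ("backpacker_inn", "Hostel")])
    conn.executemany("INSERT INTO subsumes VALUES (?, ?)",
                     [("Accommodation", "Accommodation"),
                      ("Accommodation", "Hotel"),
                      ("Accommodation", "Hostel"),
                      ("Hotel", "Hotel"),
                      ("Hostel", "Hostel")])

    def retrieve(concept):
        # individuals asserted under the concept or any of its sub-concepts
        rows = conn.execute(
            """SELECT DISTINCT a.individual
               FROM asserts a JOIN subsumes s ON a.concept = s.sub
               WHERE s.sup = ? ORDER BY a.individual""", (concept,))
        return [r[0] for r in rows]

    print(retrieve("Accommodation"))
    # -> ['backpacker_inn', 'ritz']
    ```

    Precomputing the closure is what keeps query answering a plain join: the entailment reasoning happens once at database-construction time, not per query.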
  20. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.020732166 = product of:
      0.041464332 = sum of:
        0.041464332 = product of:
          0.16585733 = sum of:
            0.16585733 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.16585733 = score(doc=701,freq=2.0), product of:
                0.4426655 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052213363 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.

Languages

  • e 99
  • d 26

Types

  • a 89
  • el 31
  • m 15
  • x 7
  • s 5
  • r 2
  • n 1

Classifications