Search (146 results, page 1 of 8)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.24
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.22
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.21
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning rather than its representation). As a result, the output of a retrieval process is often of very little use for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the highly ambiguous nature of the retrieval process, in which a user unfamiliar with the underlying repository and/or query syntax merely approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. To clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies show that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Sartori, F.; Grazioli, L.: Metadata guiding knowledge engineering : a practical approach (2014) 0.01
    Abstract
    This paper presents an approach to the analysis, design and development of Knowledge Based Systems based on the Knowledge Artifact concept. Knowledge Artifacts can be understood as means to acquire, represent and maintain the knowledge involved in complex problem-solving activities. A complex problem is typically made up of a huge number of parts that are put together according to a first set of constraints (i.e. the procedural knowledge), depending on the functional properties it must satisfy, and a second set of rules, depending on what the expert thinks about the problem and how he/she would represent it. The paper illustrates a way to unify both types of knowledge in a Knowledge Artifact, exploiting the Ontologies, Influence Nets and Task Structures formalisms and the metadata paradigm.
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
  5. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie (2005) 0.01
    Abstract
    Ontologies are used to provide, by way of semantic grounding, a fundamentally better basis than the current state of the art, in particular for document retrieval. We present an ontology, developed and deployed at the FH Darmstadt, that is intended to cover the subject area of the university broadly while at the same time describing it in a semantically differentiated way. The problem with semantic search is that it must be as simple for information seekers to use as common search engines and at the same time deliver high-quality results on the basis of the elaborate information model. We describe the capabilities provided by the K-Infinity software and the concept by which these capabilities are employed for a semantic search for documents and other information units (people, events, projects, etc.).
    Date
    11. 2.2011 18:22:58
  6. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.01
    Abstract
    Ontologies are used to provide, by way of semantic grounding, a fundamentally better basis than the current state of the art, in particular for document retrieval. We present an ontology, developed and deployed at the FH Darmstadt, that is intended to cover the subject area of the university broadly while at the same time describing it in a semantically differentiated way. The problem with semantic search is that it must be as simple for information seekers to use as common search engines and at the same time deliver high-quality results on the basis of the elaborate information model. We describe the capabilities provided by the K-Infinity software and the concept by which these capabilities are employed for a semantic search for documents and other information units (people, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
  7. Koenderink, N.J.J.P.; Assem, M. van; Hulzebos, J.L.; Broekstra, J.; Top, J.L.: ROC: a method for proto-ontology construction by domain experts (2008) 0.01
    Abstract
    Ontology construction is a labour-intensive and costly process. Even though many formal and semi-formal vocabularies are available, creating an ontology for a specific application is hindered in a number of ways. First, eliciting concepts is a time-consuming and strenuous process. Second, it is difficult to keep focus. Third, technical modelling constructs are hard for the uninitiated to understand. We propose ROC as a method to cope with these problems. ROC builds on well-known approaches to ontology construction, but reuses existing sources to generate a repository of proposed associations. First, ROC assists in efficiently putting forward all relevant concepts and relations by providing a large set of candidate associations. Second, rather than using intermediate representations of formal constructs, we confront the domain expert with 'natural-language-like' statements generated from RDF-based triples. Moreover, we strictly separate the roles of problem owner, domain expert and knowledge engineer, each having his own responsibilities and skills. The domain expert and problem owner keep focus by monitoring a well-defined application purpose. We have implemented an initial set of tools to support ROC. This paper describes the ROC method and two application cases in which we evaluate the overall approach.
    Date
    29. 7.2011 14:44:56
  8. Khiat, A.; Benaissa, M.: Approach for instance-based ontology alignment : using argument and event structures of generative lexicon (2014) 0.01
    Abstract
    Ontology alignment has become a very important problem for ensuring semantic interoperability among heterogeneous and distributed information sources. Instance-based ontology alignment is a very promising technique for finding semantic correspondences between entities of different ontologies when they contain many instances. In this paper, we describe a new approach for aligning ontologies that do not share common instances. This approach extracts argument and event structures from a set of instances of a concept in the source ontology and compares them with semantic features extracted from a set of instances of the corresponding concept in the target ontology, using Generative Lexicon Theory. We show that the approach is theoretically powerful, because it is based on linguistic semantics, and useful in practice. We present experimental results obtained by running our approach on the Biblio test of the OAEI 2011 Benchmark series. The results show the good performance of our approach.
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
  9. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.01
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  10. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.00
    Abstract
    This thesis focuses on the conversion of vocabularies for the representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability depends on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary from its original format to a Semantic Web representation. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integrating collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  11. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.00
    Date
    2. 3.2013 12:29:05
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Eds.: J. Sieglerschmidt and H.P. Ohly
  12. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.00
    Abstract
    Indexing plays a vital role in Information Retrieval. With the availability of huge volumes of information, it has become necessary to index information in a way that makes it easier for end users to find what they want efficiently and accurately. Keyword-based indexing uses words as indexing terms and is not capable of capturing the implicit relations among terms or the semantics of the words in the document. To overcome this limitation, ontology-based indexing came into existence; it enables semantics-based indexing that can resolve complex and indirect user queries. Ontologies are used for document indexing, which allows semantics-based information retrieval. At present, either existing ontologies or ontologies constructed from scratch are used for indexing. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge, whereas using an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (which their developers change over time), and ontology heterogeneity also arises because the ontologies are constructed by different developers. One possible solution to managing multiple ontologies, and to building from scratch, is to use modular ontologies for indexing.
    Modular ontologies are built by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with in this process, so the ontologies need to be aligned before they are used for modular ontology construction. Existing approaches to ontology alignment compare all the concepts of each ontology to be aligned and are hence not optimized in terms of time and search-space utilization. A new indexing technique based on modular ontologies is proposed, together with an efficient ontology alignment technique that solves the heterogeneity problem during the construction of the modular ontology. Results are satisfactory: precision and recall are improved by 8% and 10%, respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision and recall are close to 1, which shows that the results are significant. Further research can be carried out on using the modular-ontology-based indexing technique for multimedia information retrieval and biomedical information retrieval.
    Date
    20. 1.2015 18:30:22
  13. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.00
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  14. Kottmann, N.; Studer, T.: Improving semantic query answering (2006) 0.00
    Abstract
    The retrieval problem is one of the main reasoning tasks for knowledge base systems. Given a knowledge base K and a concept C, the retrieval problem consists of finding all individuals a for which K logically entails C(a). We present an approach to answer retrieval queries over (a restriction of) OWL ontologies. Our solution is based on reducing the retrieval problem to a problem of evaluating an SQL query over a database constructed from the original knowledge base. We provide complete answers to retrieval problems. Still, our system performs very well as is shown by a standard benchmark.
  15. Miller, R.: Three problems in logic-based knowledge representation (2006) 0.00
    Abstract
    Purpose - The purpose of this article is to give a non-technical overview of some of the technical progress made recently on tackling three fundamental problems in the area of formal knowledge representation/artificial intelligence. These are the Frame Problem, the Ramification Problem, and the Qualification Problem. The article aims to describe the development of two logic-based languages, the Event Calculus and Modular-E, to address various aspects of these issues. The article also aims to set this work in the wider context of contemporary developments in applied logic, non-monotonic reasoning and formal theories of common sense. Design/methodology/approach - The study applies symbolic logic to model aspects of human knowledge and reasoning. Findings - The article finds that there are fundamental interdependencies between the three problems mentioned above. The conceptual framework shared by the Event Calculus and Modular-E is appropriate for providing principled solutions to them. Originality/value - This article provides an overview of an important approach to dealing with three fundamental issues in artificial intelligence.
  16. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.00
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
  17. Hinkelmann, K.: Ontopia Omnigator : ein Werkzeug zur Einführung in Topic Maps (20xx) 0.00
    Date
    4. 9.2011 12:29:09
  18. Román, J.H.; Hulin, K.J.; Collins, L.M.; Powell, J.E.: Entity disambiguation using semantic networks (2012) 0.00
    Abstract
    A major stumbling block preventing machines from understanding text is the problem of entity disambiguation. While humans find it easy to determine that a person named in one story is the same person referenced in a second story, machines rely heavily on crude heuristics such as string matching and stemming to make guesses as to whether nouns are coreferent. A key advantage that humans have over machines is the ability to mentally make connections between ideas and, based on these connections, reason how likely two entities are to be the same. Mirroring this natural thought process, we have created a prototype framework for disambiguating entities that is based on connectedness. In this article, we demonstrate it in the practical application of disambiguating authors across a large set of bibliographic records. By representing knowledge from the records as edges in a graph between a subject and an object, we believe that the problem of disambiguating entities reduces to the problem of discovering the most strongly connected nodes in a graph. The knowledge from the records comes in many different forms, such as names of people, date of publication, and themes extracted from the text of the abstract. These different types of knowledge are fused to create the graph required for disambiguation. Furthermore, the resulting graph and framework can be used for more complex operations.
  19. Halpin, H.; Hayes, P.J.: When owl:sameAs isn't the same : an analysis of identity links on the Semantic Web (2010) 0.00
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link', a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
  20. Andelfinger, U.; Wyssusek, B.; Kremberg, B.; Totzke, R.: Ontologies in knowledge management : panacea or mirage? 0.00
    Content
    These considerations are joined by fundamental questions from the philosophy of language: any attempt to define the semantics of a technical ontology must proceed via metalanguages, and ultimately one probably cannot do without natural language. One is thus quickly led into an infinite regress when trying to describe technical ontologies 'completely' by means of further technical ontologies. The argument then shows that a key challenge for technical ontologies lies in combining, given the many-layered nature of human knowledge, the possibilities (but also the necessary limitations) of symbol-mediated knowledge representations with forms of situated and intersubjective interpretation of these symbolic representations in social processes and in natural language. Only in this way can one meet the problem of the sketched 'infinite regress', according to which the meaning of a (technical) ontology can never itself be completely described by (technical) ontologies.
    Source
    http://www.cba.neu.edu/uploadedFiles/Site_Sections/OLKC_2010/Program_Overview/Parallel_Sessions/180_Wyssusek_Abstract_Ontologies%20in%20Knowledge%20Management%281%29.pdf

Languages

  • e 118
  • d 23
  • f 1
  • pt 1
  • sp 1

Types

  • a 103
  • el 42
  • x 12
  • m 9
  • s 4
  • n 1
  • r 1