Search (151 results, page 1 of 8)

  • Filter: theme_ss:"Wissensrepräsentation"
  • Filter: type_ss:"el"
  1. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.03
    Score 0.0268 = coord(2/5) × [0.0082 weight(_text_:a, tf=8) + 0.0212 weight(_text_:information, tf=10) + 0.0377 weight(_text_:22, tf=2)]
    
    Abstract
    Ontologies improve IR systems with regard to the retrieval and presentation of information, making the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without knowledge of the IR field can build an IR system with interactive components. The resulting system supports users not only in satisfying their information needs but also in extending their state of knowledge. In this way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
    Type
    a
  2. Priss, U.: Faceted knowledge representation (1999) 0.03
    Score 0.0258 = coord(2/5) × [0.0095 weight(_text_:a, tf=8) + 0.0111 weight(_text_:information, tf=2) + 0.0440 weight(_text_:22, tf=2)]
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
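
    The abstract above describes relations as binary (0/1) matrices over units and facets as relational structures that combine them, with interpretations translating between representations. The following is a minimal sketch of how those notions could be modelled; the class and function names, and the toy thesaurus fragment, are illustrative assumptions rather than Priss's formalism.

    ```python
    from dataclasses import dataclass
    from typing import Dict, List, Tuple


    @dataclass(frozen=True)
    class Facet:
        """One aspect of a knowledge system: a set of units plus a binary relation over them."""
        name: str
        units: Tuple[str, ...]                 # atomic elements, or references to external objects
        relation: Tuple[Tuple[int, ...], ...]  # 0/1 matrix: relation[i][j] == 1 iff unit i relates to unit j

        def related(self, unit: str) -> List[str]:
            i = self.units.index(unit)
            return [u for j, u in enumerate(self.units) if self.relation[i][j] == 1]


    def interpret(facet: Facet, mapping: Dict[str, str]) -> Facet:
        """An 'interpretation': translate one representation into another by renaming units."""
        return Facet(
            name=facet.name + " (interpreted)",
            units=tuple(mapping.get(u, u) for u in facet.units),
            relation=facet.relation,
        )


    # A toy faceted-thesaurus fragment: a broader-term relation encoded as a binary matrix.
    topics = Facet(
        name="broader-term",
        units=("retrieval", "indexing", "knowledge organization"),
        relation=((0, 0, 1),   # retrieval  -> knowledge organization
                  (0, 0, 1),   # indexing   -> knowledge organization
                  (0, 0, 0)),
    )
    print(topics.related("retrieval"))  # ['knowledge organization']
    print(interpret(topics, {"retrieval": "information retrieval"}).units)
    ```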
    Date
    22. 1.2016 17:30:31
    Type
    a
  3. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    Score 0.0243 = coord(2/5) × [0.0041 weight(_text_:a, tf=2) + 0.0189 weight(_text_:information, tf=8) + 0.0377 weight(_text_:22, tf=2)]
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
    Type
    a
  4. Priss, U.: Description logic and faceted knowledge representation (1999) 0.02
    Score 0.0240 = coord(2/5) × [0.0129 weight(_text_:a, tf=20) + 0.0095 weight(_text_:information, tf=2) + 0.0377 weight(_text_:22, tf=2)]
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
    Type
    a
  5. Definition of the CIDOC Conceptual Reference Model (2003) 0.02
    Score 0.0212 = coord(2/5) × [0.0058 weight(_text_:a, tf=4) + 0.0095 weight(_text_:information, tf=2) + 0.0377 weight(_text_:22, tf=2)]
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
  6. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    Score 0.0153 = coord(2/5) × [0.0068 weight(_text_:a, tf=2) + 0.0314 (= coord(1/2) × 0.0628 weight(_text_:22, tf=2))]
    
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  7. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.01
    Score 0.0122 = coord(2/5) × [0.0054 weight(_text_:a, tf=2) + 0.0251 (= coord(1/2) × 0.0503 weight(_text_:22, tf=2))]
    
    Date
    22. 5.2021 12:43:05
    Type
    a
  8. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.01
    Score 0.0119 = coord(1/5) × [0.0156 weight(_text_:information, tf=4) + 0.0440 weight(_text_:22, tf=2)]
    
    Abstract
    One vision of the Semantic Web is that it will be much like the Web we know today, except that documents will be enriched by annotations in machine understandable markup. These annotations will provide metadata about the documents as well as machine interpretable statements capturing some of the meaning of document content. We discuss how the information retrieval paradigm might be recast in such an environment. We suggest that retrieval can be tightly bound to inference. Doing so makes today's Web search engines useful to Semantic Web inference engines, and causes improvements in either retrieval or inference to lead directly to improvements in the other.
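
    One simple way to read "retrieval tightly bound to inference" is to let an inference step rewrite the query before a conventional keyword engine sees it. The sketch below is a generic illustration under that assumption; the toy subclass hierarchy, document set, and function names are invented here and are not Mayfield and Finin's architecture.

    ```python
    from typing import Dict, List, Set

    # Toy ontology: child class -> parent class.
    SUBCLASS_OF: Dict[str, str] = {
        "jazz": "music",
        "blues": "music",
        "music": "art",
    }

    DOCS: Dict[int, str] = {
        1: "a short history of jazz recordings",
        2: "cataloguing rules for music libraries",
        3: "semantic web inference engines",
    }


    def subclasses(term: str) -> Set[str]:
        """Infer all terms that are (transitively) subclasses of `term`, including the term itself."""
        closure = {term}
        changed = True
        while changed:
            changed = False
            for child, parent in SUBCLASS_OF.items():
                if parent in closure and child not in closure:
                    closure.add(child)
                    changed = True
        return closure


    def search(query_term: str) -> List[int]:
        """Keyword retrieval over the inferred expansion of the query term."""
        expansion = subclasses(query_term)
        return [doc_id for doc_id, text in DOCS.items()
                if any(term in text for term in expansion)]


    print(search("music"))  # [1, 2]: doc 1 matches because 'jazz' is inferred to be music
    ```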
    Date
    12. 2.2011 17:35:22
  9. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.01
    Score 0.0110 = coord(1/5) × [0.0111 weight(_text_:information, tf=2) + 0.0440 weight(_text_:22, tf=2)]
    
    Date
    11. 2.2011 18:22:25
  10. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    Score 0.0099 = coord(2/5) × [0.0058 weight(_text_:a, tf=4) + 0.0188 (= coord(1/2) × 0.0377 weight(_text_:22, tf=2))]
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
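
    The Web-based measures named in the abstract, such as normalized Google distance (NGD) and pointwise mutual information (PMI), can both be computed from page or document counts. The snippet below is a small sketch of those two standard formulas; it is not code from the study, and the counts passed in are placeholders.

    ```python
    import math


    def ngd(hits_x: float, hits_y: float, hits_xy: float, total: float) -> float:
        """Normalized Google distance from hit counts for x, y, and 'x AND y' over `total` pages."""
        fx, fy, fxy, n = map(math.log, (hits_x, hits_y, hits_xy, total))
        return (max(fx, fy) - fxy) / (n - min(fx, fy))


    def pmi(p_x: float, p_y: float, p_xy: float) -> float:
        """Pointwise mutual information from (estimated) occurrence probabilities."""
        return math.log2(p_xy / (p_x * p_y))


    # Placeholder counts: two terms that co-occur relatively often.
    print(round(ngd(hits_x=120_000, hits_y=80_000, hits_xy=25_000, total=50_000_000), 3))
    print(round(pmi(p_x=0.002, p_y=0.001, p_xy=0.00005), 3))
    ```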
    Date
    26.12.2011 13:40:22
  11. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    Score 0.0090 = coord(2/5) × [0.0068 weight(_text_:a, tf=8) + 0.0157 (= coord(1/2) × 0.0314 weight(_text_:22, tf=2))]
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
    Type
    a
  12. Hesse, W.; Verrijn-Stuart, A.: Towards a theory of information systems : the FRISCO approach (1999) 0.01
    Score 0.0082 = coord(2/5) × [0.0118 weight(_text_:a, tf=24) + 0.0088 (= coord(1/2) × 0.0177 weight(_text_:information, tf=10))]
    
    Abstract
    Information Systems (IS) is among the most widespread terms in the Computer Science field but a well founded, widely accepted theory of IS is still missing. With the Internet publication of the FRISCO report, the IFIP task group "FRamework of Information System COncepts" has taken a first step towards such a theory. Among the major achievements of this report are: (1) it builds on a solid basis formed by semiotics and ontology, (2) it defines a compendium of about 100 core IS concepts in a coherent and consistent way, (3) it goes beyond the common narrow view of information systems as pure technical artefacts by adopting an interdisciplinary, socio-technical view on them. In the autumn of 1999, a first review of the report and its impact was undertaken at the ISCO-4 conference in Leiden. In a workshop specifically devoted to the subject, the original aims and goals of FRISCO were confirmed to be still valid and the overall approach and achievements of the report were acknowledged. On the other hand, the workshop revealed some misconceptions, errors and weaknesses of the report in its present form, which are to be removed through a comprehensive revision now under way. This paper reports on the results of the Leiden conference and the current revision activities. It also points out some important consequences of the FRISCO approach as a whole.
    Theme
    Information
  13. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.01
    Score 0.0076 = coord(2/5) × [0.0067 weight(_text_:a, tf=4) + 0.0124 (= coord(1/2) × 0.0247 weight(_text_:information, tf=10))]
    
    Abstract
    Plenty of contemporary approaches to search exist that are associated with the Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist? To answer these questions we take a look at the nature of the Semantic Web and the Semantic Desktop and at definitions of information and data retrieval. We survey current approaches referred to by their authors as information retrieval for the Semantic Web or that use Semantic Web technology for search.
    Source
    Lernen - Wissen - Adaption : workshop proceedings / LWA 2007, Halle, September 2007. Martin Luther University Halle-Wittenberg, Institute for Informatics, Databases and Information Systems. Hrsg.: Alexander Hinneburg
    Type
    a
  14. Riva, P.; Doerr, M.; Zumer, M.: FRBRoo: enabling a common view of information from memory institutions (2008) 0.01
    Score 0.0075 = coord(2/5) × [0.0108 weight(_text_:a, tf=20) + 0.0079 (= coord(1/2) × 0.0158 weight(_text_:information, tf=8))]
    
    Abstract
    In 2008 the FRBR/CRM Harmonisation Working Group achieved a major milestone: a complete version of the object-oriented definition of FRBR (FRBRoo) was released for comment. After a brief overview of the history and context of the Working Group, this paper focuses on the primary contributions resulting from this work.
    - FRBRoo is a self-contained document which expresses the concepts of FRBR using the object-oriented methodology and framework of CIDOC CRM. It is an alternative view on library conceptualisation for a different purpose, not a replacement for FRBR.
    - This 'translation' process presented an opportunity to verify and confirm FRBR's internal consistency.
    - FRBRoo offers a common view of library and museum documentation as two kinds of information from memory institutions. Such a common view is necessary to provide interoperable information systems for all users interested in accessing common or related content.
    - The analysis provided an opportunity for mutual enrichment of FRBR and CIDOC CRM. Examples include:
      - Addition of the modelling of time and events to FRBR, which can be seen in its application to the publishing process
      - Clarification of the manifestation entity
      - Explicit modelling of performances and recordings in FRBR
      - Adding the work entity to CRM
      - Adding the identifier assignment process to CRM.
    - Producing a formalisation which is more suited for implementation with object-oriented tools, and which facilitates the testing and adoption of FRBR concepts in implementations with different functional specifications and in different environments.
    Content
    Paper presented during: World library and information congress: 74th IFLA general conference and council, 10-14 August 2008, Québec, Canada.
  15. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.01
    Score 0.0075 = coord(2/5) × [0.0082 weight(_text_:a, tf=18) + 0.0105 (= coord(1/2) × 0.0209 weight(_text_:information, tf=22))]
    
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology to effective information access as they provide controlled vocabularies for indexing information, and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, currently there is no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
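
    A core idea in the abstract is that a thesaurus such as EMTREE, exposed in an open, semantically qualified representation, can mediate between sources that use different terms for the same concept. The fragment below sketches that idea with a tiny invented SKOS-like mapping; the concept identifiers, labels, and source names are assumptions made here for illustration only.

    ```python
    from typing import Dict, List, Optional, Set

    # Invented thesaurus fragment: concept id -> preferred and alternative labels (lower-cased).
    THESAURUS: Dict[str, Set[str]] = {
        "C0001": {"myocardial infarction", "heart attack", "mi"},
        "C0002": {"acetylsalicylic acid", "aspirin"},
    }


    def concept_for(term: str) -> Optional[str]:
        term = term.lower()
        for cid, labels in THESAURUS.items():
            if term in labels:
                return cid
        return None


    def expand(term: str) -> Set[str]:
        """Map a free-text term to its concept and return all labels, for cross-source querying."""
        cid = concept_for(term)
        return THESAURUS[cid] if cid else {term.lower()}


    def query_sources(term: str, sources: Dict[str, List[str]]) -> Dict[str, List[str]]:
        labels = expand(term)
        return {name: [doc for doc in docs if any(label in doc.lower() for label in labels)]
                for name, docs in sources.items()}


    sources = {
        "abstracts": ["Aspirin after heart attack: a review"],
        "trial-db": ["Acetylsalicylic acid dosing in myocardial infarction"],
    }
    print(query_sources("heart attack", sources))  # both sources match via different labels
    ```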
    Type
    a
  16. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.01
    Score 0.0074 = coord(2/5) × [0.0091 weight(_text_:a, tf=10) + 0.0095 (= coord(1/2) × 0.0189 weight(_text_:information, tf=8))]
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analysing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular for refining and improving search expressions and for constructing textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  17. Aitken, S.; Reid, S.: Evaluation of an ontology-based information retrieval tool (2000) 0.01
    Score 0.0073 = coord(2/5) × [0.0094 weight(_text_:a, tf=6) + 0.0089 (= coord(1/2) × 0.0179 weight(_text_:information, tf=4))]
    
    Abstract
    This paper evaluates the use of an explicit domain ontology in an information retrieval tool. The evaluation compares the performance of ontology-enhanced retrieval with keyword retrieval for a fixed set of queries across several data sets. The robustness of the IR approach is assessed by comparing the performance of the tool on the original data set with that on previously unseen data.
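
    The evaluation described above compares ontology-enhanced retrieval with keyword retrieval over a fixed set of queries. A minimal sketch of such a comparison, with invented result sets and relevance judgements, is given below; it shows only the metric computation, not the tool being evaluated.

    ```python
    from typing import Set, Tuple


    def precision_recall(retrieved: Set[str], relevant: Set[str]) -> Tuple[float, float]:
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall


    # Invented judgements for one query; d1..d5 are document ids.
    relevant = {"d1", "d2", "d3"}
    keyword_run = {"d1", "d4", "d5"}
    ontology_run = {"d1", "d2", "d4"}

    for name, run in [("keyword", keyword_run), ("ontology-enhanced", ontology_run)]:
        p, r = precision_recall(run, relevant)
        print(f"{name}: precision={p:.2f} recall={r:.2f}")
    ```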
    Content
    Contribution to: Workshop on the Applications of Ontologies and Problem-Solving Methods, (eds.) Gómez-Pérez, A., Benjamins, V.R., Guarino, N., and Uschold, M., European Conference on Artificial Intelligence 2000, Berlin.
    Type
    a
  18. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.01
    Score 0.0072 = coord(2/5) × [0.0076 weight(_text_:a, tf=10) + 0.0104 (= coord(1/2) × 0.0209 weight(_text_:information, tf=14))]
    
    Abstract
    As an extension to the current Web, the Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
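
    The hybrid-query idea described above, combining a structured constraint with keyword search over one IR index, can be illustrated with an ordinary inverted index in which structured facts (e.g. an rdf:type assertion) are indexed as pseudo-terms alongside text tokens. The sketch below is a generic illustration under that assumption, not Semplore's actual index layout; resource ids and class names are invented.

    ```python
    from collections import defaultdict
    from typing import Dict, Iterable, List, Set

    # Tiny "Semantic Web" resources: text plus a structured type assertion.
    RESOURCES = {
        "r1": {"text": "open source triple store benchmark", "type": "Software"},
        "r2": {"text": "benchmark suite for relational databases", "type": "Software"},
        "r3": {"text": "triple store performance survey", "type": "Article"},
    }


    def build_index(resources: Dict[str, Dict[str, str]]) -> Dict[str, Set[str]]:
        """Index text tokens and the type assertion as a 'type:<Class>' pseudo-term."""
        index: Dict[str, Set[str]] = defaultdict(set)
        for rid, res in resources.items():
            for token in res["text"].split():
                index[token].add(rid)
            index[f"type:{res['type']}"].add(rid)
        return index


    def hybrid_query(index: Dict[str, Set[str]], keywords: Iterable[str], rdf_type: str) -> List[str]:
        """Intersect keyword posting lists with the posting list of the structured constraint."""
        postings = [index.get(k, set()) for k in keywords] + [index.get(f"type:{rdf_type}", set())]
        return sorted(set.intersection(*postings)) if postings else []


    index = build_index(RESOURCES)
    print(hybrid_query(index, ["triple", "store"], rdf_type="Software"))  # ['r1']
    ```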
    Type
    a
  19. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.01
    Score 0.0071 = coord(2/5) × [0.0067 weight(_text_:a, tf=4) + 0.0111 (= coord(1/2) × 0.0221 weight(_text_:information, tf=8))]
    
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
    Type
    a
  20. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.01
    Score 0.0071 = coord(2/5) × [0.0067 weight(_text_:a, tf=4) + 0.0111 (= coord(1/2) × 0.0221 weight(_text_:information, tf=8))]
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
    Type
    a

Languages

  • e 134
  • d 13
  • el 1

Types

  • a 67
  • n 12
  • r 4
  • x 4
  • p 3
  • s 1