Search (104 results, page 1 of 6)

  • language_ss:"e"
  • theme_ss:"Wissensrepräsentation"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
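
  The four facet constraints above are Lucene/Solr filter-query expressions (the year range uses an inclusive lower and an exclusive upper bound). As a minimal sketch of how this filtered search could be reproduced against a Solr index - the endpoint URL and core name below are assumptions, only the field names and values come from the facets shown - the constraints map onto fq parameters:

      import requests

      # Illustrative sketch only: endpoint URL and core name ("catalog") are assumed,
      # not taken from this page; field names and filter values are copied from the
      # active facets listed above.
      SOLR_URL = "http://localhost:8983/solr/catalog/select"

      params = {
          "q": "*:*",
          "fq": [
              'language_ss:"e"',
              'theme_ss:"Wissensrepräsentation"',
              'type_ss:"a"',
              "year_i:[2000 TO 2010}",  # inclusive lower bound, exclusive upper bound
          ],
          "rows": 20,   # 20 hits per page would give 6 pages for 104 results
          "wt": "json",
      }

      response = requests.get(SOLR_URL, params=params)
      print(response.json()["response"]["numFound"])  # should report 104 for this filter set
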
  1. Mainz, I.; Weller, K.; Paulsen, I.; Mainz, D.; Kohl, J.; Haeseler, A. von: Ontoverse : collaborative ontology engineering for the life sciences (2008) 0.01
    0.011314856 = product of:
      0.033944566 = sum of:
        0.0071393843 = weight(_text_:in in 1594) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=1594,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 1594, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1594)
        0.026805183 = weight(_text_:und in 1594) [ClassicSimilarity], result of:
          0.026805183 = score(doc=1594,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.27704588 = fieldWeight in 1594, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1594)
      0.33333334 = coord(2/6)
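
    The breakdown above is Lucene ClassicSimilarity (TF-IDF) explain output: each weight(_text_:term ...) clause is queryWeight (idf * queryNorm) multiplied by fieldWeight (tf * idf * fieldNorm), and the sum of the matching clauses is scaled by the coordination factor. A small sketch that reproduces this record's score from the printed values (the idf formula in the comment is the standard Lucene one, stated as an assumption since the page only prints its results):

      import math

      def clause_score(freq, idf, query_norm, field_norm):
          # One weight(_text_:term ...) clause of Lucene's ClassicSimilarity:
          # score = queryWeight * fieldWeight
          #       = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          tf = math.sqrt(freq)
          return (idf * query_norm) * (tf * idf * field_norm)

      QUERY_NORM = 0.043654136   # queryNorm, as printed in the breakdown above
      FIELD_NORM = 0.0625        # fieldNorm(doc=1594)

      # idf values as printed; they agree with 1 + ln(maxDocs / (docFreq + 1)),
      # e.g. 1 + math.log(44218 / (30841 + 1)) ~= 1.3602545
      w_in  = clause_score(2.0, 1.3602545, QUERY_NORM, FIELD_NORM)   # ~0.0071393843
      w_und = clause_score(4.0, 2.216367,  QUERY_NORM, FIELD_NORM)   # ~0.026805183

      total = (w_in + w_und) * (2 / 6)   # coord(2/6): 2 of the 6 query clauses matched
      print(round(total, 9))             # ~0.011314856, the score shown for this record
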
    
    Abstract
    Ontologies are introduced as a new method for detailed and formalised knowledge representation, with an emphasis on the use of ontologies in the life sciences. We show that mature scientific ontologies require approaches for collaborative development. The Ontoverse ontology wiki is presented; it is a tool that supports all phases of collaborative ontology building.
    Source
    Information - Wissenschaft und Praxis. 59(2008) H.2, S.91-99
  2. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.011251582 = product of:
      0.033754744 = sum of:
        0.010096614 = weight(_text_:in in 3376) [ClassicSimilarity], result of:
          0.010096614 = score(doc=3376,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 3376, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3376)
        0.02365813 = product of:
          0.04731626 = sum of:
            0.04731626 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.04731626 = score(doc=3376,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This chapter presents ontologies and their role in the creation of the Semantic Web. Ontologies hold special interest, because they are very closely related to the way we understand the world. They provide common understanding, the very first step to successful communication. In the following sections, we will present ontologies and how they are created and used. We will describe available tools for specifying and working with ontologies.
    Date
    31. 7.2010 16:58:22
  3. Mazzocchi, F.; Plini, P.: Refining thesaurus relational structure : implications and opportunities (2008) 0.01
    0.010270989 = product of:
      0.030812964 = sum of:
        0.010709076 = weight(_text_:in in 5448) [ClassicSimilarity], result of:
          0.010709076 = score(doc=5448,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 5448, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5448)
        0.020103889 = weight(_text_:und in 5448) [ClassicSimilarity], result of:
          0.020103889 = score(doc=5448,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.20778441 = fieldWeight in 5448, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=5448)
      0.33333334 = coord(2/6)
    
    Abstract
    In this paper the possibility of developing a richer relational structure for thesauri is explored and described. The development of a new environmental thesaurus - EARTh (Environmental Applications Reference Thesaurus) - serves as a case study for exploring the refinement of thesaurus relational structure by specialising standard relationships into different subtypes. Together with benefits and opportunities, the implications and possible challenges that an expanded set of thesaurus relations may cause are evaluated.
    Series
    Fortschritte in der Wissensorganisation; Bd.10
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  4. Peters, I.; Weller, K.: Paradigmatic and syntagmatic relations in knowledge organization systems (2008) 0.01
    0.010184498 = product of:
      0.030553492 = sum of:
        0.013968632 = weight(_text_:in in 1593) [ClassicSimilarity], result of:
          0.013968632 = score(doc=1593,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.23523843 = fieldWeight in 1593, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1593)
        0.01658486 = weight(_text_:und in 1593) [ClassicSimilarity], result of:
          0.01658486 = score(doc=1593,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17141339 = fieldWeight in 1593, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1593)
      0.33333334 = coord(2/6)
    
    Abstract
    Classical knowledge representation methods have been successfully working for years with established - but in a way restricted and vague - relations such as synonymy, hierarchy (meronymy, hyponymy) and unspecified associations. Recent developments like ontologies and folksonomies show new forms of collaboration, indexing and knowledge representation and encourage the reconsideration of standard knowledge relationships for practical use. In a summarizing overview we show which relations are currently used in knowledge organization systems (controlled vocabularies, ontologies and folksonomies) and which relations are expressed explicitly or which may be inherently hidden in them.
    Source
    Information - Wissenschaft und Praxis. 59(2008) H.2, S.100-107
  5. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.01
    0.00990557 = product of:
      0.02971671 = sum of:
        0.011973113 = weight(_text_:in in 4820) [ClassicSimilarity], result of:
          0.011973113 = score(doc=4820,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 4820, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4820)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
              0.035487194 = score(doc=4820,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  6. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.01
    0.009484224 = product of:
      0.028452672 = sum of:
        0.010709076 = weight(_text_:in in 2418) [ClassicSimilarity], result of:
          0.010709076 = score(doc=2418,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 2418, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.035487194 = score(doc=2418,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Series
    Lecture notes in computer science; vol.4172
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  7. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.01
    0.0091349725 = product of:
      0.027404916 = sum of:
        0.010820055 = weight(_text_:in in 4644) [ClassicSimilarity], result of:
          0.010820055 = score(doc=4644,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1822149 = fieldWeight in 4644, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.01658486 = weight(_text_:und in 4644) [ClassicSimilarity], result of:
          0.01658486 = score(doc=4644,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17141339 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Series
    Lecture notes in computer science; no.3298
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.01
    0.009005977 = product of:
      0.027017929 = sum of:
        0.009274333 = weight(_text_:in in 2623) [ClassicSimilarity], result of:
          0.009274333 = score(doc=2623,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 2623, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
              0.035487194 = score(doc=2623,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 2623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2623)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories - attribute/value-propagation, value-propagation, and value-constraint - and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  9. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.01
    0.009005977 = product of:
      0.027017929 = sum of:
        0.009274333 = weight(_text_:in in 3387) [ClassicSimilarity], result of:
          0.009274333 = score(doc=3387,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 3387, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
              0.035487194 = score(doc=3387,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 3387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3387)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries is focused, on the one hand, on improving the infrastructure of digital library management systems (DLMS) and, on the other, on improving the metadata models used to annotate collections of objects maintained by a DLMS. The latter includes, among others, the semantic web and social networking technologies. Recently, semantic web and social networking technologies have been introduced into the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22
  10. Priss, U.: Faceted information representation (2000) 0.01
    0.008982609 = product of:
      0.026947826 = sum of:
        0.006246961 = weight(_text_:in in 5095) [ClassicSimilarity], result of:
          0.006246961 = score(doc=5095,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 5095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5095)
        0.020700864 = product of:
          0.04140173 = sum of:
            0.04140173 = weight(_text_:22 in 5095) [ClassicSimilarity], result of:
              0.04140173 = score(doc=5095,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.2708308 = fieldWeight in 5095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5095)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents an abstract formalization of the notion of "facets". Facets are relational structures of units, relations and other facets selected for a certain purpose. Facets can be used to structure large knowledge representation systems into a hierarchical arrangement of consistent and independent subsystems (facets) that facilitate flexibility and combinations of different viewpoints or aspects. This paper describes the basic notions, facet characteristics and construction mechanisms. It then explicates the theory in an example of a faceted information retrieval system (FaIR)
    Date
    22. 1.2016 17:47:06
  11. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.01
    0.008438686 = product of:
      0.025316058 = sum of:
        0.0075724614 = weight(_text_:in in 3261) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=3261,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 3261, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3261)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
              0.035487194 = score(doc=3261,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 3261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3261)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Ontologies improve IR systems regarding their retrieval and presentation of information, which makes the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without knowledge of the IR field can build an IR system with interactive components. The resulting system provides support for users not only to satisfy their information needs but also to extend their state of knowledge. This way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
  12. Schmitz-Esser, W.: Formalizing terminology-based knowledge for an ontology independently of a particular language (2008) 0.01
    0.008308224 = product of:
      0.024924671 = sum of:
        0.010709076 = weight(_text_:in in 1680) [ClassicSimilarity], result of:
          0.010709076 = score(doc=1680,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 1680, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1680)
        0.014215595 = weight(_text_:und in 1680) [ClassicSimilarity], result of:
          0.014215595 = score(doc=1680,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 1680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=1680)
      0.33333334 = coord(2/6)
    
    Abstract
    Last word ontological thought and practice is exemplified on an axiomatic framework [a model for an Integrative Cross-Language Ontology (ICLO), cf. Poli, R., Schmitz-Esser, W., forthcoming 2007] that is highly general, based on natural language, multilingual, can be implemented as topic maps and may be openly enhanced by software available for particular languages. Basics of ontological modelling, conditions for construction and maintenance, and the most salient points in application are addressed, such as cross-language text mining and knowledge generation. The rationale is to open the eyes for the tremendous potential of terminology-based ontologies for principled Knowledge Organization and the interchange and reuse of formalized knowledge.
    Series
    Fortschritte in der Wissensorganisation; Bd.10
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch
  13. Poli, R.: Upper ontologies hold it together (2008) 0.01
    0.008308224 = product of:
      0.024924671 = sum of:
        0.010709076 = weight(_text_:in in 1685) [ClassicSimilarity], result of:
          0.010709076 = score(doc=1685,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 1685, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1685)
        0.014215595 = weight(_text_:und in 1685) [ClassicSimilarity], result of:
          0.014215595 = score(doc=1685,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 1685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=1685)
      0.33333334 = coord(2/6)
    
    Abstract
    After presenting some of the basic features of upper ontologies, the thesis is defended that all the relations needed by any concrete application can be generated by a small set of general relations by adding proper ontological constraints to the general relations' arguments. This procedure provides an explicit and verifiable grounding for all forms of knowledge management, including acquisition, interchange, integration, reuse, merging, aligning and updating knowledge. Upper ontologies therefore provide cues for developing both unification and decomposition methods. Finally, upper ontologies pave the way for enhancing automatic reasoning and other machine-oriented procedures. I conclude by mentioning a difficulty in the theory of semantic fields.
    Series
    Fortschritte in der Wissensorganisation; Bd.10
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch
  14. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.01
    0.007829976 = product of:
      0.023489928 = sum of:
        0.009274333 = weight(_text_:in in 6014) [ClassicSimilarity], result of:
          0.009274333 = score(doc=6014,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 6014, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6014)
        0.014215595 = weight(_text_:und in 6014) [ClassicSimilarity], result of:
          0.014215595 = score(doc=6014,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 6014, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=6014)
      0.33333334 = coord(2/6)
    
    Abstract
    Ontologies are nowadays used for many applications requiring data, services and resources in general to be interoperable and machine understandable. Such applications are for example web service discovery and composition, information integration across databases, intelligent search, etc. The general idea is that data and services are semantically described with respect to ontologies, which are formal specifications of a domain of interest, and can thus be shared and reused in a way such that the shared meaning specified by the ontology remains formally the same across different parties and applications. As the cost of creating ontologies is relatively high, different proposals have emerged for learning ontologies from structured and unstructured resources. In this article we examine the maturity of techniques for ontology learning from textual resources, addressing the question whether the state-of-the-art is mature enough to produce ontologies 'on demand'.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.315-320
  15. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.01
    0.0076372377 = product of:
      0.022911713 = sum of:
        0.0061828885 = weight(_text_:in in 2654) [ClassicSimilarity], result of:
          0.0061828885 = score(doc=2654,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1041228 = fieldWeight in 2654, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2654)
        0.016728824 = product of:
          0.033457648 = sum of:
            0.033457648 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.033457648 = score(doc=2654,freq=4.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.21886435 = fieldWeight in 2654, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares the sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
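
    To make the SKOS encoding concrete, the following minimal sketch (Python/rdflib) shows one way a CCT class and a mapped thesaurus term with a preferred and an entry term could be expressed; every URI, notation and label in it is invented for illustration and is not taken from the pilot study:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      # Illustrative sketch only: CCT does not publish URIs in this form; the class
      # notation, URIs and labels below are invented to show the encoding pattern
      # described in the abstract, not actual CCT content.
      CCT = Namespace("http://example.org/cct/")
      g = Graph()
      g.bind("skos", SKOS)

      cls = CCT["class/A12"]    # a classification class (hypothetical notation)
      term = CCT["term/0001"]   # a thesaurus term mapped to that class (hypothetical)

      g.add((cls, RDF.type, SKOS.Concept))
      g.add((cls, SKOS.notation, Literal("A12")))
      g.add((cls, SKOS.prefLabel, Literal("Example class caption", lang="en")))

      g.add((term, RDF.type, SKOS.Concept))
      g.add((term, SKOS.prefLabel, Literal("Example preferred term", lang="en")))
      g.add((term, SKOS.altLabel, Literal("Example entry (non-preferred) term", lang="en")))

      # class <-> thesaurus-term correspondence provided by CCT, expressed here with
      # skos:related; skos:relatedMatch would be the alternative if the classification
      # and the thesaurus were kept as separate concept schemes
      g.add((cls, SKOS.related, term))
      g.add((term, SKOS.related, cls))

      print(g.serialize(format="turtle"))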
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  16. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.01
    0.007504981 = product of:
      0.022514943 = sum of:
        0.007728611 = weight(_text_:in in 4607) [ClassicSimilarity], result of:
          0.007728611 = score(doc=4607,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 4607, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4607)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 4607) [ClassicSimilarity], result of:
              0.029572664 = score(doc=4607,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 4607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4607)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Smart applications behave intelligently because they understand at least partially the context where they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called mediator enables the import by assigning dummy metadata annotations for the imported items. However, some functionality of the original system is lost, when processing the imported content, due to the lack of proper metadata annotation which cannot be associated fully automatically. So the paper presents an interoperability scenario when appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Series
    Lecture notes in computer science: Lecture notes in artificial intelligence ; 4604
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  17. Garshol, L.M.: Metadata? Thesauri? Taxonomies? Topic Maps! : making sense of it all (2005) 0.01
    0.007262686 = product of:
      0.021788057 = sum of:
        0.0075724614 = weight(_text_:in in 4729) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=4729,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 4729, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4729)
        0.014215595 = weight(_text_:und in 4729) [ClassicSimilarity], result of:
          0.014215595 = score(doc=4729,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 4729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4729)
      0.33333334 = coord(2/6)
    
    Abstract
    The task of an information architect is to create web sites where users can actually find the information they are looking for. As the ocean of information rises and leaves what we seek ever more deeply buried in what we don't seek, this discipline becomes ever more relevant. Information architecture involves many different aspects of web site creation and organization, but its principal tools are information organization techniques developed in other disciplines. Most of these techniques come from library science, such as thesauri, taxonomies, and faceted classification. Topic maps are a relative newcomer to this area and bring with them the promise of better-organized web sites, compared to what is possible with existing techniques. However, it is not generally understood how topic maps relate to the traditional techniques, and what advantages and disadvantages they have, compared to these techniques. The aim of this paper is to help build a better understanding of these issues.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  18. Fonseca, F.: ¬The double role of ontologies in information science research (2007) 0.00
    0.0028220895 = product of:
      0.016932536 = sum of:
        0.016932536 = weight(_text_:in in 277) [ClassicSimilarity], result of:
          0.016932536 = score(doc=277,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.28515202 = fieldWeight in 277, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=277)
      0.16666667 = coord(1/6)
    
    Abstract
    In philosophy, Ontology is the basic description of things in the world. In information science, an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality. Ontologies have been proposed for validating both conceptual models and conceptual schemas; however, these roles are quite dissimilar. In this article, we show that ontologies can be better understood if we classify the different uses of the term as it appears in the literature. First, we explain Ontology (upper case O) as used in Philosophy. Then, we propose a differentiation between ontologies of information systems and ontologies for information systems. All three concepts have an important role in information science. We clarify the different meanings and uses of Ontology and ontologies through a comparison of research by Wand and Weber and by Guarino in ontology-driven information systems. The contributions of this article are twofold: (a) It provides a better understanding of what ontologies are, and (b) it explains the double role of ontologies in information science research.
  19. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.00
    0.0027641435 = product of:
      0.01658486 = sum of:
        0.01658486 = weight(_text_:und in 4329) [ClassicSimilarity], result of:
          0.01658486 = score(doc=4329,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17141339 = fieldWeight in 4329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4329)
      0.16666667 = coord(1/6)
    
    Content
    Contains an overview of models, systems, and projects.
  20. Hoang, H.H.; Tjoa, A.M.: ¬The state of the art of ontology-based query systems : a comparison of existing approaches (2006) 0.00
    0.0026606917 = product of:
      0.01596415 = sum of:
        0.01596415 = weight(_text_:in in 792) [ClassicSimilarity], result of:
          0.01596415 = score(doc=792,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.26884392 = fieldWeight in 792, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=792)
      0.16666667 = coord(1/6)
    
    Abstract
    Based on an in-depth analysis of existing approaches to building ontology-based query systems, we discuss and compare the methods and approaches used in current query systems that employ ontologies or Semantic Web techniques. This paper identifies various relevant research directions in ontology-based querying research. Based on the results of our investigation, we summarise the state of the art of ontology-based query/search systems and name areas for further research.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval