Search (64 results, page 1 of 4)

  • language_ss:"e"
  • theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  1. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.01
    0.009721208 = product of:
      0.068048455 = sum of:
        0.061167482 = weight(_text_:representation in 4641) [ClassicSimilarity], result of:
          0.061167482 = score(doc=4641,freq=6.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.5282854 = fieldWeight in 4641, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
        0.006880972 = product of:
          0.020642916 = sum of:
            0.020642916 = weight(_text_:29 in 4641) [ClassicSimilarity], result of:
              0.020642916 = score(doc=4641,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.23319192 = fieldWeight in 4641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4641)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
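    The tree above is standard Lucene "explain" output for ClassicSimilarity (tf-idf) scoring, and the same pattern repeats for every hit on this page. As a minimal sketch, its arithmetic can be reproduced in a few lines of Python; the constants are copied from the tree above, while the names and structure are only illustrative:

      import math

      QUERY_NORM = 0.025165197  # queryNorm, copied from the tree above

      def term_score(freq, idf, field_norm):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
          field_weight = tf * idf * field_norm  # fieldWeight
          query_weight = idf * QUERY_NORM       # queryWeight
          return query_weight * field_weight    # per-term score

      w_repr = term_score(6.0, 4.600994, 0.046875)           # ~0.061167482
      w_29 = term_score(2.0, 3.5176873, 0.046875) * (1 / 3)  # * coord(1/3), ~0.006880972
      print((w_repr + w_29) * (2 / 14))                      # * coord(2/14), ~0.009721208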
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files, and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
    Date
    29. 7.2011 14:44:56
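    For readers unfamiliar with what such a conversion produces, the sketch below builds a few triples of the kind described, using rdflib. The namespace and every class and property name are invented for illustration; they are not the URIs the W3C conversion actually defines:

      from rdflib import Graph, Literal, Namespace, RDF

      WN = Namespace("http://example.org/wordnet/")  # assumed, not the W3C namespace
      g = Graph()
      g.bind("wn", WN)

      synset = WN["synset-dog-noun-1"]
      g.add((synset, RDF.type, WN.NounSynset))                       # class hierarchy
      g.add((synset, WN.containsWordSense, WN["sense-dog-noun-1"]))  # schema property
      g.add((WN["sense-dog-noun-1"], WN.word, WN["word-dog"]))
      g.add((WN["word-dog"], WN.lexicalForm, Literal("dog", lang="en")))

      print(g.serialize(format="turtle"))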
  2. Jarvelin, K.: A deductive data model for thesaurus navigation and query expansion (1996) 0.01
    0.00803734 = product of:
      0.05626138 = sum of:
        0.04708675 = weight(_text_:representation in 5625) [ClassicSimilarity], result of:
          0.04708675 = score(doc=5625,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.40667427 = fieldWeight in 5625, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=5625)
        0.00917463 = product of:
          0.027523888 = sum of:
            0.027523888 = weight(_text_:29 in 5625) [ClassicSimilarity], result of:
              0.027523888 = score(doc=5625,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.31092256 = fieldWeight in 5625, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5625)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    Describes a deductive data model based on 3 abstraction levels for representing vocabularies for information retrieval: conceptual level; expression level; and occurrence level. The proposed data model can be used for the representation and navigation of indexing and retrieval thesauri and as a vocabulary source for concept based query expansion in heterogeneous retrieval environments
    Date
    2. 3.1997 17:29:07
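    One way to read the three abstraction levels is as three linked record types, with query expansion moving from an expression up to its concept and back down to all other expressions of that concept. The sketch below is an assumption-laden reading of the abstract, not the paper's actual model; all names are invented:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Concept:          # conceptual level: language-independent unit
          concept_id: str

      @dataclass(frozen=True)
      class Expression:       # expression level: a term denoting a concept
          text: str
          concept: Concept

      @dataclass(frozen=True)
      class Occurrence:       # occurrence level: an expression used in a document
          expression: Expression
          doc_id: str

      def expand(query: Expression, vocabulary: list) -> set:
          # concept-based expansion: all expressions sharing the query's concept
          return {e.text for e in vocabulary if e.concept == query.concept}

      c = Concept("C042")
      v = [Expression("thesaurus", c), Expression("controlled vocabulary", c)]
      print(expand(v[0], v))  # {'thesaurus', 'controlled vocabulary'}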
  3. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.01
    0.0073820096 = product of:
      0.051674064 = sum of:
        0.04708675 = weight(_text_:representation in 4639) [ClassicSimilarity], result of:
          0.04708675 = score(doc=4639,freq=8.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.40667427 = fieldWeight in 4639, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.004587315 = product of:
          0.013761944 = sum of:
            0.013761944 = weight(_text_:29 in 4639) [ClassicSimilarity], result of:
              0.013761944 = score(doc=4639,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.15546128 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary from its original format to a Semantic Web representation. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  4. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: A method for converting thesauri to RDF/OWL (2004) 0.01
    0.0070326724 = product of:
      0.049228705 = sum of:
        0.041200902 = weight(_text_:representation in 4644) [ClassicSimilarity], result of:
          0.041200902 = score(doc=4644,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35583997 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.008027801 = product of:
          0.024083402 = sum of:
            0.024083402 = weight(_text_:29 in 4644) [ClassicSimilarity], result of:
              0.024083402 = score(doc=4644,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.27205724 = fieldWeight in 4644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4644)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Date
    29. 7.2011 14:44:56
  5. Engerer, V.: Control and syntagmatization : vocabulary requirements in information retrieval thesauri and natural language lexicons (2017) 0.01
    0.005084338 = product of:
      0.07118073 = sum of:
        0.07118073 = weight(_text_:mental in 3678) [ClassicSimilarity], result of:
          0.07118073 = score(doc=3678,freq=2.0), product of:
            0.16438161 = queryWeight, product of:
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.025165197 = queryNorm
            0.43302125 = fieldWeight in 3678, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.046875 = fieldNorm(doc=3678)
      0.071428575 = coord(1/14)
    
    Abstract
    This paper explores the relationships between natural language lexicons in lexical semantics and thesauri in information retrieval research. These different areas of knowledge have different restrictions on use of vocabulary; thesauri are used only in information search and retrieval contexts, whereas lexicons are mental systems and generally applicable in all domains of life. A set of vocabulary requirements that defines the more concrete characteristics of vocabulary items in the 2 contexts can be derived from this framework: lexicon items have to be learnable, complex, transparent, etc., whereas thesaurus terms must be effective, current and relevant, searchable, etc. The differences in vocabulary properties correlate with 2 other factors, the well-known dimension of Control (deliberate, social activities of building and maintaining vocabularies), and Syntagmatization, which is less known and describes vocabulary items' varying formal preparedness to exit the thesaurus/lexicon, enter into linear syntactic constructions, and, finally, acquire communicative functionality. It is proposed that there is an inverse relationship between Control and Syntagmatization.
  6. Jones, S.: A thesaurus data model for an intelligent retrieval system (1993) 0.00
    0.004369106 = product of:
      0.061167482 = sum of:
        0.061167482 = weight(_text_:representation in 5279) [ClassicSimilarity], result of:
          0.061167482 = score(doc=5279,freq=6.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.5282854 = fieldWeight in 5279, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=5279)
      0.071428575 = coord(1/14)
    
    Abstract
    This paper demonstrates the application of conventional database design techniques to thesaurus representation. The thesaurus is considered as a printed document, as a semantic net, and as a relational database to be used in conjunction with an intelligent information retrieval system. Some issues raised by analysis of two standard thesauri include: the prevalence of compound terms and the representation of term structure; thesaurus redundancy and the extent to which it can be eliminated in machine-readable versions; the difficulty of exploiting thesaurus knowledge originally designed for human rather than automatic interpretation; deriving 'strength of association' measures between terms in a thesaurus considered as a semantic net; facet representation and the need for variations in the data model to cater for structural differences between thesauri. A complete schema of database tables is presented, with an outline suggestion for using the stored information when matching one or more thesaurus terms with a user's query
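    As a rough illustration of the thesaurus-as-relational-database view, the sketch below defines two plausible tables and one navigation query; the schema and all names are invented, not the schema presented in the paper:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE term (
          term_id   INTEGER PRIMARY KEY,
          label     TEXT NOT NULL,
          preferred INTEGER NOT NULL   -- 1 = descriptor, 0 = entry term
      );
      CREATE TABLE relation (
          from_term INTEGER REFERENCES term(term_id),
          to_term   INTEGER REFERENCES term(term_id),
          rel_type  TEXT CHECK (rel_type IN ('BT','NT','RT','USE','UF')),
          strength  REAL               -- derived 'strength of association'
      );
      """)

      # Matching a user's term then reduces to joins over `relation`,
      # e.g. all narrower terms of a matched descriptor:
      rows = conn.execute("""
          SELECT t2.label FROM relation r
          JOIN term t1 ON r.from_term = t1.term_id
          JOIN term t2 ON r.to_term   = t2.term_id
          WHERE t1.label = ? AND r.rel_type = 'NT'
      """, ("thesauri",)).fetchall()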
  7. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.00
    0.0035673599 = product of:
      0.049943037 = sum of:
        0.049943037 = weight(_text_:representation in 7862) [ClassicSimilarity], result of:
          0.049943037 = score(doc=7862,freq=4.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.4313432 = fieldWeight in 7862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
      0.071428575 = coord(1/14)
    
    Abstract
    The information retrieval problem is described starting from an analysis of the concepts 'user's information request' and 'information offerings of texts'. It is shown that natural language phrases are a more adequate medium for expressing information requests and information offerings than character-string-based query and indexing languages complemented by Boolean operators. The phrases must be represented as concepts to reach a language-invariant level for rule-based relevance analysis. The special type of representation called an advanced thesaurus is used for the semantic representation of natural language phrases and for relevance processing. The analysis of the retrieval problem leads to a symmetric system structure.
  8. Hudon, M.: Term definitions in subject thesauri : the Canadian Literacy Thesaurus experience (1992) 0.00
    0.0033633395 = product of:
      0.04708675 = sum of:
        0.04708675 = weight(_text_:representation in 2107) [ClassicSimilarity], result of:
          0.04708675 = score(doc=2107,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.40667427 = fieldWeight in 2107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=2107)
      0.071428575 = coord(1/14)
    
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson u. M. Hudon
  9. Roulin, C.: Sub-thesauri as part of a metathesaurus (1992) 0.00
    0.0033633395 = product of:
      0.04708675 = sum of:
        0.04708675 = weight(_text_:representation in 2112) [ClassicSimilarity], result of:
          0.04708675 = score(doc=2112,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.40667427 = fieldWeight in 2112, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=2112)
      0.071428575 = coord(1/14)
    
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson u. M. Hudon
  10. Kent, R.E.: Implications and rules in thesauri (1994) 0.00
    0.0033633395 = product of:
      0.04708675 = sum of:
        0.04708675 = weight(_text_:representation in 3457) [ClassicSimilarity], result of:
          0.04708675 = score(doc=3457,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.40667427 = fieldWeight in 3457, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=3457)
      0.071428575 = coord(1/14)
    
    Abstract
    A central consideration in the study of whole language semantic space as encoded in thesauri is word sense comparability. Shows how word sense comparability can be adequately expressed by the logical implications and rules from Formal Concept Analysis. Formal concept analysis, a new approach to formal logic initiated by Rudolf Wille, has been used for data modelling, analysis and interpretation, and also for knowledge representation and knowledge discovery
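    A toy formal context makes the FCA machinery concrete; the word senses, attributes, and the implications tested below are all invented:

      context = {  # formal context: word sense -> attribute set
          "bank#1": {"institution", "financial"},
          "bank#2": {"landform", "river"},
          "fund#1": {"institution", "financial"},
      }

      def extent(attrs):  # objects carrying every attribute in attrs
          return {o for o, a in context.items() if attrs <= a}

      def implies(premise, conclusion):
          # implication A -> B: every object with all of A also has all of B
          return all(conclusion <= context[o] for o in extent(premise))

      print(implies({"institution"}, {"financial"}))  # True: the senses are comparable
      print(implies({"financial"}, {"river"}))        # False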
  11. Diaz, I.: Semi-automatic construction of thesaurus applying domain analysis techniques (1998) 0.00
    0.0033633395 = product of:
      0.04708675 = sum of:
        0.04708675 = weight(_text_:representation in 3744) [ClassicSimilarity], result of:
          0.04708675 = score(doc=3744,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.40667427 = fieldWeight in 3744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=3744)
      0.071428575 = coord(1/14)
    
    Abstract
    Describes a specific application of domain analysis to the construction of thesauri, exploiting domain analysis' ability to construct valid domain representations and to determine the fuzzy limits that normally define specific domains. The system employs a structure, called a Software Thesaurus (developed from a descriptor thesaurus), as a repository to store the information regarding specific domains. The domain representation is constructed semi-automatically and can be used as a means of semi-automatic thesaurus generation.
  12. Miranda Guedes, R. de; Aparecida Moura, M.: Semantic warrant, cultural hospitality and knowledge representation in multicultural contexts : experiments with the use of the EuroVoc and UNBIS thesauri (2018) 0.00
    0.0033633395 = product of:
      0.04708675 = sum of:
        0.04708675 = weight(_text_:representation in 4778) [ClassicSimilarity], result of:
          0.04708675 = score(doc=4778,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.40667427 = fieldWeight in 4778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=4778)
      0.071428575 = coord(1/14)
    
  13. Nkwenti-Azeh, B.: The use of thesaural facets and definitions for the representation of knowledge structures (1994) 0.00
    0.0029429218 = product of:
      0.041200902 = sum of:
        0.041200902 = weight(_text_:representation in 7735) [ClassicSimilarity], result of:
          0.041200902 = score(doc=7735,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35583997 = fieldWeight in 7735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7735)
      0.071428575 = coord(1/14)
    
  14. McCray, A.T.; Nelson, S.J.: The representation of meaning in the UMLS (1995) 0.00
    0.0029429218 = product of:
      0.041200902 = sum of:
        0.041200902 = weight(_text_:representation in 1872) [ClassicSimilarity], result of:
          0.041200902 = score(doc=1872,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35583997 = fieldWeight in 1872, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1872)
      0.071428575 = coord(1/14)
    
  15. Lee, W.G.; Ishikawa, Y.; Yamagishi, T.; Nishioka, A.; Hatada, K.; Ohbo, N.; Fujiwara, S.: A dynamic thesaurus for intelligent access to research databases (1989) 0.00
    0.0029429218 = product of:
      0.041200902 = sum of:
        0.041200902 = weight(_text_:representation in 3556) [ClassicSimilarity], result of:
          0.041200902 = score(doc=3556,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35583997 = fieldWeight in 3556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3556)
      0.071428575 = coord(1/14)
    
    Abstract
    Although thesauri can solve some problems posed by computerised database searching (synonyms, generic representation), their compilation requires extensive time and effort from experts, and their maintenance is also difficult. Describes how a thesaurus was compiled and maintained automatically by taking advantage of specially designed formats for inputting expertise with ease. The thesaurus is called a dynamic thesaurus because it depends on the set of stored data and adapts to the necessary and sufficient range of keywords. A database of polymers is taken as an example.
  16. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.00
    0.0025225044 = product of:
      0.03531506 = sum of:
        0.03531506 = weight(_text_:representation in 2203) [ClassicSimilarity], result of:
          0.03531506 = score(doc=2203,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.071428575 = coord(1/14)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies.
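    The sketch below shows spreading activation with a sigmoid update and an activation cutoff, loosely in the spirit of the Hopfield-style algorithm described; the network, weights, and threshold are invented:

      import math

      links = {  # directed term network with association weights
          "thesaurus": {"controlled vocabulary": 0.9, "ontology": 0.6},
          "controlled vocabulary": {"indexing": 0.7},
          "ontology": {"knowledge representation": 0.8},
      }

      def spread(seed, rounds=3, threshold=0.2):
          act = {seed: 1.0}
          for _ in range(rounds):
              for node, a in list(act.items()):
                  for nbr, w in links.get(node, {}).items():
                      # squash accumulated input, like a single unit update
                      act[nbr] = 1 / (1 + math.exp(-(act.get(nbr, 0.0) + a * w)))
          return {t: round(a, 2) for t, a in act.items() if a >= threshold}

      print(spread("thesaurus"))  # the seed plus the concepts it activates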
  17. Z39.19-2005: Guidelines for the construction, format, and management of monolingual controlled vocabularies (2005) 0.00
    0.0025225044 = product of:
      0.03531506 = sum of:
        0.03531506 = weight(_text_:representation in 708) [ClassicSimilarity], result of:
          0.03531506 = score(doc=708,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 708, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=708)
      0.071428575 = coord(1/14)
    
    Abstract
    This Standard presents guidelines and conventions for the contents, display, construction, testing, maintenance, and management of monolingual controlled vocabularies. This Standard focuses on controlled vocabularies that are used for the representation of content objects in knowledge organization systems including lists, synonym rings, taxonomies, and thesauri. This Standard should be regarded as a set of recommendations based on preferred techniques and procedures. Optional procedures are, however, sometimes described, e.g., for the display of terms in a controlled vocabulary. The primary purpose of vocabulary control is to achieve consistency in the description of content objects and to facilitate retrieval. Vocabulary control is accomplished by three principal methods: defining the scope, or meaning, of terms; using the equivalence relationship to link synonymous and nearly synonymous terms; and distinguishing among homographs.
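    Of the three control methods, the equivalence relationship is the easiest to make concrete: a synonym ring maps every entry term onto one preferred descriptor. A minimal sketch with invented terms:

      synonym_ring = {
          "automobile": "cars",
          "autos": "cars",
          "motor car": "cars",
      }

      def control(term: str) -> str:
          # map any entry term onto its preferred descriptor
          return synonym_ring.get(term.lower(), term.lower())

      assert control("Automobile") == "cars"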
  18. Fischer, D.H.: From thesauri towards ontologies? (1998) 0.00
    0.0025225044 = product of:
      0.03531506 = sum of:
        0.03531506 = weight(_text_:representation in 2176) [ClassicSimilarity], result of:
          0.03531506 = score(doc=2176,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 2176, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=2176)
      0.071428575 = coord(1/14)
    
    Abstract
    The ISO 2788 guidelines for monolingual thesauri contain a differentiation of "the hierarchical relationship" into "generic", "partitive", and "instance", which, for purposes of document retrieval, was deemed adequate. However, ontologies, designed as language inventories for a wider scope of knowledge representation, are based on all these and some more logical differentiations. Rereading the ISO 2788 standard and inspecting the published Cyc Upper Ontology, it is argued that the adoption of the document-retrieval definition of subsumption generally prevents the conception or use of a thesaurus as a substructure of an ontology of the new kind as constructed for AI applications. When a thesaurus is used for fact description and inference on fact descriptions, the instance-of relationship too should be reconsidered: It may also link concepts and metaconcepts, and then its distinction from subsumption is needed. The treatment of the instance-of relationship in thesauri, the Cyc Upper Ontology, and WordNet is described from this perspective
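    The distinction the paper turns on maps directly onto two different RDF(S) terms; a two-triple sketch with invented resources:

      from rdflib import Graph, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.Poodle, RDFS.subClassOf, EX.Dog))  # generic (subsumption) link
      g.add((EX.rex, RDF.type, EX.Poodle))         # instance-of link
      # a thesaurus's single "hierarchical relationship" conflates these two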
  19. Lloréns, J.; Velasco, M.; Amescua, A. de; Moreiro, J.A.; Martínez, V.: Automatic generation of domain representations using thesaurus structures (2004) 0.00
    0.0025225044 = product of:
      0.03531506 = sum of:
        0.03531506 = weight(_text_:representation in 2501) [ClassicSimilarity], result of:
          0.03531506 = score(doc=2501,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 2501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=2501)
      0.071428575 = coord(1/14)
    
    Abstract
    Domain analysis was first used 15 years ago as one of the most important techniques for software reuse. Even today, new techniques appear every year, and different authors propose different domain representation structures to represent and store the various software components and the relationships among them. These relationships among components are the kernel of the domain semantics. This report presents a set of mathematical, statistical, and neural techniques and tools that, when linked together, enable the semi-automatic building of domain representations and their storage in a thesaurus structure of software components. Thesaurus structures, widely used in information science, are presented as the key domain-modelling concept, owing to their greater automation possibilities compared with previous structures. New metrics to evaluate the quality, consistency, and completeness of the domain model obtained through this technique are also presented.
  20. Dextre Clarke, S.G.: Planning controlled vocabularies for the UK public sector (2003) 0.00
    0.0025225044 = product of:
      0.03531506 = sum of:
        0.03531506 = weight(_text_:representation in 2695) [ClassicSimilarity], result of:
          0.03531506 = score(doc=2695,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 2695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=2695)
      0.071428575 = coord(1/14)
    
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas

Types

  • a 53
  • el 7
  • m 3
  • n 2
  • s 2