Search (93 results, page 1 of 5)

  • theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  1. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.06
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
    Date
    29. 7.2011 14:44:56
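A synset-to-RDF conversion of the kind the abstract above describes can be sketched in a few lines. This is a minimal illustration, not the W3C mapping itself: the base URI, class name, and property names below are invented for the example.

```python
# Illustrative sketch: turn one WordNet-style synset record into RDF triples
# and serialize them as N-Triples. URI scheme and vocabulary are assumptions.
BASE = "http://example.org/wordnet/"

def synset_to_triples(synset_id, words, hypernym_id=None):
    """Yield (subject, predicate, object) triples for one synset."""
    s = f"<{BASE}synset-{synset_id}>"
    yield (s, "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>",
           f"<{BASE}NounSynset>")
    for w in words:                       # one triple per word sense
        yield (s, f"<{BASE}containsWordSense>", f'"{w}"')
    if hypernym_id is not None:           # hyponymy link to another synset
        yield (s, f"<{BASE}hyponymOf>", f"<{BASE}synset-{hypernym_id}>")

def to_ntriples(triples):
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

doc = to_ntriples(synset_to_triples("102084071", ["dog", "domestic dog"],
                                    hypernym_id="102083346"))
```

The design decisions the paper discusses (class hierarchy, OWL semantics, URI format) would all surface here as choices about `BASE`, the class URI, and the property names.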
  2. Jarvelin, K.: A deductive data model for thesaurus navigation and query expansion (1996) 0.05
    Abstract
    Describes a deductive data model based on 3 abstraction levels for representing vocabularies for information retrieval: conceptual level; expression level; and occurrence level. The proposed data model can be used for the representation and navigation of indexing and retrieval thesauri and as a vocabulary source for concept based query expansion in heterogeneous retrieval environments
    Date
    2. 3.1997 17:29:07
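The three-level model in the abstract above (conceptual, expression, occurrence) lends itself to a small sketch: query expansion walks narrower-concept links at the conceptual level, then maps each concept back to its surface expressions. The data and function name are invented for illustration.

```python
# Toy sketch of concept-based query expansion over two of the three levels.
narrower = {              # conceptual level: concept -> narrower concepts
    "vehicle": ["car", "bicycle"],
    "car": ["taxi"],
}
expressions = {           # expression level: concept -> surface terms
    "vehicle": ["vehicle"], "car": ["car", "automobile"],
    "bicycle": ["bicycle", "bike"], "taxi": ["taxi", "cab"],
}

def expand(concept):
    """Collect expressions for a concept and all transitively narrower ones."""
    terms, stack = [], [concept]
    while stack:
        c = stack.pop()
        terms.extend(expressions.get(c, []))
        stack.extend(narrower.get(c, []))
    return sorted(set(terms))

query_terms = expand("vehicle")
```

Keeping concepts and expressions separate is what makes the model usable across heterogeneous retrieval environments: the same conceptual structure can be paired with different vocabularies.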
  3. Gagnon-Arguin, L.: Analyse documentaire 3 : thesaurus et fichier d'autorité à l'Université Concordia (1996/97) 0.04
    Abstract
    Since the end of the 1980s, Concordia University Archives in Quebec, Canada, has indexed the minutes of the meetings of the institution's Board of Governors. A thesaurus and name authority file have been developed to support the indexing activities. Presents several theoretical concepts associated with indexing: thesaurus; authority control; content analysis; and content representation. Details the indexing tasks involved. Offers examples, combining theory and practice, demonstrating that, even with minimal resources, an indexing system can be introduced in any institution
  4. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.04
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  5. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: A method for converting thesauri to RDF/OWL (2004) 0.04
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Date
    29. 7.2011 14:44:56
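One of the per-step decisions such a conversion method involves is mapping the thesaurus's own relation codes onto an RDF vocabulary. A hedged sketch of that step, using real SKOS property names but an invented record format and base URI:

```python
# Illustrative mapping of BT/NT/RT thesaurus records to SKOS-style triples.
# The SKOS namespace is real; record layout and BASE are assumptions.
SKOS = "http://www.w3.org/2004/02/skos/core#"
BASE = "http://example.org/thesaurus/"
REL_MAP = {"BT": "broader", "NT": "narrower", "RT": "related"}

def record_to_triples(term, relations):
    """relations: list of (REL, target) pairs, e.g. [('BT', 'Disease')]."""
    s = f"<{BASE}{term.replace(' ', '_')}>"
    triples = [(s, f"<{SKOS}prefLabel>", f'"{term}"')]
    for rel, target in relations:
        o = f"<{BASE}{target.replace(' ', '_')}>"
        triples.append((s, f"<{SKOS}{REL_MAP[rel]}>", o))
    return triples

triples = record_to_triples("Asthma", [("BT", "Lung Diseases"),
                                       ("RT", "Allergy")])
```

The paper's guidelines concern exactly the judgment calls hidden in a sketch like this: whether BT maps to a hierarchical property or a subclass axiom, and how URIs are minted from labels.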
  6. Curras, E.: Ontologies, taxonomy and thesauri in information organisation and retrieval (2010) 0.04
    Abstract
    The originality of this book, which deals with such a new subject matter, lies in the application of methods and concepts never used before - such as Ontologies and Taxonomies, as well as Thesauri - to the ordering of knowledge based on primary information. Chapters in the book also examine the study of Ontologies, Taxonomies and Thesauri from the perspective of Systematics and General Systems Theory. "Ontologies, Taxonomy and Thesauri in Information Organisation and Retrieval" will be extremely useful to those operating within the network of related fields, which includes Documentation and Information Science.
    Content
    Inhalt: 1. From classifications to ontologies Knowledge - A new concept of knowledge - Knowledge and information - Knowledge organisation - Knowledge organisation and representation - Cognitive sciences - Talent management - Learning systematisation - Historical evolution - From classification to knowledge organisation - Why ontologies exist - Ontologies - The structure of ontologies 2. Taxonomies and thesauri From ordering to taxonomy - The origins of taxonomy - Hierarchical and horizontal order - Correlation with classifications - Taxonomy in computer science - Computing taxonomy - Definitions - Virtual taxonomy, cybernetic taxonomy - Taxonomy in Information Science - Similarities between taxonomies and thesauri - Differences between taxonomies and thesauri 3. Thesauri Terminology in classification systems - Terminological languages - Thesauri - Thesauri definitions - Conditions that a thesaurus must fulfil - Historical evolution - Classes of thesauri 4. Thesauri in (cladist) systematics Systematics - Systematics as a noun - Definitions and historic evolution over time - Differences between taxonomy and systematics - Systematics in thesaurus construction theory - Classic, numerical and cladist systematics - Classic systematics in information science - Numerical systematics in information science - Thesauri in cladist systematics - Systematics in information technology - Some examples 5. Thesauri in systems theory Historical evolution - Approach to systems - Systems theory applied to the construction of thesauri - Components - Classes of system - Peculiarities of these systems - Working methods - Systems theory applied to ontologies and taxonomies
  7. Moreira, A.; Alvarenga, L.; Paiva Oliveira, A. de: "Thesaurus" and "Ontology" : a study of the definitions found in the computer and information science literature (2004) 0.03
    Abstract
    This is a comparative analysis of the term ontology, used in the computer science domain, with the term thesaurus, used in the information science domain. The aim of the study is to establish the main convergence points of these two knowledge representation instruments and to point out their differences. In order to fulfill this goal an analytical-synthetic method was applied to extract the meaning underlying each of the selected definitions of the instruments. The definitions were obtained from texts well accepted by the research community from both areas. The definitions were run through a KWIC system to rotate the terms, which were then examined qualitatively and quantitatively. We concluded that thesauri and ontologies operate at the same knowledge level, the epistemological level, in spite of different origins and purposes.
    Content
    "Ontologies" definitions taken from the computer science literature "[...] ontology is a representation vocabulary, often specialized to some domain or subject matter." (Chandrasekaran et al. 1999, 1) "[...] ontology is sometimes used to refer to a body of knowledge describing some domain, typically a commonsense knowledge domain, using a representation vocabulary." (Chandrasekaran et al. 1999, 1) "An ontology is a declarative model of the terms and relationships in a domain." (Eriksson et al. 1994, 1) " [...] an ontology is the (unspecified) conceptual system which we may assume to underlie a particular knowledge base." (Guarino and Giaretta 1995, 1) Ontology as a representation of a conceptual system via a logical theory". (Guarino and Giaretta 1995, 1) "An ontology is an explicit specification of a conceptualization." (Gruber 1993, 1) "[...] An ontology is a formal description of entities and their properties, relationships, constraints, behaviors." (Gruninger and Fox 1995, 1) "An ontology is set of terms, associated with definitions in natural language and, if possible, using formal relations and constraints, about some domain of interest ..." (Hovy 1998, 2) "Fach Ontology is a set of terms of interest in a particular information domain, expressed using DL ..." (Mena et al. 1996, 3) "[...] An ontology is a hierarchically structured set of terms for describing a domain that can be used as a skeletal foundation for a knowledge base." (Swartout et al. 1996, 1) "An ontology may take a variety of forms, but necessarily it will include a vocabulary of terms and some specification of their meaning." (Uschold 1996,3) "Ontologies are agreements about shared conceptualizations." (Uschold and Grunninger 1996, 6) "[...] a vocabulary of terms and a specification of their relationships." (Wiederhold 1994, 6)
  8. Jones, S.: A thesaurus data model for an intelligent retrieval system (1993) 0.03
    Abstract
    This paper demonstrates the application of conventional database design techniques to thesaurus representation. The thesaurus is considered as a printed document, as a semantic net, and as a relational database to be used in conjunction with an intelligent information retrieval system. Some issues raised by analysis of two standard thesauri include: the prevalence of compound terms and the representation of term structure; thesaurus redundancy and the extent to which it can be eliminated in machine-readable versions; the difficulty of exploiting thesaurus knowledge originally designed for human rather than automatic interpretation; deriving 'strength of association' measures between terms in a thesaurus considered as a semantic net; facet representation and the need for variations in the data model to cater for structural differences between thesauri. A complete schema of database tables is presented, with an outline suggestion for using the stored information when matching one or more thesaurus terms with a user's query
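A relational thesaurus schema of the kind the abstract describes can be sketched with two tables: one for terms, one for typed term-to-term links. Table and column names here are invented for illustration, not the paper's schema.

```python
# Illustrative relational thesaurus: a term table plus a typed link table,
# queried with standard SQL. Schema names are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE term (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
CREATE TABLE link (
    source INTEGER REFERENCES term(id),
    target INTEGER REFERENCES term(id),
    rel    TEXT CHECK (rel IN ('BT', 'NT', 'RT'))
);
""")
db.executemany("INSERT INTO term (id, label) VALUES (?, ?)",
               [(1, "mammal"), (2, "dog"), (3, "cat")])
db.executemany("INSERT INTO link VALUES (?, ?, ?)",
               [(2, 1, "BT"), (3, 1, "BT"), (1, 2, "NT"), (1, 3, "NT")])

def narrower_terms(label):
    """Follow NT links from the term with the given label."""
    rows = db.execute("""
        SELECT t2.label FROM term t1
        JOIN link ON link.source = t1.id AND link.rel = 'NT'
        JOIN term t2 ON t2.id = link.target
        WHERE t1.label = ?""", (label,)).fetchall()
    return sorted(r[0] for r in rows)

nt = narrower_terms("mammal")
```

Storing both directions (BT and its inverse NT) is exactly the kind of redundancy the paper weighs against query convenience in machine-readable versions.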
  9. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.02
    Abstract
    The information retrieval problem is described starting from an analysis of the concepts 'user's information request' and 'information offerings of texts'. It is shown that natural language phrases are a more adequate medium for expressing information requests and information offerings than character string based query and indexing languages complemented by Boolean operators. The phrases must be represented as concepts to reach a language invariant level for rule based relevance analysis. The special type of representation called advanced thesaurus is used for the semantic representation of natural language phrases and for relevance processing. The analysis of the retrieval problem leads to a symmetric system structure
  10. Hudon, M.: Term definitions in subject thesauri : the Canadian Literacy Thesaurus experience (1992) 0.02
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson u. M. Hudon
  11. Roulin, C.: Sub-thesauri as part of a metathesaurus (1992) 0.02
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson u. M. Hudon
  12. Kent, R.E.: Implications and rules in thesauri (1994) 0.02
    Abstract
    A central consideration in the study of whole language semantic space as encoded in thesauri is word sense comparability. Shows how word sense comparability can be adequately expressed by the logical implications and rules from Formal Concept Analysis. Formal concept analysis, a new approach to formal logic initiated by Rudolf Wille, has been used for data modelling, analysis and interpretation, and also for knowledge representation and knowledge discovery
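The Formal Concept Analysis machinery the abstract builds on can be shown on a toy context. A formal concept is a pair (objects, attributes) where each set is exactly the one determined by the other; the context below is invented, and the brute-force enumeration is only workable at this scale.

```python
# Tiny brute-force FCA sketch on an invented object/attribute context.
from itertools import combinations

context = {                        # object -> attributes it has
    "sparrow": {"flies", "bird"},
    "penguin": {"bird"},
    "bat":     {"flies"},
}
objects = set(context)
attributes = set().union(*context.values())

def common_attrs(objs):
    """Attributes shared by all given objects (all attributes if none)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):
    """Objects that have every given attribute."""
    return {o for o in objects if attrs <= context[o]}

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        a = common_attrs(set(objs))     # close the object set
        o = common_objs(a)
        concepts.add((frozenset(o), frozenset(a)))
```

On this context the closure yields four formal concepts; implications between attribute sets (e.g. which attributes always co-occur) can be read directly off the resulting lattice, which is the sense in which FCA expresses word sense comparability.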
  13. Diaz, I.: Semi-automatic construction of thesaurus applying domain analysis techniques (1998) 0.02
    Abstract
    Describes a specific application of domain analysis to the construction of thesauri to exploit domain analysis' ability to construct valid domain representations and determine fuzzy limits that normally define specific domains. The system employs a structure, called a Software Thesaurus (developed from a descriptor thesaurus), as a repository to store the information regarding specific domains. The domain representation is constructed semi-automatically and can be used as a means of semi-automatic thesaurus generation
  14. Miranda Guedes, R. de; Aparecida Moura, M.: Semantic warrant, cultural hospitality and knowledge representation in multicultural contexts : experiments with the use of the EuroVoc and UNBIS thesauri (2018) 0.02
  15. Nkwenti-Azeh, B.: The use of thesaural facets and definitions for the representation of knowledge structures (1994) 0.02
  16. McCray, A.T.; Nelson, S.J.: The representation of meaning in the UMLS (1995) 0.02
  17. Lee, W.G.; Ishikawa, Y.; Yamagishi, T.; Nishioka, A.; Hatada, K.; Ohbo, N.; Fujiwara, S.: A dynamic thesaurus for intelligent access to research databases (1989) 0.02
    Abstract
    Although thesauri can solve some problems posed by computerised database searching (synonyms, generic representation), their compilation requires extensive time and effort from experts, and their maintenance is also difficult. Describes how a thesaurus was compiled and maintained automatically by taking advantage of specially designed formats for entering expertise with ease. The thesaurus was named a dynamic thesaurus because it depends on the set of stored data and is adapted to the necessary and sufficient range of keywords. A database of polymers is taken as an example.
  18. Scheven, E.: Die neue Thesaurusnorm ISO 25964 und die GND (2017) 0.02
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  19. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.02
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies
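    The Hopfield-style parallel relaxation described in the abstract can be sketched on a toy network: clamp the query term's activation, repeatedly update every node from its weighted neighbours through a thresholded transfer function, and stop when the activation vector converges. The terms, weights, and threshold below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    # Toy concept network standing in for a multi-thesaurus knowledge network;
    # symmetric link weights in [0, 1] are hypothetical.
    terms = ["neural network", "Hopfield net", "parallel processing", "thesaurus"]
    W = np.array([
        [0.0, 0.9, 0.6, 0.1],
        [0.9, 0.0, 0.7, 0.1],
        [0.6, 0.7, 0.0, 0.2],
        [0.1, 0.1, 0.2, 0.0],
    ])

    def spread(query_idx, iters=50, theta=0.3):
        """Parallel relaxation: update all activations from weighted
        neighbours until the vector converges (a heuristic search)."""
        mu = np.zeros(len(terms))
        mu[query_idx] = 1.0
        transfer = lambda x: 1.0 / (1.0 + np.exp(-(x - theta) * 10))
        for _ in range(iters):
            new = transfer(W @ mu)
            new[query_idx] = 1.0          # clamp the query term
            if np.allclose(new, mu, atol=1e-4):
                break
            mu = new
        return {t: round(a, 2) for t, a in zip(terms, mu)}

    print(spread(0))
    ```

    Strongly connected neighbours of the query converge to high activation while weakly linked terms stay low; the surviving high-activation nodes are the 'convergent' concepts returned to the searcher.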
  20. Z39.19-2005: Guidelines for the construction, format, and management of monolingual controlled vocabularies (2005) 0.02
    Abstract
    This Standard presents guidelines and conventions for the contents, display, construction, testing, maintenance, and management of monolingual controlled vocabularies. This Standard focuses on controlled vocabularies that are used for the representation of content objects in knowledge organization systems including lists, synonym rings, taxonomies, and thesauri. This Standard should be regarded as a set of recommendations based on preferred techniques and procedures. Optional procedures are, however, sometimes described, e.g., for the display of terms in a controlled vocabulary. The primary purpose of vocabulary control is to achieve consistency in the description of content objects and to facilitate retrieval. Vocabulary control is accomplished by three principal methods: defining the scope, or meaning, of terms; using the equivalence relationship to link synonymous and nearly synonymous terms; and distinguishing among homographs.
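    The equivalence relationship named above — linking synonymous and nearly synonymous terms to one preferred term — can be sketched as a USE/UF lookup. The vocabulary entries below are illustrative, not taken from any published controlled vocabulary:

    ```python
    # UF ("used for") table: preferred term -> its variant/synonymous entry terms.
    use_for = {
        "automobiles": ["cars", "motor cars", "autos"],
        "thesauri": ["thesauruses"],
    }

    # Invert to a USE lookup: entry term -> preferred term.
    use = {variant: preferred
           for preferred, variants in use_for.items()
           for variant in variants}

    def control(term):
        """Return the preferred form of a term (identity if already preferred)."""
        return use.get(term, term)

    print(control("autos"))  # -> automobiles
    ```

    Routing every variant to one preferred term is what gives vocabulary control its consistency: all content objects about the same concept are indexed, and therefore retrieved, under the same term.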
