Search (28 results, page 1 of 2)

  • theme_ss:"Wissensrepräsentation"
  • year_i:[1990 TO 2000}
  1. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.05
    0.046187043 = product of:
      0.0769784 = sum of:
        0.010194 = weight(_text_:a in 549) [ClassicSimilarity], result of:
          0.010194 = score(doc=549,freq=28.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19066721 = fieldWeight in 549, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=549)
        0.06362687 = weight(_text_:91 in 549) [ClassicSimilarity], result of:
          0.06362687 = score(doc=549,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.24625893 = fieldWeight in 549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03125 = fieldNorm(doc=549)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 549) [ClassicSimilarity], result of:
              0.006315058 = score(doc=549,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 549, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=549)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
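    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each term weight is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the document score is the sum of the matching term weights times the coordination factor. A small Python sketch reproducing the figures shown above (the function name is illustrative, not part of the system):

      def classic_weight(freq, idf, query_norm, field_norm):
          # ClassicSimilarity: weight = queryWeight * fieldWeight
          tf = freq ** 0.5                      # tf(freq) = sqrt(freq)
          query_weight = idf * query_norm       # idf(docFreq, maxDocs) * queryNorm
          field_weight = tf * idf * field_norm  # tf * idf * fieldNorm(doc)
          return query_weight * field_weight

      # Term "91" in doc 549, using the values listed above
      w = classic_weight(freq=2.0, idf=5.5722036,
                         query_norm=0.046368346, field_norm=0.03125)
      print(round(w, 8))    # 0.06362687, as shown

      # Document score = coord(matching clauses / total clauses) * sum of weights
      score = (3 / 5) * (0.010194 + 0.06362687 + 0.003157529)
      print(round(score, 7))   # ~0.046187 (displayed as 0.05, up to rounding)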
    
    Abstract
    LCSH has been known by that name since 1975. It has always created headings to serve the LC collections rather than from a theoretical basis. In 1986 it started to replace cross-reference codes with thesaural codes, in a mechanical fashion; it was in no way transformed into a thesaurus. Its encyclopedic coverage and its pre-coordinated concepts make it substantially distinct, considering that thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and whether they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, and from the literature on classification and subject indexing, will provide some answers. For instance, a see reference leads from a subject heading that is not used to another heading or headings that are used; exceptionally it will lead from a specific term to a more general one. Some equate a see reference with the equivalence relationship. Such relationships are indicated by USE in LCSH. See also references are made from the broader subject to narrower parts of it and also between associated subjects. They suggest lateral or vertical connections as well as reciprocal relationships; they serve a coordination purpose for some, and lay down a methodical search itinerary for others. Since their inception in the 1950s, thesauri have been devised for indexing and retrieving information in the fields of science and technology. Eventually they were extended to a number of social sciences and humanities. Research derived from thesauri has been voluminous, and numerous guidelines have been designed; they do not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH aims at a list almost free of RT and SA references and thus restricts relationships to BT/NT, USE and UF. This raises the question of whether all fields of knowledge can "fit" into the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised, but it was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO and is amply elaborated upon by authors, the part-whole connection sometimes being studied separately. The decision to replace the cross-reference codes was an improvement: relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes, BT/NT and whole/part, should be provided. Sorting relationships with mechanical means can only lead to confusion.
    Source
    Cataloging and classification quarterly. 16(1993) no.2, S.71-91
    Type
    a
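    The reference apparatus discussed in the abstract of record 1 (USE/UF for equivalence, BT/NT for hierarchy, RT/SA for association, plus the separate whole-part code the author argues for) can be pictured as typed links on a subject heading. A minimal, purely illustrative Python sketch; the heading and its references are invented examples, not LCSH data:

      from dataclasses import dataclass, field

      @dataclass
      class SubjectHeading:
          label: str
          use: list = field(default_factory=list)         # equivalence (USE)
          used_for: list = field(default_factory=list)    # reciprocal of USE (UF)
          broader: list = field(default_factory=list)     # BT (genus)
          narrower: list = field(default_factory=list)    # NT (species)
          related: list = field(default_factory=list)     # RT / SA (associative)
          whole_part: list = field(default_factory=list)  # proposed distinct partitive code

      # Hypothetical record illustrating the BT/NT vs. whole/part distinction
      h = SubjectHeading("Library catalogs",
                         used_for=["Catalogues, library"],
                         broader=["Bibliographic services"],
                         narrower=["Online catalogs"],
                         related=["Indexing"],
                         whole_part=["Catalog records"])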
  2. Priss, U.: Faceted knowledge representation (1999) 0.03
    0.025825147 = product of:
      0.064562865 = sum of:
        0.009535614 = weight(_text_:a in 2654) [ClassicSimilarity], result of:
          0.009535614 = score(doc=2654,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 2654, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.05502725 = sum of:
          0.011051352 = weight(_text_:information in 2654) [ClassicSimilarity], result of:
            0.011051352 = score(doc=2654,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.13576832 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
          0.043975897 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.043975897 = score(doc=2654,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.2708308 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
      0.4 = coord(2/5)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
    Type
    a
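    Following the definitions in the abstract of record 2 (units, binary relations, facets, interpretations), a minimal Python sketch of the formalism; the toy thesaurus facet and all names are illustrative assumptions, not taken from the paper:

      # Units are atomic elements; relations are binary (0/1) matrices over units;
      # a facet combines units and relations; an interpretation maps between
      # representations.
      units = ["animal", "bird", "sparrow"]

      # "Broader term" relation as a binary matrix: broader[i][j] = 1 means
      # units[i] has units[j] as a broader term.
      broader = [
          [0, 0, 0],   # animal
          [1, 0, 0],   # bird -> animal
          [0, 1, 0],   # sparrow -> bird
      ]

      facet = {"units": units, "relations": {"BT": broader}}   # one viewpoint

      # An interpretation translating the facet's units into another
      # representation (German labels here, as a stand-in).
      interpretation = {"animal": "Tier", "bird": "Vogel", "sparrow": "Sperling"}

      def broader_terms(facet, unit):
          # Read the BT relation off the binary matrix.
          i = facet["units"].index(unit)
          return [facet["units"][j]
                  for j, bit in enumerate(facet["relations"]["BT"][i]) if bit]

      print(broader_terms(facet, "sparrow"))   # ['bird']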
  3. Priss, U.: Description logic and faceted knowledge representation (1999) 0.02
    0.024035787 = product of:
      0.060089465 = sum of:
        0.012923255 = weight(_text_:a in 2655) [ClassicSimilarity], result of:
          0.012923255 = score(doc=2655,freq=20.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.24171482 = fieldWeight in 2655, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.04716621 = sum of:
          0.009472587 = weight(_text_:information in 2655) [ClassicSimilarity], result of:
            0.009472587 = score(doc=2655,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.116372846 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.037693623 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.037693623 = score(doc=2655,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.4 = coord(2/5)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
    Type
    a
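    The abstract of record 3 uses "facet" both for different aspects at one level of abstraction and for the same aspect at different levels. A small illustrative sketch of the second reading, two facets over the same documents with a mapping acting as the translation between them (all names invented):

      # Fine-grained subject facet over two documents
      fine = {"doc1": "formal concept analysis", "doc2": "semantic networks"}

      # Mapping to a coarser level of abstraction
      coarse_of = {"formal concept analysis": "knowledge representation",
                   "semantic networks": "knowledge representation"}

      def coarse_view(assignments, mapping):
          # Derive the coarse facet from the fine one via the mapping.
          return {doc: mapping[subject] for doc, subject in assignments.items()}

      print(coarse_view(fine, coarse_of))
      # {'doc1': 'knowledge representation', 'doc2': 'knowledge representation'}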
  4. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.02
    0.022135837 = product of:
      0.055339593 = sum of:
        0.008173384 = weight(_text_:a in 2230) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2230,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2230, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2230)
        0.04716621 = sum of:
          0.009472587 = weight(_text_:information in 2230) [ClassicSimilarity], result of:
            0.009472587 = score(doc=2230,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.116372846 = fieldWeight in 2230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=2230)
          0.037693623 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
            0.037693623 = score(doc=2230,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 2230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2230)
      0.4 = coord(2/5)
    
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, linguistic and occurrence levels. Concepts and relationships among them are represented at the conceptual level. The linguistic (expression) level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level. Each model specifies the matching of the expression in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
    Type
    a
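    The three abstraction levels described in the abstract of record 4 (concepts and their relationships; expressions for concepts; matching models against database indices) lend themselves to a small sketch. The sample concepts, expressions and matching modes below are invented for illustration and are not taken from the ExpansionTool:

      # Conceptual level: concepts and a relationship between them
      narrower_of = {"information retrieval": ["query expansion"]}

      # Expression level: natural language expressions for each concept
      expressions = {
          "information retrieval": ["information retrieval", "IR"],
          "query expansion": ["query expansion"],
      }

      # Occurrence level: a matching model specifies how an expression is
      # matched in a database index (here: exact vs. right-truncated).
      def matching_model(expression, mode="exact"):
          return expression if mode == "exact" else expression.split()[0] + "*"

      def expand_query(concept, mode="exact"):
          # Concept-based expansion: concept -> narrower concepts -> expressions
          concepts = [concept] + narrower_of.get(concept, [])
          terms = [e for c in concepts for e in expressions[c]]
          return [matching_model(t, mode) for t in terms]

      print(expand_query("information retrieval", mode="truncated"))
      # ['information*', 'IR*', 'query*']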
  5. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    0.015289003 = product of:
      0.038222507 = sum of:
        0.0068111527 = weight(_text_:a in 6089) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=6089,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 6089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=6089)
        0.031411353 = product of:
          0.06282271 = sum of:
            0.06282271 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.06282271 = score(doc=6089,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Pages
    S.11-22
    Type
    a
  6. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.01
    0.013134009 = product of:
      0.03283502 = sum of:
        0.00770594 = weight(_text_:a in 4476) [ClassicSimilarity], result of:
          0.00770594 = score(doc=4476,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 4476, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4476)
        0.025129084 = product of:
          0.050258167 = sum of:
            0.050258167 = weight(_text_:22 in 4476) [ClassicSimilarity], result of:
              0.050258167 = score(doc=4476,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30952093 = fieldWeight in 4476, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4476)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    1.10.2018 14:13:22
    Type
    a
  7. Wright, L.W.; Nardini, H.K.G.; Aronson, A.R.; Rindflesch, T.C.: Hierarchical concept indexing of full-text documents in the Unified Medical Language System Information sources Map (1999) 0.01
    0.009635987 = product of:
      0.024089966 = sum of:
        0.01155891 = weight(_text_:a in 2111) [ClassicSimilarity], result of:
          0.01155891 = score(doc=2111,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 2111, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2111)
        0.012531055 = product of:
          0.02506211 = sum of:
            0.02506211 = weight(_text_:information in 2111) [ClassicSimilarity], result of:
              0.02506211 = score(doc=2111,freq=14.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3078936 = fieldWeight in 2111, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2111)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Full-text documents are a vital and rapidly growing part of online biomedical information. A single large document can contain as much information as a small database, but normally lacks the tight structure and consistent indexing of a database. Retrieval systems will often miss highly relevant parts of a document if the document as a whole appears irrelevant. Access to full-text information is further complicated by the need to search separately many disparate information resources. This research explores how these problems can be addressed by the combined use of two techniques: (1) natural language processing for automatic concept-based indexing of full text, and (2) methods for exploiting the structure and hierarchy of full-text documents. We describe methods for applying these techniques to a large collection of full-text documents drawn from the Health Services / Technology Assessment Text (HSTAT) database at the NLM and examine how this hierarchical concept indexing can assist both document- and source-level retrieval in the context of NLM's Information Source Map project.
    Source
    Journal of the American Society for Information Science. 50(1999) no.6, S.514-523
    Type
    a
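    Record 7 combines concept-based indexing with the document's section hierarchy, so that a relevant section can be retrieved even when the document as a whole looks irrelevant. A toy sketch of that idea; the document tree and concept identifiers are invented stand-ins, not HSTAT or UMLS data:

      # A full-text document as a hierarchy of sections, each indexed by concepts
      doc = {
          "title": "report", "concepts": set(),
          "children": [
              {"title": "methods", "concepts": {"C_biopsy"}, "children": []},
              {"title": "results", "concepts": {"C_hypertension", "C_therapy"},
               "children": []},
          ],
      }

      def matching_sections(node, query_concepts, path=()):
          # Return (path, overlap) for every section sharing concepts with the
          # query, so retrieval can point below the document level.
          hits = []
          overlap = len(node["concepts"] & query_concepts)
          if overlap:
              hits.append((path + (node["title"],), overlap))
          for child in node["children"]:
              hits += matching_sections(child, query_concepts, path + (node["title"],))
          return hits

      print(matching_sections(doc, {"C_hypertension"}))
      # [(('report', 'results'), 1)]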
  8. Hesse, W.; Verrijn-Stuart, A.: Towards a theory of information systems : the FRISCO approach (1999) 0.01
    0.008249131 = product of:
      0.020622827 = sum of:
        0.011797264 = weight(_text_:a in 3059) [ClassicSimilarity], result of:
          0.011797264 = score(doc=3059,freq=24.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22065444 = fieldWeight in 3059, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3059)
        0.008825562 = product of:
          0.017651124 = sum of:
            0.017651124 = weight(_text_:information in 3059) [ClassicSimilarity], result of:
              0.017651124 = score(doc=3059,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21684799 = fieldWeight in 3059, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Information Systems (IS) is among the most widespread terms in the Computer Science field, but a well-founded, widely accepted theory of IS is still missing. With the Internet publication of the FRISCO report, the IFIP task group "FRamework of Information System COncepts" has taken a first step towards such a theory. Among the major achievements of this report are: (1) it builds on a solid basis formed by semiotics and ontology, (2) it defines a compendium of about 100 core IS concepts in a coherent and consistent way, (3) it goes beyond the common narrow view of information systems as pure technical artefacts by adopting an interdisciplinary, socio-technical view of them. In the autumn of 1999, a first review of the report and its impact was undertaken at the ISCO-4 conference in Leiden. In a workshop specifically devoted to the subject, the original aims and goals of FRISCO were confirmed to be still valid, and the overall approach and achievements of the report were acknowledged. On the other hand, the workshop revealed some misconceptions, errors and weaknesses of the report in its present form, which are to be removed through a comprehensive revision now under way. This paper reports on the results of the Leiden conference and the current revision activities. It also points out some important consequences of the FRISCO approach as a whole.
    Theme
    Information
  9. Vickery, B.C.: Ontologies (1997) 0.01
    0.008150326 = product of:
      0.020375814 = sum of:
        0.009437811 = weight(_text_:a in 4891) [ClassicSimilarity], result of:
          0.009437811 = score(doc=4891,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17652355 = fieldWeight in 4891, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4891)
        0.010938003 = product of:
          0.021876005 = sum of:
            0.021876005 = weight(_text_:information in 4891) [ClassicSimilarity], result of:
              0.021876005 = score(doc=4891,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2687516 = fieldWeight in 4891, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4891)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Discusses the emergence of the term 'ontology' in knowledge engineering (and now in information science), with a definition of the term as currently used. Ontology is the study of what exists and what must be assumed to exist in order to achieve a cogent description of reality. The term has seen extensive application in artificial intelligence. Describes the process of building an ontology and the uses of such tools in knowledge engineering. Concludes by comparing ontologies with similar tools used in information science.
    Source
    Journal of information science. 23(1997) no.4, S.277-286
    Type
    a
  10. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.01
    0.00711762 = product of:
      0.01779405 = sum of:
        0.0067426977 = weight(_text_:a in 4121) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=4121,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 4121, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 4121) [ClassicSimilarity], result of:
              0.022102704 = score(doc=4121,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 4121, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4121)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
    Type
    a
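    Record 10 argues that phrases alone do not improve retrieval, but relations expressed between the concepts behind the phrases do. A minimal sketch of the difference; the phrases, concepts and relation names are invented and only loosely UMLS-flavoured:

      # Phrase level: both documents contain the same two phrases
      phrases = {
          "doc1": {"calcium channel blockers", "hypertension"},
          "doc2": {"calcium channel blockers", "hypertension"},
      }

      # Semantic level: relations asserted between the underlying concepts
      relations = {
          "doc1": {("calcium channel blockers", "TREATS", "hypertension")},
          "doc2": {("hypertension", "AFFECTS", "calcium channel blockers")},
      }

      def retrieve(query_relation):
          # Phrase matching alone would return both documents; the relation
          # distinguishes them.
          return [d for d, rels in relations.items() if query_relation in rels]

      print(retrieve(("calcium channel blockers", "TREATS", "hypertension")))
      # ['doc1']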
  11. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.01
    0.0063685044 = product of:
      0.015921261 = sum of:
        0.005448922 = weight(_text_:a in 694) [ClassicSimilarity], result of:
          0.005448922 = score(doc=694,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 694, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=694)
        0.010472339 = product of:
          0.020944677 = sum of:
            0.020944677 = weight(_text_:information in 694) [ClassicSimilarity], result of:
              0.020944677 = score(doc=694,freq=22.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.25731003 = fieldWeight in 694, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=694)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate this information. Processing electronic information in the form of text and answering users' queries about that information intelligently is one of the great challenges in natural language processing and information retrieval. The research presented in this talk is centered on the latter of these two tasks: intelligent information retrieval. In order for information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology for this formalization and assumptions it is based on are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of experimental sciences, molecular biology in particular. We show that existing ontological models cannot be readily applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with the universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures such as DNA, mixtures, populations of molecules, etc., that are very common in molecular biology. We present extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries; discuss techniques for evaluating ontologies and show a prototype of an Information Retrieval System that we developed.
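    Among the representational needs named in the abstract of record 11 is that of objects changing their category as a result of experimental procedures. A small illustrative sketch of that single point; the class, the category labels and the procedure are invented examples:

      class Sample:
          # An object whose category can change through a transformation,
          # rather than being fixed at creation as in classical ontologies.
          def __init__(self, name, category):
              self.name = name
              self.category = category

      def digest(sample):
          # A transformation: a DNA molecule becomes a mixture of fragments.
          if sample.category == "DNA molecule":
              sample.category = "mixture of DNA fragments"
          return sample

      s = Sample("probe-1", "DNA molecule")
      digest(s)
      print(s.category)   # mixture of DNA fragments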
  12. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996) 0.01
    0.0060856803 = product of:
      0.015214201 = sum of:
        0.009632425 = weight(_text_:a in 3576) [ClassicSimilarity], result of:
          0.009632425 = score(doc=3576,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 3576, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3576)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 3576) [ClassicSimilarity], result of:
              0.011163551 = score(doc=3576,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 3576, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3576)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Content
    Expanded version of a paper published in Advances in Knowledge Organization v.5 (1996): 165-173 (4th Annual ISKO Conference, Washington, D.C., 1996 July 15-18): SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology.
    Type
    a
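    The proposal in record 12 rests on a common interface that queries several existing knowledge bases, collates the results into a common format, and keeps the source attached to every piece of information. A minimal sketch under those assumptions; the two sources and their record layouts are invented:

      # Two hypothetical knowledge bases with different native record formats
      source_a = {"thesaurus": [{"term": "ontology", "bt": "knowledge organization"}]}
      source_b = {"glossary": [{"headword": "Ontology",
                                "definition": "study of what exists"}]}

      def collate(term):
          # Query both sources, map their records into one common format and
          # attach the contributing source to each piece of information.
          common = []
          for rec in source_a["thesaurus"]:
              if rec["term"].lower() == term.lower():
                  common.append({"concept": term, "broader": rec["bt"], "source": "A"})
          for rec in source_b["glossary"]:
              if rec["headword"].lower() == term.lower():
                  common.append({"concept": term, "definition": rec["definition"],
                                 "source": "B"})
          return common

      print(collate("ontology"))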
  13. Roth, G.; Schwegler, H.: Kognitive Referenz und Selbstreferentialität des Gehirns : ein Beitrag zur Klärung des Verhältnisses zwischen Erkenntnistheorie und Hirnforschung (1992) 0.01
    0.00588199 = product of:
      0.014704974 = sum of:
        0.0068111527 = weight(_text_:a in 4607) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=4607,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 4607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=4607)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 4607) [ClassicSimilarity], result of:
              0.015787644 = score(doc=4607,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 4607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4607)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Theme
    Information
    Type
    a
  14. Falkenberg, E.; Hesse, W.; Lindgreen, P.; Nilsson, B.E.; Oei, J.L.H.; Rolland, C.; Stamper, R.K.; Van Assche, F.J.M.; Verrijn-Stuart, A.A.; Voss, K.: FRISCO - A framework of information system concepts : the FRISCO report; final draft (1996) 0.01
    0.005751905 = product of:
      0.014379762 = sum of:
        0.005448922 = weight(_text_:a in 3056) [ClassicSimilarity], result of:
          0.005448922 = score(doc=3056,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 3056, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3056)
        0.0089308405 = product of:
          0.017861681 = sum of:
            0.017861681 = weight(_text_:information in 3056) [ClassicSimilarity], result of:
              0.017861681 = score(doc=3056,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21943474 = fieldWeight in 3056, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3056)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Theme
    Information
  15. Soergel, D.: SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology (1996) 0.01
    0.00556948 = product of:
      0.0139237 = sum of:
        0.008341924 = weight(_text_:a in 3575) [ClassicSimilarity], result of:
          0.008341924 = score(doc=3575,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 3575, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3575)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 3575) [ClassicSimilarity], result of:
              0.011163551 = score(doc=3575,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 3575, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3575)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Type
    a
  16. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.01
    0.005278751 = product of:
      0.013196876 = sum of:
        0.0076151006 = weight(_text_:a in 1529) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=1529,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 1529, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1529)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 1529) [ClassicSimilarity], result of:
              0.011163551 = score(doc=1529,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 1529, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1529)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem - and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships that this book explores lie at the heart of the construction of large systems, examining a number of the current large AI systems from the viewpoint of representation and language to prove his point.
    LCSH
    Knowledge / representation (Information theory)
    Subject
    Knowledge / representation (Information theory)
  17. Semantic knowledge and semantic representations (1995) 0.01
    0.005232369 = product of:
      0.013080923 = sum of:
        0.008615503 = weight(_text_:a in 3568) [ClassicSimilarity], result of:
          0.008615503 = score(doc=3568,freq=20.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.16114321 = fieldWeight in 3568, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3568)
        0.0044654203 = product of:
          0.0089308405 = sum of:
            0.0089308405 = weight(_text_:information in 3568) [ClassicSimilarity], result of:
              0.0089308405 = score(doc=3568,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.10971737 = fieldWeight in 3568, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3568)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    G. Gainotti, M.C. Silveri, A. Daniele, L. Giustolisi, Neuroanatomical Correlates of Category-specific Semantic Disorders: A Critical Survey. J. S. Snowden, H. L. Griffiths, D. Neary, Autobiographical Experience and Word Meaning. L. Cipolotti, E.K. Warrington, Towards a Unitary Account of Access Dysphasia: A Single Case Study. E. Forde, G.W. Humphreys, Refractory Semantics in Global Aphasia: On Semantic Organisation and the Access-Storage Distinction in Neuropsychology. A. E. Hillis, A. Caramazza, The Compositionality of Lexical Semantic Representations: Clues from Semantic Errors in Object Naming. H.E. Moss, L.K. Tyler, Investigating Semantic Memory Impairments: The Contribution of Semantic Priming. K.R. Laws, S.A. Humber, D.J.C. Ramsey, R.A. McCarthy, Probing Sensory and Associative Semantics for Animals and Objects in Normal Subjects. K.R. Laws, J.J. Evans, J. R. Hodges, R.A. McCarthy, Naming without Knowing and Appearance without Associations: Evidence for Constructive Processes in Semantic Memory? J. Powell, J. Davidoff, Selective Impairments of Object-knowledge in a Case of Acquired Cortical Blindness. J.R. Hodges, N. Graham, K. Patterson, Charting the Progression in Semantic Dementia: Implications for the Organisation of Semantic Memory. E. Funnell, Objects and Properties: A Study of the Breakdown of Semantic Memory. L.J. Tippett, S. McAuliffe, M. J. Farrar, Preservation of Categorical Knowledge in Alzheimer's Disease: A Computational Account. G. W. Humphreys, C. Lamote, T.J. Lloyd-Jones, An Interactive Activation Approach to Object Processing: Effects of Structural Similarity, Name Frequency, and Task in Normality and Pathology.
    Footnote
    This book is also a double special issue of the journal Memory which forms Issues 3 and 4 of Volume 3 (1995).
    LCSH
    Human information processing
    Subject
    Human information processing
  18. ISO/IEC FCD 13250: Topic maps. Information technology (1999) 0.00
    0.0025260232 = product of:
      0.012630116 = sum of:
        0.012630116 = product of:
          0.025260232 = sum of:
            0.025260232 = weight(_text_:information in 319) [ClassicSimilarity], result of:
              0.025260232 = score(doc=319,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3103276 = fieldWeight in 319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.125 = fieldNorm(doc=319)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  19. Rath, H.H.: Mozart oder Kugel : Mit Topic Maps intelligente Informationsnetze aufbauen (1999) 0.00
    0.0021795689 = product of:
      0.010897844 = sum of:
        0.010897844 = weight(_text_:a in 3893) [ClassicSimilarity], result of:
          0.010897844 = score(doc=3893,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 3893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=3893)
      0.2 = coord(1/5)
    
    Type
    a
  20. Fischer, D.H.: From thesauri towards ontologies? (1998) 0.00
    0.002002062 = product of:
      0.0100103095 = sum of:
        0.0100103095 = weight(_text_:a in 2176) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=2176,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 2176, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2176)
      0.2 = coord(1/5)
    
    Abstract
    The ISO 2788 guidelines for monolingual thesauri contain a differentiation of "the hierarchical relationship" into "generic", "partitive", and "instance", which, for purposes of document retrieval, was deemed adequate. However, ontologies, designed as language inventories for a wider scope of knowledge representation, are based on all these and some more logical differentiations. Rereading the ISO 2788 standard and inspecting the published Cyc Upper Ontology, it is argued that the adoption of the document-retrieval definition of subsumption generally prevents the conception or use of a thesaurus as a substructure of an ontology of the new kind as constructed for AI applications. When a thesaurus is used for fact description and inference on fact descriptions, the instance-of relationship too should be reconsidered: It may also link concepts and metaconcepts, and then its distinction from subsumption is needed. The treatment of the instance-of relationship in thesauri, the Cyc Upper Ontology, and WordNet is described from this perspective
    Type
    a
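    Record 20 turns on keeping ISO 2788's generic, partitive and instance relationships apart, and on distinguishing instance-of from subsumption once a thesaurus is used for inference. A small sketch of typed hierarchical links; the terms are invented examples:

      # Typed hierarchical relationships instead of one undifferentiated BT/NT
      generic   = {"sparrow": "bird"}     # is-a (subsumption)
      partitive = {"wing": "bird"}        # part-of
      instance  = {"Tweety": "sparrow"}   # instance-of

      def subsumed_by(term, target):
          # Subsumption chains along the generic relation only; instance-of
          # and part-of links must not be folded into it.
          while term in generic:
              term = generic[term]
              if term == target:
                  return True
          return False

      print(subsumed_by("sparrow", "bird"))           # True
      print(subsumed_by(instance["Tweety"], "bird"))  # True (instance of a
                                                      # subsumed class)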

Languages

  • e 19
  • d 8

Types

  • a 19
  • el 5
  • m 3
  • r 2
  • s 2
  • n 1
  • x 1