Search (26 results, page 1 of 2)

  • × theme_ss:"Wissensrepräsentation"
  • × year_i:[1990 TO 2000}
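  (The year filter uses Lucene/Solr range syntax; assuming the standard convention, the opening square bracket makes the lower bound inclusive and the closing curly brace makes the upper bound exclusive, so year_i:[1990 TO 2000} selects publications from 1990 through 1999, equivalent to year_i:[1990 TO 1999].)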
  1. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.03
    0.034582928 = product of:
      0.069165856 = sum of:
        0.069165856 = sum of:
          0.006765375 = weight(_text_:a in 6089) [ClassicSimilarity], result of:
            0.006765375 = score(doc=6089,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.12739488 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
          0.06240048 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
            0.06240048 = score(doc=6089,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.38690117 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
      0.5 = coord(1/2)
    
    Pages
    S.11-22
    Type
    a
  2. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.03
    0.028787265 = product of:
      0.05757453 = sum of:
        0.05757453 = sum of:
          0.007654148 = weight(_text_:a in 4476) [ClassicSimilarity], result of:
            0.007654148 = score(doc=4476,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.14413087 = fieldWeight in 4476, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=4476)
          0.04992038 = weight(_text_:22 in 4476) [ClassicSimilarity], result of:
            0.04992038 = score(doc=4476,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 4476, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4476)
      0.5 = coord(1/2)
    
    Date
    1.10.2018 14:13:22
    Type
    a
  3. Priss, U.: Faceted knowledge representation (1999) 0.03
    0.026575929 = product of:
      0.053151857 = sum of:
        0.053151857 = sum of:
          0.009471525 = weight(_text_:a in 2654) [ClassicSimilarity], result of:
            0.009471525 = score(doc=2654,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.17835285 = fieldWeight in 2654, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
          0.043680333 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.043680333 = score(doc=2654,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
      0.5 = coord(1/2)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
    Type
    a
  4. Priss, U.: Description logic and faceted knowledge representation (1999) 0.03
    0.02513834 = product of:
      0.05027668 = sum of:
        0.05027668 = sum of:
          0.012836397 = weight(_text_:a in 2655) [ClassicSimilarity], result of:
            0.012836397 = score(doc=2655,freq=20.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.24171482 = fieldWeight in 2655, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.037440285 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.037440285 = score(doc=2655,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.5 = coord(1/2)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
    Type
    a
  5. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: A deductive data model for query expansion (1996) 0.02
    0.022779368 = product of:
      0.045558736 = sum of:
        0.045558736 = sum of:
          0.008118451 = weight(_text_:a in 2230) [ClassicSimilarity], result of:
            0.008118451 = score(doc=2230,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 2230, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2230)
          0.037440285 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
            0.037440285 = score(doc=2230,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 2230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2230)
      0.5 = coord(1/2)
    
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, expression (linguistic) and occurrence levels. Concepts and relationships among them are represented at the conceptual level. The expression level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level. Each model specifies the matching of the expression in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
    Type
    a
  6. Hesse, W.; Verrijn-Stuart, A.: Towards a theory of information systems : the FRISCO approach (1999) 0.00
    0.0029294936 = product of:
      0.005858987 = sum of:
        0.005858987 = product of:
          0.011717974 = sum of:
            0.011717974 = weight(_text_:a in 3059) [ClassicSimilarity], result of:
              0.011717974 = score(doc=3059,freq=24.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22065444 = fieldWeight in 3059, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information Systems (IS) is among the most widespread terms in the Computer Science field, but a well-founded, widely accepted theory of IS is still missing. With the Internet publication of the FRISCO report, the IFIP task group "FRamework of Information System COncepts" has taken a first step towards such a theory. Among the major achievements of this report are: (1) it builds on a solid basis formed by semiotics and ontology, (2) it defines a compendium of about 100 core IS concepts in a coherent and consistent way, (3) it goes beyond the common narrow view of information systems as purely technical artefacts by adopting an interdisciplinary, socio-technical view of them. In the autumn of 1999, a first review of the report and its impact was undertaken at the ISCO-4 conference in Leiden. In a workshop specifically devoted to the subject, the original aims and goals of FRISCO were confirmed to be still valid, and the overall approach and achievements of the report were acknowledged. On the other hand, the workshop revealed some misconceptions, errors and weaknesses of the report in its present form, which are to be removed through a comprehensive revision now under way. This paper reports on the results of the Leiden conference and the current revision activities. It also points out some important consequences of the FRISCO approach as a whole.
  7. Wright, L.W.; Nardini, H.K.G.; Aronson, A.R.; Rindflesch, T.C.: Hierarchical concept indexing of full-text documents in the Unified Medical Language System Information sources Map (1999) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 2111) [ClassicSimilarity], result of:
              0.011481222 = score(doc=2111,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 2111, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Full-text documents are a vital and rapidly growing part of online biomedical information. A single large document can contain as much information as a small database, but normally lacks the tight structure and consistent indexing of a database. Retrieval systems will often miss highly relevant parts of a document if the document as a whole appears irrelevant. Access to full-text information is further complicated by the need to search separately many disparate information resources. This research explores how these problems can be addressed by the combined use of 2 techniques: 1) natural language processing for automatic concept-based indexing of full text, and 2) methods for exploiting the structure and hierarchy of full-text documents. We describe methods for applying these techniques to a large collection of full-text documents drawn from the Health Services / Technology Assessment Text (HSTAT) database at the NLM and examine how this hierarchical concept indexing can assist both document- and source-level retrieval in the context of NLM's Information Source Map project
    Type
    a
  8. Rath, H.H.: Mozart oder Kugel : Mit Topic Maps intelligente Informationsnetze aufbauen (1999) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 3893) [ClassicSimilarity], result of:
              0.0108246 = score(doc=3893,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 3893, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=3893)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  9. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.00
    0.0025313715 = product of:
      0.005062743 = sum of:
        0.005062743 = product of:
          0.010125486 = sum of:
            0.010125486 = weight(_text_:a in 549) [ClassicSimilarity], result of:
              0.010125486 = score(doc=549,freq=28.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19066721 = fieldWeight in 549, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=549)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    LCSH has been known by that name since 1975. It has always created headings to serve the LC collections rather than a theoretical basis. It started to replace cross-reference codes with thesaural codes in 1986, in a mechanical fashion, and was in no way transformed into a thesaurus. Its encyclopedic coverage and its pre-coordinate concepts make it substantially distinct, considering that thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and whether they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, and from the literature of classification and subject indexing, provide some answers. For instance, a see reference leads from a subject heading that is not used to one or more that are used; exceptionally it will lead from a specific term to a more general one. Some equate a see reference with the equivalence relationship; such relationships are indicated by USE in LCSH. See also references are made from the broader subject to narrower parts of it and also between associated subjects. They suggest lateral or vertical connections as well as reciprocal relationships; for some they serve a coordination purpose, for others they lay down a methodical search itinerary. Since their inception in the 1950s, thesauri have been devised for indexing and retrieving information in the fields of science and technology, and they eventually attended to a number of social sciences and humanities. Research derived from thesauri was voluminous, and numerous guidelines were designed; they did not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH's aim is to achieve a list almost free of RT and SA references; it thus restricts relationships to BT/NT, USE and UF. This raises the question as to whether all fields of knowledge can "fit" in the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised, but it was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO and has been amply elaborated upon by authors, and the part-whole connection is sometimes studied separately. The decision to replace cross-reference codes was an improvement: relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes, BT/NT and whole/part, should be provided. Sorting relationships with mechanical means can only lead to confusion.
    Type
    a
  10. Fischer, D.H.: From thesauri towards ontologies? (1998) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 2176) [ClassicSimilarity], result of:
              0.00994303 = score(doc=2176,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 2176, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2176)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The ISO 2788 guidelines for monolingual thesauri contain a differentiation of "the hierarchical relationship" into "generic", "partitive", and "instance", which, for purposes of document retrieval, was deemed adequate. However, ontologies, designed as language inventories for a wider scope of knowledge representation, are based on all these and some more logical differentiations. Rereading the ISO 2788 standard and inspecting the published Cyc Upper Ontology, it is argued that the adoption of the document-retrieval definition of subsumption generally prevents the conception or use of a thesaurus as a substructure of an ontology of the new kind as constructed for AI applications. When a thesaurus is used for fact description and inference on fact descriptions, the instance-of relationship too should be reconsidered: It may also link concepts and metaconcepts, and then its distinction from subsumption is needed. The treatment of the instance-of relationship in thesauri, the Cyc Upper Ontology, and WordNet is described from this perspective
    Type
    a
  11. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 3576) [ClassicSimilarity], result of:
              0.009567685 = score(doc=3576,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 3576, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Content
    Expanded version of a paper published in Advances in Knowledge Organization v.5 (1996): 165-173 (4th Annual ISKO Conference, Washington, D.C., 1996 July 15-18): SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology.
    Type
    a
  12. Nagao, M.: Knowledge and inference (1990) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 3304) [ClassicSimilarity], result of:
              0.009567685 = score(doc=3304,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 3304, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3304)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge and Inference discusses an important problem for software systems: how do we treat knowledge and ideas on a computer, and how do we use inference to solve problems on a computer? The book addresses the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. It begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in artificial intelligence: search and problem solving, methods of making proofs, and the use of knowledge in looking for a proof. There is also a discussion of how to use the knowledge system. The final chapter describes a popular expert system and the tools for building expert systems, using an example based on Expert Systems - A Practical Introduction by P. Sell (Macmillan, 1985); this type of software is called an "expert system shell". The book was written as a textbook for undergraduate students, covering only the basics but explaining them in as much detail as possible.
  13. Vickery, B.C.: Ontologies (1997) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 4891) [ClassicSimilarity], result of:
              0.009374379 = score(doc=4891,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 4891, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4891)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Discusses the emergence of the term 'ontology' in knowledge engineering (and now in information science) with a definition of the term as currently used. Ontology is the study of what exists and what must be assumed to exist in order to achieve a cogent description of reality. The term has seen extensive application in artificial intelligence. Describes the process of building an ontology and the uses of such tools in knowledge engineering. Concludes by comparing ontologies with similar tools used in information science.
    Type
    a
  14. Semantic knowledge and semantic representations (1995) 0.00
    0.0021393995 = product of:
      0.004278799 = sum of:
        0.004278799 = product of:
          0.008557598 = sum of:
            0.008557598 = weight(_text_:a in 3568) [ClassicSimilarity], result of:
              0.008557598 = score(doc=3568,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16114321 = fieldWeight in 3568, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3568)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    G. Gainotti, M.C. Silveri, A. Daniele, L. Giustolisi, Neuroanatomical Correlates of Category-specific Semantic Disorders: A Critical Survey. J. S. Snowden, H. L. Griffiths, D. Neary, Autobiographical Experience and Word Meaning. L. Cipolotti, E.K. Warrington, Towards a Unitary Account of Access Dysphasia: A Single Case Study. E. Forde, G.W. Humphreys, Refractory Semantics in Global Aphasia: On Semantic Organisation and the Access-Storage Distinction in Neuropsychology. A. E. Hillis, A. Caramazza, The Compositionality of Lexical Semantic Representations: Clues from Semantic Errors in Object Naming. H.E. Moss, L.K. Tyler, Investigating Semantic Memory Impairments: The Contribution of Semantic Priming. K.R. Laws, S.A. Humber, D.J.C. Ramsey, R.A. McCarthy, Probing Sensory and Associative Semantics for Animals and Objects in Normal Subjects. K.R. Laws, J.J. Evans, J. R. Hodges, R.A. McCarthy, Naming without Knowing and Appearance without Associations: Evidence for Constructive Processes in Semantic Memory? J. Powell, J. Davidoff, Selective Impairments of Object-knowledge in a Case of Acquired Cortical Blindness. J.R. Hodges, N. Graham, K. Patterson, Charting the Progression in Semantic Dementia: Implications for the Organisation of Semantic Memory. E. Funnell, Objects and Properties: A Study of the Breakdown of Semantic Memory. L.J. Tippett, S. McAuliffe, M. J. Farrar, Preservation of Categorical Knowledge in Alzheimer's Disease: A Computational Account. G. W. Humphreys, C. Lamote, T.J. Lloyd-Jones, An Interactive Activation Approach to Object Processing: Effects of Structural Similarity, Name Frequency, and Task in Normality and Pathology.
    Footnote
    This book is also a double special issue of the journal Memory which forms Issues 3 and 4 of Volume 3 (1995).
  15. Barsalou, L.W.: Frames, concepts, and conceptual fields (1992) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3217) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3217,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3217, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3217)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this chapter I propose that frames provide the fundamental representation of knowledge in human cognition. In the first section, I raise problems with the feature list representations often found in theories of knowledge, and I sketch the solutions that frames provide to them. In the second section, I examine the three fundamental concepts of frames: attribute-value sets, structural invariants, and constraints. Because frames also represent the attributes, values, structural invariants, and constraints within a frame, the mechanism that constructs frames builds them recursively. The frame theory I propose borrows heavily from previous frame theories, although its collection of representational components is somewhat unique. Furthermore, frame theorists generally assume that frames are rigid configurations of independent attributes, whereas I propose that frames are dynamic relational structures whose form is flexible and context dependent. In the third section, I illustrate how frames support a wide variety of representational tasks central to conceptual processing in natural and artificial intelligence. Frames can represent exemplars and propositions, prototypes and membership, subordinates and taxonomies. Frames can also represent conceptual combinations, event sequences, rules, and plans. In the fourth section, I show how frames define the extent of conceptual fields and how they provide a powerful productive mechanism for generating specific concepts within a field.
    Source
    Frames, fields and contrasts: new essays in semantic and lexical organization. Eds.: A. Lehrer u. E.F. Kittay
    Type
    a
  16. Soergel, D.: SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology (1996) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3575) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3575,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3575, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3575)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Type
    a
  17. Wildgen, W.: Semantischer Realismus und Antirealismus in der Sprachtheorie (1992) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 2071) [ClassicSimilarity], result of:
              0.008118451 = score(doc=2071,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 2071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2071)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  18. Kutschera, F. von: Der erkenntnistheoretische Realismus (1992) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 4608) [ClassicSimilarity], result of:
              0.008118451 = score(doc=4608,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 4608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4608)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  19. Franzen, W.: Idealismus statt Realismus? : Realismus plus Skeptizismus! (1992) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 4612) [ClassicSimilarity], result of:
              0.008118451 = score(doc=4612,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 4612, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4612)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  20. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 1529) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=1529,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 1529, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1529)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem - and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships this book explores lie at the heart of the construction of large systems, examining a number of current large AI systems from the viewpoint of representation and language to prove his point.
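
Scoring note: the indented breakdowns under each result are Lucene ClassicSimilarity (TF-IDF) explain trees. A sketch of the arithmetic, assuming the standard ClassicSimilarity definitions and using the values reported for the first result (doc 6089):

  queryWeight(t)    = idf(t) * queryNorm
  fieldWeight(t, d) = sqrt(termFreq) * idf(t) * fieldNorm(d)
  score(q, d)       = coord(q, d) * sum over matching clauses of queryWeight(t) * fieldWeight(t, d)

  _text_:22   queryWeight  = 3.5018296 * 0.046056706 = 0.16128273
              fieldWeight  = 1.4142135 * 3.5018296 * 0.078125 = 0.38690117
              clause score = 0.16128273 * 0.38690117 = 0.06240048
  _text_:a    clause score = 0.006765375
  document    (0.06240048 + 0.006765375) * coord(1/2) = 0.034582928, displayed as 0.03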

Languages

  • e 18
  • d 7

Types