Search (90 results, page 1 of 5)

  • theme_ss:"Theorie verbaler Dokumentationssprachen"
  1. Khoo, C.; Chan, S.; Niu, Y.: ¬The many facets of the cause-effect relation (2002) 0.03
    0.031293757 = product of:
      0.17211567 = sum of:
        0.1573056 = weight(_text_:effect in 1192) [ClassicSimilarity], result of:
          0.1573056 = score(doc=1192,freq=12.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.8600655 = fieldWeight in 1192, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=1192)
        0.014810067 = weight(_text_:of in 1192) [ClassicSimilarity], result of:
          0.014810067 = score(doc=1192,freq=14.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2742677 = fieldWeight in 1192, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1192)
      0.18181819 = coord(2/11)
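    The indented breakdown above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking used by this search. As a rough illustration only - not code from this database - the following Python sketch re-derives the score for entry 1 from the quantities printed above, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the queryNorm value is simply copied from the output rather than recomputed.

      import math

      QUERY_NORM = 0.034531306   # queryNorm as printed in the breakdown above
      MAX_DOCS = 44218

      def idf(doc_freq, max_docs=MAX_DOCS):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, field_norm):
          query_weight = idf(doc_freq) * QUERY_NORM                     # idf * queryNorm
          field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm   # tf * idf * fieldNorm
          return query_weight * field_weight

      effect = term_score(freq=12, doc_freq=601, field_norm=0.046875)     # ~0.1573056
      of_term = term_score(freq=14, doc_freq=25162, field_norm=0.046875)  # ~0.014810067
      print((2 / 11) * (effect + of_term))                                # coord(2/11) * sum ~0.031293757

    Each per-term weight is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), and the document score is the coord factor (matching clauses / total clauses, here 2/11) times the sum of the matching term weights; the same pattern repeats in every breakdown below.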
    
    Abstract
    This chapter presents a broad survey of the cause-effect relation, with particular emphasis on how the relation is expressed in text. Philosophers have been grappling with the concept of causation for centuries. Researchers in social psychology have found that the human mind has a very complex mechanism for identifying and attributing the cause for an event. Inferring cause-effect relations between events and statements has also been found to be an important part of reading and text comprehension, especially for narrative text. Though many of the cause-effect relations in text are implied and have to be inferred by the reader, there is also a wide variety of linguistic expressions for explicitly indicating cause and effect. In addition, it has been found that certain words have "causal valence" - they bias the reader to attribute cause in certain ways. Cause-effect relations can also be divided into several different types.
    Source
    The semantics of relationships: an interdisciplinary perspective. Eds: Green, R., C.A. Bean u. S.H. Myaeng
  2. Mai, J.-E.: Actors, domains, and constraints in the design and construction of controlled vocabularies (2008) 0.03
    0.027520042 = product of:
      0.10090682 = sum of:
        0.019790784 = weight(_text_:of in 1921) [ClassicSimilarity], result of:
          0.019790784 = score(doc=1921,freq=36.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.36650562 = fieldWeight in 1921, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1921)
        0.02063419 = weight(_text_:on in 1921) [ClassicSimilarity], result of:
          0.02063419 = score(doc=1921,freq=10.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.271686 = fieldWeight in 1921, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1921)
        0.060481843 = weight(_text_:great in 1921) [ClassicSimilarity], result of:
          0.060481843 = score(doc=1921,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.31105953 = fieldWeight in 1921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1921)
      0.27272728 = coord(3/11)
    
    Abstract
    Classification schemes, thesauri, taxonomies, and other controlled vocabularies play important roles in the organization and retrieval of information in many different environments. While the design and construction of controlled vocabularies have been prescribed at the technical level in great detail over the past decades, the methodological level has been somewhat neglected. However, classification research has in recent years focused on developing approaches to the analysis of users, domains, and activities that could produce requirements for the design of controlled vocabularies. Researchers have often argued that the design, construction, and use of controlled vocabularies need to be based on analyses and understandings of the contexts in which these controlled vocabularies function. While one would assume that the growing body of research on human information behavior might help guide the development of controlled vocabularies and shed light on these contexts, much of the research in this area is unfortunately descriptive in nature and of little use for systems design. This paper discusses these trends and outlines a holistic approach that demonstrates how the design of controlled vocabularies can be informed by investigations of people's interactions with information. This approach is based on the Cognitive Work Analysis framework and outlines several dimensions of human-information interactions. Application of this approach will result in a comprehensive understanding of the contexts in which the controlled vocabulary will function, which can then be used for the development of controlled vocabularies.
  3. Hudon, M.: ¬A preliminary investigation of the usefulness of semantic relations and of standardized definitions for the purpose of specifying meaning in a thesaurus (1998) 0.03
    0.026447162 = product of:
      0.09697293 = sum of:
        0.064219736 = weight(_text_:effect in 55) [ClassicSimilarity], result of:
          0.064219736 = score(doc=55,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.35112026 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
        0.021679718 = weight(_text_:of in 55) [ClassicSimilarity], result of:
          0.021679718 = score(doc=55,freq=30.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.4014868 = fieldWeight in 55, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
        0.011073467 = weight(_text_:on in 55) [ClassicSimilarity], result of:
          0.011073467 = score(doc=55,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.14580199 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
      0.27272728 = coord(3/11)
    
    Abstract
    The terminological consistency of indexers working with a thesaurus as indexing aid remains low. This suggests that indexers cannot perceive easily or very clearly the meaning of each descriptor available as index term. This paper presents the background and some of the findings of a small-scale experiment designed to study the effect on interindexer terminological consistency of modifying the nature of the semantic information given with descriptors in a thesaurus. The study also provided some insights into the respective usefulness of standardized definitions and of traditional networks of hierarchical and associative relationships as means of providing essential meaning information in the thesaurus used as indexing aid.
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
  4. ¬The semantics of relationships : an interdisciplinary perspective (2002) 0.02
    0.023881651 = product of:
      0.087566055 = sum of:
        0.053516448 = weight(_text_:effect in 1430) [ClassicSimilarity], result of:
          0.053516448 = score(doc=1430,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.2926002 = fieldWeight in 1430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1430)
        0.018066432 = weight(_text_:of in 1430) [ClassicSimilarity], result of:
          0.018066432 = score(doc=1430,freq=30.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.33457235 = fieldWeight in 1430, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1430)
        0.015983174 = weight(_text_:on in 1430) [ClassicSimilarity], result of:
          0.015983174 = score(doc=1430,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.21044704 = fieldWeight in 1430, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1430)
      0.27272728 = coord(3/11)
    
    Abstract
    Work on relationships takes place in many communities, including, among others, data modeling, knowledge representation, natural language processing, linguistics, and information retrieval. Unfortunately, continued disciplinary splintering and specialization keep any one person from being familiar with the full expanse of that work. By including contributions from experts in a variety of disciplines and backgrounds, this volume demonstrates both the parallels that inform work on relationships across a number of fields and the singular emphases that have yet to be fully embraced. The volume is organized into 3 parts: (1) Types of relationships, (2) Relationships in knowledge representation and reasoning, and (3) Applications of relationships.
    Content
    Contains the contributions: Pt.1: Types of relationships: CRUSE, D.A.: Hyponymy and its varieties; FELLBAUM, C.: On the semantics of troponymy; PRIBBENOW, S.: Meronymic relationships: from classical mereology to complex part-whole relations; KHOO, C. et al.: The many facets of the cause-effect relation - Pt.2: Relationships in knowledge representation and reasoning: GREEN, R.: Internally-structured conceptual models in cognitive semantics; HOVY, E.: Comparing sets of semantic relations in ontologies; GUARINO, N., C. WELTY: Identity and subsumption; JOUIS, C.: Logic of relationships - Pt.3: Applications of relationships: EVENS, M.: Thesaural relations in information retrieval; KHOO, C., S.H. MYAENG: Identifying semantic relations in text for information retrieval and information extraction; McCRAY, A.T., O. BODENREIDER: A conceptual framework for the biomedical domain; HETZLER, B.: Visual analysis and exploration of relationships
    Footnote
    With a detailed introduction by the editors on the topics: Types of relationships - Relationships in knowledge representation and reasoning - Applications of relationships
  5. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.02
    0.01767223 = product of:
      0.06479818 = sum of:
        0.032109868 = weight(_text_:effect in 1978) [ClassicSimilarity], result of:
          0.032109868 = score(doc=1978,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.17556013 = fieldWeight in 1978, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
        0.016078109 = weight(_text_:of in 1978) [ClassicSimilarity], result of:
          0.016078109 = score(doc=1978,freq=66.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2977506 = fieldWeight in 1978, product of:
              8.124039 = tf(freq=66.0), with freq of:
                66.0 = termFreq=66.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
        0.016610201 = weight(_text_:on in 1978) [ClassicSimilarity], result of:
          0.016610201 = score(doc=1978,freq=18.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.21870299 = fieldWeight in 1978, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
      0.27272728 = coord(3/11)
    
    Abstract
    This chapter examines the nature of semantic relations and their main applications in information science. The nature and types of semantic relations are discussed from the perspectives of linguistics and psychology. An overview of the semantic relations used in knowledge structures such as thesauri and ontologies is provided, as well as the main techniques used in the automatic extraction of semantic relations from text. The chapter then reviews the use of semantic relations in information extraction, information retrieval, question-answering, and automatic text summarization applications. Concepts and relations are the foundation of knowledge and thought. When we look at the world, we perceive not a mass of colors but objects to which we automatically assign category labels. Our perceptual system automatically segments the world into concepts and categories. Concepts are the building blocks of knowledge; relations act as the cement that links concepts into knowledge structures. We spend much of our lives identifying regular associations and relations between objects, events, and processes so that the world has an understandable structure and predictability. Our lives and work depend on the accuracy and richness of this knowledge structure and its web of relations. Relations are needed for reasoning and inferencing. Chaffin and Herrmann (1988b, p. 290) noted that "relations between ideas have long been viewed as basic to thought, language, comprehension, and memory." Aristotle's Metaphysics (Aristotle, 1961; McKeon) expounded on several types of relations. The majority of the 30 entries in a section of the Metaphysics known today as the Philosophical Lexicon referred to relations and attributes, including cause, part-whole, same and opposite, quality (i.e., attribute) and kind-of, and defined different types of each relation. Hume (1955) pointed out that there is a connection between successive ideas in our minds, even in our dreams, and that the introduction of an idea in our mind automatically recalls an associated idea. He argued that all the objects of human reasoning are divided into relations of ideas and matters of fact and that factual reasoning is founded on the cause-effect relation. His Treatise of Human Nature identified seven kinds of relations: resemblance, identity, relations of time and place, proportion in quantity or number, degrees in quality, contrariety, and causation. Mill (1974, pp. 989-1004) discoursed on several types of relations, claiming that all things are either feelings, substances, or attributes, and that attributes can be a quality (which belongs to one object) or a relation to other objects.
    Linguists in the structuralist tradition (e.g., Lyons, 1977; Saussure, 1959) have asserted that concepts cannot be defined on their own but only in relation to other concepts. Semantic relations appear to reflect a logical structure in the fundamental nature of thought (Caplan & Herrmann, 1993). Green, Bean, and Myaeng (2002) noted that semantic relations play a critical role in how we represent knowledge psychologically, linguistically, and computationally, and that many systems of knowledge representation start with a basic distinction between entities and relations. Green (2001, p. 3) said that "relationships are involved as we combine simple entities to form more complex entities, as we compare entities, as we group entities, as one entity performs a process on another entity, and so forth. Indeed, many things that we might initially regard as basic and elemental are revealed upon further examination to involve internal structure, or in other words, internal relationships." Concepts and relations are often expressed in language and text. Language is used not just for communicating concepts and relations, but also for representing, storing, and reasoning with concepts and relations. We shall examine the nature of semantic relations from a linguistic and psychological perspective, with an emphasis on relations expressed in text. The usefulness of semantic relations in information science, especially in ontology construction, information extraction, information retrieval, question-answering, and text summarization is discussed. Research and development in information science have focused on concepts and terms, but the focus will increasingly shift to the identification, processing, and management of relations to achieve greater effectiveness and refinement in information science techniques. Previous chapters in ARIST on natural language processing (Chowdhury, 2003), text mining (Trybula, 1999), information retrieval and the philosophy of language (Blair, 2003), and query expansion (Efthimiadis, 1996) provide a background for this discussion, as semantic relations are an important part of these applications.
    Source
    Annual review of information science and technology. 40(2006), S.157-228
  6. Mooers, C.N.: ¬The indexing language of an information retrieval system (1985) 0.02
    0.017557705 = product of:
      0.06437825 = sum of:
        0.013853548 = weight(_text_:of in 3644) [ClassicSimilarity], result of:
          0.013853548 = score(doc=3644,freq=36.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.25655392 = fieldWeight in 3644, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.04233729 = weight(_text_:great in 3644) [ClassicSimilarity], result of:
          0.04233729 = score(doc=3644,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.21774168 = fieldWeight in 3644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.008187402 = product of:
          0.016374804 = sum of:
            0.016374804 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
              0.016374804 = score(doc=3644,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.1354154 = fieldWeight in 3644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3644)
          0.5 = coord(1/2)
      0.27272728 = coord(3/11)
    
    Abstract
    Calvin Mooers' work toward the resolution of the problem of ambiguity in indexing went unrecognized for years. At the time he introduced the "descriptor" - a term with a very distinct meaning - indexers were, for the most part, taking index terms directly from the document, without either rationalizing them with context or normalizing them with some kind of classification. It is ironic that Mooers' term came to be attached to the popular but unsophisticated indexing methods which he was trying to root out. Simply expressed, what Mooers did was to take the dictionary definitions of terms and redefine them so clearly that they could not be used in any context except that provided by the new definition. He did, at great pains, construct such meanings for over four hundred words; disambiguation and specificity were sought after and found for these words. He proposed that all indexers adopt this method so that when the index supplied a term, it also supplied the exact meaning for that term as used in the indexed document. The same term used differently in another document would be defined differently and possibly renamed to avoid ambiguity. The disambiguation was achieved by using unabridged dictionaries and other sources of defining terminology. In practice, this tends to produce circularity in definition, that is, word A refers to word B which refers to word C which refers to word A. It was necessary, therefore, to break this chain by creating a new, definitive meaning for each word. Eventually, means such as those used by Austin (q.v.) for PRECIS achieved the same purpose, but by much more complex means than just creating a unique definition of each term. Mooers, however, was probably the first to realize how confusing undefined terminology could be. Early automatic indexers dealt with distinct disciplines and, as long as they did not stray beyond disciplinary boundaries, a quick and dirty keyword approach was satisfactory. The trouble came when attempts were made to make a combined index for two or more distinct disciplines. A number of processes have since been developed, mostly involving tagging of some kind or use of strings. Mooers' solution has rarely been considered seriously and probably would be extremely difficult to apply now because of so much interdisciplinarity. But for a specific, well-defined field, it is still well worth considering. Mooers received training in mathematics and physics from the University of Minnesota and the Massachusetts Institute of Technology. He was the founder of Zator Company, which developed and marketed a coded card information retrieval system, and of Rockford Research, Inc., which engages in research in information science. He is the inventor of the TRAC computer language.
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  7. Mikacic, M.: Statistical system for subject designation (SSSD) for libraries in Croatia (1996) 0.02
    0.016230613 = product of:
      0.059512246 = sum of:
        0.018281942 = weight(_text_:of in 2943) [ClassicSimilarity], result of:
          0.018281942 = score(doc=2943,freq=12.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.33856338 = fieldWeight in 2943, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2943)
        0.014764623 = weight(_text_:on in 2943) [ClassicSimilarity], result of:
          0.014764623 = score(doc=2943,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.19440265 = fieldWeight in 2943, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=2943)
        0.02646568 = product of:
          0.05293136 = sum of:
            0.05293136 = weight(_text_:22 in 2943) [ClassicSimilarity], result of:
              0.05293136 = score(doc=2943,freq=4.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.4377287 = fieldWeight in 2943, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2943)
          0.5 = coord(1/2)
      0.27272728 = coord(3/11)
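    Entry 7 also shows how a nested clause folds in: the "22" weight is scaled by an inner coord(1/2) before the outer coord(3/11) is applied to the sum. A minimal sketch, under the same assumptions as the one after entry 1:

      import math

      QUERY_NORM, MAX_DOCS = 0.034531306, 44218

      def tfidf(freq, doc_freq, field_norm):
          # (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))
          return (idf * QUERY_NORM) * (math.sqrt(freq) * idf * field_norm)

      of_term = tfidf(12, 25162, 0.0625)                # ~0.018281942
      on_term = tfidf(2, 13325, 0.0625)                 # ~0.014764623
      term_22 = tfidf(4, 3622, 0.0625) * 0.5            # inner coord(1/2) -> ~0.02646568
      print((3 / 11) * (of_term + on_term + term_22))   # outer coord(3/11) -> ~0.016230613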
    
    Abstract
    Describes the developments of the Statistical System for Subject Designation (SSSD): a syntactical system for subject designation for libraries in Croatia, based on the construction of subject headings in agreement with the theory of the sentence nature of subject headings. The discussion is preceded by a brief summary of theories underlying basic principles and fundamental rules of the alphabetical subject catalogue
    Date
    31. 7.2006 14:22:21
    Source
    Cataloging and classification quarterly. 22(1996) no.1, S.77-93
  8. Foskett, D.J.: Classification and integrative levels (1985) 0.01
    0.01212863 = product of:
      0.06670746 = sum of:
        0.05210454 = weight(_text_:higher in 3639) [ClassicSimilarity], result of:
          0.05210454 = score(doc=3639,freq=4.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.28726012 = fieldWeight in 3639, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3639)
        0.014602924 = weight(_text_:of in 3639) [ClassicSimilarity], result of:
          0.014602924 = score(doc=3639,freq=40.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2704316 = fieldWeight in 3639, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3639)
      0.18181819 = coord(2/11)
    
    Abstract
    Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject - systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible in that if one once reached a certain level there was no going back. Foskett points out that with higher levels and greater complexity in structure the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
    Footnote
    Original in: The Sayers memorial volume: essays in librarianship in memory of William Charles Berwick Sayers. London: The Library Association 1961. S.136-150.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  9. Maniez, J.: Fusion de banques de donnees documentaires at compatibilite des languages d'indexation (1997) 0.01
    0.011911204 = product of:
      0.043674413 = sum of:
        0.018565401 = weight(_text_:of in 2246) [ClassicSimilarity], result of:
          0.018565401 = score(doc=2246,freq=22.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.34381276 = fieldWeight in 2246, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2246)
        0.011073467 = weight(_text_:on in 2246) [ClassicSimilarity], result of:
          0.011073467 = score(doc=2246,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.14580199 = fieldWeight in 2246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2246)
        0.014035545 = product of:
          0.02807109 = sum of:
            0.02807109 = weight(_text_:22 in 2246) [ClassicSimilarity], result of:
              0.02807109 = score(doc=2246,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.23214069 = fieldWeight in 2246, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2246)
          0.5 = coord(1/2)
      0.27272728 = coord(3/11)
    
    Abstract
    Discusses the apparently unattainable goal of compatibility of information languages. While controlled languages can improve retrieval performance within a single system, they make cooperation across different systems more difficult. The Internet and downloading accentuate this adverse outcome, and the acceleration of data exchange aggravates the problem of compatibility. Defines this familiar concept and demonstrates that coherence is just as necessary as it was for indexing languages, the proliferation of which has created confusion in grouped data banks. Describes 2 types of potential solutions, similar to those applied to automatic translation of natural languages: harmonizing the information languages themselves, which is both difficult and expensive; or the more flexible solution of automatically harmonizing indexing formulae based on pre-established concordance tables. However, structural incompatibilities between post-coordinated languages and classifications may lead any harmonization tools up a blind alley, while the paths of a universal concordance model are rare and narrow.
    Date
    1. 8.1996 22:01:00
    Footnote
    Translation of the title: Integration of information data banks and compatibility of indexing languages
  10. Rolling, L.: ¬The role of graphic display of concept relationships in indexing and retrieval vocabularies (1985) 0.01
    0.011676019 = product of:
      0.064218104 = sum of:
        0.015832627 = weight(_text_:of in 3646) [ClassicSimilarity], result of:
          0.015832627 = score(doc=3646,freq=36.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2932045 = fieldWeight in 3646, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3646)
        0.048385475 = weight(_text_:great in 3646) [ClassicSimilarity], result of:
          0.048385475 = score(doc=3646,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.24884763 = fieldWeight in 3646, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3646)
      0.18181819 = coord(2/11)
    
    Abstract
    The use of diagrams to express relationships in classification is not new. Many classificationists have used this approach, but usually in a minor display to make a point or for part of a difficult relational situation. Ranganathan, for example, used diagrams for some of his more elusive concepts. The thesaurus in particular and subject headings in general, with direct and indirect cross-references or equivalents, need many more diagrams than normally are included to make relationships and even semantics clear. A picture very often is worth a thousand words. Rolling has used directed graphs (arrowgraphs) to join terms as a practical method for rendering relationships between indexing terms lucid. He has succeeded very well in this endeavor. Four diagrams in this selection are all that one needs to explain how to employ the system, from initial listing to completed arrowgraph. The samples of his work include illustration of off-page connectors between arrowgraphs. The great advantage to using diagrams like this is that they present relations between individual terms in a format that is easy to comprehend. But of even greater value is the fact that one can use his arrowgraphs as schematics for making three-dimensional wire-and-ball models, in which the relationships may be seen even more clearly. In fact, errors or gaps in relations are much easier to find with this methodology. One also can get across the notion of the three-dimensionality of classification systems with such models. Pettee's "hand reaching up and over" (q.v.) is not a figment of the imagination. While the actual hand is a wire or stick, the concept visualized is helpful in illuminating the three-dimensional figure that is latent in all systems that have cross-references or "broader," "narrower," or, especially, "related" terms. Classification schedules, being hemmed in by the dimensions of the printed page, also benefit from such physical illustrations. Rolling, an engineer by conviction, was the developer of information systems for the Cobalt Institute, the European Atomic Energy Community, and European Coal and Steel Community. He also developed and promoted computer-aided translation at the Commission of the European Communities in Luxembourg. One of his objectives has always been to increase the efficiency of mono- and multilingual thesauri for use in multinational information systems.
    Footnote
    Original in: Classification research: Proceedings of the Second International Study Conference held at Hotel Prins Hamlet, Elsinore, Denmark, 14th-18th Sept. 1964. Ed.: Pauline Atherton. Copenhagen: Munksgaard 1965. S.295-310.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  11. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.01
    0.009523193 = product of:
      0.034918375 = sum of:
        0.0139941955 = weight(_text_:of in 106) [ClassicSimilarity], result of:
          0.0139941955 = score(doc=106,freq=18.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.25915858 = fieldWeight in 106, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=106)
        0.009227889 = weight(_text_:on in 106) [ClassicSimilarity], result of:
          0.009227889 = score(doc=106,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.121501654 = fieldWeight in 106, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=106)
        0.011696288 = product of:
          0.023392577 = sum of:
            0.023392577 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.023392577 = score(doc=106,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.27272728 = coord(3/11)
    
    Abstract
    Purpose - The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach - This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings - Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value - This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
    Source
    Journal of documentation. 77(2021) no.1, S.93-105
  12. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.008329247 = product of:
      0.04581086 = sum of:
        0.0130612515 = weight(_text_:of in 4506) [ClassicSimilarity], result of:
          0.0130612515 = score(doc=4506,freq=2.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.24188137 = fieldWeight in 4506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=4506)
        0.03274961 = product of:
          0.06549922 = sum of:
            0.06549922 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.06549922 = score(doc=4506,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.18181819 = coord(2/11)
    
    Date
    8.10.2000 11:52:22
  13. Kobrin, R.Y.: On the principles of terminological work in the creation of thesauri for information retrieval systems (1979) 0.01
    0.008056271 = product of:
      0.04430949 = sum of:
        0.0184714 = weight(_text_:of in 2954) [ClassicSimilarity], result of:
          0.0184714 = score(doc=2954,freq=4.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.34207192 = fieldWeight in 2954, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=2954)
        0.025838088 = weight(_text_:on in 2954) [ClassicSimilarity], result of:
          0.025838088 = score(doc=2954,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.34020463 = fieldWeight in 2954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.109375 = fieldNorm(doc=2954)
      0.18181819 = coord(2/11)
    
  14. Bonzi, S.: Terminological consistency in abstract and concrete disciplines (1984) 0.01
    0.007352912 = product of:
      0.040441014 = sum of:
        0.014602924 = weight(_text_:of in 2919) [ClassicSimilarity], result of:
          0.014602924 = score(doc=2919,freq=10.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2704316 = fieldWeight in 2919, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2919)
        0.025838088 = weight(_text_:on in 2919) [ClassicSimilarity], result of:
          0.025838088 = score(doc=2919,freq=8.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.34020463 = fieldWeight in 2919, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2919)
      0.18181819 = coord(2/11)
    
    Abstract
    This study tested the hypothesis that the vocabulary of a discipline whose major emphasis is on concrete phenomena will, on the average, have fewer synonyms per concept than will the vocabulary of a discipline whose major emphasis is on abstract phenomena. Subject terms from each of two concrete disciplines and two abstract disciplines were analysed. Results showed that there was a significant difference at the .05 level between concrete and abstract disciplines but that the significant difference was attributable to only one of the abstract disciplines. The other abstract discipline was not significantly different from the two concrete disciplines. It was concluded that although there is some support for the hypothesis, at least one other factor has a stronger influence on terminological consistency than the phenomena with which a subject deals.
    Source
    Journal of documentation. 40(1984), S.247-263
  15. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    0.007191215 = product of:
      0.039551683 = sum of:
        0.016159108 = weight(_text_:of in 6089) [ClassicSimilarity], result of:
          0.016159108 = score(doc=6089,freq=6.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2992506 = fieldWeight in 6089, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=6089)
        0.023392577 = product of:
          0.046785153 = sum of:
            0.046785153 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.046785153 = score(doc=6089,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.18181819 = coord(2/11)
    
    Pages
    S.11-22
    Source
    Compatibility and integration of order systems: Research Seminar Proceedings of the TIP/ISKO Meeting, Warsaw, 13-15 September 1995
  16. Melton, J.S.: ¬A use for the techniques of structural linguistics in documentation research (1965) 0.01
    0.0068307975 = product of:
      0.037569385 = sum of:
        0.016689055 = weight(_text_:of in 834) [ClassicSimilarity], result of:
          0.016689055 = score(doc=834,freq=10.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.3090647 = fieldWeight in 834, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=834)
        0.02088033 = weight(_text_:on in 834) [ClassicSimilarity], result of:
          0.02088033 = score(doc=834,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.27492687 = fieldWeight in 834, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=834)
      0.18181819 = coord(2/11)
    
    Abstract
    Index language (the system of symbols for representing subject content after analysis) is considered as a separate component and a variable in an information retrieval system. It is suggested that for purposes of testing, comparing and evaluating index languages, the techniques of structural linguistics may provide a descriptive methodology by which all such languages (hierarchical and faceted classification, analytico-synthetic indexing, traditional subject indexing, indexes and classifications based on automatic text analysis, etc.) could be described in terms of a linguistic model, and compared on a common basis.
  17. Fugmann, R.: Unusual possibilities in indexing and classification (1990) 0.01
    0.0065226895 = product of:
      0.03587479 = sum of:
        0.02111017 = weight(_text_:of in 4781) [ClassicSimilarity], result of:
          0.02111017 = score(doc=4781,freq=16.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.39093933 = fieldWeight in 4781, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4781)
        0.014764623 = weight(_text_:on in 4781) [ClassicSimilarity], result of:
          0.014764623 = score(doc=4781,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.19440265 = fieldWeight in 4781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=4781)
      0.18181819 = coord(2/11)
    
    Abstract
    Contemporary research in information science has concentrated on the development of methods for the algorithmic processing of natural language texts. This approach is often claimed to be equivalent to the intellectual technique of content analysis and indexing. It is overlooked, however, that contemporary intellectual techniques are far from exploiting their full capabilities. This is largely due to the omission of vocabulary categorisation. It is demonstrated how categorisation can drastically improve the quality of indexing and classification, and, hence, of retrieval.
    Source
    Tools for knowledge organization and the human interface. Proceedings of the 1st International ISKO Conference, Darmstadt, 14.-17.8.1990. Pt.1
  18. Hoerman, H.L.; Furniss, K.A.: Turning practice into principles : a comparison of the IFLA Principles underlying Subject Heading Languages (SHLs) and the principles underlying the Library of Congress Subject Headings system (2000) 0.01
    0.006365898 = product of:
      0.03501244 = sum of:
        0.015832627 = weight(_text_:of in 5611) [ClassicSimilarity], result of:
          0.015832627 = score(doc=5611,freq=16.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2932045 = fieldWeight in 5611, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5611)
        0.01917981 = weight(_text_:on in 5611) [ClassicSimilarity], result of:
          0.01917981 = score(doc=5611,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.25253648 = fieldWeight in 5611, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5611)
      0.18181819 = coord(2/11)
    
    Abstract
    The IFLA Section on Classification and Indexing's Working Group on Principles Underlying Subject Heading Languages has identified a set of eleven principles for subject heading languages and excerpted the texts that match each principle from the instructions for each of eleven national subject indexing systems, including excerpts from the LC's Subject Cataloging Manual: Subject Headings. This study compares the IFLA principles with other texts that express the principles underlying LCSH, especially Library of Congress Subject Headings: Principles of Structure and Policies for Application, prepared by Lois Mai Chan for the Library of Congress in 1990, Chan's later book on LCSH, and earlier documents by Haykin and Cutter. The principles are further elaborated for clarity and discussed.
    Source
    The LCSH century: one hundred years with the Library of Congress Subject Headings system. Ed.: A.T. Stone
  19. ALA / Subcommittee on Subject Relationships/Reference Structures: Final Report to the ALCTS/CCS Subject Analysis Committee (1997) 0.01
    0.0063080443 = product of:
      0.034694243 = sum of:
        0.015315675 = weight(_text_:of in 1800) [ClassicSimilarity], result of:
          0.015315675 = score(doc=1800,freq=44.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.28363106 = fieldWeight in 1800, product of:
              6.6332498 = tf(freq=44.0), with freq of:
                44.0 = termFreq=44.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1800)
        0.019378567 = weight(_text_:on in 1800) [ClassicSimilarity], result of:
          0.019378567 = score(doc=1800,freq=18.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.25515348 = fieldWeight in 1800, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1800)
      0.18181819 = coord(2/11)
    
    Abstract
    The SAC Subcommittee on Subject Relationships/Reference Structures was authorized at the 1995 Midwinter Meeting and appointed shortly before Annual Conference. Its creation was one result of a discussion of how (and why) to promote the display and use of broader-term subject heading references, and its charge reads as follows: To investigate: (1) the kinds of relationships that exist between subjects, the display of which are likely to be useful to catalog users; (2) how these relationships are or could be recorded in authorities and classification formats; (3) options for how these relationships should be presented to users of online and print catalogs, indexes, lists, etc. By the summer 1996 Annual Conference, make some recommendations to SAC about how to disseminate the information and/or implement changes. At that time assess the need for additional time to investigate these issues. The Subcommittee's work on each of the imperatives in the charge was summarized in a report issued at the 1996 Annual Conference (Appendix A). Highlights of this work included the development of a taxonomy of 165 subject relationships; a demonstration that, using existing MARC coding, catalog systems could be programmed to generate references they do not currently support; and an examination of reference displays in several CD-ROM database products. Since that time, work has continued on identifying term relationships and display options; on tracking research, discussion, and implementation of subject relationships in information systems; and on compiling a list of further research needs.
    Content
    Contains: Appendix A: Subcommittee on Subject Relationships/Reference Structures - REPORT TO THE ALCTS/CCS SUBJECT ANALYSIS COMMITTEE - July 1996 Appendix B (part 1): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr - June 1996 draft (alphabetical display) (separately at: http://web2.ala.org/ala/alctscontent/CCS/committees/subjectanalysis/subjectrelations/msrscu2.pdf) Appendix B (part 2): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr - June 1996 draft (hierarchical display) Appendix C: Checklist of Candidate Subject Relationships for Information Retrieval. Compiled by Dee Michel, Pat Kuhr, and Jane Greenberg; edited by Greg Wool - June 1997 Appendix D: Review of Reference Displays in Selected CD-ROM Abstracts and Indexes by Harriette Hemmasi and Steven Riel Appendix E: Analysis of Relationships in Six LC Subject Authority Records by Harriette Hemmasi and Gary Strawn Appendix F: Report of a Preliminary Survey of Subject Referencing in OPACs by Gregory Wool Appendix G: LC Subject Referencing in OPACs - Why Bother? by Gregory Wool Appendix H: Research Needs on Subject Relationships and Reference Structures in Information Access compiled by Jane Greenberg and Steven Riel with contributions from Dee Michel and others; edited by Gregory Wool Appendix I: Bibliography on Subject Relationships compiled mostly by Dee Michel with additional contributions from Jane Greenberg, Steven Riel, and Gregory Wool
  20. Gilchrist, A.: Structure and function in retrieval (2006) 0.01
    0.0062228455 = product of:
      0.03422565 = sum of:
        0.018565401 = weight(_text_:of in 5585) [ClassicSimilarity], result of:
          0.018565401 = score(doc=5585,freq=22.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.34381276 = fieldWeight in 5585, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5585)
        0.015660247 = weight(_text_:on in 5585) [ClassicSimilarity], result of:
          0.015660247 = score(doc=5585,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.20619515 = fieldWeight in 5585, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5585)
      0.18181819 = coord(2/11)
    
    Abstract
    Purpose - This paper forms part of the series "60 years of the best in information research", marking the 60th anniversary of the Journal of Documentation. It aims to review the influence of Brian Vickery's 1971 paper, "Structure and function in retrieval languages". The paper is not an update of Vickery's work, but a comment on a greatly changed environment, in which his analysis still has much validity. Design/methodology/approach - A commentary on selected literature illustrates the continuing relevance of Vickery's ideas. Findings - Generic survey and specific reference are still the main functions of retrieval languages, with minor functional additions such as relevance ranking. New structures are becoming increasingly significant, through developments such as XML. Future developments in artificial intelligence still hold out new prospects. Originality/value - The paper shows the continuing relevance of "traditional" ideas of information science from the 1960s and 1970s.
    Source
    Journal of documentation. 62(2006) no.1, S.21-29

Languages

  • e 82
  • d 3
  • f 3
  • ja 1
  • nl 1

Types

  • a 77
  • m 8
  • s 7
  • el 4
  • r 2
  • d 1