Search (80 results, page 2 of 4)

  • language_ss:"e"
  • theme_ss:"Theorie verbaler Dokumentationssprachen"
  1. Mikacic, M.: Statistical system for subject designation (SSSD) for libraries in Croatia (1996) 0.02
    0.016680822 = product of:
      0.050042465 = sum of:
        0.013504315 = weight(_text_:in in 2943) [ClassicSimilarity], result of:
          0.013504315 = score(doc=2943,freq=6.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.2082456 = fieldWeight in 2943, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2943)
        0.03653815 = product of:
          0.0730763 = sum of:
            0.0730763 = weight(_text_:22 in 2943) [ClassicSimilarity], result of:
              0.0730763 = score(doc=2943,freq=4.0), product of:
                0.16694428 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047673445 = queryNorm
                0.4377287 = fieldWeight in 2943, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2943)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
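    The breakdown above (and the similar blocks under the other results) appears to be Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model. As a rough check, the sketch below recomputes the figures for result 1 from the tf, idf, queryNorm and fieldNorm values shown; the formulas are the standard ClassicSimilarity ones, and the helper function is illustrative rather than part of the search system itself.

      import math

      # Illustrative sketch: recompute the score breakdown shown for result 1
      # (doc 2943). Formulas follow Lucene's ClassicSimilarity; all input
      # numbers are copied from the explain output above.

      def term_weight(freq, idf, query_norm, field_norm):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      query_norm = 0.047673445

      w_in = term_weight(6.0, 1.3602545, query_norm, 0.0625)   # ~0.013504315
      w_22 = term_weight(4.0, 3.5018296, query_norm, 0.0625)   # ~0.0730763

      # "_text_:22" sits in a nested query that matched 1 of 2 sub-clauses,
      # hence coord(1/2) = 0.5; the outer query matched 2 of its 6 clauses,
      # hence coord(2/6) = 1/3.
      total = (w_in + 0.5 * w_22) * (2.0 / 6.0)                # ~0.016680822
      print(round(total, 9))                                   # shown rounded as 0.02

    The same arithmetic accounts for the breakdowns attached to the remaining results; only the term statistics and field norms change.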
    
    Abstract
    Describes the development of the Statistical System for Subject Designation (SSSD): a syntactical system for subject designation for libraries in Croatia, based on the construction of subject headings in agreement with the theory of the sentence nature of subject headings. The discussion is preceded by a brief summary of theories underlying basic principles and fundamental rules of the alphabetical subject catalogue.
    Date
    31. 7.2006 14:22:21
    Source
    Cataloging and classification quarterly. 22(1996) no.1, S.77-93
  2. Kobrin, R.Y.: On the principles of terminological work in the creation of thesauri for information retrieval systems (1979) 0.02
    0.01579374 = product of:
      0.047381222 = sum of:
        0.0136442585 = weight(_text_:in in 2954) [ClassicSimilarity], result of:
          0.0136442585 = score(doc=2954,freq=2.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.21040362 = fieldWeight in 2954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=2954)
        0.033736963 = product of:
          0.067473926 = sum of:
            0.067473926 = weight(_text_:retrieval in 2954) [ClassicSimilarity], result of:
              0.067473926 = score(doc=2954,freq=2.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.46789268 = fieldWeight in 2954, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2954)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
  3. Salton, G.: Experiments in automatic thesaurus construction for information retrieval (1972) 0.02
    0.01579374 = product of:
      0.047381222 = sum of:
        0.0136442585 = weight(_text_:in in 5314) [ClassicSimilarity], result of:
          0.0136442585 = score(doc=5314,freq=2.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.21040362 = fieldWeight in 5314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=5314)
        0.033736963 = product of:
          0.067473926 = sum of:
            0.067473926 = weight(_text_:retrieval in 5314) [ClassicSimilarity], result of:
              0.067473926 = score(doc=5314,freq=2.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.46789268 = fieldWeight in 5314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5314)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
  4. Vickery, B.C.: Structure and function in retrieval languages (1971) 0.02
    0.01579374 = product of:
      0.047381222 = sum of:
        0.0136442585 = weight(_text_:in in 4971) [ClassicSimilarity], result of:
          0.0136442585 = score(doc=4971,freq=2.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.21040362 = fieldWeight in 4971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=4971)
        0.033736963 = product of:
          0.067473926 = sum of:
            0.067473926 = weight(_text_:retrieval in 4971) [ClassicSimilarity], result of:
              0.067473926 = score(doc=4971,freq=2.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.46789268 = fieldWeight in 4971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4971)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
  5. Neelameghan, A.: Lateral relationships in multicultural, multilingual databases in the spiritual and religious domains : the OM Information service (2001) 0.02
    0.015653513 = product of:
      0.04696054 = sum of:
        0.013075498 = weight(_text_:in in 1146) [ClassicSimilarity], result of:
          0.013075498 = score(doc=1146,freq=10.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.20163295 = fieldWeight in 1146, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1146)
        0.033885043 = weight(_text_:u in 1146) [ClassicSimilarity], result of:
          0.033885043 = score(doc=1146,freq=2.0), product of:
            0.15610404 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.047673445 = queryNorm
            0.21706703 = fieldWeight in 1146, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=1146)
      0.33333334 = coord(2/6)
    
    Abstract
    Mapping a multidimensional universe of subjects for linear representation, such as in class number, subject heading, and facet structure, is problematic. Into this context is recalled the near-seminal and postulational approach suggested by S.R. Ranganathan. The non-hierarchical associative relationship or lateral relationship (LR) is distinguished at different levels - among information sources, databases, records of databases, and among concepts (LR-0). Over thirty lateral relationships at the concept level (LR-0) are identified and enumerated with examples from spiritual and religious texts. Special issues relating to LR-0 in multicultural, multilingual databases intended to be used globally by peoples of different cultures and faiths are discussed, using as example the multimedia OM Information Service. Vocabulary assistance for users is described.
    Source
    Relationships in the organization of knowledge. Eds.: Bean, C.A. u. R. Green
  6. Khoo, C.; Chan, S.; Niu, Y.: The many facets of the cause-effect relation (2002) 0.02
    0.015653513 = product of:
      0.04696054 = sum of:
        0.013075498 = weight(_text_:in in 1192) [ClassicSimilarity], result of:
          0.013075498 = score(doc=1192,freq=10.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.20163295 = fieldWeight in 1192, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1192)
        0.033885043 = weight(_text_:u in 1192) [ClassicSimilarity], result of:
          0.033885043 = score(doc=1192,freq=2.0), product of:
            0.15610404 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.047673445 = queryNorm
            0.21706703 = fieldWeight in 1192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=1192)
      0.33333334 = coord(2/6)
    
    Abstract
    This chapter presents a broad survey of the cause-effect relation, with particular emphasis on how the relation is expressed in text. Philosophers have been grappling with the concept of causation for centuries. Researchers in social psychology have found that the human mind has a very complex mechanism for identifying and attributing the cause for an event. Inferring cause-effect relations between events and statements has also been found to be an important part of reading and text comprehension, especially for narrative text. Though many of the cause-effect relations in text are implied and have to be inferred by the reader, there is also a wide variety of linguistic expressions for explicitly indicating cause and effect. In addition, it has been found that certain words have "causal valence" - they bias the reader to attribute cause in certain ways. Cause-effect relations can also be divided into several different types.
    Source
    The semantics of relationships: an interdisciplinary perspective. Eds: Green, R., C.A. Bean u. S.H. Myaeng
  7. Mazzocchi, F.; Plini, P.: Refining thesaurus relational structure : implications and opportunities (2008) 0.02
    0.015193375 = product of:
      0.045580123 = sum of:
        0.011695079 = weight(_text_:in in 5448) [ClassicSimilarity], result of:
          0.011695079 = score(doc=5448,freq=8.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.18034597 = fieldWeight in 5448, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5448)
        0.033885043 = weight(_text_:u in 5448) [ClassicSimilarity], result of:
          0.033885043 = score(doc=5448,freq=2.0), product of:
            0.15610404 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.047673445 = queryNorm
            0.21706703 = fieldWeight in 5448, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=5448)
      0.33333334 = coord(2/6)
    
    Abstract
    In this paper the possibility of developing a richer relational structure for thesauri is explored and described. The development of a new environmental thesaurus - EARTh (Environmental Applications Reference Thesaurus) - serves as a case study for exploring the refinement of thesaurus relational structure by specialising standard relationships into different subtypes. Along with the benefits and opportunities, the implications and possible challenges that an expanded set of thesaurus relations may raise are evaluated.
    Series
    Fortschritte in der Wissensorganisation; Bd.10
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch
  8. Beghtol, C.: Relationships in classificatory structure and meaning (2001) 0.02
    0.015193375 = product of:
      0.045580123 = sum of:
        0.011695079 = weight(_text_:in in 1138) [ClassicSimilarity], result of:
          0.011695079 = score(doc=1138,freq=8.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.18034597 = fieldWeight in 1138, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1138)
        0.033885043 = weight(_text_:u in 1138) [ClassicSimilarity], result of:
          0.033885043 = score(doc=1138,freq=2.0), product of:
            0.15610404 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.047673445 = queryNorm
            0.21706703 = fieldWeight in 1138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=1138)
      0.33333334 = coord(2/6)
    
    Abstract
    In a changing information environment, we need to reassess each element of bibliographic control, including classification theories and systems. Every classification system is a theoretical construct imposed on "reality." The classificatory relationships that are assumed to be valuable have generally received less attention than the topics included in the systems. Relationships are functions of both the syntactic and semantic axes of classification systems, and both explicit and implicit relationships are discussed. Examples are drawn from a number of different systems, both bibliographic and non-bibliographic, and the cultural warrant (i.e., the sociocultural context) of classification systems is examined. The part-whole relationship is discussed as an example of a universally valid concept that is treated as a component of the cultural warrant of a classification system.
    Source
    Relationships in the organization of knowledge. Eds.: Bean, C.A. u. R. Green
  9. Schmitz-Esser, W.: Formalizing terminology-based knowledge for an ontology independently of a particular language (2008) 0.02
    0.015193375 = product of:
      0.045580123 = sum of:
        0.011695079 = weight(_text_:in in 1680) [ClassicSimilarity], result of:
          0.011695079 = score(doc=1680,freq=8.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.18034597 = fieldWeight in 1680, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1680)
        0.033885043 = weight(_text_:u in 1680) [ClassicSimilarity], result of:
          0.033885043 = score(doc=1680,freq=2.0), product of:
            0.15610404 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.047673445 = queryNorm
            0.21706703 = fieldWeight in 1680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=1680)
      0.33333334 = coord(2/6)
    
    Abstract
    The latest ontological thought and practice is exemplified by an axiomatic framework [a model for an Integrative Cross-Language Ontology (ICLO), cf. Poli, R., Schmitz-Esser, W., forthcoming 2007] that is highly general, based on natural language, multilingual, can be implemented as topic maps, and may be openly enhanced by software available for particular languages. Basics of ontological modelling, conditions for construction and maintenance, and the most salient points in application are addressed, such as cross-language text mining and knowledge generation. The rationale is to open readers' eyes to the tremendous potential of terminology-based ontologies for principled Knowledge Organization and the interchange and reuse of formalized knowledge.
    Series
    Fortschritte in der Wissensorganisation; Bd.10
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch
  10. Vickery, B.C.: Structure and function in retrieval languages (2006) 0.02
    0.015152246 = product of:
      0.045456737 = sum of:
        0.01653934 = weight(_text_:in in 5584) [ClassicSimilarity], result of:
          0.01653934 = score(doc=5584,freq=16.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.25504774 = fieldWeight in 5584, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5584)
        0.028917395 = product of:
          0.05783479 = sum of:
            0.05783479 = weight(_text_:retrieval in 5584) [ClassicSimilarity], result of:
              0.05783479 = score(doc=5584,freq=8.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.40105087 = fieldWeight in 5584, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5584)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose - The purpose of this paper is to summarize the varied structural characteristics which may be present in retrieval languages. Design/methodology/approach - The languages serve varied purposes in information systems, and a number of these are identified. The relations between structure and function are discussed and suggestions made as to the most suitable structures needed for various purposes. Findings - A quantitative approach has been developed: a simple measure is the number of separate terms in a retrieval language, but this has to be related to the scope of its subject field. Some ratio of terms to items in the field seems a more suitable measure of the average specificity of the terms. Other aspects can be quantified - for example, the average number of links in hierarchical chains, or the average number of cross-references in a thesaurus. Originality/value - All the approaches to the analysis of retrieval language reported in this paper are of continuing value. Some practical studies of computer information systems undertaken by Aslib Research Department have suggested a further approach.
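    The quantitative approach described above lends itself to simple computation. The sketch below illustrates the kinds of measures the abstract names - the ratio of terms to items in the subject field as a proxy for average specificity, the average number of links in hierarchical chains, and the average number of cross-references per term. The toy thesaurus and the collection size are invented for illustration and are not taken from Vickery's paper.

      # Hypothetical data: a tiny thesaurus fragment and an assumed collection size.
      broader = {                  # term -> its broader term (hierarchical link)
          "poodles": "dogs",
          "dogs": "mammals",
          "cats": "mammals",
          "mammals": "animals",
          "animals": None,
      }
      related = {                  # term -> cross-referenced (related) terms
          "dogs": ["pets"],
          "cats": ["pets"],
          "pets": ["dogs", "cats"],
          "poodles": [], "mammals": [], "animals": [],
      }
      items_in_field = 120         # assumed number of indexed items in the field

      terms = set(broader) | set(related)
      specificity_ratio = len(terms) / items_in_field   # terms per indexed item

      def chain_length(term):
          # count hierarchical links from a term up to the top of its chain
          links = 0
          while broader.get(term):
              term = broader[term]
              links += 1
          return links

      leaves = [t for t in broader if t not in set(broader.values())]
      avg_chain = sum(chain_length(t) for t in leaves) / len(leaves)
      avg_cross_refs = sum(len(v) for v in related.values()) / len(related)

      print(len(terms), specificity_ratio, avg_chain, avg_cross_refs)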
  11. Fugmann, R.: The analytico-synthetic foundation for large indexing & information retrieval systems : dedicated to Prof. Dr. Werner Schultheis, the vigorous initiator of modern chem. documentation in Germany on the occasion of his 85th birthday (1983) 0.01
    0.014805719 = product of:
      0.044417158 = sum of:
        0.011026227 = weight(_text_:in in 215) [ClassicSimilarity], result of:
          0.011026227 = score(doc=215,freq=4.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.17003182 = fieldWeight in 215, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=215)
        0.03339093 = product of:
          0.06678186 = sum of:
            0.06678186 = weight(_text_:retrieval in 215) [ClassicSimilarity], result of:
              0.06678186 = score(doc=215,freq=6.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.46309367 = fieldWeight in 215, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=215)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Footnote
    Rez. in: International classification 12(1985) S.106 (L. Kalok)
    LCSH
    Information retrieval
    Subject
    Information retrieval
  12. Lopes, M.I.: Principles underlying subject heading languages : an international approach (1996) 0.01
    0.014287109 = product of:
      0.042861324 = sum of:
        0.0136442585 = weight(_text_:in in 5608) [ClassicSimilarity], result of:
          0.0136442585 = score(doc=5608,freq=8.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.21040362 = fieldWeight in 5608, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5608)
        0.029217066 = product of:
          0.058434132 = sum of:
            0.058434132 = weight(_text_:retrieval in 5608) [ClassicSimilarity], result of:
              0.058434132 = score(doc=5608,freq=6.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.40520695 = fieldWeight in 5608, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5608)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Discusses the problems in establishing commonly accepted principles for subject retrieval between different bibliographic systems. The Working Group on Principles Underlying Subject Heading Languages was established to devise general principles for any subject retrieval system and to review existing real systems in the light of such principles and compare them in order to evaluate the extent of their coverage and their application in current practices. Provides a background and history of the Working Group. Discusses the principles underlying subject headings and their purposes, the state of the work, and the major findings.
    Theme
    Verbale Doksprachen im Online-Retrieval
  13. Green, R.: Syntagmatic relationships in index languages : a reassessment (1995) 0.01
    0.013522124 = product of:
      0.04056637 = sum of:
        0.016710738 = weight(_text_:in in 3144) [ClassicSimilarity], result of:
          0.016710738 = score(doc=3144,freq=12.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.2576908 = fieldWeight in 3144, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3144)
        0.023855632 = product of:
          0.047711264 = sum of:
            0.047711264 = weight(_text_:retrieval in 3144) [ClassicSimilarity], result of:
              0.047711264 = score(doc=3144,freq=4.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.33085006 = fieldWeight in 3144, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3144)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Effective use of syntagmatic relationships in index languages has suffered from inaccurate or incomplete characterization in both linguistics and information science. A number of 'myths' about syntagmatic relationships are debunked: the exclusivity of paradigmatic and syntagmatic relationships, linearity as a defining characteristic of syntagmatic relationships, the restriction of syntagmatic relationships to surface linguistic units, the limitation of syntagmatic relationship benefits in document retrieval to precision, and the general irrelevance of syntagmatic relationships for document retrieval. None of the mechanisms currently used with index languages is powerful enough to achieve the levels of precision and recall that the expression of conceptual syntagmatic relationships is in theory capable of. New designs for expressing these relationships in index languages will need to take into account such characteristics as their semantic nature, systematicity, generalizability and constituent nature
  14. Engerer, V.: Control and syntagmatization : vocabulary requirements in information retrieval thesauri and natural language lexicons (2017) 0.01
    0.013504779 = product of:
      0.040514335 = sum of:
        0.015471136 = weight(_text_:in in 3678) [ClassicSimilarity], result of:
          0.015471136 = score(doc=3678,freq=14.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.23857531 = fieldWeight in 3678, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3678)
        0.0250432 = product of:
          0.0500864 = sum of:
            0.0500864 = weight(_text_:retrieval in 3678) [ClassicSimilarity], result of:
              0.0500864 = score(doc=3678,freq=6.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.34732026 = fieldWeight in 3678, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3678)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper explores the relationships between natural language lexicons in lexical semantics and thesauri in information retrieval research. These different areas of knowledge have different restrictions on use of vocabulary; thesauri are used only in information search and retrieval contexts, whereas lexicons are mental systems and generally applicable in all domains of life. A set of vocabulary requirements that defines the more concrete characteristics of vocabulary items in the 2 contexts can be derived from this framework: lexicon items have to be learnable, complex, transparent, etc., whereas thesaurus terms must be effective, current and relevant, searchable, etc. The differences in vocabulary properties correlate with 2 other factors, the well-known dimension of Control (deliberate, social activities of building and maintaining vocabularies), and Syntagmatization, which is less known and describes vocabulary items' varying formal preparedness to exit the thesaurus/lexicon, enter into linear syntactic constructions, and, finally, acquire communicative functionality. It is proposed that there is an inverse relationship between Control and Syntagmatization.
  15. Gilchrist, A.: Structure and function in retrieval (2006) 0.01
    0.012706233 = product of:
      0.038118698 = sum of:
        0.013075498 = weight(_text_:in in 5585) [ClassicSimilarity], result of:
          0.013075498 = score(doc=5585,freq=10.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.20163295 = fieldWeight in 5585, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5585)
        0.0250432 = product of:
          0.0500864 = sum of:
            0.0500864 = weight(_text_:retrieval in 5585) [ClassicSimilarity], result of:
              0.0500864 = score(doc=5585,freq=6.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.34732026 = fieldWeight in 5585, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5585)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose - This paper forms part of the series "60 years of the best in information research", marking the 60th anniversary of the Journal of Documentation. It aims to review the influence of Brian Vickery's 1971 paper, "Structure and function in retrieval languages". The paper is not an update of Vickery's work, but a comment on a greatly changed environment, in which his analysis still has much validity. Design/methodology/approach - A commentary on selected literature illustrates the continuing relevance of Vickery's ideas. Findings - Generic survey and specific reference are still the main functions of retrieval languages, with minor functional additions such as relevance ranking. New structures are becoming increasingly significant, through developments such as XML. Future developments in artificial intelligence hold out new prospects still. Originality/value - The paper shows the continuing relevance of "traditional" ideas of information science from the 1960s and 1970s.
  16. Melton, J.S.: A use for the techniques of structural linguistics in documentation research (1965) 0.01
    0.010927526 = product of:
      0.032782577 = sum of:
        0.013504315 = weight(_text_:in in 834) [ClassicSimilarity], result of:
          0.013504315 = score(doc=834,freq=6.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.2082456 = fieldWeight in 834, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=834)
        0.019278264 = product of:
          0.038556527 = sum of:
            0.038556527 = weight(_text_:retrieval in 834) [ClassicSimilarity], result of:
              0.038556527 = score(doc=834,freq=2.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.26736724 = fieldWeight in 834, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=834)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Index language (the system of symbols for representing subject content after analysis) is considered as a separate component and a variable in an information retrieval system. It is suggested that for purposes of testing, comparing and evaluating index language, the techniques of structural linguistics may provide a descriptive methodology by which all such languages (hierarchical and faceted classification, analytico-synthetic indexing, traditional subject indexing, indexes and classifications based on automatic text analysis, etc.) could be described in terms of a linguistic model, and compared on a common basis.
  17. Fugmann, R.: Unusual possibilities in indexing and classification (1990) 0.01
    0.010927526 = product of:
      0.032782577 = sum of:
        0.013504315 = weight(_text_:in in 4781) [ClassicSimilarity], result of:
          0.013504315 = score(doc=4781,freq=6.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.2082456 = fieldWeight in 4781, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4781)
        0.019278264 = product of:
          0.038556527 = sum of:
            0.038556527 = weight(_text_:retrieval in 4781) [ClassicSimilarity], result of:
              0.038556527 = score(doc=4781,freq=2.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.26736724 = fieldWeight in 4781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4781)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Contemporary research in information science has concentrated on the development of methods for the algorithmic processing of natural language texts. The equivalence of this approach to the intellectual technique of content analysis and indexing is often claimed; this, however, disregards that contemporary intellectual techniques are far from exploiting their full capabilities, largely because of the omission of vocabulary categorisation. It is demonstrated how categorisation can drastically improve the quality of indexing and classification, and, hence, of retrieval.
    Series
    Advances in knowledge organization; vol.1
  18. Svenonius, E.: Unanswered questions in the design of controlled vocabularies (1986) 0.01
    0.010927526 = product of:
      0.032782577 = sum of:
        0.013504315 = weight(_text_:in in 584) [ClassicSimilarity], result of:
          0.013504315 = score(doc=584,freq=6.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.2082456 = fieldWeight in 584, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=584)
        0.019278264 = product of:
          0.038556527 = sum of:
            0.038556527 = weight(_text_:retrieval in 584) [ClassicSimilarity], result of:
              0.038556527 = score(doc=584,freq=2.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.26736724 = fieldWeight in 584, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=584)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The issue of free-text versus controlled vocabulary is examined in this article. The history of the issue, which is seen as beginning with the debate over title term indexing in the last century, is reviewed, and attention is turned to questions which have not been satisfactorily addressed by previous research. The point is made that these questions need to be answered if we are to design retrieval tools, such as thesauri, on a rational basis.
  19. Fugmann, R.: The complementarity of natural and indexing languages (1985) 0.01
    0.010324447 = product of:
      0.030973341 = sum of:
        0.011695079 = weight(_text_:in in 3641) [ClassicSimilarity], result of:
          0.011695079 = score(doc=3641,freq=18.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.18034597 = fieldWeight in 3641, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3641)
        0.019278264 = product of:
          0.038556527 = sum of:
            0.038556527 = weight(_text_:retrieval in 3641) [ClassicSimilarity], result of:
              0.038556527 = score(doc=3641,freq=8.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.26736724 = fieldWeight in 3641, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3641)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that the achieving of this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead, and increasing support was given to the opinion that natural language information systems could perform at least as effectively, and certainly more economically, than those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions - particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds. Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but also referring to this concept are numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.
    Footnote
    Original in: International classification 9(1982) no.3, S.140-144.
  20. Dextre Clarke, S.G.; Gilchrist, A.; Will, L.: Revision and extension of thesaurus standards (2004) 0.01
    0.009229176 = product of:
      0.027687527 = sum of:
        0.014055736 = weight(_text_:in in 2615) [ClassicSimilarity], result of:
          0.014055736 = score(doc=2615,freq=26.0), product of:
            0.06484802 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.047673445 = queryNorm
            0.2167489 = fieldWeight in 2615, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2615)
        0.013631791 = product of:
          0.027263582 = sum of:
            0.027263582 = weight(_text_:retrieval in 2615) [ClassicSimilarity], result of:
              0.027263582 = score(doc=2615,freq=4.0), product of:
                0.14420812 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047673445 = queryNorm
                0.18905719 = fieldWeight in 2615, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2615)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The current standards for monolingual and multilingual thesauri are long overdue for an update. This applies to the international standards ISO 2788 and ISO 5964, as well as the corresponding national standards in several countries and the American standard ANSI/NISO Z39.19. Work is now under way in the UK and in the USA to revise and extend the standards, with particular emphasis on interoperability needs in our world of vast electronic networks. Work in the UK is starting with the British Standards, in the hope of leading on to one international standard to serve all. Some of the issues still under discussion include the treatment of facet analysis, coverage of additional types of controlled vocabulary such as classification schemes, taxonomies and ontologies, and mapping from one vocabulary to another. 1. Are thesaurus standards still needed? Since the 1960s, even before the renowned Cranfield experiments (Cleverdon et al., 1966; Cleverdon, 1967), arguments have raged over the usefulness or otherwise of controlled vocabularies. The case has never been proved definitively one way or the other. At the same time, a recognition has become widespread that no one search method can answer all retrieval requirements. In today's environment of very large networks of resources, the skilled information professional uses a range of techniques. Among these, controlled vocabularies are valued alongside others. The first international standard for monolingual thesauri was issued in 1974. In those days, the main application was for postcoordinate indexing and retrieval from document collections or bibliographic databases. For many information professionals the only practicable alternative to a thesaurus was a classification scheme. And so the thesaurus developed a strong following. After computer systems with full text search capability became widely available, however, the arguments against controlled vocabularies gained more followers. The cost of building and maintaining a thesaurus or a classification scheme was a strong disincentive. Today's databases are typically immense compared with those three decades ago. Full text searching is taken for granted, not just in discrete databases but across all the resources in an intranet or even the Internet. But intranets have brought particular frustration as users discover that despite all the computer power, they cannot find items which they know to be present on the network. So the trend against controlled vocabularies is now being reversed, as many information professionals are turning to them for help. Standards to guide them are still in demand.
    Series
    Advances in knowledge organization; vol.9

Types

  • a 69
  • m 7
  • s 7
  • el 4
  • r 2