Search (105 results, page 2 of 6)

  • theme_ss:"Theorie verbaler Dokumentationssprachen"
  1. Fugmann, R.: ¬Die Grenzen des Thesaurus-Verfahrens bei der Wiedergabe von Begriffsrelationen (1975) 0.00
    0.0035395343 = product of:
      0.010618603 = sum of:
        0.010618603 = weight(_text_:a in 4765) [ClassicSimilarity], result of:
          0.010618603 = score(doc=4765,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 4765, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=4765)
      0.33333334 = coord(1/3)
    
    Type
    a
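The score breakdowns in these results follow Lucene's ClassicSimilarity (TF-IDF) formula: tf = sqrt(termFreq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the displayed score is queryWeight × fieldWeight scaled by the coord factor. A minimal sketch reproducing the first entry's score from the constants shown above (the variable names are ours, not Lucene's):

```python
import math

# Reproduces the ClassicSimilarity "explain" tree shown for result 1
# (doc 4765). All constants are copied verbatim from the breakdown above.
query_norm = 0.045180224   # queryNorm
idf = 1.153047             # idf(docFreq=37942, maxDocs=44218)
freq = 2.0                 # termFreq of "_text_:a" in the field
field_norm = 0.125         # fieldNorm(doc=4765)
coord = 1.0 / 3.0          # coord(1/3): 1 of 3 query clauses matched

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.05209492
field_weight = tf * idf * field_norm  # 0.20383182
score = query_weight * field_weight * coord

print(f"{score:.10f}")  # ≈ 0.0035395343, the ranking score displayed above
```

The same arithmetic, with different freq and fieldNorm values, accounts for every score tree on this page.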
  2. Fox, E.A.: Lexical relations : enhancing effectiveness of information retrieval systems (1980) 0.00
    0.0035395343 = product of:
      0.010618603 = sum of:
        0.010618603 = weight(_text_:a in 5310) [ClassicSimilarity], result of:
          0.010618603 = score(doc=5310,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 5310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=5310)
      0.33333334 = coord(1/3)
    
    Type
    a
  3. Fugmann, R.: ¬The complementarity of natural and indexing languages (1982) 0.00
    0.0035395343 = product of:
      0.010618603 = sum of:
        0.010618603 = weight(_text_:a in 7648) [ClassicSimilarity], result of:
          0.010618603 = score(doc=7648,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 7648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=7648)
      0.33333334 = coord(1/3)
    
    Type
    a
  4. Haldenwanger, H.H.M.: Begriff und Sprache in der Dokumentation (1961) 0.00
    0.0035395343 = product of:
      0.010618603 = sum of:
        0.010618603 = weight(_text_:a in 690) [ClassicSimilarity], result of:
          0.010618603 = score(doc=690,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=690)
      0.33333334 = coord(1/3)
    
    Type
    a
  5. Dahlberg, I.: Über Gegenstände, Begriffe, Definitionen und Benennungen: zur möglichen Neufassung von DIN 2330 (1976) 0.00
    0.0035395343 = product of:
      0.010618603 = sum of:
        0.010618603 = weight(_text_:a in 1390) [ClassicSimilarity], result of:
          0.010618603 = score(doc=1390,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 1390, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=1390)
      0.33333334 = coord(1/3)
    
    Type
    a
  6. Beghtol, C.: Relationships in classificatory structure and meaning (2001) 0.00
    0.0035117732 = product of:
      0.010535319 = sum of:
        0.010535319 = weight(_text_:a in 1138) [ClassicSimilarity], result of:
          0.010535319 = score(doc=1138,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20223314 = fieldWeight in 1138, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1138)
      0.33333334 = coord(1/3)
    
    Abstract
     In a changing information environment, we need to reassess each element of bibliographic control, including classification theories and systems. Every classification system is a theoretical construct imposed on "reality." The classificatory relationships that are assumed to be valuable have generally received less attention than the topics included in the systems. Relationships are functions of both the syntactic and semantic axes of classification systems, and both explicit and implicit relationships are discussed. Examples are drawn from a number of different systems, both bibliographic and non-bibliographic, and the cultural warrant (i.e., the sociocultural context) of classification systems is examined. The part-whole relationship is discussed as an example of a universally valid concept that is treated as a component of the cultural warrant of a classification system.
    Type
    a
  7. Szostak, R.: Facet analysis using grammar (2017) 0.00
    0.0034978096 = product of:
      0.010493428 = sum of:
        0.010493428 = weight(_text_:a in 3866) [ClassicSimilarity], result of:
          0.010493428 = score(doc=3866,freq=20.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20142901 = fieldWeight in 3866, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3866)
      0.33333334 = coord(1/3)
    
    Abstract
     Basic grammar can achieve most, if not all, of the goals of facet analysis without requiring the use of facet indicators. Facet analysis is thus rendered far simpler for the classificationist, the classifier, and the user. We compare facet analysis and grammar, and show how various facets can be represented grammatically. We then address potential challenges in employing grammar as subject classification. A detailed review of basic grammar supports the hypothesis that it is feasible to usefully employ grammatical construction in subject classification. A manageable - and programmable - set of adjustments is required as classifiers move fairly directly from sentences in a document (or object or idea) description to formulating a subject classification. The user likewise can move fairly quickly from a query to the identification of relevant works. A review of theories in linguistics indicates that a grammatical approach should reduce ambiguity while encouraging ease of use. This paper applies the recommended approach to a small sample of recently published books. It finds that the approach is feasible and results in a more precise subject description than the subject headings assigned at present. It then explores PRECIS, an indexing system developed in the 1970s. Though our approach differs from PRECIS in many important ways, the experience of PRECIS supports our conclusions regarding both feasibility and precision.
    Type
    a
  8. Foskett, D.J.: Classification and integrative levels (1985) 0.00
    0.003462655 = product of:
      0.010387965 = sum of:
        0.010387965 = weight(_text_:a in 3639) [ClassicSimilarity], result of:
          0.010387965 = score(doc=3639,freq=40.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19940455 = fieldWeight in 3639, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3639)
      0.33333334 = coord(1/3)
    
    Abstract
     Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject: systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible in that if one once reached a certain level there was no going back.
Foskett points out that with higher levels and greater complexity in structure the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  9. Courrier, Y.: SYNTOL (2009) 0.00
    0.003462655 = product of:
      0.010387965 = sum of:
        0.010387965 = weight(_text_:a in 3887) [ClassicSimilarity], result of:
          0.010387965 = score(doc=3887,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19940455 = fieldWeight in 3887, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3887)
      0.33333334 = coord(1/3)
    
    Abstract
    In the 1960s and 1970s, a lot of work was done to develop indexing languages and models of indexing languages, in order to be able to produce the more specific indexing needed for highly specialized scientific papers. SYNTOL was a major contribution of the French to this activity. SYNTOL as a model was based on the linguistic distinction between paradigmatic and syntagmatic relations of words, and was intended to supply a complete and flexible platform for its own and other indexing languages.
    Type
    a
  10. Fugmann, R.: ¬The complementarity of natural and indexing languages (1985) 0.00
    0.0033109314 = product of:
      0.009932794 = sum of:
        0.009932794 = weight(_text_:a in 3641) [ClassicSimilarity], result of:
          0.009932794 = score(doc=3641,freq=28.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19066721 = fieldWeight in 3641, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3641)
      0.33333334 = coord(1/3)
    
    Abstract
     The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that the achieving of this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead, and increasing support was given to the opinion that natural language information systems could perform at least as effectively, and certainly more economically, than those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions, particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds.
Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts, represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but also referring to this concept are numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  11. Hudon, M.: ¬A preliminary investigation of the usefulness of semantic relations and of standardized definitions for the purpose of specifying meaning in a thesaurus (1998) 0.00
    0.00325127 = product of:
      0.009753809 = sum of:
        0.009753809 = weight(_text_:a in 55) [ClassicSimilarity], result of:
          0.009753809 = score(doc=55,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18723148 = fieldWeight in 55, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
      0.33333334 = coord(1/3)
    
    Abstract
     The terminological consistency of indexers working with a thesaurus as indexing aid remains low. This suggests that indexers cannot perceive easily or very clearly the meaning of each descriptor available as index term. This paper presents the background and some of the findings of a small-scale experiment designed to study the effect on interindexer terminological consistency of modifying the nature of the semantic information given with descriptors in a thesaurus. The study also provided some insights into the respective usefulness of standardized definitions and of traditional networks of hierarchical and associative relationships as means of providing essential meaning information in the thesaurus used as indexing aid.
    Type
    a
  12. Körner, H.G.: Syntax und Gewichtung in Informationssprachen : Ein Fortschrittsbericht über präzisere Indexierung und Computer-Suche (1985) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 281) [ClassicSimilarity], result of:
          0.009291277 = score(doc=281,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=281)
      0.33333334 = coord(1/3)
    
    Type
    a
  13. Kobrin, R.Y.: On the principles of terminological work in the creation of thesauri for information retrieval systems (1979) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 2954) [ClassicSimilarity], result of:
          0.009291277 = score(doc=2954,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 2954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=2954)
      0.33333334 = coord(1/3)
    
    Type
    a
  14. Dietze, J.: ¬Die semantische Struktur der Thesauruslexik (1988) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 6051) [ClassicSimilarity], result of:
          0.009291277 = score(doc=6051,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 6051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=6051)
      0.33333334 = coord(1/3)
    
    Type
    a
  15. Svenonius, E.: Design of controlled vocabularies (1990) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 1271) [ClassicSimilarity], result of:
          0.009291277 = score(doc=1271,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 1271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=1271)
      0.33333334 = coord(1/3)
    
    Type
    a
  16. Green, R.: Syntagmatic relationships in index languages : a reassessment (1995) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 3144) [ClassicSimilarity], result of:
          0.009291277 = score(doc=3144,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 3144, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3144)
      0.33333334 = coord(1/3)
    
    Abstract
    Effective use of syntagmatic relationships in index languages has suffered from inaccurate or incomplete characterization in both linguistics and information science. A number of 'myths' about syntagmatic relationships are debunked: the exclusivity of paradigmatic and syntagmatic relationships, linearity as a defining characteristic of syntagmatic relationships, the restriction of syntagmatic relationships to surface linguistic units, the limitation of syntagmatic relationship benefits in document retrieval to precision, and the general irrelevance of syntagmatic relationships for document retrieval. None of the mechanisms currently used with index languages is powerful enough to achieve the levels of precision and recall that the expression of conceptual syntagmatic relationships is in theory capable of. New designs for expressing these relationships in index languages will need to take into account such characteristics as their semantic nature, systematicity, generalizability and constituent nature
    Type
    a
  17. Kuhlen, R.: Linguistische Grundlagen (1980) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 3829) [ClassicSimilarity], result of:
          0.009291277 = score(doc=3829,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=3829)
      0.33333334 = coord(1/3)
    
    Type
    a
  18. Vickery, B.C.: Structure and function in retrieval languages (1971) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 4971) [ClassicSimilarity], result of:
          0.009291277 = score(doc=4971,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 4971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4971)
      0.33333334 = coord(1/3)
    
    Type
    a
  19. Barite, M.G.: ¬The notion of "category" : its implications in subject analysis and in the construction and evaluation of indexing languages (2000) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 6036) [ClassicSimilarity], result of:
          0.009291277 = score(doc=6036,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 6036, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6036)
      0.33333334 = coord(1/3)
    
    Abstract
    The notion of category, from Aristotle and Kant to the present time, has been used as a basic intellectual tool for the analysis of the existence and changeableness of things. Ranganathan was the first to extrapolate the concept into the Theory of Classification, placing it as an essential axis for the logical organization of knowledge and the construction of indexing languages. This paper proposes a conceptual and methodological reexamination of the notion of category from a functional and instrumental perspective, and tries to clarify the essential characters of categories in that context, and their present implications regarding the construction and evaluation of indexing languages
    Type
    a
  20. Farradane, J.: Concept organization for information retrieval (1967) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 35) [ClassicSimilarity], result of:
          0.009291277 = score(doc=35,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 35, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=35)
      0.33333334 = coord(1/3)
    
    Type
    a

Languages

  • e 81
  • d 20
  • f 3
  • ja 1

Types

  • a 96
  • m 5
  • s 5
  • el 3
  • r 2

Classifications