Search (24 results, page 1 of 2)

  • theme_ss:"Theorie verbaler Dokumentationssprachen"
  • year_i:[1980 TO 1990}
  1. Fugmann, R.: ¬Die Funktion von semantischen Kategorien in Indexierungssprachen und bei der Indexierung (1986) 0.02
    0.020612791 = product of:
      0.041225582 = sum of:
        0.039231222 = weight(_text_:von in 1554) [ClassicSimilarity], result of:
          0.039231222 = score(doc=1554,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.30633342 = fieldWeight in 1554, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.046875 = fieldNorm(doc=1554)
        0.001994362 = product of:
          0.005983086 = sum of:
            0.005983086 = weight(_text_:a in 1554) [ClassicSimilarity], result of:
              0.005983086 = score(doc=1554,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.10809815 = fieldWeight in 1554, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1554)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
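    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each term clause contributes queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), with tf = sqrt(term frequency) and idf = 1 + ln(maxDocs / (docFreq + 1)); coord factors then down-weight clause groups that only partially matched. A minimal Python sketch (not part of the search system, merely reproducing the arithmetic printed above) that recovers the 0.0206 score of hit no. 1:

      import math

      QUERY_NORM = 0.04800207                       # queryNorm as printed in the explain tree

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf: idf(docFreq=8340, maxDocs=44218) -> ~2.6679487
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause_score(freq, doc_freq, max_docs, field_norm):
          # score = queryWeight * fieldWeight for a single term clause
          term_idf = idf(doc_freq, max_docs)
          query_weight = term_idf * QUERY_NORM                      # e.g. 2.6679487 * 0.04800207
          field_weight = math.sqrt(freq) * term_idf * field_norm    # tf * idf * fieldNorm
          return query_weight * field_weight

      # weight(_text_:von in 1554): freq=6, docFreq=8340, fieldNorm=0.046875
      von = clause_score(6.0, 8340, 44218, 0.046875)                # ~0.039231
      # weight(_text_:a in 1554): freq=4, docFreq=37942, scaled by coord(1/3)
      a = clause_score(4.0, 37942, 44218, 0.046875) * (1.0 / 3.0)   # ~0.001994
      print((von + a) * (2.0 / 4.0))                                # ~0.0206128, the listed score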
    
    Abstract
    If "indexing" is understood as the two-step process of (a) recognizing the essence of a text that is to be made retrievable and (b) rendering this essence in a sufficiently faithful and sufficiently predictable form, then the quality of indexing can be raised by giving particular attention to concepts from a small number of especially important semantic categories. When the indexing language is designed, the concepts from these categories must be included in the vocabulary at the required level of detail, and precombinations that lead to "multicategorial" concepts are to be avoided as far as possible. Precombinations formed exclusively by drawing in frequently occurring ("ubiquitous") monocategorial concepts can and should be admitted to the vocabulary for pragmatic reasons. The concept of the "relational path" (Relationenweg) explains why such precombinations do no harm to the vocabulary.
    Type
    a
  2. Fugmann, R.: Theoretische Grundlagen der Indexierungspraxis (1985) 0.02
    0.020050319 = product of:
      0.040100638 = sum of:
        0.03775026 = weight(_text_:von in 280) [ClassicSimilarity], result of:
          0.03775026 = score(doc=280,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.29476947 = fieldWeight in 280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.078125 = fieldNorm(doc=280)
        0.002350378 = product of:
          0.007051134 = sum of:
            0.007051134 = weight(_text_:a in 280) [ClassicSimilarity], result of:
              0.007051134 = score(doc=280,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12739488 = fieldWeight in 280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=280)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    Contains, in readily understandable form, among other things a presentation of the concepts of "predictability" and "fidelity of representation" for elements of an indexing language.
    Type
    a
  3. Neet, H.: Assoziationsrelationen in Dokumentationslexika für die verbale Sacherschließung (1984) 0.02
    0.016882429 = product of:
      0.067529716 = sum of:
        0.067529716 = weight(_text_:von in 1254) [ClassicSimilarity], result of:
          0.067529716 = score(doc=1254,freq=10.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.52729964 = fieldWeight in 1254, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=1254)
      0.25 = coord(1/4)
    
    Abstract
    Thesauri and documentation lexica can be regarded as variants of onomasiological dictionaries; their particular interest for linguistics lies in the fact that equivalence, hierarchy, and association relations are made explicit. Rule sets and contributions dealing with the designation of "related" concepts in library and documentation practice are reviewed. Examples of "see also" and "related term" references are listed from three German-language subject indexes. The association relations are divided into paradigmatic and syntagmatic relations; groupings by conceptual fields and association fields are also possible. Investigations of association relations in the subject area "Buchwesen" (the book trade) confirm the assumption that the majority of the references concern the joint occurrence of certain concepts in typical contexts of extralinguistic reality.
  4. Krömmelbein, U.: Linguistische und fachwissenschaftliche Gesichtspunkte der Schlagwortsyntax : Eine vergleichende Untersuchung der Regeln für die Schlagwortvergabe der Deutschen Bibliothek, der RSWK und der Indexierungsverfahren Voll-PRECIS und Kurz-PRECIS (1984) 0.02
    0.016721193 = product of:
      0.033442385 = sum of:
        0.03203216 = weight(_text_:von in 984) [ClassicSimilarity], result of:
          0.03203216 = score(doc=984,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.2501202 = fieldWeight in 984, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.046875 = fieldNorm(doc=984)
        0.001410227 = product of:
          0.004230681 = sum of:
            0.004230681 = weight(_text_:a in 984) [ClassicSimilarity], result of:
              0.004230681 = score(doc=984,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.07643694 = fieldWeight in 984, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=984)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The Deutsche Bibliothek in Frankfurt has for some years offered central services in the field of verbal subject indexing. To improve their acceptance, the Deutsche Bibliothek intends to move in 1986 from its current coordinate indexing to a syntactic procedure. The alternatives under discussion were the RSWK and a shortened version of the British indexing method PRECIS. The requirements that a scholarly discipline places on the subject-heading syntax of an adequate documentation language are developed by way of example, and the four alternatives - the Deutsche Bibliothek's current verbal subject indexing, RSWK, PRECIS (British version), and Kurz-PRECIS (DB version) - are measured against them. The criteria are based on grammar theories of modern linguistics and start from an analogy between documentation languages and natural language.
    Type
    a
  5. Krömmelbein, U.: Natürliche Sprache und Strukturprinzipien von Dokumentationssprachen : eine vergleichende Analyse (1981) 0.02
    0.015100104 = product of:
      0.060400415 = sum of:
        0.060400415 = weight(_text_:von in 1372) [ClassicSimilarity], result of:
          0.060400415 = score(doc=1372,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.47163114 = fieldWeight in 1372, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.125 = fieldNorm(doc=1372)
      0.25 = coord(1/4)
    
  6. DIN 31623: Indexierung zur inhaltlichen Erschließung von Dokumenten : T.1: Begriffe, Grundlagen; T.2: Gleichordnende Indexierung mit Deskriptoren; T.3: Syntaktische Indexierung mit Deskriptoren (1988) 0.01
    0.011325077 = product of:
      0.04530031 = sum of:
        0.04530031 = weight(_text_:von in 832) [ClassicSimilarity], result of:
          0.04530031 = score(doc=832,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.35372335 = fieldWeight in 832, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.09375 = fieldNorm(doc=832)
      0.25 = coord(1/4)
    
  7. Mooers, C.N.: ¬The indexing language of an information retrieval system (1985) 0.01
    0.005332782 = product of:
      0.021331128 = sum of:
        0.021331128 = product of:
          0.03199669 = sum of:
            0.009234025 = weight(_text_:a in 3644) [ClassicSimilarity], result of:
              0.009234025 = score(doc=3644,freq=28.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.16683382 = fieldWeight in 3644, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3644)
            0.022762664 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
              0.022762664 = score(doc=3644,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.1354154 = fieldWeight in 3644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3644)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Calvin Mooers' work toward the resolution of the problem of ambiguity in indexing went unrecognized for years. At the time he introduced the "descriptor" - a term with a very distinct meaning - indexers were, for the most part, taking index terms directly from the document, without either rationalizing them with context or normalizing them with some kind of classification. It is ironic that Mooers' term came to be attached to the popular but unsophisticated indexing methods which he was trying to root out. Simply expressed, what Mooers did was to take the dictionary definitions of terms and redefine them so clearly that they could not be used in any context except that provided by the new definition. He did, at great pains, construct such meanings for over four hundred words; disambiguation and specificity were sought after and found for these words. He proposed that all indexers adopt this method so that when the index supplied a term, it also supplied the exact meaning for that term as used in the indexed document. The same term used differently in another document would be defined differently and possibly renamed to avoid ambiguity. The disambiguation was achieved by using unabridged dictionaries and other sources of defining terminology. In practice, this tends to produce circularity in definition, that is, word A refers to word B, which refers to word C, which refers to word A. It was necessary, therefore, to break this chain by creating a new, definitive meaning for each word. Eventually, means such as those used by Austin (q.v.) for PRECIS achieved the same purpose, but by much more complex means than just creating a unique definition of each term. Mooers, however, was probably the first to realize how confusing undefined terminology could be. Early automatic indexers dealt with distinct disciplines and, as long as they did not stray beyond disciplinary boundaries, a quick and dirty keyword approach was satisfactory. The trouble came when attempts were made to make a combined index for two or more distinct disciplines. A number of processes have since been developed, mostly involving tagging of some kind or use of strings. Mooers' solution has rarely been considered seriously and probably would be extremely difficult to apply now because of so much interdisciplinarity. But for a specific, well-defined field, it is still well worth considering. Mooers received training in mathematics and physics from the University of Minnesota and the Massachusetts Institute of Technology. He was the founder of Zator Company, which developed and marketed a coded card information retrieval system, and of Rockford Research, Inc., which engages in research in information science. He is the inventor of the TRAC computer language.
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  8. Bonzi, S.: Terminological consistency in abstract and concrete disciplines (1984) 0.00
    0.0010075148 = product of:
      0.004030059 = sum of:
        0.004030059 = product of:
          0.012090176 = sum of:
            0.012090176 = weight(_text_:a in 2919) [ClassicSimilarity], result of:
              0.012090176 = score(doc=2919,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.21843673 = fieldWeight in 2919, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2919)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This study tested the hypothesis that the vocabulary of a discipline whose major emphasis is on concrete phenomena will, on the average, have fewer synonyms per concept than will the vocabulary of a discipline whose major emphasis is on abstract phenomena. Subject terms from each of two concrete disciplines and two abstract disciplines were analysed. Results showed that there was a significant difference at the .05 level between concrete and abstract disciplines, but that the significant difference was attributable to only one of the abstract disciplines. The other abstract discipline was not significantly different from the two concrete disciplines. It was concluded that although there is some support for the hypothesis, at least one other factor has a stronger influence on terminological consistency than the phenomena with which a subject deals.
    Type
    a
  9. Fox, E.A.: Lexical relations : enhancing effectiveness of information retrieval systems (1980) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 5310) [ClassicSimilarity], result of:
              0.011281814 = score(doc=5310,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 5310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=5310)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  10. Fugmann, R.: ¬The complementarity of natural and indexing languages (1982) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 7648) [ClassicSimilarity], result of:
              0.011281814 = score(doc=7648,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 7648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=7648)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  11. Foskett, D.J.: Classification and integrative levels (1985) 0.00
    9.19731E-4 = product of:
      0.003678924 = sum of:
        0.003678924 = product of:
          0.011036771 = sum of:
            0.011036771 = weight(_text_:a in 3639) [ClassicSimilarity], result of:
              0.011036771 = score(doc=3639,freq=40.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19940455 = fieldWeight in 3639, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3639)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject - systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible: once a certain level has been reached, there is no going back. Foskett points out that with higher levels and greater complexity in structure the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  12. Fugmann, R.: ¬The complementarity of natural and indexing languages (1985) 0.00
    8.794309E-4 = product of:
      0.0035177236 = sum of:
        0.0035177236 = product of:
          0.010553171 = sum of:
            0.010553171 = weight(_text_:a in 3641) [ClassicSimilarity], result of:
              0.010553171 = score(doc=3641,freq=28.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19066721 = fieldWeight in 3641, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3641)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that achieving this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead, and increasing support was given to the opinion that natural language information systems could perform at least as effectively as, and certainly more economically than, those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions - particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds. Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts, represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but also referring to this concept are numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  13. Körner, H.G.: Syntax und Gewichtung in Informationssprachen : Ein Fortschrittsbericht über präzisere Indexierung und Computer-Suche (1985) 0.00
    8.2263234E-4 = product of:
      0.0032905294 = sum of:
        0.0032905294 = product of:
          0.009871588 = sum of:
            0.009871588 = weight(_text_:a in 281) [ClassicSimilarity], result of:
              0.009871588 = score(doc=281,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.17835285 = fieldWeight in 281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=281)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  14. Dietze, J.: ¬Die semantische Struktur der Thesauruslexik (1988) 0.00
    8.2263234E-4 = product of:
      0.0032905294 = sum of:
        0.0032905294 = product of:
          0.009871588 = sum of:
            0.009871588 = weight(_text_:a in 6051) [ClassicSimilarity], result of:
              0.009871588 = score(doc=6051,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.17835285 = fieldWeight in 6051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6051)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  15. Kuhlen, R.: Linguistische Grundlagen (1980) 0.00
    8.2263234E-4 = product of:
      0.0032905294 = sum of:
        0.0032905294 = product of:
          0.009871588 = sum of:
            0.009871588 = weight(_text_:a in 3829) [ClassicSimilarity], result of:
              0.009871588 = score(doc=3829,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.17835285 = fieldWeight in 3829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3829)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  16. Rolling, L.: ¬The role of graphic display of concept relationships in indexing and retrieval vocabularies (1985) 0.00
    7.795323E-4 = product of:
      0.0031181292 = sum of:
        0.0031181292 = product of:
          0.009354387 = sum of:
            0.009354387 = weight(_text_:a in 3646) [ClassicSimilarity], result of:
              0.009354387 = score(doc=3646,freq=22.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.16900843 = fieldWeight in 3646, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3646)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The use of diagrams to express relationships in classification is not new. Many classificationists have used this approach, but usually in a minor display to make a point or for part of a difficult relational situation. Ranganathan, for example, used diagrams for some of his more elusive concepts. The thesaurus in particular and subject headings in general, with direct and indirect cross-references or equivalents, need many more diagrams than normally are included to make relationships and even semantics clear. A picture very often is worth a thousand words. Rolling has used directed graphs (arrowgraphs) to join terms as a practical method for rendering relationships between indexing terms lucid. He has succeeded very well in this endeavor. Four diagrams in this selection are all one needs to explain how to employ the system, from initial listing to completed arrowgraph. The samples of his work include illustration of off-page connectors between arrowgraphs. The great advantage to using diagrams like this is that they present relations between individual terms in a format that is easy to comprehend. But of even greater value is the fact that one can use his arrowgraphs as schematics for making three-dimensional wire-and-ball models, in which the relationships may be seen even more clearly. In fact, errors or gaps in relations are much easier to find with this methodology. One also can get across the notion of the three-dimensionality of classification systems with such models. Pettee's "hand reaching up and over" (q.v.) is not a figment of the imagination. While the actual hand is a wire or stick, the concept visualized is helpful in illuminating the three-dimensional figure that is latent in all systems that have cross-references or "broader," "narrower," or, especially, "related" terms. Classification schedules, being hemmed in by the dimensions of the printed page, also benefit from such physical illustrations. Rolling, an engineer by conviction, was the developer of information systems for the Cobalt Institute, the European Atomic Energy Community, and the European Coal and Steel Community. He also developed and promoted computer-aided translation at the Commission of the European Communities in Luxembourg. One of his objectives has always been to increase the efficiency of mono- and multilingual thesauri for use in multinational information systems. A small sketch of the underlying data structure follows below.
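    Rolling's arrowgraphs are, in substance, directed graphs over indexing terms. A minimal illustrative sketch in Python (the terms and relation labels are invented for illustration, not taken from Rolling's paper) of how such term relationships can be held and traversed as labelled edges:

      # Hypothetical thesaurus fragment; each tuple is one arrow of an arrowgraph.
      # BT = broader term, RT = related term.
      EDGES = [
          ("cobalt alloys", "BT", "alloys"),
          ("cobalt alloys", "RT", "cobalt"),
          ("cobalt",        "BT", "metals"),
          ("alloys",        "BT", "metals"),
      ]

      def arrows_from(term):
          # All terms reachable from `term` by a single arrow, with their relation labels.
          return [(rel, target) for source, rel, target in EDGES if source == term]

      for rel, target in arrows_from("cobalt alloys"):
          print(f"cobalt alloys --{rel}--> {target}")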
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  17. Coates, E.J.: Significance and term relationship in compound headings (1985) 0.00
    7.432549E-4 = product of:
      0.0029730196 = sum of:
        0.0029730196 = product of:
          0.008919058 = sum of:
            0.008919058 = weight(_text_:a in 3634) [ClassicSimilarity], result of:
              0.008919058 = score(doc=3634,freq=20.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.16114321 = fieldWeight in 3634, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3634)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    In the continuing search for criteria for determining the form of compound headings (i.e., headings containing more than one word), many authors have attempted to deal with the problem of entry element and citation order. Among the proposed criteria are Cutter's concept of "significance," Kaiser's formula of "concrete/process," Prevost's "noun rule," and Farradane's categories of relationships (q.v.). One of the problems in applying the criteria has been the difficulty in determining what is "significant," particularly when two or more words in the heading all refer to concrete objects. In the following excerpt from Subject Catalogues: Headings and Structure, a widely cited book on the alphabetical subject catalog, E. J. Coates proposes the concept of "term significance," that is, "the word which evokes the clearest mental image," as the criterion for determining the entry element in a compound heading. Since a concrete object generally evokes a clearer mental image than an action or process, Coates' theory is in line with Kaiser's theory of "concrete/process" (q.v.), which Coates renamed "thing/action." For determining the citation order of component elements in a compound heading where the elements are equally "significant" (i.e., both or all evoking clear mental images), Coates proposes the use of "term relationship" as the determining factor. He has identified twenty different kinds of relationships among terms and set down the citation order for each. Another frequently encountered problem related to citation order is the determination of the entry element for a compound heading which contains a topic and a locality. Entering such headings uniformly under either the topic or the locality has proven to be infeasible in practice. Many headings of this type have the topic as the main heading, subdivided by the locality; others are entered under the locality as the main heading with the topic as the subdivision. No criteria or rules have been proposed that ensure consistency or predictability. In the following selection, Coates attempts to deal with this problem by ranking the "main areas of knowledge according to the extent to which they appear to be significantly conditioned by locality." The theory Coates expounded in his book was put into practice in compiling the British Technology Index, for which Coates served as the editor from 1961 to 1977.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  18. Fugmann, R.: ¬Der Mangel an Grammatik bei Indexsprachen und seine Folgen (1987) 0.00
    7.051135E-4 = product of:
      0.002820454 = sum of:
        0.002820454 = product of:
          0.008461362 = sum of:
            0.008461362 = weight(_text_:a in 257) [ClassicSimilarity], result of:
              0.008461362 = score(doc=257,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=257)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  19. Free text in information systems: capabilities and limitations (1985) 0.00
    7.051135E-4 = product of:
      0.002820454 = sum of:
        0.002820454 = product of:
          0.008461362 = sum of:
            0.008461362 = weight(_text_:a in 2045) [ClassicSimilarity], result of:
              0.008461362 = score(doc=2045,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 2045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2045)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  20. Svenonius, E.: Indexical contents (1982) 0.00
    7.051135E-4 = product of:
      0.002820454 = sum of:
        0.002820454 = product of:
          0.008461362 = sum of:
            0.008461362 = weight(_text_:a in 27) [ClassicSimilarity], result of:
              0.008461362 = score(doc=27,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 27, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=27)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a