Search (21 results, page 1 of 2)

  • year_i:[1980 TO 1990}
  • theme_ss:"Theorie verbaler Dokumentationssprachen"
  1. Mooers, C.N.: The indexing language of an information retrieval system (1985) 0.01
    0.013794359 = product of:
      0.027588718 = sum of:
        0.027588718 = sum of:
          0.007961915 = weight(_text_:a in 3644) [ClassicSimilarity], result of:
            0.007961915 = score(doc=3644,freq=28.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.16683382 = fieldWeight in 3644, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3644)
          0.019626802 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
            0.019626802 = score(doc=3644,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.1354154 = fieldWeight in 3644, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3644)
      0.5 = coord(1/2)
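The score breakdowns attached to each hit are Lucene explain() output for the ClassicSimilarity (TF-IDF) model. As a reading aid, here is a minimal Python sketch that reproduces the `_text_:a` clause of the first hit; the formulas are the standard ClassicSimilarity ones, and the numeric factors are copied from the explanation above.

```python
import math

# Factors copied from the `_text_:a` clause of the first hit (doc 3644).
freq = 28.0                                   # termFreq in the field
idf = 1.0 + math.log(44218 / (37942 + 1))     # idf(docFreq=37942, maxDocs=44218) ~= 1.153047
query_norm = 0.041389145
field_norm = 0.02734375

tf = math.sqrt(freq)                          # 5.2915025 = tf(freq=28.0)
query_weight = idf * query_norm               # ~= 0.04772363
field_weight = tf * idf * field_norm          # ~= 0.16683382
clause_score = query_weight * field_weight    # ~= 0.007961915

# Document score: coord(1/2) times the sum of the two matching clause scores.
doc_score = 0.5 * (clause_score + 0.019626802)   # ~= 0.013794359
print(clause_score, doc_score)
```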
    
    Abstract
Calvin Mooers' work toward the resolution of the problem of ambiguity in indexing went unrecognized for years. At the time he introduced the "descriptor" - a term with a very distinct meaning - indexers were, for the most part, taking index terms directly from the document, without either rationalizing them with context or normalizing them with some kind of classification. It is ironic that Mooers' term came to be attached to the popular but unsophisticated indexing methods which he was trying to root out. Simply expressed, what Mooers did was to take the dictionary definitions of terms and redefine them so clearly that they could not be used in any context except that provided by the new definition. He did, at great pains, construct such meanings for over four hundred words; disambiguation and specificity were sought after and found for these words. He proposed that all indexers adopt this method so that when the index supplied a term, it also supplied the exact meaning for that term as used in the indexed document. The same term used differently in another document would be defined differently and possibly renamed to avoid ambiguity. The disambiguation was achieved by using unabridged dictionaries and other sources of defining terminology. In practice, this tends to produce circularity in definition, that is, word A refers to word B, which refers to word C, which refers to word A. It was necessary, therefore, to break this chain by creating a new, definitive meaning for each word. Eventually, means such as those used by Austin (q.v.) for PRECIS achieved the same purpose, but by much more complex means than just creating a unique definition of each term. Mooers, however, was probably the first to realize how confusing undefined terminology could be. Early automatic indexers dealt with distinct disciplines and, as long as they did not stray beyond disciplinary boundaries, a quick and dirty keyword approach was satisfactory. The trouble came when attempts were made to make a combined index for two or more distinct disciplines. A number of processes have since been developed, mostly involving tagging of some kind or use of strings. Mooers' solution has rarely been considered seriously and probably would be extremely difficult to apply now because of so much interdisciplinarity. But for a specific, well-defined field, it is still well worth considering. Mooers received training in mathematics and physics from the University of Minnesota and the Massachusetts Institute of Technology. He was the founder of Zator Company, which developed and marketed a coded card information retrieval system, and of Rockford Research, Inc., which engages in research in information science. He is the inventor of the TRAC computer language.
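The circularity Mooers had to break (word A defined via word B via word C via word A) is easy to picture as a cycle in a small "defined-in-terms-of" graph. A minimal sketch with invented vocabulary data, purely to illustrate the problem he solved by writing fresh, self-contained definitions:

```python
# Hypothetical "term -> terms used in its definition" references (invented data).
defines = {
    "record": ["document"],
    "document": ["item"],
    "item": ["record"],      # closes the circle
    "descriptor": [],        # Mooers-style: a self-contained, unique definition
}

def circular(term, seen=None):
    """Return True if following definition references from `term` loops back."""
    seen = seen or set()
    if term in seen:
        return True
    seen = seen | {term}
    return any(circular(ref, seen) for ref in defines.get(term, []))

print([t for t in defines if circular(t)])   # ['record', 'document', 'item']
```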
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  2. Bonzi, S.: Terminological consistency in abstract and concrete disciplines (1984) 0.00
    0.0026061484 = product of:
      0.0052122967 = sum of:
        0.0052122967 = product of:
          0.010424593 = sum of:
            0.010424593 = weight(_text_:a in 2919) [ClassicSimilarity], result of:
              0.010424593 = score(doc=2919,freq=12.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.21843673 = fieldWeight in 2919, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2919)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
This study tested the hypothesis that the vocabulary of a discipline whose major emphasis is on concrete phenomena will, on the average, have fewer synonyms per concept than will the vocabulary of a discipline whose major emphasis is on abstract phenomena. Subject terms from each of two concrete disciplines and two abstract disciplines were analysed. Results showed that there was a significant difference at the .05 level between concrete and abstract disciplines, but that the significant difference was attributable to only one of the abstract disciplines. The other abstract discipline was not significantly different from the two concrete disciplines. It was concluded that although there is some support for the hypothesis, at least one other factor has a stronger influence on terminological consistency than the phenomena with which a subject deals.
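The comparison Bonzi describes reduces to averaging synonyms per concept in each vocabulary and testing the difference for significance. The abstract does not name the statistical test, so a two-sample t-test is assumed here, and the counts below are invented:

```python
from statistics import mean
from scipy.stats import ttest_ind   # the test is assumed, not named in the abstract

# Invented synonym counts per concept for one concrete and one abstract discipline.
concrete = [1, 1, 2, 1, 1, 2, 1, 1]
abstract_ = [1, 3, 2, 4, 2, 3, 1, 4]

print(mean(concrete), mean(abstract_))        # average synonyms per concept
res = ttest_ind(abstract_, concrete)
print(f"p = {res.pvalue:.3f}; significant at the .05 level: {res.pvalue < 0.05}")
```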
    Type
    a
  3. Fox, E.A.: Lexical relations : enhancing effectiveness of information retrieval systems (1980) 0.00
    0.0024318986 = product of:
      0.004863797 = sum of:
        0.004863797 = product of:
          0.009727594 = sum of:
            0.009727594 = weight(_text_:a in 5310) [ClassicSimilarity], result of:
              0.009727594 = score(doc=5310,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.20383182 = fieldWeight in 5310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=5310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  4. Fugmann, R.: The complementarity of natural and indexing languages (1982) 0.00
    0.0024318986 = product of:
      0.004863797 = sum of:
        0.004863797 = product of:
          0.009727594 = sum of:
            0.009727594 = weight(_text_:a in 7648) [ClassicSimilarity], result of:
              0.009727594 = score(doc=7648,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.20383182 = fieldWeight in 7648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=7648)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  5. Foskett, D.J.: Classification and integrative levels (1985) 0.00
    0.0023790773 = product of:
      0.0047581545 = sum of:
        0.0047581545 = product of:
          0.009516309 = sum of:
            0.009516309 = weight(_text_:a in 3639) [ClassicSimilarity], result of:
              0.009516309 = score(doc=3639,freq=40.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.19940455 = fieldWeight in 3639, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3639)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject - systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible in that if one once reached a certain level there was no going back. Foskett points out that with higher levels and greater complexity in structure the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  6. Fugmann, R.: The complementarity of natural and indexing languages (1985) 0.00
    0.0022748327 = product of:
      0.0045496654 = sum of:
        0.0045496654 = product of:
          0.009099331 = sum of:
            0.009099331 = weight(_text_:a in 3641) [ClassicSimilarity], result of:
              0.009099331 = score(doc=3641,freq=28.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.19066721 = fieldWeight in 3641, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3641)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that the achieving of this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead, and increasing support was given to the opinion that natural language information systems could perform at least as effectively as, and certainly more economically than, those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions - particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds. Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts, represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but also referring to this concept are numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.
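Fugmann's point about predictability can be made concrete with a toy normalization step: many natural-language surface forms of a general concept collapse onto one descriptor, which is what makes the index-language side predictable. The mapping and phrases below are invented for illustration only:

```python
# Invented surface forms an author might use; each maps to one controlled descriptor.
descriptor_of = {
    "pesticides": "PESTICIDES",
    "pest control agents": "PESTICIDES",
    "effective against pests": "PESTICIDES",   # circumlocution quoted in the abstract
    "augsburg": "AUGSBURG",                     # individual concept: one name suffices
}

def index_terms(text: str) -> set[str]:
    """Return the controlled descriptors whose surface forms occur in the text."""
    t = text.lower()
    return {d for phrase, d in descriptor_of.items() if phrase in t}

print(index_terms("Substance X was effective against pests near Augsburg"))
# -> {'PESTICIDES', 'AUGSBURG'}
```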
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  7. Körner, H.G.: Syntax und Gewichtung in Informationssprachen : Ein Fortschrittsbericht über präzisere Indexierung und Computer-Suche (1985) 0.00
    0.0021279112 = product of:
      0.0042558224 = sum of:
        0.0042558224 = product of:
          0.008511645 = sum of:
            0.008511645 = weight(_text_:a in 281) [ClassicSimilarity], result of:
              0.008511645 = score(doc=281,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.17835285 = fieldWeight in 281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=281)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  8. Dietze, J.: Die semantische Struktur der Thesauruslexik (1988) 0.00
    0.0021279112 = product of:
      0.0042558224 = sum of:
        0.0042558224 = product of:
          0.008511645 = sum of:
            0.008511645 = weight(_text_:a in 6051) [ClassicSimilarity], result of:
              0.008511645 = score(doc=6051,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.17835285 = fieldWeight in 6051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  9. Kuhlen, R.: Linguistische Grundlagen (1980) 0.00
    0.0021279112 = product of:
      0.0042558224 = sum of:
        0.0042558224 = product of:
          0.008511645 = sum of:
            0.008511645 = weight(_text_:a in 3829) [ClassicSimilarity], result of:
              0.008511645 = score(doc=3829,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.17835285 = fieldWeight in 3829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  10. Rolling, L.: The role of graphic display of concept relationships in indexing and retrieval vocabularies (1985) 0.00
    0.002016424 = product of:
      0.004032848 = sum of:
        0.004032848 = product of:
          0.008065696 = sum of:
            0.008065696 = weight(_text_:a in 3646) [ClassicSimilarity], result of:
              0.008065696 = score(doc=3646,freq=22.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.16900843 = fieldWeight in 3646, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3646)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The use of diagrams to express relationships in classification is not new. Many classificationists have used this approach, but usually in a minor display to make a point or for part of a difficult relational situation. Ranganathan, for example, used diagrams for some of his more elusive concepts. The thesaurus in particular and subject headings in general, with direct and indirect cross-references or equivalents, need many more diagrams than normally are included to make relationships and even semantics clear. A picture very often is worth a thousand words. Rolling has used directed graphs (arrowgraphs) to join terms as a practical method for rendering relationships between indexing terms lucid. He has succeeded very well in this endeavor. Four diagrams in this selection are all that one needs to explain how to employ the system, from initial listing to completed arrowgraph. The samples of his work include illustration of off-page connectors between arrowgraphs. The great advantage to using diagrams like this is that they present relations between individual terms in a format that is easy to comprehend. But of even greater value is the fact that one can use his arrowgraphs as schematics for making three-dimensional wire-and-ball models, in which the relationships may be seen even more clearly. In fact, errors or gaps in relations are much easier to find with this methodology. One also can get across the notion of the three-dimensionality of classification systems with such models. Pettee's "hand reaching up and over" (q.v.) is not a figment of the imagination. While the actual hand is a wire or stick, the concept visualized is helpful in illuminating the three-dimensional figure that is latent in all systems that have cross-references or "broader," "narrower," or, especially, "related" terms. Classification schedules, being hemmed in by the dimensions of the printed page, also benefit from such physical illustrations. Rolling, an engineer by conviction, was the developer of information systems for the Cobalt Institute, the European Atomic Energy Community, and European Coal and Steel Community. He also developed and promoted computer-aided translation at the Commission of the European Communities in Luxembourg. One of his objectives has always been to increase the efficiency of mono- and multilingual thesauri for use in multinational information systems.
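Rolling's arrowgraphs are, in today's terms, small directed graphs over indexing terms. A minimal sketch with an invented thesaurus fragment (terms and relation labels are illustrative) that prints such a graph in Graphviz DOT form, one convenient way to reproduce an arrowgraph-style diagram:

```python
# Invented thesaurus fragment: (source term, relation label, target term).
relations = [
    ("cobalt alloys", "BT", "alloys"),
    ("cobalt alloys", "RT", "cobalt"),
    ("superalloys",  "NT", "cobalt alloys"),
]

def to_dot(edges):
    """Render directed term relations as a Graphviz DOT digraph."""
    lines = ["digraph arrowgraph {"]
    for src, rel, dst in edges:
        lines.append(f'  "{src}" -> "{dst}" [label="{rel}"];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(relations))
```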
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  11. Coates, E.J.: Significance and term relationship in compound headings (1985) 0.00
    0.0019225847 = product of:
      0.0038451694 = sum of:
        0.0038451694 = product of:
          0.007690339 = sum of:
            0.007690339 = weight(_text_:a in 3634) [ClassicSimilarity], result of:
              0.007690339 = score(doc=3634,freq=20.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.16114321 = fieldWeight in 3634, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3634)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
In the continuing search for criteria for determining the form of compound headings (i.e., headings containing more than one word), many authors have attempted to deal with the problem of entry element and citation order. Among the proposed criteria are Cutter's concept of "significance," Kaiser's formula of "concrete/process," Prevost's "noun rule," and Farradane's categories of relationships (q.v.). One of the problems in applying the criteria has been the difficulty in determining what is "significant," particularly when two or more words in the heading all refer to concrete objects. In the following excerpt from Subject Catalogues: Headings and Structure, a widely cited book on the alphabetical subject catalog, E. J. Coates proposes the concept of "term significance," that is, "the word which evokes the clearest mental image," as the criterion for determining the entry element in a compound heading. Since a concrete object generally evokes a clearer mental image than an action or process, Coates' theory is in line with Kaiser's theory of "concrete/process" (q.v.) which Coates renamed "thing/action." For determining the citation order of component elements in a compound heading where the elements are equally "significant" (i.e., both or all evoking clear mental images), Coates proposes the use of "term relationship" as the determining factor. He has identified twenty different kinds of relationships among terms and set down the citation order for each. Another frequently encountered problem related to citation order is the determination of the entry element for a compound heading which contains a topic and a locality. Entering such headings uniformly under either the topic or the locality has proven to be infeasible in practice. Many headings of this type have the topic as the main heading, subdivided by the locality; others are entered under the locality as the main heading with the topic as the subdivision. No criteria or rules have been proposed that ensure consistency or predictability. In the following selection, Coates attempts to deal with this problem by ranking the "main areas of knowledge according to the extent to which they appear to be significantly conditioned by locality." The theory Coates expounded in his book was put into practice in compiling the British Technology Index for which Coates served as the editor from 1961 to 1977.
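Coates' "thing/action" rule lends itself to a tiny decision helper: given a (hand-assigned) category for each word of a compound heading, pick the word that evokes the clearer mental image as the entry element. The heading and category assignments below are invented, and the sketch does not attempt Coates' twenty relationship types or his citation orders:

```python
# Per Coates, the concrete "thing" evokes the clearer mental image than the "action".
RANK = {"thing": 0, "action": 1}

def entry_element(heading: dict) -> str:
    """Pick the entry element of a compound heading: the most 'significant' word."""
    return min(heading, key=lambda word: RANK[heading[word]])

print(entry_element({"grinding": "action", "wheels": "thing"}))   # -> wheels
```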
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  12. Fugmann, R.: Der Mangel an Grammatik bei Indexsprachen und seine Folgen (1987) 0.00
    0.001823924 = product of:
      0.003647848 = sum of:
        0.003647848 = product of:
          0.007295696 = sum of:
            0.007295696 = weight(_text_:a in 257) [ClassicSimilarity], result of:
              0.007295696 = score(doc=257,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.15287387 = fieldWeight in 257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=257)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  13. Free text in information systems: capabilities and limitations (1985) 0.00
    0.001823924 = product of:
      0.003647848 = sum of:
        0.003647848 = product of:
          0.007295696 = sum of:
            0.007295696 = weight(_text_:a in 2045) [ClassicSimilarity], result of:
              0.007295696 = score(doc=2045,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.15287387 = fieldWeight in 2045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2045)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  14. Svenonius, E.: Indexical contents (1982) 0.00
    0.001823924 = product of:
      0.003647848 = sum of:
        0.003647848 = product of:
          0.007295696 = sum of:
            0.007295696 = weight(_text_:a in 27) [ClassicSimilarity], result of:
              0.007295696 = score(doc=27,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.15287387 = fieldWeight in 27, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=27)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  15. Svenonius, E.: Unanswered questions in the design of controlled vocabularies (1986) 0.00
    0.001719612 = product of:
      0.003439224 = sum of:
        0.003439224 = product of:
          0.006878448 = sum of:
            0.006878448 = weight(_text_:a in 584) [ClassicSimilarity], result of:
              0.006878448 = score(doc=584,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.14413087 = fieldWeight in 584, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=584)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The issue of free-text versus controlled vocabulary is examined in this article. The history of the issue, which is seen as beginning with the debate over title term indexing in the last century, is reviewed, and attention is then turned to questions which have not been satisfactorily addressed by previous research. The point is made that these questions need to be answered if we are to design retrieval tools, such as thesauri, upon a rational basis.
    Type
    a
  16. Fugmann, R.: Theoretische Grundlagen der Indexierungspraxis (1985) 0.00
    0.0015199365 = product of:
      0.003039873 = sum of:
        0.003039873 = product of:
          0.006079746 = sum of:
            0.006079746 = weight(_text_:a in 280) [ClassicSimilarity], result of:
              0.006079746 = score(doc=280,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.12739488 = fieldWeight in 280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=280)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  17. Winograd, T.: Software für Sprachverarbeitung (1984) 0.00
    0.0015199365 = product of:
      0.003039873 = sum of:
        0.003039873 = product of:
          0.006079746 = sum of:
            0.006079746 = weight(_text_:a in 1687) [ClassicSimilarity], result of:
              0.006079746 = score(doc=1687,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.12739488 = fieldWeight in 1687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1687)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  18. Farradane, J.E.L.: Fundamental fallacies and new needs in classification (1985) 0.00
    0.0015123179 = product of:
      0.0030246358 = sum of:
        0.0030246358 = product of:
          0.0060492717 = sum of:
            0.0060492717 = weight(_text_:a in 3642) [ClassicSimilarity], result of:
              0.0060492717 = score(doc=3642,freq=22.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.12675633 = fieldWeight in 3642, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3642)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
This chapter from The Sayers Memorial Volume summarizes Farradane's earlier work in which he developed his major themes by drawing in part upon research in psychology, and particularly those discoveries called "cognitive" which now form part of cognitive science. Farradane, a chemist by training who later became an information scientist and Director of the Center for Information Science, City University, London, from 1958 to 1973, defines the various types of methods used to achieve classification systems - philosophic, scientific, and synthetic. Early on he distinguishes the view that classification is "some part of external 'reality' waiting to be discovered" from that view which considers it "an intellectual operation upon mental entities and concepts." Classification, therefore, is to be treated as a mental construct and not as something "out there" to be discovered as, say, in astronomy or botany. His approach could be termed, somewhat facetiously, as an "in there" one, meaning found by utilizing the human brain as the key tool. This is not to say that discoveries in astronomy or botany do not require the use of the brain as a key tool. It is merely that the "material" worked upon by this tool is presented to it for observation by "that inward eye," by memory and by inference rather than by planned physical observation, memory, and inference. This distinction could be refined or clarified by considering the initial "observation" as a specific kind of mental set required in each case. Farradane then proceeds to demolish the notion of main classes as "fictitious," partly because the various category-defining methodologies used in library classification are "randomly mixed." The implication, probably correct, is that this results in mixed metaphorical concepts. It is an interesting contrast to the approach of Julia Pettee (q.v.), who began with indexing terms and, in studying relationships between terms, discovered hidden hierarchies both between the terms themselves and between the cross-references leading from one term or set of terms to another. One is tempted to ask two questions: "Is hierarchy innate but misinterpreted?" and "Is it possible to have meaningful terms which have only categorical relationships (that have no see also or equivalent relationships to other, out-of-category terms)?" Partly as a result of the rejection of existing general library classification systems, the Classification Research Group - of which Farradane was a charter member - decided to adopt the principles of Ranganathan's faceted classification system, while rejecting his limit on the number of fundamental categories. The advantage of the faceted method is that it is created by inductive, rather than deductive, methods. It can be altered more readily to keep up with changes in and additions to the knowledge base in a subject without having to re-do the major schedules. In 1961, when Farradane's paper appeared, the computer was beginning to be viewed as a tool for solving all information retrieval problems. He tartly remarks:
The basic fallacy of mechanised information retrieval systems seems to be the often unconscious but apparently implied assumption that the machine can inject meaning into a group of juxtaposed terms although no methods of conceptual analysis and re-synthesis have been programmed (p. 203). As an example, he suggests considering the slight but vital differences in the meaning of the word "of" in selected examples: swarm of bees, house of the mayor, House of Lords, spectrum of the sun, basket of fish, meeting of councillors, cooking of meat, book of the film. Farradane's distinctive contribution is his matrix of basic relationships. The rows concern time and memory, in degree of happenstance: coincidentally, occasionally, or always. The columns represent degree of the "powers of discrimination": occurring together, linked by common elements only, or standing alone. To make these relationships easily managed, he used symbols for each of the nine kinds - "symbols found on every typewriter": /O (Theta), /*, /;, /=, /+, /(, /), /_, /:. Farradane has maintained his basic insights to the present day. Though he has gone on to do other kinds of research in classification, his work indicates that he still believes that "the primary task ... is that of establishing satisfactory and enduring principles of subject analysis, or classification" (p. 208).
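Farradane's nine relational operators sit in a 3x3 matrix: rows for the degree of association in time and memory, columns for the degree of discrimination. The abstract quotes the symbols and the two axes but does not say which symbol belongs to which cell, so the sketch below only stores the pieces it actually gives and joins two terms with an arbitrarily chosen operator:

```python
# The nine operator symbols quoted above, plus the two axes named in the abstract.
SYMBOLS = ["/O", "/*", "/;", "/=", "/+", "/(", "/)", "/_", "/:"]
ROWS = ["coincidentally", "occasionally", "always"]                      # time / memory
COLS = ["occurring together", "common elements only", "standing alone"]  # discrimination

def relate(a: str, operator: str, b: str) -> str:
    """Join two index terms with one of Farradane's relational operators."""
    if operator not in SYMBOLS:
        raise ValueError(f"unknown operator: {operator}")
    return f"{a} {operator} {b}"

# The operator chosen here is arbitrary; deciding which cell of the 3x3 matrix
# fits a phrase like "cooking of meat" is exactly the analysis Farradane asks for.
print(relate("cooking", "/:", "meat"))
```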
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  19. Fugmann, R.: Die Funktion von semantischen Kategorien in Indexierungssprachen und bei der Indexierung (1986) 0.00
    0.001289709 = product of:
      0.002579418 = sum of:
        0.002579418 = product of:
          0.005158836 = sum of:
            0.005158836 = weight(_text_:a in 1554) [ClassicSimilarity], result of:
              0.005158836 = score(doc=1554,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.10809815 = fieldWeight in 1554, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1554)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
If "indexing" is understood as the two-step process of (a) recognizing the essence of a text that is to be made retrievable and (b) rendering that essence in a form that is sufficiently faithful and sufficiently predictable, then the quality of indexing can be increased when it is carried out with particular attention to concepts from a small number of especially important semantic categories. When the indexing language is designed, the concepts from these categories must be included in the vocabulary at the required level of detail, and precombinations that lead to "multicategorial" concepts are to be avoided as far as possible. Precombinations formed exclusively by including frequently occurring ("ubiquitous") monocategorial concepts can and should be admitted to the vocabulary for pragmatic reasons. The concept of the "relation path" (Relationenweg) explains why such precombinations do no harm to the vocabulary.
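One possible reading of the rule on precombinations can be sketched as a simple vocabulary check: a candidate precombined descriptor is unproblematic if it stays within one semantic category, and a multicategorial one is tolerated only when built exclusively from ubiquitous concepts. The category and ubiquity data below are invented; the check merely illustrates the stated rule, not Fugmann's actual procedure:

```python
def admissible(components: list) -> bool:
    """Accept a precombination if it is monocategorial, or, when multicategorial,
    if it is formed exclusively from ubiquitous (frequently occurring) concepts."""
    categories = {c["category"] for c in components}
    if len(categories) <= 1:
        return True
    return all(c["ubiquitous"] for c in components)

print(admissible([{"category": "process", "ubiquitous": True},
                  {"category": "substance", "ubiquitous": False}]))   # False: avoid
print(admissible([{"category": "process", "ubiquitous": True},
                  {"category": "property", "ubiquitous": True}]))     # True: tolerated
```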
    Type
    a
  20. Dietze, J.: Informationsrecherchesprache und deren Lexik : Bemerkungen zur Terminologiediskussion (1980) 0.00
    9.11962E-4 = product of:
      0.001823924 = sum of:
        0.001823924 = product of:
          0.003647848 = sum of:
            0.003647848 = weight(_text_:a in 32) [ClassicSimilarity], result of:
              0.003647848 = score(doc=32,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.07643694 = fieldWeight in 32, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=32)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a