Search (110 results, page 2 of 6)

  • × theme_ss:"Theorie verbaler Dokumentationssprachen"
  1. Degez, D.: Compatibilité des langages d'indexation mariage, cohabitation ou fusion? : Quelques exemples concrets (1998) 0.01
    0.008410187 = product of:
      0.03364075 = sum of:
        0.03364075 = product of:
          0.05046112 = sum of:
            0.004935794 = weight(_text_:a in 2245) [ClassicSimilarity], result of:
              0.004935794 = score(doc=2245,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.089176424 = fieldWeight in 2245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2245)
            0.045525327 = weight(_text_:22 in 2245) [ClassicSimilarity], result of:
              0.045525327 = score(doc=2245,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2708308 = fieldWeight in 2245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2245)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:01:00
    Type
    a
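
The indented breakdowns under each result are Lucene ClassicSimilarity (TF-IDF) explain trees. As a cross-check on how the displayed numbers compose, the sketch below recomputes the score of result 1 (doc 2245) from the constants shown in its tree; the helper function is illustrative and not part of any search API.

```python
# Recompute the displayed score of result 1 (doc 2245) from its explain tree.
# All constants are copied from the output above; names are illustrative only.
import math

def term_score(freq, idf, query_norm, field_norm):
    """ClassicSimilarity per-term score = queryWeight * fieldWeight."""
    query_weight = idf * query_norm           # idf(t) * queryNorm
    tf = math.sqrt(freq)                      # tf = sqrt(termFreq)
    field_weight = tf * idf * field_norm      # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.04800207

s_a  = term_score(2.0, 1.153047,  QUERY_NORM, 0.0546875)   # weight(_text_:a)  ~0.0049358
s_22 = term_score(2.0, 3.5018296, QUERY_NORM, 0.0546875)   # weight(_text_:22) ~0.0455253

# coord(2/3): 2 of 3 clauses matched; coord(1/4): 1 of 4 top-level clauses matched.
score = (s_a + s_22) * (2 / 3) * (1 / 4)
print(score)   # ~0.0084102, matching the displayed 0.008410187 up to rounding
```
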
  2. Maniez, J.: Fusion de banques de données documentaires et compatibilité des langages d'indexation (1997) 0.01
    0.007913846 = product of:
      0.031655382 = sum of:
        0.031655382 = product of:
          0.04748307 = sum of:
            0.008461362 = weight(_text_:a in 2246) [ClassicSimilarity], result of:
              0.008461362 = score(doc=2246,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 2246, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2246)
            0.039021708 = weight(_text_:22 in 2246) [ClassicSimilarity], result of:
              0.039021708 = score(doc=2246,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23214069 = fieldWeight in 2246, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2246)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the apparently unattainable goal of compatibility of information languages. While controlled languages can improve retrieval performance within a single system, they make cooperation across different systems more difficult. The Internet and downloading accentuate this adverse outcome, and the acceleration of data exchange aggravates the problem of compatibility. Defines this familiar concept and demonstrates that coherence is just as necessary as it was for indexing languages, the proliferation of which has created confusion in grouped data banks. Describes 2 types of potential solutions, similar to those applied to automatic translation of natural languages: harmonizing the information languages themselves, which is both difficult and expensive, or the more flexible solution of automatically harmonizing indexing formulae on the basis of pre-established concordance tables. However, structural incompatibilities between post-coordinated languages and classifications may lead any harmonization tool up a blind alley, while the paths towards a universal concordance model are rare and narrow
    Date
    1. 8.1996 22:01:00
    Type
    a
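
The Maniez abstract above points to automatic harmonization of indexing formulae via pre-established concordance tables. Purely as an illustration of that idea (the vocabulary and mapping below are invented, not taken from the paper), a minimal sketch:

```python
# Hypothetical concordance table between a source and a target indexing
# language; the descriptors below are invented for illustration.
CONCORDANCE = {
    "Langages documentaires": ["Indexing languages"],
    "Banques de données":     ["Databases"],
    "Compatibilité":          ["Interoperability", "Compatibility"],
}

def harmonize(source_terms, table=CONCORDANCE):
    """Rewrite an indexing formula term by term; unmapped terms are kept
    and reported so an indexer can extend the table."""
    target, unmapped = [], []
    for term in source_terms:
        if term in table:
            target.extend(table[term])
        else:
            target.append(term)
            unmapped.append(term)
    return target, unmapped

print(harmonize(["Langages documentaires", "Compatibilité", "Fusion"]))
# (['Indexing languages', 'Interoperability', 'Compatibility', 'Fusion'], ['Fusion'])
```
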
  3. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2008) 0.01
    0.0066062952 = product of:
      0.026425181 = sum of:
        0.026425181 = weight(_text_:von in 2461) [ClassicSimilarity], result of:
          0.026425181 = score(doc=2461,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.20633863 = fieldWeight in 2461, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2461)
      0.25 = coord(1/4)
    
    Abstract
    Modern information retrieval methods call for expressive documentation languages with detailed relational structures. The selective transfer of individual modelling strategies from the field of semantic technologies to the design and relational enrichment of existing documentation languages is discussed. Using the subject area "Theater" of the Schlagwortnormdatei as an example, a hierarchically structured inventory of relations is defined that contains both sufficiently general and numerous specific relation types, which enable a detailed and thus functional relational structuring of the vocabulary. The relational structure of the subject area is modelled as an ontology in OWL format. In contrast to other approaches to and considerations on the creation of relation inventories, the proposal presented here develops the relation inventory out of the set of concepts of a given subject area. The resulting inventory is designed as a hierarchically structured taxonomy, which yields a gain in clarity and functionality.
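
The Boteram abstract above describes modelling a hierarchically structured relation inventory as an OWL ontology. A minimal sketch of such a property hierarchy, using rdflib and invented relation names (not the author's actual inventory):

```python
# Declare a small, hypothetical relation inventory as an OWL property
# hierarchy: specific relation types sit under a general one.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/relations#")   # illustrative namespace
g = Graph()
g.bind("ex", EX)

for prop in ("relatedTo", "performsIn", "directedBy"):
    g.add((EX[prop], RDF.type, OWL.ObjectProperty))

# Hierarchical structure: specific relation types refine the general one.
g.add((EX.performsIn, RDFS.subPropertyOf, EX.relatedTo))
g.add((EX.directedBy, RDFS.subPropertyOf, EX.relatedTo))

print(g.serialize(format="turtle"))
```
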
  4. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.01
    0.006594871 = product of:
      0.026379485 = sum of:
        0.026379485 = product of:
          0.039569225 = sum of:
            0.007051134 = weight(_text_:a in 106) [ClassicSimilarity], result of:
              0.007051134 = score(doc=106,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12739488 = fieldWeight in 106, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
            0.032518093 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.032518093 = score(doc=106,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Purpose The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
    Type
    a
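
The Jia abstract above divides a knowledge graph into a schema layer defined by vocabularies and a data layer of linked data. A toy illustration of that split, with invented identifiers:

```python
# Schema layer vs. data layer of a tiny knowledge graph (invented data).
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import FOAF, RDF, RDFS

EX = Namespace("http://example.org/kg#")
g = Graph()

# Schema layer: vocabulary terms (classes and properties).
g.add((EX.Work, RDF.type, RDFS.Class))
g.add((EX.hasAuthor, RDF.type, RDF.Property))
g.add((EX.hasAuthor, RDFS.range, FOAF.Person))

# Data layer: instances described with those terms.
g.add((EX.doc106, RDF.type, EX.Work))
g.add((EX.doc106, EX.hasAuthor, EX.author1))
g.add((EX.author1, RDF.type, FOAF.Person))
g.add((EX.author1, FOAF.name, Literal("Jia, J.")))

print(len(g))   # 7 triples spanning both layers
```
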
  5. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2008) 0.01
    0.0056625386 = product of:
      0.022650154 = sum of:
        0.022650154 = weight(_text_:von in 1837) [ClassicSimilarity], result of:
          0.022650154 = score(doc=1837,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.17686167 = fieldWeight in 1837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.046875 = fieldNorm(doc=1837)
      0.25 = coord(1/4)
    
    Abstract
    Modern information retrieval methods call for expressive documentation languages with detailed relational structures. The selective transfer of individual modelling strategies from the field of semantic technologies to the design and relational enrichment of existing documentation languages is discussed. Using the subject area "Theater" of the Schlagwortnormdatei as an example, a hierarchically structured inventory of relations is defined that contains both sufficiently general and numerous specific relation types, which enable a detailed and thus functional relational structuring of the vocabulary. The relational structure of the subject area is modelled as an ontology in OWL format. In contrast to other approaches to and considerations on the creation of relation inventories, the proposal presented here develops the relation inventory out of the set of concepts of a given subject area. The resulting inventory is designed as a hierarchically structured taxonomy, which yields a gain in clarity and functionality.
  6. Mooers, C.N.: ¬The indexing language of an information retrieval system (1985) 0.01
    0.005332782 = product of:
      0.021331128 = sum of:
        0.021331128 = product of:
          0.03199669 = sum of:
            0.009234025 = weight(_text_:a in 3644) [ClassicSimilarity], result of:
              0.009234025 = score(doc=3644,freq=28.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.16683382 = fieldWeight in 3644, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3644)
            0.022762664 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
              0.022762664 = score(doc=3644,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.1354154 = fieldWeight in 3644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3644)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Calvin Mooers' work toward the resolution of the problem of ambiguity in indexing went unrecognized for years. At the time he introduced the "descriptor" - a term with a very distinct meaning - indexers were, for the most part, taking index terms directly from the document, without either rationalizing them with context or normalizing them with some kind of classification. It is ironic that Mooers' term came to be attached to the popular but unsophisticated indexing methods which he was trying to root out. Simply expressed, what Mooers did was to take the dictionary definitions of terms and redefine them so clearly that they could not be used in any context except that provided by the new definition. He did, at great pains, construct such meanings for over four hundred words; disambiguation and specificity were sought after and found for these words. He proposed that all indexers adopt this method so that when the index supplied a term, it also supplied the exact meaning for that term as used in the indexed document. The same term used differently in another document would be defined differently and possibly renamed to avoid ambiguity. The disambiguation was achieved by using unabridged dictionaries and other sources of defining terminology. In practice, this tends to produce circularity in definition, that is, word A refers to word B which refers to word C which refers to word A. It was necessary, therefore, to break this chain by creating a new, definitive meaning for each word. Eventually, means such as those used by Austin (q.v.) for PRECIS achieved the same purpose, but by much more complex means than just creating a unique definition of each term. Mooers, however, was probably the first to realize how confusing undefined terminology could be. Early automatic indexers dealt with distinct disciplines and, as long as they did not stray beyond disciplinary boundaries, a quick and dirty keyword approach was satisfactory. The trouble came when attempts were made to make a combined index for two or more distinct disciplines. A number of processes have since been developed, mostly involving tagging of some kind or use of strings. Mooers' solution has rarely been considered seriously and probably would be extremely difficult to apply now because of so much interdisciplinarity. But for a specific, well-defined field, it is still well worth considering. Mooers received training in mathematics and physics from the University of Minnesota and the Massachusetts Institute of Technology. He was the founder of Zator Company, which developed and marketed a coded card information retrieval system, and of Rockford Research, Inc., which engages in research in information science. He is the inventor of the TRAC computer language.
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  7. Bhattacharyya, G.: ¬A general theory of subject headings (1974) 0.00
    0.0016283898 = product of:
      0.0065135593 = sum of:
        0.0065135593 = product of:
          0.019540677 = sum of:
            0.019540677 = weight(_text_:a in 1592) [ClassicSimilarity], result of:
              0.019540677 = score(doc=1592,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.3530471 = fieldWeight in 1592, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=1592)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Library science with a slant to documentation. 11(1974), S.23-29
    Type
    a
  8. Melton, J.S.: ¬A use for the techniques of structural linguistics in documentation research (1965) 0.00
    0.0012437033 = product of:
      0.004974813 = sum of:
        0.004974813 = product of:
          0.014924439 = sum of:
            0.014924439 = weight(_text_:a in 834) [ClassicSimilarity], result of:
              0.014924439 = score(doc=834,freq=14.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.26964417 = fieldWeight in 834, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=834)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Index language (the system of symbols for representing subject content after analysis) is considered as a separate component and a variable in an information retrieval system. It is suggested that for purposes of testing, comparing and evaluating index languages, the techniques of structural linguistics may provide a descriptive methodology by which all such languages (hierarchical and faceted classification, analytico-synthetic indexing, traditional subject indexing, indexes and classifications based on automatic text analysis, etc.) could be described in terms of a linguistic model, and compared on a common basis
    Type
    a
  9. Bonzi, S.: Terminological consistency in abstract and concrete disciplines (1984) 0.00
    0.0010075148 = product of:
      0.004030059 = sum of:
        0.004030059 = product of:
          0.012090176 = sum of:
            0.012090176 = weight(_text_:a in 2919) [ClassicSimilarity], result of:
              0.012090176 = score(doc=2919,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.21843673 = fieldWeight in 2919, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2919)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This study tested the hypothesis that the vocabulary of a discipline whose major emphasis is on concrete phenomena will, on the average, have fewer synonyms per concept than will the vocabulary of a discipline whose major emphasis is on abstract phenomena. Subject terms from each of two concrete disciplines and two abstract disciplines were analysed. Results showed that there was a significant difference at the .05 level between concrete and abstract disciplines, but that the significant difference was attributable to only one of the abstract disciplines. The other abstract discipline was not significantly different from the two concrete disciplines. It was concluded that although there is some support for the hypothesis, at least one other factor has a stronger influence on terminological consistency than the phenomena with which a subject deals
    Type
    a
  10. Szostak, R.: Classifying relationships (2012) 0.00
    0.0010075148 = product of:
      0.004030059 = sum of:
        0.004030059 = product of:
          0.012090176 = sum of:
            0.012090176 = weight(_text_:a in 1923) [ClassicSimilarity], result of:
              0.012090176 = score(doc=1923,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.21843673 = fieldWeight in 1923, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1923)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This paper develops a classification of relationships among things, with many potential uses within information science. Unlike previous classifications of relationships, it is hoped that this classification will provide benefits that exceed the costs of application. The major theoretical innovation is to stress the importance of causal relationships, albeit not exclusively. The paper also stresses the advantages of using compounds of simpler terms: verbs compounded with other verbs, adverbs, or things. The classification builds upon a review of the previous literature and a broad inductive survey of potential sources in a recent article in this journal. The result is a classification that is both manageable in size and easy to apply and yet encompasses all of the relationships necessary for classifying documents or even ideas.
    Type
    a
  11. Szostak, R.: Toward a classification of relationships (2012) 0.00
    0.0010075148 = product of:
      0.004030059 = sum of:
        0.004030059 = product of:
          0.012090176 = sum of:
            0.012090176 = weight(_text_:a in 131) [ClassicSimilarity], result of:
              0.012090176 = score(doc=131,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.21843673 = fieldWeight in 131, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=131)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Several attempts have been made to develop a classification of relationships, but none of these have been widely accepted or applied within information science. It would seem that information scientists, while appreciating the potential value of a classification of relationships, have found all previous classifications to be too complicated in application relative to the benefits they provide. This paper begins by reviewing previous attempts and drawing lessons from these. It then surveys a range of sources within and beyond the field of knowledge organization that can together provide the basis for the development of a novel classification of relationships. One critical insight is that relationships governing causation/influence should be accorded priority.
    Type
    a
  12. Gopinath, M.A.; Prasad, K.N.: Compatibility of the principles for design of thesaurus and classification scheme (1976) 0.00
    9.97181E-4 = product of:
      0.003988724 = sum of:
        0.003988724 = product of:
          0.011966172 = sum of:
            0.011966172 = weight(_text_:a in 2943) [ClassicSimilarity], result of:
              0.011966172 = score(doc=2943,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 2943, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2943)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Library science with a slant to documentation. 13(1976) no.2, S.56-66
    Type
    a
  13. Vickery, B.B.: Structure and function in retrieval languages (2006) 0.00
    9.97181E-4 = product of:
      0.003988724 = sum of:
        0.003988724 = product of:
          0.011966172 = sum of:
            0.011966172 = weight(_text_:a in 5584) [ClassicSimilarity], result of:
              0.011966172 = score(doc=5584,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 5584, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5584)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to summarize the varied structural characteristics which may be present in retrieval languages. Design/methodology/approach - The languages serve varied purposes in information systems, and a number of these are identified. The relations between structure and function are discussed and suggestions made as to the most suitable structures needed for various purposes. Findings - A quantitative approach has been developed: a simple measure is the number of separate terms in a retrieval language, but this has to be related to the scope of its subject field. Some ratio of terms to items in the field seems a more suitable measure of the average specificity of the terms. Other aspects can be quantified - for example, the average number of links in hierarchical chains, or the average number of cross-references in a thesaurus. Originality/value - All the approaches to the analysis of retrieval language reported in this paper are of continuing value. Some practical studies of computer information systems undertaken by Aslib Research Department have suggested a further approach.
    Type
    a
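
The Vickery abstract above lists measures that can be quantified for a retrieval language: the number of terms, a terms-to-items ratio as a proxy for specificity, the average length of hierarchical chains, and the average number of cross-references. A small sketch computing them for an invented toy vocabulary:

```python
# Quantitative measures for a toy retrieval language (all data invented).
toy_language = {
    # term: broader term (or None) and cross-references
    "science":     {"broader": None,      "related": []},
    "physics":     {"broader": "science", "related": ["mathematics"]},
    "optics":      {"broader": "physics", "related": []},
    "mathematics": {"broader": "science", "related": ["physics"]},
}
items_in_field = 120   # assumed size of the indexed collection

def chain_length(term, lang):
    """Number of hierarchical links from a term up to the top of its chain."""
    n = 0
    while lang[term]["broader"] is not None:
        term = lang[term]["broader"]
        n += 1
    return n

terms = list(toy_language)
print("terms:", len(terms))
print("terms per item (specificity proxy):", len(terms) / items_in_field)
print("avg. links in hierarchical chains:",
      sum(chain_length(t, toy_language) for t in terms) / len(terms))
print("avg. cross-references per term:",
      sum(len(v["related"]) for v in toy_language.values()) / len(terms))
```
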
  14. Fugmann, R.: ¬Die Aufgabenteilung zwischen Wortschatz und Grammatik in einer Indexsprache (1979) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 3) [ClassicSimilarity], result of:
              0.011281814 = score(doc=3,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 3, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=3)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  15. Fox, E.A.: Lexical relations : enhancing effectiveness of information retrieval systems (1980) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 5310) [ClassicSimilarity], result of:
              0.011281814 = score(doc=5310,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 5310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=5310)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  16. Fugmann, R.: ¬The complementarity of natural and indexing languages (1982) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 7648) [ClassicSimilarity], result of:
              0.011281814 = score(doc=7648,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 7648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=7648)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  17. Haldenwanger, H.H.M.: Begriff und Sprache in der Dokumentation (1961) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 690) [ClassicSimilarity], result of:
              0.011281814 = score(doc=690,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=690)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  18. Beghtol, C.: Relationships in classificatory structure and meaning (2001) 0.00
    9.327775E-4 = product of:
      0.00373111 = sum of:
        0.00373111 = product of:
          0.0111933295 = sum of:
            0.0111933295 = weight(_text_:a in 1138) [ClassicSimilarity], result of:
              0.0111933295 = score(doc=1138,freq=14.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20223314 = fieldWeight in 1138, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1138)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    In a changing information environment, we need to reassess each element of bibliographic control, including classification theories and systems. Every classification system is a theoretical construct imposed on "reality." The classificatory relationships that are assumed to be valuable have generally received less attention than the topics included in the systems. Relationships are functions of both the syntactic and semantic axes of classification systems, and both explicit and implicit relationships are discussed. Examples are drawn from a number of different systems, both bibliographic and non-bibliographic, and the cultural warrant (i.e., the sociocultural context) of classification systems is examined. The part-whole relationship is discussed as an example of a universally valid concept that is treated as a component of the cultural warrant of a classification system.
    Type
    a
  19. Szostak, R.: Facet analysis using grammar (2017) 0.00
    9.2906854E-4 = product of:
      0.0037162742 = sum of:
        0.0037162742 = product of:
          0.0111488225 = sum of:
            0.0111488225 = weight(_text_:a in 3866) [ClassicSimilarity], result of:
              0.0111488225 = score(doc=3866,freq=20.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20142901 = fieldWeight in 3866, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3866)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Basic grammar can achieve most/all of the goals of facet analysis without requiring the use of facet indicators. Facet analysis is thus rendered far simpler for classificationist, classifier, and user. We compare facet analysis and grammar, and show how various facets can be represented grammatically. We then address potential challenges in employing grammar as subject classification. A detailed review of basic grammar supports the hypothesis that it is feasible to usefully employ grammatical construction in subject classification. A manageable - and programmable - set of adjustments is required as classifiers move fairly directly from sentences in a document (or object or idea) description to formulating a subject classification. The user likewise can move fairly quickly from a query to the identification of relevant works. A review of theories in linguistics indicates that a grammatical approach should reduce ambiguity while encouraging ease of use. This paper applies the recommended approach to a small sample of recently published books. It finds that the approach is feasible and results in a more precise subject description than the subject headings assigned at present. It then explores PRECIS, an indexing system developed in the 1970s. Though our approach differs from PRECIS in many important ways, the experience of PRECIS supports our conclusions regarding both feasibility and precision.
    Type
    a
  20. Foskett, D.J.: Classification and integrative levels (1985) 0.00
    9.19731E-4 = product of:
      0.003678924 = sum of:
        0.003678924 = product of:
          0.011036771 = sum of:
            0.011036771 = weight(_text_:a in 3639) [ClassicSimilarity], result of:
              0.011036771 = score(doc=3639,freq=40.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19940455 = fieldWeight in 3639, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3639)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject - systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible in that if one once reached a certain level there was no going back. Foskett points out that with higher levels and greater complexity in structure the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a

Languages

  • e 82
  • d 25
  • f 2
  • ja 1

Types

  • a 96
  • s 5
  • el 4
  • m 4
  • r 4
  • x 3
  • n 1

Classifications