Search (105 results, page 1 of 6)

  • theme_ss:"Theorie verbaler Dokumentationssprachen"
  1. Fugmann, R.: ¬The complementarity of natural and controlled languages in indexing (1995) 0.12
    0.12365307 = product of:
      0.16487075 = sum of:
        0.006476338 = weight(_text_:a in 1634) [ClassicSimilarity], result of:
          0.006476338 = score(doc=1634,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.12739488 = fieldWeight in 1634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1634)
        0.10723893 = weight(_text_:et in 1634) [ClassicSimilarity], result of:
          0.10723893 = score(doc=1634,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.5183982 = fieldWeight in 1634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.078125 = fieldNorm(doc=1634)
        0.051155485 = product of:
          0.10231097 = sum of:
            0.10231097 = weight(_text_:al in 1634) [ClassicSimilarity], result of:
              0.10231097 = score(doc=1634,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.5063471 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1634)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Source
    Subject indexing: principles and practices in the 90's. Proceedings of the IFLA Satellite Meeting Held in Lisbon, Portugal, 17-18 August 1993, and sponsored by the IFLA Section on Classification and Indexing and the Instituto da Biblioteca Nacional e do Livro, Lisbon, Portugal. Ed.: R.P. Holley et al
    Type
    a
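The nested breakdown shown for each entry appears to be Lucene "explain" output for its ClassicSimilarity (tf-idf) scoring. As a rough, non-authoritative illustration, the Python sketch below recomputes the headline score of entry 1 from the factors listed above (tf, idf, fieldNorm, queryNorm and the two coord factors); the variable and function names are my own and not part of the search engine's output.

```python
# Recompute the ClassicSimilarity score displayed for entry 1 (doc 1634)
# from the factors in the explain tree above; names are illustrative only.

query_norm = 0.044089027
field_norm = 0.078125            # fieldNorm(doc=1634)
tf = 2.0 ** 0.5                  # tf(freq=2.0) = sqrt(2) ~ 1.4142135

def term_score(idf: float) -> float:
    """queryWeight * fieldWeight for one query term."""
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

score_a  = term_score(1.153047)          # weight(_text_:a)  -> ~0.006476
score_et = term_score(4.692005)          # weight(_text_:et) -> ~0.107239
score_al = term_score(4.582931) * 0.5    # coord(1/2)        -> ~0.051155

total = (score_a + score_et + score_al) * 0.75   # coord(3/4)
print(round(total, 8))                   # ~0.12365307, matching the listed score
```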
  2. Bean, C.: ¬The semantics of hierarchy : explicit parent-child relationships in MeSH tree structures (1998) 0.09
    0.0879655 = product of:
      0.11728734 = sum of:
        0.0064112484 = weight(_text_:a in 42) [ClassicSimilarity], result of:
          0.0064112484 = score(doc=42,freq=4.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.12611452 = fieldWeight in 42, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=42)
        0.07506725 = weight(_text_:et in 42) [ClassicSimilarity], result of:
          0.07506725 = score(doc=42,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.36287874 = fieldWeight in 42, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=42)
        0.03580884 = product of:
          0.07161768 = sum of:
            0.07161768 = weight(_text_:al in 42) [ClassicSimilarity], result of:
              0.07161768 = score(doc=42,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.35444298 = fieldWeight in 42, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=42)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Parent-Child relationships in MeSH trees were surveyed and described, and their patterns in the relational structure were determined for selected broad subject categories and subcategories. Is-a relationships dominated and were more prevalent overall than previously reported; however, an additional 67 different relationships were also seen, most of them nonhierarchical. Relational profiles were found to vary both within and among subject subdomains, but tended to display characteristic domain patterns. The implications for inferential reasoning and other cognitive and computational operations on hierarchical structures are considered
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
    Type
    a
  3. Hudon, M.: ¬A preliminary investigation of the usefulness of semantic relations and of standardized definitions for the purpose of specifying meaning in a thesaurus (1998) 0.08
    0.07841616 = product of:
      0.104554884 = sum of:
        0.009518234 = weight(_text_:a in 55) [ClassicSimilarity], result of:
          0.009518234 = score(doc=55,freq=12.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.18723148 = fieldWeight in 55, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
        0.064343356 = weight(_text_:et in 55) [ClassicSimilarity], result of:
          0.064343356 = score(doc=55,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.3110389 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
        0.03069329 = product of:
          0.06138658 = sum of:
            0.06138658 = weight(_text_:al in 55) [ClassicSimilarity], result of:
              0.06138658 = score(doc=55,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.30380827 = fieldWeight in 55, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.046875 = fieldNorm(doc=55)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The terminological consistency of indexers working with a thesaurus as an indexing aid remains low. This suggests that indexers cannot easily or very clearly perceive the meaning of each descriptor available as an index term. This paper presents the background and some of the findings of a small-scale experiment designed to study the effect on interindexer terminological consistency of modifying the nature of the semantic information given with descriptors in a thesaurus. The study also provided some insights into the respective usefulness of standardized definitions and of traditional networks of hierarchical and associative relationships as means of providing essential meaning information in the thesaurus used as an indexing aid
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
    Type
    a
  4. Mooers, C.N.: ¬The indexing language of an information retrieval system (1985) 0.08
    0.07704813 = product of:
      0.10273084 = sum of:
        0.008481284 = weight(_text_:a in 3644) [ClassicSimilarity], result of:
          0.008481284 = score(doc=3644,freq=28.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.16683382 = fieldWeight in 3644, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.037533626 = weight(_text_:et in 3644) [ClassicSimilarity], result of:
          0.037533626 = score(doc=3644,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.18143937 = fieldWeight in 3644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.056715928 = sum of:
          0.03580884 = weight(_text_:al in 3644) [ClassicSimilarity], result of:
            0.03580884 = score(doc=3644,freq=2.0), product of:
              0.20205697 = queryWeight, product of:
                4.582931 = idf(docFreq=1228, maxDocs=44218)
                0.044089027 = queryNorm
              0.17722149 = fieldWeight in 3644, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.582931 = idf(docFreq=1228, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3644)
          0.020907091 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
            0.020907091 = score(doc=3644,freq=2.0), product of:
              0.15439226 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044089027 = queryNorm
              0.1354154 = fieldWeight in 3644, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3644)
      0.75 = coord(3/4)
    
    Abstract
    Calvin Mooers' work toward the resolution of the problem of ambiguity in indexing went unrecognized for years. At the time he introduced the "descriptor" - a term with a very distinct meaning - indexers were, for the most part, taking index terms directly from the document, without either rationalizing them with context or normalizing them with some kind of classification. It is ironic that Mooers' term came to be attached to the popular but unsophisticated indexing methods which he was trying to root out. Simply expressed, what Mooers did was to take the dictionary definitions of terms and redefine them so clearly that they could not be used in any context except that provided by the new definition. He did, at great pains, construct such meanings for over four hundred words; disambiguation and specificity were sought after and found for these words. He proposed that all indexers adopt this method so that when the index supplied a term, it also supplied the exact meaning for that term as used in the indexed document. The same term used differently in another document would be defined differently and possibly renamed to avoid ambiguity. The disambiguation was achieved by using unabridged dictionaries and other sources of defining terminology. In practice, this tends to produce circularity in definition, that is, word A refers to word B which refers to word C which refers to word A. It was necessary, therefore, to break this chain by creating a new, definitive meaning for each word. Eventually, means such as those used by Austin (q.v.) for PRECIS achieved the same purpose, but by much more complex means than just creating a unique definition of each term. Mooers, however, was probably the first to realize how confusing undefined terminology could be. Early automatic indexers dealt with distinct disciplines and, as long as they did not stray beyond disciplinary boundaries, a quick and dirty keyword approach was satisfactory. The trouble came when attempts were made to make a combined index for two or more distinct disciplines. A number of processes have since been developed, mostly involving tagging of some kind or use of strings. Mooers' solution has rarely been considered seriously and probably would be extremely difficult to apply now because of so much interdisciplinarity. But for a specific, well-defined field, it is still well worth considering. Mooers received training in mathematics and physics from the University of Minnesota and the Massachusetts Institute of Technology. He was the founder of Zator Company, which developed and marketed a coded card information retrieval system, and of Rockford Research, Inc., which engages in research in information science. He is the inventor of the TRAC computer language.
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  5. Fugmann, R.: ¬The complementarity of natural and indexing languages (1985) 0.05
    0.054787997 = product of:
      0.07305066 = sum of:
        0.009692895 = weight(_text_:a in 3641) [ClassicSimilarity], result of:
          0.009692895 = score(doc=3641,freq=28.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.19066721 = fieldWeight in 3641, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3641)
        0.042895574 = weight(_text_:et in 3641) [ClassicSimilarity], result of:
          0.042895574 = score(doc=3641,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.20735928 = fieldWeight in 3641, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3641)
        0.020462193 = product of:
          0.040924385 = sum of:
            0.040924385 = weight(_text_:al in 3641) [ClassicSimilarity], result of:
              0.040924385 = score(doc=3641,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.20253885 = fieldWeight in 3641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3641)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that the achieving of this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead and increasing support was given to the opinion that natural language information systems could perform at least as effectively as, and certainly more economically than, those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions - particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds. Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but also referring to this concept are numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  6. Rolling, L.: ¬The role of graphic display of concept relationships in indexing and retrieval vocabularies (1985) 0.05
    0.0539622 = product of:
      0.0719496 = sum of:
        0.008591834 = weight(_text_:a in 3646) [ClassicSimilarity], result of:
          0.008591834 = score(doc=3646,freq=22.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.16900843 = fieldWeight in 3646, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3646)
        0.042895574 = weight(_text_:et in 3646) [ClassicSimilarity], result of:
          0.042895574 = score(doc=3646,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.20735928 = fieldWeight in 3646, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3646)
        0.020462193 = product of:
          0.040924385 = sum of:
            0.040924385 = weight(_text_:al in 3646) [ClassicSimilarity], result of:
              0.040924385 = score(doc=3646,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.20253885 = fieldWeight in 3646, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3646)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The use of diagrams to express relationships in classification is not new. Many classificationists have used this approach, but usually in a minor display to make a point or for part of a difficult relational situation. Ranganathan, for example, used diagrams for some of his more elusive concepts. The thesaurus in particular and subject headings in general, with direct and indirect cross-references or equivalents, need many more diagrams than normally are included to make relationships and even semantics clear. A picture very often is worth a thousand words. Rolling has used directed graphs (arrowgraphs) to join terms as a practical method for rendering relationships between indexing terms lucid. He has succeeded very well in this endeavor. Four diagrams in this selection are all that one needs to explain how to employ the system, from initial listing to completed arrowgraph. The samples of his work include illustration of off-page connectors between arrowgraphs. The great advantage to using diagrams like this is that they present relations between individual terms in a format that is easy to comprehend. But of even greater value is the fact that one can use his arrowgraphs as schematics for making three-dimensional wire-and-ball models, in which the relationships may be seen even more clearly. In fact, errors or gaps in relations are much easier to find with this methodology. One also can get across the notion of the three-dimensionality of classification systems with such models. Pettee's "hand reaching up and over" (q.v.) is not a figment of the imagination. While the actual hand is a wire or stick, the concept visualized is helpful in illuminating the three-dimensional figure that is latent in all systems that have cross-references or "broader," "narrower," or, especially, "related" terms. Classification schedules, being hemmed in by the dimensions of the printed page, also benefit from such physical illustrations. Rolling, an engineer by conviction, was the developer of information systems for the Cobalt Institute, the European Atomic Energy Community, and the European Coal and Steel Community. He also developed and promoted computer-aided translation at the Commission of the European Communities in Luxembourg. One of his objectives has always been to increase the efficiency of mono- and multilingual thesauri for use in multinational information systems.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  7. Dextre Clarke, S.G.; Gilchrist, A.; Will, L.: Revision and extension of thesaurus standards (2004) 0.05
    0.05366232 = product of:
      0.07154976 = sum of:
        0.0081919925 = weight(_text_:a in 2615) [ClassicSimilarity], result of:
          0.0081919925 = score(doc=2615,freq=20.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.16114321 = fieldWeight in 2615, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2615)
        0.042895574 = weight(_text_:et in 2615) [ClassicSimilarity], result of:
          0.042895574 = score(doc=2615,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.20735928 = fieldWeight in 2615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.03125 = fieldNorm(doc=2615)
        0.020462193 = product of:
          0.040924385 = sum of:
            0.040924385 = weight(_text_:al in 2615) [ClassicSimilarity], result of:
              0.040924385 = score(doc=2615,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.20253885 = fieldWeight in 2615, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2615)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The current standards for monolingual and multilingual thesauri are long overdue for an update. This applies to the international standards ISO 2788 and ISO 5964, as well as the corresponding national standards in several countries and the American standard ANSI/NISO Z39.19. Work is now under way in the UK and in the USA to revise and extend the standards, with particular emphasis on interoperability needs in our world of vast electronic networks. Work in the UK is starting with the British Standards, in the hope of leading on to one international standard to serve all. Some of the issues still under discussion include the treatment of facet analysis, coverage of additional types of controlled vocabulary such as classification schemes, taxonomies and ontologies, and mapping from one vocabulary to another. 1. Are thesaurus standards still needed? Since the 1960s, even before the renowned Cranfield experiments (Cleverdon et al., 1966; Cleverdon, 1967), arguments have raged over the usefulness or otherwise of controlled vocabularies. The case has never been proved definitively one way or the other. At the same time, a recognition has become widespread that no one search method can answer all retrieval requirements. In today's environment of very large networks of resources, the skilled information professional uses a range of techniques. Among these, controlled vocabularies are valued alongside others. The first international standard for monolingual thesauri was issued in 1974. In those days, the main application was for postcoordinate indexing and retrieval from document collections or bibliographic databases. For many information professionals the only practicable alternative to a thesaurus was a classification scheme. And so the thesaurus developed a strong following. After computer systems with full text search capability became widely available, however, the arguments against controlled vocabularies gained more followers. The cost of building and maintaining a thesaurus or a classification scheme was a strong disincentive. Today's databases are typically immense compared with those three decades ago. Full text searching is taken for granted, not just in discrete databases but across all the resources in an intranet or even the Internet. But intranets have brought particular frustration as users discover that despite all the computer power, they cannot find items which they know to be present on the network. So the trend against controlled vocabularies is now being reversed, as many information professionals are turning to them for help. Standards to guide them are still in demand.
    Type
    a
  8. Coates, E.J.: Significance and term relationship in compound headings (1985) 0.05
    0.05366232 = product of:
      0.07154976 = sum of:
        0.0081919925 = weight(_text_:a in 3634) [ClassicSimilarity], result of:
          0.0081919925 = score(doc=3634,freq=20.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.16114321 = fieldWeight in 3634, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3634)
        0.042895574 = weight(_text_:et in 3634) [ClassicSimilarity], result of:
          0.042895574 = score(doc=3634,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.20735928 = fieldWeight in 3634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3634)
        0.020462193 = product of:
          0.040924385 = sum of:
            0.040924385 = weight(_text_:al in 3634) [ClassicSimilarity], result of:
              0.040924385 = score(doc=3634,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.20253885 = fieldWeight in 3634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3634)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In the continuing search for criteria for determining the form of compound headings (i.e., headings containing more than one word), many authors have attempted to deal with the problem of entry element and citation order. Among the proposed criteria are Cutter's concept of "significance," Kaiser's formula of "concrete/process," Prevost's "noun rule," and Farradane's categories of relationships (q.v.). One of the problems in applying the criteria has been the difficulty in determining what is "significant," particularly when two or more words in the heading all refer to concrete objects. In the following excerpt from Subject Catalogues: Headings and Structure, a widely cited book on the alphabetical subject catalog, E. J. Coates proposes the concept of "term significance," that is, "the word which evokes the clearest mental image," as the criterion for determining the entry element in a compound heading. Since a concrete object generally evokes a clearer mental image than an action or process, Coates' theory is in line with Kaiser's theory of "concrete/process" (q.v.) which Coates renamed "thing/action." For determining the citation order of component elements in a compound heading where the elements are equally "significant" (i.e., both or all evoking clear mental images), Coates proposes the use of "term relationship" as the determining factor. He has identified twenty different kinds of relationships among terms and set down the citation order for each. Another frequently encountered problem related to citation order is the determination of the entry element for a compound heading which contains a topic and a locality. Entering such headings uniformly under either the topic or the locality has proven to be infeasible in practice. Many headings of this type have the topic as the main heading, subdivided by the locality; others are entered under the locality as the main heading with the topic as the subdivision. No criteria or rules have been proposed that ensure consistency or predictability. In the following selection, Coates attempts to deal with this problem by ranking the "main areas of knowledge according to the extent to which they appear to be significantly conditioned by locality." The theory Coates expounded in his book was put into practice in compiling the British Technology Index for which Coates served as the editor from 1961 to 1977.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  9. Mazzocchi, F.: Relations in KOS : is it possible to couple a common nature with different roles? (2017) 0.05
    0.05334703 = product of:
      0.071129374 = sum of:
        0.007771606 = weight(_text_:a in 78) [ClassicSimilarity], result of:
          0.007771606 = score(doc=78,freq=18.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.15287387 = fieldWeight in 78, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=78)
        0.042895574 = weight(_text_:et in 78) [ClassicSimilarity], result of:
          0.042895574 = score(doc=78,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.20735928 = fieldWeight in 78, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.03125 = fieldNorm(doc=78)
        0.020462193 = product of:
          0.040924385 = sum of:
            0.040924385 = weight(_text_:al in 78) [ClassicSimilarity], result of:
              0.040924385 = score(doc=78,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.20253885 = fieldWeight in 78, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.03125 = fieldNorm(doc=78)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The purpose of this paper, which extends and deepens what was expressed in a previous work (Mazzocchi et al., 2007), is to scrutinize the underlying assumptions of the types of relations included in thesauri, particularly the genus-species relation. Logicist approaches to information organization, which are still dominant, will be compared with hermeneutically oriented approaches. In the light of these approaches, the nature and features of the relations, and what the notion of a priori could possibly mean with regard to them, are examined, together with the implications for designing and implementing knowledge organization systems (KOS).
    Design/methodology/approach: The inquiry is based on how the relations are described in the literature, engaging in particular a discussion with Hjørland (2015) and Svenonius (2004). The philosophical roots of today's leading views are briefly illustrated, in order to put them under perspective and deconstruct the uncritical reception of their authority. To corroborate the discussion, a semantic analysis of specific terms and relations is provided too.
    Findings: All relations should be seen as "perspectival" (not as a priori). On the other hand, different types of relations, depending on the conceptual features of the terms involved, can hold a different degree of "stability." On this basis, they could be used to address different information concerns (e.g. interoperability vs expressiveness).
    Research limitations/implications: Some arguments that the paper puts forth at the conceptual level need to be tested in application contexts.
    Originality/value: This paper considers that the standpoints of logic and of hermeneutics (usually seen as conflicting) are both significant for information organization, and could be pragmatically integrated. In accordance with this view, an extension of the set of thesaurus relations is advised, meaning that perspective hierarchical relations (i.e. relations that are not logically based but function contingently) should also be included in such a set.
    Type
    a
  10. Foskett, D.J.: Classification and integrative levels (1985) 0.05
    0.049181342 = product of:
      0.06557512 = sum of:
        0.010137074 = weight(_text_:a in 3639) [ClassicSimilarity], result of:
          0.010137074 = score(doc=3639,freq=40.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.19940455 = fieldWeight in 3639, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3639)
        0.037533626 = weight(_text_:et in 3639) [ClassicSimilarity], result of:
          0.037533626 = score(doc=3639,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.18143937 = fieldWeight in 3639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3639)
        0.01790442 = product of:
          0.03580884 = sum of:
            0.03580884 = weight(_text_:al in 3639) [ClassicSimilarity], result of:
              0.03580884 = score(doc=3639,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.17722149 = fieldWeight in 3639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3639)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject - systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible in that if one once reached a certain level there was no going back. Foskett points out that with higher levels and greater complexity in structure the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  11. Farradane, J.E.L.: Fundamental fallacies and new needs in classification (1985) 0.04
    0.04047165 = product of:
      0.0539622 = sum of:
        0.006443876 = weight(_text_:a in 3642) [ClassicSimilarity], result of:
          0.006443876 = score(doc=3642,freq=22.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.12675633 = fieldWeight in 3642, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3642)
        0.032171678 = weight(_text_:et in 3642) [ClassicSimilarity], result of:
          0.032171678 = score(doc=3642,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.15551946 = fieldWeight in 3642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3642)
        0.015346645 = product of:
          0.03069329 = sum of:
            0.03069329 = weight(_text_:al in 3642) [ClassicSimilarity], result of:
              0.03069329 = score(doc=3642,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.15190414 = fieldWeight in 3642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3642)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This chapter from The Sayers Memorial Volume summarizes Farradane's earlier work in which he developed his major themes by drawing in part upon research in psychology, and particularly those discoveries called "cognitive" which now form part of cognitive science. Farradane, a chemist by training who later became an information scientist and Director of the Center for Information Science, City University, London, from 1958 to 1973, defines the various types of methods used to achieve classification systems - philosophic, scientific, and synthetic. Early on he distinguishes the view that classification is "some part of external 'reality' waiting to be discovered" from that view which considers it "an intellectual operation upon mental entities and concepts." Classification, therefore, is to be treated as a mental construct and not as something "out there" to be discovered as, say, in astronomy or botany. His approach could be termed, somewhat facetiously, an "in there" one, meaning found by utilizing the human brain as the key tool. This is not to say that discoveries in astronomy or botany do not require the use of the brain as a key tool. It is merely that the "material" worked upon by this tool is presented to it for observation by "that inward eye," by memory and by inference rather than by planned physical observation, memory, and inference. This distinction could be refined or clarified by considering the initial "observation" as a specific kind of mental set required in each case. Farradane then proceeds to demolish the notion of main classes as "fictitious," partly because the various category-defining methodologies used in library classification are "randomly mixed." The implication, probably correct, is that this results in mixed metaphorical concepts. It is an interesting contrast to the approach of Julia Pettee (q.v.), who began with indexing terms and, in studying relationships between terms, discovered hidden hierarchies both between the terms themselves and between the cross-references leading from one term or set of terms to another. One is tempted to ask two questions: "Is hierarchy innate but misinterpreted?" and "Is it possible to have meaningful terms which have only categorical relationships (that have no see also or equivalent relationships to other, out-of-category terms)?" Partly as a result of the rejection of existing general library classification systems, the Classification Research Group - of which Farradane was a charter member - decided to adopt the principles of Ranganathan's faceted classification system, while rejecting his limit on the number of fundamental categories. The advantage of the faceted method is that it is created by inductive, rather than deductive, methods. It can be altered more readily to keep up with changes in and additions to the knowledge base in a subject without having to re-do the major schedules. In 1961, when Farradane's paper appeared, the computer was beginning to be viewed as a tool for solving all information retrieval problems. He tartly remarks:
    The basic fallacy of mechanised information retrieval systems seems to be the often unconscious but apparently implied assumption that the machine can inject meaning into a group of juxtaposed terms although no methods of conceptual analysis and re-synthesis have been programmed (p. 203). As an example, he suggests considering the slight but vital differences in the meaning of the word "of" in selected examples: swarm of bees, house of the mayor, House of Lords, spectrum of the sun, basket of fish, meeting of councillors, cooking of meat, book of the film. Farradane's distinctive contribution is his matrix of basic relationships. The rows concern time and memory, in degree of happenstance: coincidentally, occasionally, or always. The columns represent degree of the "powers of discrimination": occurring together, linked by common elements only, or standing alone. To make these relationships easily managed, he used symbols for each of the nine kinds - "symbols found on every typewriter": /O (Theta), /*, /;, /=, /+, /(, /), /_, /:. Farradane has maintained his basic insights to the present day. Though he has gone on to do other kinds of research in classification, his work indicates that he still believes that "the primary task ... is that of establishing satisfactory and enduring principles of subject analysis, or classification" (p. 208).
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  12. Maniez, J.: Actualité des langages documentaires : fondements théoriques de la recherche d'information (2002) 0.03
    0.032171678 = product of:
      0.12868671 = sum of:
        0.12868671 = weight(_text_:et in 887) [ClassicSimilarity], result of:
          0.12868671 = score(doc=887,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.6220778 = fieldWeight in 887, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.09375 = fieldNorm(doc=887)
      0.25 = coord(1/4)
    
    Series
    Collections Sciences de l'Information, série Etudes et techniques
  13. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.03
    0.028759234 = product of:
      0.057518467 = sum of:
        0.015704287 = weight(_text_:a in 4506) [ClassicSimilarity], result of:
          0.015704287 = score(doc=4506,freq=6.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.3089162 = fieldWeight in 4506, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4506)
        0.041814182 = product of:
          0.083628364 = sum of:
            0.083628364 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.083628364 = score(doc=4506,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
    Type
    a
  14. Mikacic, M.: Statistical system for subject designation (SSSD) for libraries in Croatia (1996) 0.02
    0.02138242 = product of:
      0.04276484 = sum of:
        0.008973878 = weight(_text_:a in 2943) [ClassicSimilarity], result of:
          0.008973878 = score(doc=2943,freq=6.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.17652355 = fieldWeight in 2943, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=2943)
        0.03379096 = product of:
          0.06758192 = sum of:
            0.06758192 = weight(_text_:22 in 2943) [ClassicSimilarity], result of:
              0.06758192 = score(doc=2943,freq=4.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.4377287 = fieldWeight in 2943, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2943)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Describes the development of the Statistical System for Subject Designation (SSSD): a syntactical system for subject designation for libraries in Croatia, based on the construction of subject headings in agreement with the theory of the sentence nature of subject headings. The discussion is preceded by a brief summary of the theories underlying the basic principles and fundamental rules of the alphabetical subject catalogue
    Date
    31. 7.2006 14:22:21
    Source
    Cataloging and classification quarterly. 22(1996) no.1, S.77-93
    Type
    a
  15. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    0.018171806 = product of:
      0.03634361 = sum of:
        0.006476338 = weight(_text_:a in 6089) [ClassicSimilarity], result of:
          0.006476338 = score(doc=6089,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.12739488 = fieldWeight in 6089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=6089)
        0.029867273 = product of:
          0.059734546 = sum of:
            0.059734546 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.059734546 = score(doc=6089,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Pages
    S.11-22
    Type
    a
  16. Dextre Clarke, S.G.: Thesaural relationships (2001) 0.01
    0.014986983 = product of:
      0.029973965 = sum of:
        0.009066874 = weight(_text_:a in 1149) [ClassicSimilarity], result of:
          0.009066874 = score(doc=1149,freq=8.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.17835285 = fieldWeight in 1149, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1149)
        0.020907091 = product of:
          0.041814182 = sum of:
            0.041814182 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.041814182 = score(doc=1149,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.2708308 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A thesaurus in the controlled vocabulary environment is a tool designed to support effective information retrieval (IR) by guiding indexers and searchers consistently to choose the same terms for expressing a given concept or combination of concepts. Terms in the thesaurus are linked by relationships of three well-known types: equivalence, hierarchical, and associative. The functions and properties of these three basic types and some subcategories are described, as well as some additional relationship types commonly found in thesauri. Progressive automation of IR processes and the capability for simultaneous searching of vast networked resources are creating some pressures for change in the categorization and consistency of relationships.
    Date
    22. 9.2007 15:45:57
    Type
    a
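Entry 16 describes the three classic thesaurus relationship types (equivalence, hierarchical, associative), conventionally tagged USE/UF, BT/NT and RT in thesaurus standards. Purely as an illustrative aside, a minimal term record carrying those links might look like the sketch below; the class and field names are assumptions of mine, not drawn from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ThesaurusTerm:
    """Minimal thesaurus entry carrying the three classic relationship types."""
    label: str
    use: str | None = None                             # equivalence: non-preferred -> preferred term
    used_for: list[str] = field(default_factory=list)  # equivalence (UF)
    broader: list[str] = field(default_factory=list)   # hierarchical (BT)
    narrower: list[str] = field(default_factory=list)  # hierarchical (NT)
    related: list[str] = field(default_factory=list)   # associative (RT)

# Indexer and searcher are steered to the same preferred term and its neighbours.
pesticides = ThesaurusTerm(
    label="pesticides",
    used_for=["pest control agents"],
    broader=["agrochemicals"],
    narrower=["herbicides", "insecticides"],
    related=["pests"],
)
print(pesticides.broader)   # ['agrochemicals']
```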
  17. Maniez, J.: Fusion de banques de données documentaires et compatibilité des langages d'indexation (1997) 0.01
    0.012845984 = product of:
      0.025691967 = sum of:
        0.007771606 = weight(_text_:a in 2246) [ClassicSimilarity], result of:
          0.007771606 = score(doc=2246,freq=8.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.15287387 = fieldWeight in 2246, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2246)
        0.017920362 = product of:
          0.035840724 = sum of:
            0.035840724 = weight(_text_:22 in 2246) [ClassicSimilarity], result of:
              0.035840724 = score(doc=2246,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.23214069 = fieldWeight in 2246, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2246)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Discusses the apparently unattainable goal of compatibility of information languages. While controlled languages can improve retrieval performance within a single system, they make cooperation across different systems more difficult. The Internet and downloading accentuate this adverse outcome, and the acceleration of data exchange aggravates the problem of compatibility. Defines this familiar concept and demonstrates that coherence is just as necessary as it was for indexing languages, the proliferation of which has created confusion in grouped data banks. Describes 2 types of potential solutions, similar to those applied to automatic translation of natural languages: harmonizing the information languages themselves, which is both difficult and expensive, or the more flexible solution of automatically harmonizing indexing formulae on the basis of pre-established concordance tables. However, structural incompatibilities between post-coordinated languages and classifications may lead any harmonization tools up a blind alley, while the paths of a universal concordance model are rare and narrow
    Date
    1. 8.1996 22:01:00
    Type
    a
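Entry 17 mentions automatic harmonization of indexing formulae via pre-established concordance tables. As a toy illustration only (the vocabulary pairs and function below are invented for this sketch, not taken from the article), such a table can be applied as a simple term-for-term rewrite:

```python
# Toy concordance table mapping descriptors of a source vocabulary to a target vocabulary.
concordance = {
    "motorcars": "automobiles",
    "neoplasms": "cancer",
    "pesticides": "pest control agents",
}

def harmonize(indexing_formula: list[str]) -> list[str]:
    """Rewrite an indexing formula into the target vocabulary, keeping unmapped terms as-is."""
    return [concordance.get(term, term) for term in indexing_formula]

print(harmonize(["motorcars", "safety"]))   # ['automobiles', 'safety']
```

The article's caveat still applies: a flat table like this cannot bridge structural differences between post-coordinated languages and classifications.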
  18. Degez, D.: Compatibilité des langages d'indexation : mariage, cohabitation ou fusion? Quelques exemples concrets (1998) 0.01
    0.0127202645 = product of:
      0.025440529 = sum of:
        0.004533437 = weight(_text_:a in 2245) [ClassicSimilarity], result of:
          0.004533437 = score(doc=2245,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.089176424 = fieldWeight in 2245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2245)
        0.020907091 = product of:
          0.041814182 = sum of:
            0.041814182 = weight(_text_:22 in 2245) [ClassicSimilarity], result of:
              0.041814182 = score(doc=2245,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.2708308 = fieldWeight in 2245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2245)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    1. 8.1996 22:01:00
    Type
    a
  19. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.01
    0.0127202645 = product of:
      0.025440529 = sum of:
        0.004533437 = weight(_text_:a in 4792) [ClassicSimilarity], result of:
          0.004533437 = score(doc=4792,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.089176424 = fieldWeight in 4792, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4792)
        0.020907091 = product of:
          0.041814182 = sum of:
            0.041814182 = weight(_text_:22 in 4792) [ClassicSimilarity], result of:
              0.041814182 = score(doc=4792,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.2708308 = fieldWeight in 4792, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4792)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Type
    a
  20. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.01
    0.010704987 = product of:
      0.021409974 = sum of:
        0.006476338 = weight(_text_:a in 106) [ClassicSimilarity], result of:
          0.006476338 = score(doc=106,freq=8.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.12739488 = fieldWeight in 106, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=106)
        0.014933636 = product of:
          0.029867273 = sum of:
            0.029867273 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.029867273 = score(doc=106,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions.
    Design/methodology/approach: This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions.
    Findings: Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data.
    Originality/value: This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
    Type
    a
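Entry 20 describes knowledge graphs as the combination of a vocabulary-defined schema layer and an instance-level data layer. As a loose, invented illustration of that split (not drawn from the paper), a toy KG might be sketched as follows:

```python
# Toy knowledge graph: a vocabulary-defined schema layer plus a triple-based data layer.
schema = {
    "Person": {"authorOf": "Work"},     # schema layer: classes and the relations they allow
    "Work":   {"publishedIn": "Year"},
}

data = [                                # data layer: instance-level triples
    ("Calvin Mooers", "rdf:type", "Person"),
    ("Calvin Mooers", "authorOf", "The indexing language of an information retrieval system"),
    ("The indexing language of an information retrieval system", "publishedIn", "1985"),
]

def allowed_predicates(subject_class: str) -> set[str]:
    """Schema lookup: which predicates the vocabulary permits for a given class."""
    return set(schema.get(subject_class, {}))

print(allowed_predicates("Person"))     # {'authorOf'}
```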

Languages

  • e 81
  • d 20
  • f 3
  • ja 1

Types

  • a 96
  • m 5
  • s 5
  • el 3
  • r 2

Classifications