Search (139 results, page 2 of 7)

  • language_ss:"e"
  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Zhang, J.; Zeng, M.L.: A new similarity measure for subject hierarchical structures (2014) 0.02
    0.015297981 = product of:
      0.04589394 = sum of:
        0.013190207 = weight(_text_:information in 1778) [ClassicSimilarity], result of:
          0.013190207 = score(doc=1778,freq=8.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.19395474 = fieldWeight in 1778, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.01958201 = weight(_text_:retrieval in 1778) [ClassicSimilarity], result of:
          0.01958201 = score(doc=1778,freq=2.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.16710453 = fieldWeight in 1778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.013121725 = product of:
          0.02624345 = sum of:
            0.02624345 = weight(_text_:22 in 1778) [ClassicSimilarity], result of:
              0.02624345 = score(doc=1778,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.19345059 = fieldWeight in 1778, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1778)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
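
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation of the 0.02 relevance score. As a minimal sketch, the displayed value can be reconstructed from the numbers shown; freq, docFreq, fieldNorm and queryNorm are copied from the explanation tree, and the helper names are illustrative rather than part of any Lucene API.

      import math

      # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1.0))

      # Each matching term contributes queryWeight * fieldWeight, with
      # queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm.
      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          term_idf = idf(doc_freq, max_docs)
          return (term_idf * query_norm) * (math.sqrt(freq) * term_idf * field_norm)

      MAX_DOCS, QUERY_NORM, FIELD_NORM = 44218, 0.038739666, 0.0390625
      total = (
          term_score(8.0, 20772, MAX_DOCS, QUERY_NORM, FIELD_NORM)        # "information"
          + term_score(2.0, 5836, MAX_DOCS, QUERY_NORM, FIELD_NORM)       # "retrieval"
          + 0.5 * term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM) # "22", nested coord(1/2)
      )
      print(total * 3 / 9)  # coord(3/9): three of nine query clauses matched; ~0.015297981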
    
    Abstract
    Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, nodes on two hierarchical structures are projected onto a two-dimensional space, respectively, and both structural similarity and subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which the structural similarity impacts on the similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts. It suggests that the similarity method achieved satisfactory results. Practical implications - Hierarchical structures that are found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar. Originality/value - Both structural similarity and subject similarity of nodes were considered in the proposed similarity method, and the extent to which the structural similarity impacts on the similarity can be adjusted. In addition, a new evaluation method for hierarchical structure similarity was presented.
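
    The abstract describes the measure only at a high level: structural similarity and subject similarity are blended, with a parameter controlling how much the structural part counts. As a rough illustration only, here is a minimal sketch assuming a simple linear weighting; the parameter name alpha and the linear form are assumptions, not the authors' actual formula.

      # Illustrative only: a linear blend of the two similarity components.
      def combined_similarity(structural_sim, subject_sim, alpha):
          # alpha in [0, 1] controls how strongly structural similarity
          # influences the overall similarity between the two hierarchies.
          return alpha * structural_sim + (1.0 - alpha) * subject_sim

      # Two hierarchies that agree strongly on subjects but differ in structure:
      print(combined_similarity(structural_sim=0.4, subject_sim=0.9, alpha=0.3))  # 0.75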
    Date
    8. 4.2015 16:22:13
  2. Fripp, D.: Using linked data to classify web documents (2010) 0.01
    0.014972444 = product of:
      0.067375995 = sum of:
        0.009233146 = weight(_text_:information in 4172) [ClassicSimilarity], result of:
          0.009233146 = score(doc=4172,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.13576832 = fieldWeight in 4172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
        0.05814285 = weight(_text_:techniques in 4172) [ClassicSimilarity], result of:
          0.05814285 = score(doc=4172,freq=2.0), product of:
            0.17065717 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.038739666 = queryNorm
            0.3406997 = fieldWeight in 4172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
      0.22222222 = coord(2/9)
    
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
  3. Vickery, B.C.: Relations between subject fields : problems of constructing a general classification (1957) 0.01
    0.013961128 = product of:
      0.06282508 = sum of:
        0.01582825 = weight(_text_:information in 566) [ClassicSimilarity], result of:
          0.01582825 = score(doc=566,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.23274569 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=566)
        0.046996824 = weight(_text_:retrieval in 566) [ClassicSimilarity], result of:
          0.046996824 = score(doc=566,freq=2.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.40105087 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=566)
      0.22222222 = coord(2/9)
    
    Source
    Proceedings of the International Study Conference on Classification for Information Retrieval, held at Beatrice Webb House, Dorking, England, 13.-17.5.1957
  4. The need for a faceted classification as the basis of all methods of information retrieval : Memorandum of the Classification Research Group (1997) 0.01
    0.013907982 = product of:
      0.06258592 = sum of:
        0.018276889 = weight(_text_:information in 562) [ClassicSimilarity], result of:
          0.018276889 = score(doc=562,freq=6.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.2687516 = fieldWeight in 562, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=562)
        0.04430903 = weight(_text_:retrieval in 562) [ClassicSimilarity], result of:
          0.04430903 = score(doc=562,freq=4.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.37811437 = fieldWeight in 562, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=562)
      0.22222222 = coord(2/9)
    
    Footnote
    Reprinted from: Proceedings of the International Study Conference on Classification for Information Retrieval, Dorking. London: Aslib 1957.
    Imprint
    The Hague : International Federation for Information and Documentation (FID)
  5. Curras, E.: Ranganathan's classification theories under the systems science postulates (1992) 0.01
    0.013697323 = product of:
      0.061637953 = sum of:
        0.010552166 = weight(_text_:information in 6993) [ClassicSimilarity], result of:
          0.010552166 = score(doc=6993,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.1551638 = fieldWeight in 6993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6993)
        0.051085785 = product of:
          0.10217157 = sum of:
            0.10217157 = weight(_text_:theories in 6993) [ClassicSimilarity], result of:
              0.10217157 = score(doc=6993,freq=2.0), product of:
                0.21161452 = queryWeight, product of:
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.038739666 = queryNorm
                0.4828193 = fieldWeight in 6993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6993)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Source
    Journal of library and information science. 17(1992) no.1, p.45-65
  6. Mai, J.-E.: Classification in context : Relativity, reality, and representation (2004) 0.01
    0.013697323 = product of:
      0.061637953 = sum of:
        0.010552166 = weight(_text_:information in 3017) [ClassicSimilarity], result of:
          0.010552166 = score(doc=3017,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.1551638 = fieldWeight in 3017, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3017)
        0.051085785 = product of:
          0.10217157 = sum of:
            0.10217157 = weight(_text_:theories in 3017) [ClassicSimilarity], result of:
              0.10217157 = score(doc=3017,freq=2.0), product of:
                0.21161452 = queryWeight, product of:
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.038739666 = queryNorm
                0.4828193 = fieldWeight in 3017, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3017)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focuses on contextual information as the guide for the design and construction of classification schemes.
  7. Szostak, R.: ¬A pluralistic approach to the philosophy of classification : a case for "public knowledge" (2015) 0.01
    0.013487187 = product of:
      0.06069234 = sum of:
        0.015992278 = weight(_text_:information in 5541) [ClassicSimilarity], result of:
          0.015992278 = score(doc=5541,freq=6.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.23515764 = fieldWeight in 5541, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5541)
        0.044700064 = product of:
          0.08940013 = sum of:
            0.08940013 = weight(_text_:theories in 5541) [ClassicSimilarity], result of:
              0.08940013 = score(doc=5541,freq=2.0), product of:
                0.21161452 = queryWeight, product of:
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.038739666 = queryNorm
                0.42246687 = fieldWeight in 5541, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5541)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Any classification system should be evaluated with respect to a variety of philosophical and practical concerns. This paper explores several distinct issues: the nature of a work, the value of a statement, the contribution of information science to philosophy, the nature of hierarchy, ethical evaluation, pre- versus postcoordination, the lived experience of librarians, and formalization versus natural language. It evaluates a particular approach to classification in terms of each of these but draws general lessons for philosophical evaluation. That approach to classification emphasizes the free combination of basic concepts representing both real things in the world and the relationships among these; works are also classified in terms of theories, methods, and perspectives applied.
    Content
    Contribution to a special issue: 'Exploring Philosophies of Information'.
    Theme
    Information
  8. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.01
    0.013441136 = product of:
      0.040323406 = sum of:
        0.005711528 = weight(_text_:information in 3262) [ClassicSimilarity], result of:
          0.005711528 = score(doc=3262,freq=6.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.083984874 = fieldWeight in 3262, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.013846572 = weight(_text_:retrieval in 3262) [ClassicSimilarity], result of:
          0.013846572 = score(doc=3262,freq=4.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.11816074 = fieldWeight in 3262, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.020765305 = weight(_text_:techniques in 3262) [ClassicSimilarity], result of:
          0.020765305 = score(doc=3262,freq=2.0), product of:
            0.17065717 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.038739666 = queryNorm
            0.12167847 = fieldWeight in 3262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
      0.33333334 = coord(3/9)
    
    Footnote
    Review in: KO 36(2009) no.1, p.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend of lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of resources for those wishing to engage in further study, which now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address this first part of Gnoli's agenda. Priss provides detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited. Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
    Several of the papers are clearly written as primers and neatly address the second agenda item: attracting others to the study and use of facet analysis. The most valuable papers are written in clear, approachable language. Vickery's paper (p. 145-160) is a clarion call for faceted classification and facet analysis. The heart of the paper is a primer for central concepts and techniques. Vickery explains the value of using faceted classification in document retrieval. Also provided are potential solutions to thorny interface and display issues with facets. Vickery looks to complementary themes in knowledge organization, such as thesauri and ontologies as potential areas for extending the facet concept. Broughton (p. 193-210) describes a rigorous approach to the application of facet analysis in the creation of a compatible thesaurus from the schedules of the 2nd edition of the Bliss Classification (BC2). This discussion of exemplary faceted thesauri, recent standards work, and difficulties encountered in the project will provide valuable guidance for future research in this area. Slavic (p. 257-271) provides a challenge to make faceted classification come 'alive' through promoting the use of machine-readable formats for use and exchange in applications such as Topic Maps and SKOS (Simple Knowledge Organization Systems), and as supported by the standard BS8723 (2005) Structured Vocabulary for Information Retrieval. She also urges designers of faceted classifications to get involved in standards work. Cheti and Paradisi (p. 223-241) outline a basic approach to converting an existing subject indexing tool, the Nuovo Soggetario, into a faceted thesaurus through the use of facet analysis. This discussion, well grounded in the canonical literature, may well serve as a primer for future efforts. Also useful for those who wish to construct faceted thesauri is the article by Tudhope and Binding (p. 211-222). This contains an outline of basic elements to be found in exemplar faceted thesauri, and a discussion of project FACET (Faceted Access to Cultural heritage Terminology) with algorithmically-based semantic query expansion in a dataset composed of items from the National Museum of Science and Industry indexed with AAT (Art and Architecture Thesaurus). This paper looks to the future hybridization of ontologies and facets through standards developments such as SKOS because of the "lightweight semantics" inherent in facets.
    Two of the papers revisit the interaction of facets with the theory of integrative levels, which posits that the organization of the natural world reflects increasingly interdependent complexity. This approach was tested as a basis for the creation of faceted classifications in the 1960s. These contemporary treatments of integrative levels are not discipline-driven as were the early approaches, but instead are ontological and phenomenological in focus. Dahlberg (p. 161-172) outlines the creation of the ICC (Information Coding System) and the application of the Systematifier in the generation of facets and the creation of a fully faceted classification. Gnoli (p. 177-192) proposes the use of fundamental categories as a way to redefine facets and fundamental categories in "more universal and level-independent ways" (p. 192). Given that Axiomathes has a stated focus on "contemporary issues in cognition and ontology" and the following thesis: "that real advances in contemporary science may depend upon a consideration of the origins and intellectual history of ideas at the forefront of current research," this venue seems well suited for the implementation of the stated agenda, to illustrate complementary approaches and to stimulate research. As situated, this special issue may well serve as a bridge to a more interdisciplinary dialogue about facet analysis than has previously been the case."
  9. Szostak, R.: Classifying science : phenomena, data, theory, method, practice (2004) 0.01
    0.013284621 = product of:
      0.05978079 = sum of:
        0.005596131 = weight(_text_:information in 325) [ClassicSimilarity], result of:
          0.005596131 = score(doc=325,freq=4.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.08228803 = fieldWeight in 325, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=325)
        0.05418466 = product of:
          0.10836932 = sum of:
            0.10836932 = weight(_text_:theories in 325) [ClassicSimilarity], result of:
              0.10836932 = score(doc=325,freq=16.0), product of:
                0.21161452 = queryWeight, product of:
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.038739666 = queryNorm
                0.5121072 = fieldWeight in 325, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=325)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Classification is the essential first step in science. The study of science, as well as the practice of science, will thus benefit from a detailed classification of different types of science. In this book, science - defined broadly to include the social sciences and humanities - is first unpacked into its constituent elements: the phenomena studied, the data used, the theories employed, the methods applied, and the practices of scientists. These five elements are then classified in turn. Notably, the classifications of both theory types and methods allow the key strengths and weaknesses of different theories and methods to be readily discerned and compared. Connections across classifications are explored: should certain theories or phenomena be investigated only with certain methods? What is the proper function and form of scientific paradigms? Are certain common errors and biases in scientific practice associated with particular phenomena, data, theories, or methods? The classifications point to several ways of improving both specialized and interdisciplinary research and teaching, and especially of enhancing communication across communities of scholars. The classifications also support a superior system of document classification that would allow searches by theory and method used as well as causal links investigated.
    Content
    Contents: - Chapter 1: Classifying Science: 1.1. A Simple Classificatory Guideline - 1.2. The First "Cut" (and Plan of Work) - 1.3. Some Preliminaries - Chapter 2: Classifying Phenomena and Data: 2.1. Classifying Phenomena - 2.2. Classifying Data - Chapter 3: Classifying Theory: 3.1. Typology of Theory - 3.2. What Is a Theory? - 3.3. Evaluating Theories - 3.4. Types of Theory and the Five Types of Causation - 3.5. Classifying Individual Theories - 3.6. Advantages of a Typology of Theory - Chapter 4: Classifying Method: 4.1. Classifying Methods - 4.2. Typology of Strengths and Weaknesses of Methods - 4.3. Qualitative Versus Quantitative Analysis Revisited - 4.4. Evaluating Methods - 4.5. Classifying Particular Methods Within The Typology - 4.6. Advantages of a Typology of Methods - Chapter 5: Classifying Practice: 5.1. Errors and Biases in Science - 5.2. Typology of (Critiques of) Scientific Practice - 5.3. Utilizing This Classification - 5.4. The Five Types of Ethical Analysis - Chapter 6: Drawing Connections Across These Classifications: 6.1. Theory and Method - 6.2. Theory (Method) and Phenomena (Data) - 6.3. Better Paradigms - 6.4. Critiques of Scientific Practice: Are They Correlated with Other Classifications? - Chapter 7: Classifying Scientific Documents: 7.1. Faceted or Enumerative? - 7.2. Classifying By Phenomena Studied - 7.3. Classifying By Theory Used - 7.4. Classifying By Method Used - 7.5. Links Among Subjects - 7.6. Type of Work, Language, and More - 7.7. Critiques of Scientific Practice - 7.8. Classifying Philosophy - 7.9. Evaluating the System - Chapter 8: Concluding Remarks: 8.1. The Classifications - 8.2. Advantages of These Various Classifications - 8.3. Drawing Connections Across Classifications - 8.4. Golden Mean Arguments - 8.5. Why Should Science Be Believed? - 8.6. How Can Science Be Improved? - 8.7. How Should Science Be Taught?
    Footnote
    Review in: KO 32(2005) no.2, p.93-95 (H. Albrechtsen): "The book deals with mapping of the structures and contents of sciences, defined broadly to include the social sciences and the humanities. According to the author, the study of science, as well as the practice of science, could benefit from a detailed classification of different types of science. The book defines five universal constituents of the sciences: phenomena, data, theories, methods and practice. For each of these constituents, the author poses five questions, in the well-known 5W format: Who, What, Where, When, Why? - with the addition of the question How? (Szostak 2003). Two objectives of the author's endeavor stand out: 1) decision support for university curriculum development across disciplines and decision support for university students at advanced levels of education in selection of appropriate courses for their projects and to support cross-disciplinary inquiry for researchers and students; 2) decision support for researchers and students in scientific inquiry across disciplines, methods and theories. The main prospective audience of this book is university curriculum developers, university students and researchers, in that order of priority. The heart of the book is the chapters unfolding the author's ideas about how to classify phenomena and data, theory, method and practice, by use of the 5W inquiry model. . . .
    Despite its methodological flaws and lack of empirical foundation, the book could potentially bring new ideas to current discussions within the practices of curriculum development and knowledge management as well as design of information systems, and classification schemes as tools for knowledge sharing, decision-making and knowledge exploration. I hesitate to recommend the book to students, except to students at advanced levels of study, because of its biased presentation of the new ideas and its basis on secondary literature."
    Series
    Information Science & Knowledge Management ; 7
  10. Dousa, T.M.: Categories and the architectonics of system in Julius Otto Kaiser's method of systematic indexing (2014) 0.01
    0.013099614 = product of:
      0.03929884 = sum of:
        0.0065951035 = weight(_text_:information in 1418) [ClassicSimilarity], result of:
          0.0065951035 = score(doc=1418,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.09697737 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.01958201 = weight(_text_:retrieval in 1418) [ClassicSimilarity], result of:
          0.01958201 = score(doc=1418,freq=2.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.16710453 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.013121725 = product of:
          0.02624345 = sum of:
            0.02624345 = weight(_text_:22 in 1418) [ClassicSimilarity], result of:
              0.02624345 = score(doc=1418,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.19345059 = fieldWeight in 1418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1418)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Categories, or concepts of high generality representing the most basic kinds of entities in the world, have long been understood to be a fundamental element in the construction of knowledge organization systems (KOSs), particularly faceted ones. Commentators on facet analysis have tended to foreground the role of categories in the structuring of controlled vocabularies and the construction of compound index terms, and the implications of this for subject representation and information retrieval. Less attention has been paid to the variety of ways in which categories can shape the overall architectonic framework of a KOS. This case study explores the range of functions that categories took in structuring various aspects of an early analytico-synthetic KOS, Julius Otto Kaiser's method of Systematic Indexing (SI). Within SI, categories not only functioned as mechanisms to partition an index vocabulary into smaller groupings of terms and as elements in the construction of compound index terms but also served as means of defining the units of indexing, or index items, incorporated into an index; determining the organization of card index files and the articulation of the guide card system serving as navigational aids thereto; and setting structural constraints to the establishment of cross-references between terms. In all these ways, Kaiser's system of categories contributed to the general systematicity of SI.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  11. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.01
    0.012977105 = product of:
      0.05839697 = sum of:
        0.017696522 = weight(_text_:information in 831) [ClassicSimilarity], result of:
          0.017696522 = score(doc=831,freq=10.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.2602176 = fieldWeight in 831, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.040700447 = weight(_text_:retrieval in 831) [ClassicSimilarity], result of:
          0.040700447 = score(doc=831,freq=6.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.34732026 = fieldWeight in 831, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
      0.22222222 = coord(2/9)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. It considers also how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
    Footnote
    Article in a special issue: The philosophy of information
    Theme
    Klassifikationssysteme im Online-Retrieval
  12. Beghtol, C.: The facet concept as a universal principle of subdivision (2006) 0.01
    0.012719265 = product of:
      0.057236694 = sum of:
        0.018466292 = weight(_text_:information in 1483) [ClassicSimilarity], result of:
          0.018466292 = score(doc=1483,freq=8.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.27153665 = fieldWeight in 1483, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1483)
        0.0387704 = weight(_text_:retrieval in 1483) [ClassicSimilarity], result of:
          0.0387704 = score(doc=1483,freq=4.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.33085006 = fieldWeight in 1483, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1483)
      0.22222222 = coord(2/9)
    
    Abstract
    Facet analysis has been one of the foremost contenders as a design principle for information retrieval classifications, both manual and electronic, in the last fifty years. Evidence is presented that the facet concept has a claim to be considered as a method of subdivision that is cognitively available to human beings, regardless of language, culture, or academic discipline. The possibility that faceting is a universal method of subdivision enhances the claim that facet analysis is an unusually useful design principle for information retrieval classifications in any field. This possibility needs further investigation in an age when information access across boundaries is both necessary and possible.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
  13. Mai, J.E.: ¬The future of general classification (2003) 0.01
    0.012191377 = product of:
      0.054861195 = sum of:
        0.010552166 = weight(_text_:information in 5478) [ClassicSimilarity], result of:
          0.010552166 = score(doc=5478,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.1551638 = fieldWeight in 5478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5478)
        0.04430903 = weight(_text_:retrieval in 5478) [ClassicSimilarity], result of:
          0.04430903 = score(doc=5478,freq=4.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.37811437 = fieldWeight in 5478, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5478)
      0.22222222 = coord(2/9)
    
    Abstract
    Discusses problems related to accessing multiple collections using a single retrieval language. Surveys the concepts of interoperability and switching language. Finds that mapping between multiple indexing languages will always be an approximation. Surveys the issues related to general classification and contrasts it with special classifications. Argues for the use of general classifications to provide access to collections nationally and internationally.
    Content
    Contribution to a special issue "Knowledge organization and classification in international information retrieval"
  14. Hjoerland, B.: Facet analysis : the logical approach to knowledge organization (2013) 0.01
    0.012106838 = product of:
      0.05448077 = sum of:
        0.009326885 = weight(_text_:information in 2720) [ClassicSimilarity], result of:
          0.009326885 = score(doc=2720,freq=4.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.13714671 = fieldWeight in 2720, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2720)
        0.045153882 = product of:
          0.090307765 = sum of:
            0.090307765 = weight(_text_:theories in 2720) [ClassicSimilarity], result of:
              0.090307765 = score(doc=2720,freq=4.0), product of:
                0.21161452 = queryWeight, product of:
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.038739666 = queryNorm
                0.426756 = fieldWeight in 2720, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2720)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    The facet-analytic paradigm is probably the most distinct approach to knowledge organization within Library and Information Science, and in many ways it has dominated what has been termed "modern classification theory". It was mainly developed by S.R. Ranganathan and the British Classification Research Group, but it is mostly based on principles of logical division developed more than two millennia ago. Colon Classification (CC) and Bliss 2 (BC2) are among the most important systems developed on this theoretical basis, but it has also influenced the development of other systems, such as the Dewey Decimal Classification (DDC) and is also applied in many websites. It still has a strong position in the field and it is the most explicit and "pure" theoretical approach to knowledge organization (KO) (but it is not by implication necessarily also the most important one). The strength of this approach is its logical principles and the way it provides structures in knowledge organization systems (KOS). The main weaknesses are (1) its lack of empirical basis and (2) its speculative ordering of knowledge without basis in the development or influence of theories and socio-historical studies. It seems to be based on the problematic assumption that relations between concepts are a priori and not established by the development of models, theories and laws.
    Source
    Information processing and management. 49(2013) no.2, p.545-557
  15. Beghtol, C.: Classification for information retrieval and classification for knowledge discovery : relationships between "professional" and "naïve" classifications (2003) 0.01
    0.011980249 = product of:
      0.05391112 = sum of:
        0.0147471 = weight(_text_:information in 3021) [ClassicSimilarity], result of:
          0.0147471 = score(doc=3021,freq=10.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.21684799 = fieldWeight in 3021, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3021)
        0.03916402 = weight(_text_:retrieval in 3021) [ClassicSimilarity], result of:
          0.03916402 = score(doc=3021,freq=8.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.33420905 = fieldWeight in 3021, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3021)
      0.22222222 = coord(2/9)
    
    Abstract
    Classification is a transdisciplinary activity that occurs during all human pursuits. Classificatory activity, however, serves different purposes in different situations. In information retrieval, the primary purpose of classification is to find knowledge that already exists, but one of the purposes of classification in other fields is to discover new knowledge. In this paper, classifications for information retrieval are called "professional" classifications because they are devised by people who have a professional interest in classification, and classifications for knowledge discovery are called "naive" classifications because they are devised by people who have no particular interest in studying classification as an end in itself. This paper compares the overall purposes and methods of these two kinds of classifications and provides a general model of the relationships between the two kinds of classificatory activity in the context of information studies. This model addresses issues of the influence of scholarly activity and communication on the creation and revision of classifications for the purposes of information retrieval and for the purposes of knowledge discovery. Further comparisons elucidate the relationships between the universality of classificatory methods and the specific purposes served by naive and professional classification systems.
  16. Pocock, H.: Classification schemes : development and survival (1997) 0.01
    0.0116342725 = product of:
      0.052354228 = sum of:
        0.013190207 = weight(_text_:information in 762) [ClassicSimilarity], result of:
          0.013190207 = score(doc=762,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.19395474 = fieldWeight in 762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=762)
        0.03916402 = weight(_text_:retrieval in 762) [ClassicSimilarity], result of:
          0.03916402 = score(doc=762,freq=2.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.33420905 = fieldWeight in 762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=762)
      0.22222222 = coord(2/9)
    
    Abstract
    Discusses the development of classification schemes and their ability to adapt to and accommodate changes in the information world in order to survive. Examines the revision plans for the major classification schemes and the future use of classification search facilities for OPACs.
    Theme
    Klassifikationssysteme im Online-Retrieval
  17. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.01
    0.011208165 = product of:
      0.033624496 = sum of:
        0.0074615083 = weight(_text_:information in 2763) [ClassicSimilarity], result of:
          0.0074615083 = score(doc=2763,freq=4.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.10971737 = fieldWeight in 2763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.015665608 = weight(_text_:retrieval in 2763) [ClassicSimilarity], result of:
          0.015665608 = score(doc=2763,freq=2.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.13368362 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.01049738 = product of:
          0.02099476 = sum of:
            0.02099476 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.02099476 = score(doc=2763,freq=2.0), product of:
                0.13565971 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038739666 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms - integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction: Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally the synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation - whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also decides their difference in purpose - in AI, KR is mainly computer-application oriented or pragmatic and the result of representation is used to support decisions on human activities, while in LIS KR is conceptually oriented or abstract and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
  18. Beghtol, C.: Relationships in classificatory structure and meaning (2001) 0.01
    0.011001467 = product of:
      0.0495066 = sum of:
        0.011192262 = weight(_text_:information in 1138) [ClassicSimilarity], result of:
          0.011192262 = score(doc=1138,freq=4.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.16457605 = fieldWeight in 1138, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1138)
        0.03831434 = product of:
          0.07662868 = sum of:
            0.07662868 = weight(_text_:theories in 1138) [ClassicSimilarity], result of:
              0.07662868 = score(doc=1138,freq=2.0), product of:
                0.21161452 = queryWeight, product of:
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.038739666 = queryNorm
                0.36211446 = fieldWeight in 1138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4624767 = idf(docFreq=509, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1138)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    In a changing information environment, we need to reassess each element of bibliographic control, including classification theories and systems. Every classification system is a theoretical construct imposed an "reality." The classificatory relationships that are assumed to be valuable have generally received less attention than the topics included in the systems. Relationships are functions of both the syntactic and semantic axes of classification systems, and both explicit and implicit relationships are discussed. Examples are drawn from a number of different systems, both bibliographic and non-bibliographic, and the cultural warrant (i. e., the sociocultural context) of classification systems is examined. The part-whole relationship is discussed as an example of a universally valid concept that is treated as a component of the cultural warrant of a classification system.
    Series
    Information science and knowledge management; vol.2
  19. Gnoli, C.; Mei, H.: Freely faceted classification for Web-based information retrieval (2006) 0.01
    0.0108032385 = product of:
      0.048614573 = sum of:
        0.007914125 = weight(_text_:information in 534) [ClassicSimilarity], result of:
          0.007914125 = score(doc=534,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.116372846 = fieldWeight in 534, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
        0.040700447 = weight(_text_:retrieval in 534) [ClassicSimilarity], result of:
          0.040700447 = score(doc=534,freq=6.0), product of:
            0.1171842 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.038739666 = queryNorm
            0.34732026 = fieldWeight in 534, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
      0.22222222 = coord(2/9)
    
    Abstract
    In free classification, each concept is expressed by a constant notation, and classmarks are formed by free combinations of them, allowing the retrieval of records from a database by searching any of the component concepts. A refinement of free classification is freely faceted classification, where notation can include facets, expressing the kind of relations held between the concepts. The Integrative Level Classification project aims at testing free and freely faceted classification by applying them to small bibliographical samples in various domains. A sample, called the Dandelion Bibliography of Facet Analysis, is described here. Experience was gained using this system to classify 300 specialized papers dealing with facet analysis itself recorded on a MySQL database and building a Web interface exploiting freely faceted notation. The interface is written in PHP and uses string functions to process the queries and to yield relevant results selected and ordered according to the principles of integrative levels.
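
    As a rough illustration of the retrieval idea described above (classmarks built as free combinations of concept notations, with records retrievable by any component concept), here is a minimal sketch; the record identifiers and notations are invented for illustration, and the actual project used PHP string functions over a MySQL database rather than Python.

      # Hypothetical classmarks formed by freely combining concept notations.
      records = {
          "doc1": "mq wx",
          "doc2": "mq yt",
          "doc3": "wx yt",
      }

      # Retrieve every record whose classmark contains the queried concept notation.
      def search(concept):
          return [doc_id for doc_id, classmark in records.items()
                  if concept in classmark.split()]

      print(search("mq"))  # -> ['doc1', 'doc2']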
    Theme
    Klassifikationssysteme im Online-Retrieval
  20. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.01
    0.010694603 = product of:
      0.048125714 = sum of:
        0.0065951035 = weight(_text_:information in 3869) [ClassicSimilarity], result of:
          0.0065951035 = score(doc=3869,freq=2.0), product of:
            0.06800663 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.038739666 = queryNorm
            0.09697737 = fieldWeight in 3869, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3869)
        0.04153061 = weight(_text_:techniques in 3869) [ClassicSimilarity], result of:
          0.04153061 = score(doc=3869,freq=2.0), product of:
            0.17065717 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.038739666 = queryNorm
            0.24335694 = fieldWeight in 3869, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3869)
      0.22222222 = coord(2/9)
    
    Abstract
    The changes in KO systems induced by sociocultural influences may include those in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. Therefore, the study aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, the analysis was conducted, using a descriptive method, on the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy to show the patterns of the comparison was visualizations of similarities and differences between the two systems. Increasing or decreasing tendencies in the classes through various editions were analyzed. Comparing the compositions of the main classes and distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualizing techniques generates empirical evidence leading to interpretation.
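
    As a rough sketch of the kind of quantitative comparison the abstract describes (counting the class numbers under each main class of the two schemes and inspecting the differences), the snippet below uses invented counts; real KDC and DDC figures would come from the analyzed editions.

      # Invented per-main-class counts of class numbers; not real KDC/DDC data.
      kdc_counts = {"000": 820, "100": 310, "400": 950}
      ddc_counts = {"000": 760, "100": 540, "400": 480}

      # The per-class differences are what the visualizations described above depict.
      for main_class in sorted(kdc_counts):
          diff = kdc_counts[main_class] - ddc_counts[main_class]
          print(f"{main_class}: KDC {kdc_counts[main_class]:>4}  DDC {ddc_counts[main_class]:>4}  diff {diff:+d}")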


Types

  • a 125
  • m 11
  • el 6
  • s 4