Search (65 results, page 1 of 4)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999) 0.05
    0.046775818 = product of:
      0.093551636 = sum of:
        0.093551636 = sum of:
          0.048973244 = weight(_text_:work in 2464) [ClassicSimilarity], result of:
            0.048973244 = score(doc=2464,freq=2.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.2433148 = fieldWeight in 2464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2464)
          0.04457839 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
            0.04457839 = score(doc=2464,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 2464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2464)
      0.5 = coord(1/2)
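     The indented block above is the search engine's relevance-score explanation for this entry. Assuming it follows Lucene's ClassicSimilarity conventions, as the labels (queryWeight, fieldWeight, tf, idf, queryNorm, fieldNorm) suggest, each term's score is a query weight (idf x queryNorm) times a field weight (tf x idf x fieldNorm, with tf = the square root of the term frequency); the per-term scores are summed and scaled by the coordination factor. Plugging in the numbers shown above:

       \[
       \begin{aligned}
       w_{\text{work}} &= \underbrace{(3.6703904 \times 0.054837555)}_{\text{queryWeight}} \times \underbrace{(\sqrt{2} \times 3.6703904 \times 0.046875)}_{\text{fieldWeight}} \approx 0.048973 \\
       w_{22} &= (3.5018296 \times 0.054837555) \times (\sqrt{2} \times 3.5018296 \times 0.046875) \approx 0.044578 \\
       \text{score} &= \mathrm{coord}(1/2) \times (w_{\text{work}} + w_{22}) = 0.5 \times 0.093552 \approx 0.0468
       \end{aligned}
       \]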
    
    Abstract
    A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order (a small illustrative sketch of this workflow follows the entry). Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough, and there is much to consider for anyone putting a classification on the web.
    Source
    Library trends. 48(1999) no.1, S.22-47
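     A minimal sketch of the four-step workflow named in the abstract above (choose facets, develop their terms, analyze entities by facet, fix a citation order). The facet names, terms, and example entity are hypothetical, invented purely for illustration; they are not taken from Kwasnik's article.

       from dataclasses import dataclass, field

       # Steps 1-2: choose facets and develop their terms (each facet is an independent axis).
       FACETS = {
           "Material": ["wood", "metal", "plastic"],
           "Process":  ["carving", "casting", "moulding"],
           "Use":      ["furniture", "tools", "toys"],
       }

       # Step 4: the citation order fixes the sequence in which facets are cited in a class mark.
       CITATION_ORDER = ["Material", "Process", "Use"]

       @dataclass
       class Entity:
           name: str
           analysis: dict = field(default_factory=dict)   # Step 3: facet -> chosen term

       def class_mark(entity: Entity) -> str:
           """Cite the entity's facet terms in the fixed citation order."""
           return " / ".join(entity.analysis[f] for f in CITATION_ORDER if f in entity.analysis)

       chair = Entity("garden chair", {"Material": "wood", "Process": "carving", "Use": "furniture"})
       print(class_mark(chair))   # -> wood / carving / furniture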
  2. Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification" (2010) 0.05
    0.046775818 = product of:
      0.093551636 = sum of:
        0.093551636 = sum of:
          0.048973244 = weight(_text_:work in 2945) [ClassicSimilarity], result of:
            0.048973244 = score(doc=2945,freq=2.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.2433148 = fieldWeight in 2945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2945)
          0.04457839 = weight(_text_:22 in 2945) [ClassicSimilarity], result of:
            0.04457839 = score(doc=2945,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 2945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2945)
      0.5 = coord(1/2)
    
    Abstract
    Argues that Beghtol's (2003) use of the terms "naive classification" and "professional classification" is valid because they are nominal definitions, and that the distinction between these two types of classification highlights the need for researchers in knowledge organization to broaden their scope beyond traditional classification systems intended for information retrieval. Argues that work by Beghtol (2003), Kwasnik (1999) and Bailey (1994) offers direction for the development of a classification of classifications based on the pragmatic dimensions of extant classification systems. With reference to: Beghtol, C.: Naïve classification systems and the global information society. In: Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine. Würzburg: Ergon Verlag 2004. S.19-22. (Advances in knowledge organization; vol.9)
  3. Albrechtsen, H.; Pejtersen, A.M.: Cognitive work analysis and work centered design of classification schemes (2003) 0.04
    0.03871675 = product of:
      0.0774335 = sum of:
        0.0774335 = product of:
          0.154867 = sum of:
            0.154867 = weight(_text_:work in 3005) [ClassicSimilarity], result of:
              0.154867 = score(doc=3005,freq=20.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.76942897 = fieldWeight in 3005, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3005)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
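     The per-term arithmetic in these explanation trees can be checked in a few lines. This is a sketch of the formula the labels suggest (Lucene's ClassicSimilarity, with tf = sqrt(freq)), not the database's actual code; the numeric values below are read off the entry above.

       import math

       def classic_term_score(freq: float, idf: float, query_norm: float, field_norm: float) -> float:
           """Per-term score: (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
           query_weight = idf * query_norm                      # queryWeight
           field_weight = math.sqrt(freq) * idf * field_norm    # tf * idf * fieldNorm
           return query_weight * field_weight

       # Values read off entry 3 above (term "work", freq=20):
       w = classic_term_score(freq=20.0, idf=3.6703904,
                              query_norm=0.054837555, field_norm=0.046875)
       print(round(w, 6))               # ~0.154867, the weight shown above
       print(round(w * 0.5 * 0.5, 8))   # ~0.03871675 after the two coord(1/2) factors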
    
    Abstract
    Work centered design of classification schemes is an emerging area of research which poses particular challenges to domain analysis and scheme construction. A key challenge in work centered design of classification schemes is the evolving semantics of work. This article introduces a work centered approach to the design of classification schemes, based on the framework of cognitive work analysis. We introduce collaborative task situations as a new unit of analysis for capturing evolving semantic structures in work domains. An example case from a cognitive work analysis of three national film research archives illustrates the application of the framework for identifying actors' needs for a classification scheme to support collaborative knowledge integration. It is concluded that a main contribution of the new approach is support for empirical analysis and overall design of classification schemes that can serve as material interfaces for actors' negotiations and integration of knowledge perspectives during collaborative work.
  4. Mai, J.-E.: ¬The modernity of classification (2011) 0.03
    0.031939685 = product of:
      0.06387937 = sum of:
        0.06387937 = product of:
          0.12775874 = sum of:
            0.12775874 = weight(_text_:work in 293) [ClassicSimilarity], result of:
              0.12775874 = score(doc=293,freq=10.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.6347465 = fieldWeight in 293, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=293)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to explore the modernity of current classification theory and work, and outline a foundation for moving classification toward a late-modern conception. Design/methodology/approach - The paper examines the conceptual foundation for current modern classification work, provides critical analysis of that approach, and outlines three conflicts with modernity that shape the path out of the consequences of modernity. Findings - The paper presents an understanding of classification that establishes classification on a late-modern epistemology, and it lays out the contours of how to reclaim the intellectual core of classification theory and work. Originality/value - The paper establishes a foundation for rethinking classification work, outlines consequences of current mainstream work, and provides concepts for developing late-modern classification theory and practice.
  5. Quinn, B.: Recent theoretical approaches in classification and indexing (1994) 0.03
    0.028274715 = product of:
      0.05654943 = sum of:
        0.05654943 = product of:
          0.11309886 = sum of:
            0.11309886 = weight(_text_:work in 8276) [ClassicSimilarity], result of:
              0.11309886 = score(doc=8276,freq=6.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.56191146 = fieldWeight in 8276, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article is a selective review of recent studies in classification and indexing theory. A number of important problems are discussed, including subjectivity versus objectivity, theories of indexing, the theoretical role of automation, and theoretical approaches to a universal classification scheme. Interestingly, much of the work appears to have been done outside the United States. After reviewing the theoretical work itself, the article explores some possible reasons for its non-American origins
  6. Moss, R.: Categories and relations : Origins of two classification theories (1964) 0.02
    0.023086209 = product of:
      0.046172418 = sum of:
        0.046172418 = product of:
          0.092344835 = sum of:
            0.092344835 = weight(_text_:work in 1816) [ClassicSimilarity], result of:
              0.092344835 = score(doc=1816,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.4587988 = fieldWeight in 1816, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1816)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The resemblances between the categories of Aristotle and those of Ranganathan are shown. These categories are examined in the light of criticism made by Bertrand Russell and are shown to have no validity. Similar comparisons are made between the relations of Hume and Farradane. Farradane's work is a return to Hume, who is generally acknowledged as one of the founders of the British school of empirical philosophy which continues to Russell and beyond. In Russell's work lies the most promising line of development for information classification and indexing
  7. Maniez, J.: ¬Des classifications aux thesaurus : du bon usage des facettes (1999) 0.02
    0.022289194 = product of:
      0.04457839 = sum of:
        0.04457839 = product of:
          0.08915678 = sum of:
            0.08915678 = weight(_text_:22 in 6404) [ClassicSimilarity], result of:
              0.08915678 = score(doc=6404,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.46428138 = fieldWeight in 6404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6404)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:01:00
  8. Maniez, J.: ¬Du bon usage des facettes : des classifications aux thésaurus (1999) 0.02
    0.022289194 = product of:
      0.04457839 = sum of:
        0.04457839 = product of:
          0.08915678 = sum of:
            0.08915678 = weight(_text_:22 in 3773) [ClassicSimilarity], result of:
              0.08915678 = score(doc=3773,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.46428138 = fieldWeight in 3773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3773)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:01:00
  9. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.02
    0.022289194 = product of:
      0.04457839 = sum of:
        0.04457839 = product of:
          0.08915678 = sum of:
            0.08915678 = weight(_text_:22 in 3176) [ClassicSimilarity], result of:
              0.08915678 = score(doc=3176,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.46428138 = fieldWeight in 3176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3176)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    6. 5.2017 18:46:22
  10. Negrini, G.; Zozi, P.: Ontological analysis of the literary work of art (2003) 0.02
    0.021206036 = product of:
      0.042412072 = sum of:
        0.042412072 = product of:
          0.084824145 = sum of:
            0.084824145 = weight(_text_:work in 2687) [ClassicSimilarity], result of:
              0.084824145 = score(doc=2687,freq=6.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.4214336 = fieldWeight in 2687, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2687)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ontological structures can aid the understanding and modelling of works of art. Ontology of the aesthetic object, and particularly of the literary work, has been analysed by Hartmann and Ingarden. Application of Dahlberg's ontical 'systematifier' model enabled us to organize the entire structure of the Thesaurus of Italian Literature, and to highlight a number of significant aspects of the literary work. After describing the conclusions arising from the experience of compiling the thesaurus, the paper briefly outlines Hartmann's and Ingarden's theories of levels and seeks to identify commonalities between the ontological analysis of the two theories and the conclusions of the thesaurus.
  11. Jacob, E.K.: ¬The everyday world of work : two approaches to the investigation of classification in context (2001) 0.02
    0.020405518 = product of:
      0.040811036 = sum of:
        0.040811036 = product of:
          0.08162207 = sum of:
            0.08162207 = weight(_text_:work in 4494) [ClassicSimilarity], result of:
              0.08162207 = score(doc=4494,freq=8.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40552467 = fieldWeight in 4494, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4494)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    One major aspect of T.D. Wilson's research has been his insistence on situating the investigation of information behaviour within the context of its occurrence - within the everyday world of work. The significance of this approach is reviewed in light of the notion of embodied cognition that characterises the evolving theoretical episteme in cognitive science research. Embodied cognition employs complex external props such as stigmergic structures and cognitive scaffoldings to reduce the cognitive burden on the individual and to augment human problem-solving activities. The cognitive function of the classification scheme is described as exemplifying both stigmergic structures and cognitive scaffoldings. Two different but complementary approaches to the investigation of situated cognition are presented: cognition-as-scaffolding and cognition-as-infrastructure. Classification-as-scaffolding views the classification scheme as a knowledge storage device supporting and promoting cognitive economy. Classification-as-infrastructure views the classification system as a social convention that, when integrated with technological structures and organisational practices, supports knowledge management work. Both approaches are shown to build upon and extend Wilson's contention that research is most productive when it attends to the social and organisational contexts of cognitive activity by focusing on the everyday world of work.
  12. References on Integrative level classification (o.J.) 0.02
    0.020405518 = product of:
      0.040811036 = sum of:
        0.040811036 = product of:
          0.08162207 = sum of:
            0.08162207 = weight(_text_:work in 1098) [ClassicSimilarity], result of:
              0.08162207 = score(doc=1098,freq=2.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40552467 = fieldWeight in 1098, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1098)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Philosophical and scientific sources; works by CRG members; comments on CRG work; contributions of the present project; references to the present project
  13. Kumar, K.: Distinctive contribution of Ranganathan to library classification (1992) 0.02
    0.020200431 = product of:
      0.040400863 = sum of:
        0.040400863 = product of:
          0.080801725 = sum of:
            0.080801725 = weight(_text_:work in 6991) [ClassicSimilarity], result of:
              0.080801725 = score(doc=6991,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40144894 = fieldWeight in 6991, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6991)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Stresses that S.R. Ranganathan was truly a great scholar who made rich contributions to different aspects of library and information science, but is better known for his work in the field of library classification. Discusses his distinctive contributions to classification, such as normative principles, the three-plane model of work, freely faceted classification (involving facet analysis and the synthetic principle), the postulational approach, fundamental categories, and certain notational devices like the sector device, group notation device, emptying digit device and seminal mnemonic device. Regards these as seminal ideas forming the basis of his theory of library classification. Considers the 7th ed. of the Colon Classification the best example of the application of these ideas
  14. Husain, S.: Library classification : facets and analyses (1993) 0.02
    0.020200431 = product of:
      0.040400863 = sum of:
        0.040400863 = product of:
          0.080801725 = sum of:
            0.080801725 = weight(_text_:work in 3752) [ClassicSimilarity], result of:
              0.080801725 = score(doc=3752,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40144894 = fieldWeight in 3752, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains the following chapters: (1) Definition, need and purpose of classification, (2) History of library classification, (3) Terminology of classification, (4) Development of a theory of classification, (5) Work of classification in three planes and their interrelationship, (6) Work of classification in the idea plane, (7) Verbal plane, (8) Notation: definition, need, functions, (9) Multidimensional nature of subjects, (10) Growing universe of subjects: problems and solutions, (11) Postulational approach to classification, (12) Formation and sharpening of isolates, (13) Species of classification schemes, (14) DDC, UDC and CC, (15) Designing the depth schedules of classification, (16) Recent trends in classification
  15. Star, S.L.: Grounded classification : grounded theory and faceted classification (1998) 0.02
    0.020200431 = product of:
      0.040400863 = sum of:
        0.040400863 = product of:
          0.080801725 = sum of:
            0.080801725 = weight(_text_:work in 851) [ClassicSimilarity], result of:
              0.080801725 = score(doc=851,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40144894 = fieldWeight in 851, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=851)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    This article compares the qualitative method of grounded theory (GT) with Ranganathan's construction of faceted classifications (FC) in library and information science. Both struggle with a core problem, i.e., the representation of vernacular words and processes, empirically discovered, which will, although ethnographically faithful, be powerful beyond the single instance or case study. The article compares Glaser and Strauss's (1967) work with that of Ranganathan (1950).
    Footnote
    Article in a special issue "How Classifications Work: Problems and Challenges in an Electronic Age"
  16. Spiteri, L.: ¬A simplified model for facet analysis : Ranganathan 101 (1998) 0.02
    0.017314656 = product of:
      0.03462931 = sum of:
        0.03462931 = product of:
          0.06925862 = sum of:
            0.06925862 = weight(_text_:work in 3842) [ClassicSimilarity], result of:
              0.06925862 = score(doc=3842,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.3440991 = fieldWeight in 3842, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3842)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ranganathan's canons, principles, and postulates can easily confuse readers, especially because he revised and added to them in various editions of his many books. The Classification Research Group, which drew on Ranganathan's work as the basis for its classification theory but developed it in its own way, has never clearly organized all of its equivalent canons and principles. In this article Spiteri gathers the fundamental rules from both systems and compares and contrasts them. She makes her own clearer set of principles for constructing facets, stating the subject of a document, and designing notation. Spiteri's "simplified model" is clear and understandable, but certainly not simplistic. The model does not include methods for making a faceted system, but will serve as a very useful guide on how to turn initial work into a rigorous classification. Highly recommended
  17. Kublik, A.; Clevette, V.; Ward, D.; Olson, H.A.: Adapting dominant classifications to particular contexts (2003) 0.02
    0.017314656 = product of:
      0.03462931 = sum of:
        0.03462931 = product of:
          0.06925862 = sum of:
            0.06925862 = weight(_text_:work in 5516) [ClassicSimilarity], result of:
              0.06925862 = score(doc=5516,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.3440991 = fieldWeight in 5516, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5516)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper addresses the process of adapting to a particular culture or context a classification that has grown out of western culture to become a global standard. The authors use a project that adapts DDC for use in a feminist/women's issues context to demonstrate an approach that works. The project is particularly useful as an interdisciplinary example. Discussion consists of four parts: (1) definition of the problem indicating the need for adaptation and efforts to date; (2) description of the methodology developed for creating an expansion; (3) description of the interface developed for actually doing the work, with its potential for a distributed group to work on it together (could even be internationally distributed); and (4) generalization of how the methodology could be used for particular contexts by country, ethnicity, perspective or other defining factors.
  18. Tennis, J.T.: Foundational, first-order, and second-order classification theory (2015) 0.02
    0.017314656 = product of:
      0.03462931 = sum of:
        0.03462931 = product of:
          0.06925862 = sum of:
            0.06925862 = weight(_text_:work in 2204) [ClassicSimilarity], result of:
              0.06925862 = score(doc=2204,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.3440991 = fieldWeight in 2204, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2204)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Both basic and applied research on the construction, implementation, maintenance, and evaluation of classification schemes is called classification theory. If we employ Ritzer's metatheoretical method of analysis on this more than one-hundred-year-old body of literature, we can see categories of theory emerge. This paper looks at one particular part of knowledge organization work, namely classification theory, and asks 1) what are the contours of this intellectual space, and 2) what have we produced in the theoretical reflection on constructing, implementing, and evaluating classification schemes? The preliminary findings from this work are that classification theory can be separated into three kinds: foundational classification theory, first-order classification theory, and second-order classification theory, each with its own concerns and objects of study.
  19. Kaula, P.N.: Canons in analytico-synthetic classification (1979) 0.02
    0.016324414 = product of:
      0.032648828 = sum of:
        0.032648828 = product of:
          0.065297656 = sum of:
            0.065297656 = weight(_text_:work in 1428) [ClassicSimilarity], result of:
              0.065297656 = score(doc=1428,freq=2.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.32441974 = fieldWeight in 1428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presentation of the rules (canons) which S.R. Ranganathan laid down for the three planes of work (the idea plane, the verbal plane, and the notational plane), and an explanation of each of these 34 canons, which are indispensable tools for the establishment of any classification system. An overall survey of the canons is given
  20. Hurt, C.D.: Classification and subject analysis : looking to the future at a distance (1997) 0.02
    0.016324414 = product of:
      0.032648828 = sum of:
        0.032648828 = product of:
          0.065297656 = sum of:
            0.065297656 = weight(_text_:work in 6929) [ClassicSimilarity], result of:
              0.065297656 = score(doc=6929,freq=2.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.32441974 = fieldWeight in 6929, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6929)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Classic classification schemes are uni-dimensional, with few exceptions. One of the challenges of distance education and new learning strategies is that the proliferation of course work defies the traditional categorization. The rigidity of most present classification schemes does not mesh well with the burgeoning fluidity of the academic environment. One solution is a return to a largely forgotten area of study - classification theory. Some suggestions for exploration are nonmonotonic logic systems, neural network models, and non-library models.

Languages

  • e 59
  • f 3
  • chi 1
  • d 1
  • i 1

Types

  • a 55
  • m 7
  • b 2
  • el 2
  • s 1