Search (76 results, page 1 of 4)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Belayche, C.: A propos de la classification de Dewey (1997) 0.13
    0.13186574 = sum of:
      0.054339804 = product of:
        0.21735922 = sum of:
          0.21735922 = weight(_text_:author's in 1171) [ClassicSimilarity], result of:
            0.21735922 = score(doc=1171,freq=2.0), product of:
              0.36593494 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.05445336 = queryNorm
              0.59398323 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.0625 = fieldNorm(doc=1171)
        0.25 = coord(1/4)
      0.07752594 = product of:
        0.116288915 = sum of:
          0.057267487 = weight(_text_:c in 1171) [ClassicSimilarity], result of:
            0.057267487 = score(doc=1171,freq=2.0), product of:
              0.18783171 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.05445336 = queryNorm
              0.3048872 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0625 = fieldNorm(doc=1171)
          0.05902143 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
            0.05902143 = score(doc=1171,freq=2.0), product of:
              0.19068639 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05445336 = queryNorm
              0.30952093 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1171)
        0.6666667 = coord(2/3)
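
    The nested breakdown above (and in the entries that follow) is the "explain" output of a Lucene-style ClassicSimilarity (TF-IDF) ranking: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and each clause group is scaled by a coord(matched/total) factor before the parts are summed. The short Python sketch below is not part of the search output; it simply recomputes the 0.13186574 score of result 1 from the factors listed, assuming these standard ClassicSimilarity semantics.

      import math

      QUERY_NORM = 0.05445336  # queryNorm shared by every term of this query

      def term_score(freq, idf, field_norm, query_norm=QUERY_NORM):
          # ClassicSimilarity: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      # Result 1 (doc 1171): "author's" matches 1 of 4 clauses -> coord(1/4);
      # "c" and "22" together match 2 of 3 clauses -> coord(2/3).
      part1 = term_score(2.0, 6.7201533, 0.0625) * (1 / 4)
      part2 = (term_score(2.0, 3.4494052, 0.0625)
               + term_score(2.0, 3.5018296, 0.0625)) * (2 / 3)

      print(part1 + part2)  # ~0.13186574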
    
    Abstract
    All classifications are based on ideologies, and Dewey is marked by its author's origins in 19th-century North America. Subsequent revisions indicate changed ways of understanding the world. Section 157 (psycho-pathology) is now included with 616.89 (mental troubles), reflecting the move to a genetic-based approach. Table 5 (racial, ethnic and national groups) is, however, unchanged, despite changing views on such categorisation.
    Source
    Bulletin d'informations de l'Association des Bibliothécaires Français. 1997, no.175, S.22-23
  2. Gnoli, C.: Classifying phenomena : part 3: facets (2017) 0.04
    0.043487966 = product of:
      0.08697593 = sum of:
        0.08697593 = product of:
          0.1304639 = sum of:
            0.08590123 = weight(_text_:c in 4158) [ClassicSimilarity], result of:
              0.08590123 = score(doc=4158,freq=2.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.45733082 = fieldWeight in 4158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4158)
            0.044562668 = weight(_text_:h in 4158) [ClassicSimilarity], result of:
              0.044562668 = score(doc=4158,freq=2.0), product of:
                0.13528661 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05445336 = queryNorm
                0.32939452 = fieldWeight in 4158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4158)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Dimensions of knowledge: facets for knowledge organization. Eds.: R.P. Smiraglia and H.-L. Lee
  3. Szostak, R.: Classifying science : phenomena, data, theory, method, practice (2004) 0.03
    0.03253159 = sum of:
      0.028818034 = product of:
        0.115272135 = sum of:
          0.115272135 = weight(_text_:author's in 325) [ClassicSimilarity], result of:
            0.115272135 = score(doc=325,freq=4.0), product of:
              0.36593494 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.05445336 = queryNorm
              0.31500718 = fieldWeight in 325, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.0234375 = fieldNorm(doc=325)
        0.25 = coord(1/4)
      0.0037135556 = product of:
        0.011140667 = sum of:
          0.011140667 = weight(_text_:h in 325) [ClassicSimilarity], result of:
            0.011140667 = score(doc=325,freq=2.0), product of:
              0.13528661 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05445336 = queryNorm
              0.08234863 = fieldWeight in 325, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0234375 = fieldNorm(doc=325)
        0.33333334 = coord(1/3)
    
    Footnote
    Review in: KO 32(2005) no.2, S.93-95 (H. Albrechtsen): "The book deals with mapping of the structures and contents of sciences, defined broadly to include the social sciences and the humanities. According to the author, the study of science, as well as the practice of science, could benefit from a detailed classification of different types of science. The book defines five universal constituents of the sciences: phenomena, data, theories, methods and practice. For each of these constituents, the author poses five questions, in the well-known 5W format: Who, What, Where, When, Why? - with the addition of the question How? (Szostak 2003). Two objectives of the author's endeavor stand out: 1) decision support for university curriculum development across disciplines and decision support for university students at advanced levels of education in selection of appropriate courses for their projects and to support cross-disciplinary inquiry for researchers and students; 2) decision support for researchers and students in scientific inquiry across disciplines, methods and theories. The main prospective audience of this book is university curriculum developers, university students and researchers, in that order of priority. The heart of the book is the chapters unfolding the author's ideas about how to classify phenomena and data, theory, method and practice, by use of the 5W inquiry model. . . .
  4. Lorenz, B.: Zur Theorie und Terminologie der bibliothekarischen Klassifikation (2018) 0.03
    0.029576626 = product of:
      0.05915325 = sum of:
        0.05915325 = product of:
          0.08872987 = sum of:
            0.029708447 = weight(_text_:h in 4339) [ClassicSimilarity], result of:
              0.029708447 = score(doc=4339,freq=2.0), product of:
                0.13528661 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05445336 = queryNorm
                0.21959636 = fieldWeight in 4339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4339)
            0.05902143 = weight(_text_:22 in 4339) [ClassicSimilarity], result of:
              0.05902143 = score(doc=4339,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.30952093 = fieldWeight in 4339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4339)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
    Source
    Klassifikationen in Bibliotheken: Theorie - Anwendung - Nutzen. Eds.: H. Alex, G. Bee and U. Junger
  5. Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification" (2010) 0.03
    0.02907223 = product of:
      0.05814446 = sum of:
        0.05814446 = product of:
          0.08721669 = sum of:
            0.042950615 = weight(_text_:c in 2945) [ClassicSimilarity], result of:
              0.042950615 = score(doc=2945,freq=2.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.22866541 = fieldWeight in 2945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2945)
            0.04426607 = weight(_text_:22 in 2945) [ClassicSimilarity], result of:
              0.04426607 = score(doc=2945,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.23214069 = fieldWeight in 2945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2945)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Argues that Beghtol's (2003) use of the terms "naive classification" and "professional classification" is valid because they are nominal definitions and that the distinction between these two types of classification points up the need for researchers in knowledge organization to broaden their scope beyond traditional classification systems intended for information retrieval. Argues that work by Beghtol (2003), Kwasnik (1999) and Bailey (1994) offers direction for the development of a classification of classifications based on the pragmatic dimensions of extant classification systems. With reference to: Beghtol, C.: Naïve classification systems and the global information society. In: Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference, 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine. Würzburg: Ergon Verlag 2004. S.19-22. (Advances in knowledge organization; vol.9)
  6. Gnoli, C.: Classifying phenomena : part 4: themes and rhemes (2018) 0.03
    0.02907223 = product of:
      0.05814446 = sum of:
        0.05814446 = product of:
          0.08721669 = sum of:
            0.042950615 = weight(_text_:c in 4152) [ClassicSimilarity], result of:
              0.042950615 = score(doc=4152,freq=2.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.22866541 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4152)
            0.04426607 = weight(_text_:22 in 4152) [ClassicSimilarity], result of:
              0.04426607 = score(doc=4152,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.23214069 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4152)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    17. 2.2018 18:22:25
  7. Beghtol, C.: Naïve classification systems and the global information society (2004) 0.02
    0.02422686 = product of:
      0.04845372 = sum of:
        0.04845372 = product of:
          0.07268058 = sum of:
            0.03579218 = weight(_text_:c in 3483) [ClassicSimilarity], result of:
              0.03579218 = score(doc=3483,freq=2.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.1905545 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
            0.036888395 = weight(_text_:22 in 3483) [ClassicSimilarity], result of:
              0.036888395 = score(doc=3483,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.19345059 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Pages
    S.19-22
  8. Gnoli, C.; Mei, H.: Freely faceted classification for Web-based information retrieval (2006) 0.02
    0.021743983 = product of:
      0.043487966 = sum of:
        0.043487966 = product of:
          0.06523195 = sum of:
            0.042950615 = weight(_text_:c in 534) [ClassicSimilarity], result of:
              0.042950615 = score(doc=534,freq=2.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.22866541 = fieldWeight in 534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=534)
            0.022281334 = weight(_text_:h in 534) [ClassicSimilarity], result of:
              0.022281334 = score(doc=534,freq=2.0), product of:
                0.13528661 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05445336 = queryNorm
                0.16469726 = fieldWeight in 534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=534)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  9. Broughton, V.: Essential classification (2004) 0.02
    0.021687727 = sum of:
      0.019212022 = product of:
        0.07684809 = sum of:
          0.07684809 = weight(_text_:author's in 2824) [ClassicSimilarity], result of:
            0.07684809 = score(doc=2824,freq=4.0), product of:
              0.36593494 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.05445336 = queryNorm
              0.21000479 = fieldWeight in 2824, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.015625 = fieldNorm(doc=2824)
        0.25 = coord(1/4)
      0.002475704 = product of:
        0.0074271117 = sum of:
          0.0074271117 = weight(_text_:h in 2824) [ClassicSimilarity], result of:
            0.0074271117 = score(doc=2824,freq=2.0), product of:
              0.13528661 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05445336 = queryNorm
              0.05489909 = fieldWeight in 2824, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.015625 = fieldNorm(doc=2824)
        0.33333334 = coord(1/3)
    
    Footnote
    Review in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display are reader-friendly; they are in fact what make this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects seems a little advanced for a beginners' class.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and not anymore on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification" to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation."
    Further review in: ZfBB 53(2006) H.2, S.111-113 (W. Gödert)
  10. Raju, A.A.N.: Colon Classification: theory and practice : a self instructional manual (2001) 0.02
    0.016981188 = product of:
      0.033962376 = sum of:
        0.033962376 = product of:
          0.1358495 = sum of:
            0.1358495 = weight(_text_:author's in 1482) [ClassicSimilarity], result of:
              0.1358495 = score(doc=1482,freq=2.0), product of:
                0.36593494 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05445336 = queryNorm
                0.3712395 = fieldWeight in 1482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1482)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Colon Classification (CC) is truly the first freely faceted scheme for library classification, devised and propagated by Dr. S.R. Ranganathan. The scheme is taught in theory and practice in most LIS schools in India and abroad. Many manuals, guide books and introductory works on CC have been published in the past, but the present work, Colon Classification: Theory and Practice; A Self Instructional Manual, treads a new path in presenting CC to the student, teaching and professional community. It is the result of the author's twenty-five years of experience teaching the theory and practice of CC to LIS students. For the first time a concerted and systematic attempt has been made to present the theory and practice of CC in a self-instructional mode, keeping in view the requirements of student learners of Open Universities/Distance Education Institutions in particular. Other significant and novel features introduced in this manual are the presentation of the scope of each block, consisting of certain units followed by objectives, introduction, sections, sub-sections, self-check exercises, a glossary and an assignment for each unit. It is hoped that all these features will help the users/readers of this manual to understand and quickly grasp the intricacies involved in the theory and practice of CC (6th edition). The manual is presented in three blocks and twelve units.
  11. Zackland, M.; Fontaine, D.: Systematic building of conceptual classification systems with C-KAT (1996) 0.02
    0.016703017 = product of:
      0.033406034 = sum of:
        0.033406034 = product of:
          0.100218095 = sum of:
            0.100218095 = weight(_text_:c in 5145) [ClassicSimilarity], result of:
              0.100218095 = score(doc=5145,freq=8.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.5335526 = fieldWeight in 5145, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5145)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    C-KAT is a method and a tool which supports the design of feature-oriented classification systems for knowledge-based systems. It uses a specialized Heuristic Classification conceptual model named 'classification by structural shift', which sees the classification process as the matching of different classifications of the same set of objects or situations organized around different structural principles. To manage the complexity induced by the cross-product, C-KAT supports the use of a least-commitment strategy which applies in a context of constraint-directed reasoning. Presents this method using an example from the field of industrial fire insurance.
    Object
    C-KAT
  12. Gnoli, C.: Classificazione a facette (2004) 0.02
    0.016703017 = product of:
      0.033406034 = sum of:
        0.033406034 = product of:
          0.100218095 = sum of:
            0.100218095 = weight(_text_:c in 3746) [ClassicSimilarity], result of:
              0.100218095 = score(doc=3746,freq=2.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.5335526 = fieldWeight in 3746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3746)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  13. Classification research for knowledge representation and organization : Proc. of the 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991 (1992) 0.02
    0.016112331 = product of:
      0.032224663 = sum of:
        0.032224663 = product of:
          0.04833699 = sum of:
            0.037196323 = weight(_text_:c in 2072) [ClassicSimilarity], result of:
              0.037196323 = score(doc=2072,freq=6.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.19803005 = fieldWeight in 2072, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2072)
            0.011140667 = weight(_text_:h in 2072) [ClassicSimilarity], result of:
              0.011140667 = score(doc=2072,freq=2.0), product of:
                0.13528661 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05445336 = queryNorm
                0.08234863 = fieldWeight in 2072, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2072)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: SVENONIUS, E.: Classification: prospects, problems, and possibilities; BEALL, J.: Editing the Dewey Decimal Classification online: the evolution of the DDC database; BEGHTOL, C.: Toward a theory of fiction analysis for information storage and retrieval; CRAVEN, T.C.: Concept relation structures and their graphic display; FUGMANN, R.: Illusory goals in information science research; GILCHRIST, A.: UDC: the 1990's and beyond; GREEN, R.: The expression of syntagmatic relationships in indexing: are frame-based index languages the answer?; HUMPHREY, S.M.: Use and management of classification systems for knowledge-based indexing; MIKSA, F.L.: The concept of the universe of knowledge and the purpose of LIS classification; SCOTT, M. and A.F. FONSECA: Methodology for functional appraisal of records and creation of a functional thesaurus; ALBRECHTSEN, H.: PRESS: a thesaurus-based information system for software reuse; AMAESHI, B.: A preliminary AAT compatible African art thesaurus; CHATTERJEE, A.: Structures of Indian classification systems of the pre-Ranganathan era and their impact on the Colon Classification; COCHRANE, P.A.: Indexing and searching thesauri, the Janus or Proteus of information retrieval; CRAVEN, T.C.: A general versus a special algorithm in the graphic display of thesauri; DAHLBERG, I.: The basis of a new universal classification system seen from a philosophy of science point of view; DRABENSTOTT, K.M., RIESTER, L.C. and B.A. DEDE: Shelflisting using expert systems; FIDEL, R.: Thesaurus requirements for an intermediary expert system; GREEN, R.: Insights into classification from the cognitive sciences: ramifications for index languages; GROLIER, E. de: Towards a syndetic information retrieval system; GUENTHER, R.: The USMARC format for classification data: development and implementation; HOWARTH, L.C.: Factors influencing policies for the adoption and integration of revisions to classification schedules; HUDON, M.: Term definitions in subject thesauri: the Canadian literacy thesaurus experience; HUSAIN, S.: Notational techniques for the accommodation of subjects in Colon Classification 7th edition: theoretical possibility vis-à-vis practical need; KWASNIK, B.H. and C. JORGERSEN: The exploration by means of repertory grids of semantic differences among names of official documents; MICCO, M.: Suggestions for automating the Library of Congress Classification schedules; PERREAULT, J.M.: An essay on the prehistory of general categories (II): G.W. Leibniz, Conrad Gesner; REES-POTTER, L.K.: How well do thesauri serve the social sciences?; REVIE, C.W. and G. SMART: The construction and the use of faceted classification schema in technical domains; ROCKMORE, M.: Structuring a flexible faceted thesaurus record for corporate information retrieval; ROULIN, C.: Sub-thesauri as part of a metathesaurus; SMITH, L.C.: UNISIST revisited: compatibility in the context of collaboratories; STILES, W.G.: Notes concerning the use of chain indexing as a possible means of simulating the inductive leap within artificial intelligence; SVENONIUS, E., LIU, S. and B. SUBRAHMANYAM: Automation in chain indexing; TURNER, J.: Structure in data in the Stockshot database at the National Film Board of Canada; VIZINE-GOETZ, D.: The Dewey Decimal Classification as an online classification tool; WILLIAMSON, N.J.: Restructuring UDC: problems and possibilities; WILSON, A.: The hierarchy of belief: ideological tendentiousness in universal classification; WILSON, B.F.: An evaluation of the systematic botany schedule of the Universal Decimal Classification (English full edition, 1979); ZENG, L.: Research and development of classification and thesauri in China; CONFERENCE SUMMARY AND CONCLUSIONS
  14. Maniez, J.: Des classifications aux thesaurus : du bon usage des facettes (1999) 0.01
    0.014755357 = product of:
      0.029510714 = sum of:
        0.029510714 = product of:
          0.08853214 = sum of:
            0.08853214 = weight(_text_:22 in 6404) [ClassicSimilarity], result of:
              0.08853214 = score(doc=6404,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.46428138 = fieldWeight in 6404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6404)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:01:00
  15. Maniez, J.: Du bon usage des facettes : des classifications aux thésaurus (1999) 0.01
    0.014755357 = product of:
      0.029510714 = sum of:
        0.029510714 = product of:
          0.08853214 = sum of:
            0.08853214 = weight(_text_:22 in 3773) [ClassicSimilarity], result of:
              0.08853214 = score(doc=3773,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.46428138 = fieldWeight in 3773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3773)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:01:00
  16. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.01
    0.014755357 = product of:
      0.029510714 = sum of:
        0.029510714 = product of:
          0.08853214 = sum of:
            0.08853214 = weight(_text_:22 in 3176) [ClassicSimilarity], result of:
              0.08853214 = score(doc=3176,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.46428138 = fieldWeight in 3176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3176)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    6. 5.2017 18:46:22
  17. Bowker, G.C.; Star, S.L.: Sorting things out : classification and its consequences (1999) 0.01
    0.014495989 = product of:
      0.028991979 = sum of:
        0.028991979 = product of:
          0.043487966 = sum of:
            0.028633744 = weight(_text_:c in 733) [ClassicSimilarity], result of:
              0.028633744 = score(doc=733,freq=2.0), product of:
                0.18783171 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05445336 = queryNorm
                0.1524436 = fieldWeight in 733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.03125 = fieldNorm(doc=733)
            0.0148542235 = weight(_text_:h in 733) [ClassicSimilarity], result of:
              0.0148542235 = score(doc=733,freq=2.0), product of:
                0.13528661 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05445336 = queryNorm
                0.10979818 = fieldWeight in 733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.03125 = fieldNorm(doc=733)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Is this book sociology, anthropology, or taxonomy? Sorting Things Out, by communications theorists Geoffrey C. Bowker and Susan Leigh Star, covers a lot of conceptual ground in its effort to sort out exactly how and why we classify and categorize the things and concepts we encounter day to day. But the analysis doesn't stop there; the authors go on to explore what happens to our thinking as a result of our classifications. With great insight and precise academic language, they pick apart our information systems and language structures that lie deeper than the everyday categories we use. The authors focus first on the International Classification of Diseases (ICD), a scheme widely used by health professionals worldwide, but also look at other health information systems, racial classifications used by South Africa during apartheid, and more. Though it comes off as a bit too academic at times (by the end of the 20th century, most writers should be able to get the spelling of McDonald's restaurant right), the book has a clever charm that thoughtful readers will surely appreciate. A sly sense of humor sneaks into the writing, giving rise to the chapter title "The Kindness of Strangers," for example. After arguing that categorization is both strongly influenced by and a powerful reinforcer of ideology, it follows that revolutions (political or scientific) must change the way things are sorted in order to throw over the old system. Who knew that such simple, basic elements of thought could have such far-reaching consequences? Whether you ultimately place it with social science, linguistics, or (as the authors fear) fantasy, make sure you put Sorting Things Out in your reading pile.
    Footnote
    Review in: Knowledge organization 27(2000) no.3, S.175-177 (B. Kwasnik); College and research libraries 61(2000) no.4, S.380-381 (J. Williams); Library resources and technical services 44(2000) no.4, S.107-108 (H.A. Olson); JASIST 51(2000) no.12, S.1149-1150 (T.A. Brooks)
  18. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.01
    0.010188714 = product of:
      0.020377427 = sum of:
        0.020377427 = product of:
          0.08150971 = sum of:
            0.08150971 = weight(_text_:author's in 3651) [ClassicSimilarity], result of:
              0.08150971 = score(doc=3651,freq=2.0), product of:
                0.36593494 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05445336 = queryNorm
                0.22274372 = fieldWeight in 3651, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3651)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into a broader context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work. The fan is seen as the effect of generations of documents - each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure - that is, antecedent and successor documents - and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be an equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, a unique feature of the environment, may create a class of one - such as the French Revolution, Napoleon, Niagara Falls - revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
  19. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.01
    0.009836905 = product of:
      0.01967381 = sum of:
        0.01967381 = product of:
          0.05902143 = sum of:
            0.05902143 = weight(_text_:22 in 7242) [ClassicSimilarity], result of:
              0.05902143 = score(doc=7242,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.30952093 = fieldWeight in 7242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7242)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 4.1997 21:10:19
  20. Lin, W.-Y.C.: The concept and applications of faceted classifications (2006) 0.01
    0.009836905 = product of:
      0.01967381 = sum of:
        0.01967381 = product of:
          0.05902143 = sum of:
            0.05902143 = weight(_text_:22 in 5083) [ClassicSimilarity], result of:
              0.05902143 = score(doc=5083,freq=2.0), product of:
                0.19068639 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05445336 = queryNorm
                0.30952093 = fieldWeight in 5083, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5083)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    27. 5.2007 22:19:35

Languages

  • e 65
  • d 5
  • f 3
  • i 2
  • chi 1

Types

  • a 66
  • m 9
  • s 4
  • b 1
  • el 1