Search (36 results, page 1 of 2)

  • Filter: theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Gnoli, C.: ¬The meaning of facets in non-disciplinary classifications (2006) 0.05
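The relevance figure beside each hit is a Lucene ClassicSimilarity (tf-idf) score. As a sketch of how such a score decomposes, the following recomputes the first hit's value from its per-term statistics (idf, query norm, field norm) as reported by the engine's score explanation; the coord factors reflect that two of three query clauses matched:

```python
import math

# Per-term statistics for the top hit (doc 2291), taken from the
# engine's ClassicSimilarity score explanation.
idf_reference = 4.0683694      # idf(docFreq=2055, maxDocs=44218)
idf_database  = 4.042444       # idf(docFreq=2109, maxDocs=44218)
query_norm    = 0.050593734
field_norm    = 0.0390625      # length normalization, quantized
tf            = math.sqrt(2.0) # tf for termFreq=2.0

def term_weight(idf):
    query_weight = idf * query_norm       # idf * queryNorm
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

w_reference = term_weight(idf_reference)
w_database  = term_weight(idf_database) * 0.5  # inner coord(1/2)
score = (w_reference + w_database) * (2.0 / 3.0)  # outer coord(2/3)

print(round(score, 6))  # prints 0.046065
```

The same factors (tf, idf, field norm, query norm, coord) underlie every score in this result list; only the per-term frequencies and field norms differ.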
    
    Abstract
    Disciplines are felt by many to be a constraint in classification, though they are a structuring principle of most bibliographic classification schemes. A non-disciplinary approach has been explored by the Classification Research Group, and research in this direction has recently been resumed by the Integrative Level Classification project. This paper focuses on the role and definition of facets in non-disciplinary schemes. A generalized definition of facets is suggested with reference to predicate logic, allowing for facets of phenomena as well as facets of disciplines. The general categories under which facets are often subsumed can be related ontologically to the evolutionary sequence of integrative levels. As a facet can be semantically connected with phenomena from any other part of a general scheme, its values can belong to three types: special extra-defined foci, general extra-defined foci, and context-defined foci. Non-disciplinary freely faceted classification is being tested by applying it to small bibliographic samples stored in a MySQL database and by developing Web search interfaces to demonstrate possible uses of the described techniques.
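A minimal sketch of the generalized reading of facets the abstract describes, treating a facet as a two-place predicate whose focus value may be of any of the three named types; the class names and example values are this sketch's assumptions, not the paper's notation:

```python
from dataclasses import dataclass
from enum import Enum

class FocusType(Enum):
    # The three value types the abstract names for facet foci.
    SPECIAL_EXTRA_DEFINED = "special extra-defined"
    GENERAL_EXTRA_DEFINED = "general extra-defined"
    CONTEXT_DEFINED = "context-defined"

@dataclass(frozen=True)
class FacetStatement:
    # Reading a facet as a two-place predicate facet(subject, focus):
    # the focus may come from any other part of a general scheme.
    subject: str          # the phenomenon (or discipline) being classified
    facet: str            # the relational role, e.g. an agent or process facet
    focus: str            # the facet's value
    focus_type: FocusType

s = FacetStatement("rivers", "shaped by", "glaciers",
                   FocusType.GENERAL_EXTRA_DEFINED)
print(f"{s.facet}({s.subject}, {s.focus})")  # prints shaped by(rivers, glaciers)
```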
  2. Hillman, D.J.: Mathematical classification techniques for nonstatic document collections, with particular reference to the problem of relevance (1965) 0.02
    
  3. Santoro, M.: Ripensare la CDU (1995) 0.02
    
    Abstract
    A detailed examination of the UDC's history, function and future prospects. Among topics discussed are: the early pioneering work of P. Otlet and H. LaFontaine; the development of Colon Classification; the 'UDC versus switching language' debate in the 1970s; the FID standard reference code project; and the recent scheme by Williamson and McIlwaine to restructure UDC completely, converting it into a Colon Classification and also creating a thesaurus drawn from the same classification. Comments that UDC, far from being a 'prehistoric monster', is becoming a sort of test laboratory for developing new and interesting documentation structures
  4. Kochar, R.S.: Library classification systems (1998) 0.02
    
    Abstract
    The book traces the origins of library classification and leads on to the latest developments in the subject. This user-friendly text explains concepts through analogies, diagrams, and tables. The fundamental but important topics and terminology of classification are clearly explained. The book also deals with recent trends in the use of computers in cataloguing, including online systems and artificial intelligence systems. With its up-to-date and comprehensive coverage, the book will serve degree students of Library and Information Science as a text and will also prove to be invaluable reference material for professionals and researchers.
  5. Bosch, M.: Ontologies, different reasoning strategies, different logics, different kinds of knowledge representation : working together (2006) 0.02
    
    Abstract
    Recent experience in the building, maintenance and reuse of ontologies has shown that the most efficient approach is a collaborative one. However, communication between collaborators such as IT professionals, librarians, web designers and subject matter experts is difficult and time-consuming, because Semantic Web applications involve different reasoning strategies, different logics and different kinds of knowledge representation. This article is intended as a reference scheme: it uses concise and simple explanations that can be shared by specialists of different backgrounds working together on a Semantic Web application.
  6. Mayor, C.; Robinson, L.: Ontological realism, concepts and classification in molecular biology : development and application of the gene ontology (2014) 0.02
    
    Abstract
    Purpose - The purpose of this article is to evaluate the development and use of the gene ontology (GO), a scientific vocabulary widely used in molecular biology databases, with particular reference to the relation between the theoretical basis of the GO and the pragmatics of its application. Design/methodology/approach - The study uses a combination of bibliometric analysis, content analysis and discourse analysis. These analyses focus on details of the ways in which the terms of the ontology are amended and deleted, and in which they are applied by users. Findings - Although the GO is explicitly based on an objective realist epistemology, a considerable degree of subjectivity and the influence of social factors are evident in its development and use. It is concluded that bio-ontologies could beneficially be extended to be pluralist, while remaining objective, taking a view of concepts closer to that of more traditional controlled vocabularies. Originality/value - This is one of very few studies to evaluate the development of a formal ontology in relation to its conceptual foundations, and the first to consider the GO in this way.
  7. Broughton, V.: Faceted classification as a basis for knowledge organization in a digital environment : the Bliss Bibliographic Classification as a model for vocabulary management and the creation of multi-dimensional knowledge structures (2001) 0.02
    
    Abstract
    Broughton is one of the key people working on the second edition of the Bliss Bibliographic Classification (BC2). Her article has a brief, informative history of facets, then discusses semantic vs. syntactic relationships, standard facets used by Ranganathan and the Classification Research Group, facet analysis and citation order, and how to build subject indexes out of faceted classifications, all with occasional reference to digital environments and hypertext, but never with any specifics. It concludes by saying of faceted classification that the "capacity which it has to create highly sophisticated structures for the accommodation of complex objects suggests that it is worth investigation as an organizational tool for digital materials, and that the results of such investigation would be knowledge structures of unparalleled utility and elegance." How to build them is left to the reader, but this article provides an excellent starting point. It includes an example that shows how general concepts can be applied to a small set of documents and subjects, and how terms can be adapted to suit the material and users
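As a hedged illustration of two techniques the article discusses, citation order and building subject indexes from a faceted classification, the sketch below files facet values in a fixed, invented citation order and derives chain-style index entries by successive truncation; the facet names and example are this sketch's assumptions, not Broughton's:

```python
# Hypothetical facet citation order (Ranganathan-style category
# sequences such as PMEST are one model for this).
CITATION_ORDER = ["thing", "kind", "part", "property",
                  "process", "operation", "agent"]

def build_class(facets):
    """File the supplied facet values in the fixed citation order."""
    return [facets[f] for f in CITATION_ORDER if f in facets]

def chain_index(chain):
    """Generate index entries by truncating the chain from the right,
    so every facet value becomes an access point (chain procedure)."""
    return [" : ".join(reversed(chain[:i])) for i in range(len(chain), 0, -1)]

compound = build_class({"agent": "Local government",
                        "thing": "Roads",
                        "operation": "Maintenance"})
for entry in chain_index(compound):
    print(entry)
```

Note how the citation order, not the order in which facets are supplied, fixes the compound class, while the chain index supplies the entry points that the single filing order hides.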
  8. Maniez, J.: ¬Des classifications aux thesaurus : du bon usage des facettes (1999) 0.01
    
    Date
    1. 8.1996 22:01:00
  9. Maniez, J.: ¬Du bon usage des facettes : des classifications aux thésaurus (1999) 0.01
    
    Date
    1. 8.1996 22:01:00
  10. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.01
    
    Date
    6. 5.2017 18:46:22
  11. Gnoli, C.; Mei, H.: Freely faceted classification for Web-based information retrieval (2006) 0.01
    
    Abstract
    In free classification, each concept is expressed by a constant notation, and classmarks are formed by free combinations of them, allowing the retrieval of records from a database by searching for any of the component concepts. A refinement of free classification is freely faceted classification, where the notation can include facets expressing the kinds of relations that hold between the concepts. The Integrative Level Classification project aims at testing free and freely faceted classification by applying them to small bibliographical samples in various domains. A sample, called the Dandelion Bibliography of Facet Analysis, is described here. Experience was gained using this system to classify 300 specialized papers dealing with facet analysis itself, recorded in a MySQL database, and building a Web interface exploiting freely faceted notation. The interface is written in PHP and uses string functions to process the queries and to yield relevant results selected and ordered according to the principles of integrative levels.
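The retrieval mechanism described above (constant notations freely combined into classmarks, with any component concept searchable) can be sketched as follows. SQLite stands in for the project's MySQL store, and all table, column, and notation names are illustrative inventions, not the ILC project's actual schema:

```python
import sqlite3

# Illustrative schema: each record's freely combined classmark is stored
# whole, and its component concept notations are indexed separately so
# that any component can be searched.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE records (id INTEGER PRIMARY KEY, title TEXT, classmark TEXT);
CREATE TABLE components (record_id INTEGER, notation TEXT);
""")

def add_record(rec_id, title, classmark):
    # A classmark is a free combination of constant concept notations,
    # here separated by spaces; facet indicators would refine this.
    conn.execute("INSERT INTO records VALUES (?,?,?)",
                 (rec_id, title, classmark))
    conn.executemany("INSERT INTO components VALUES (?,?)",
                     [(rec_id, n) for n in classmark.split()])

add_record(1, "Facet analysis in botany", "mq wt")
add_record(2, "Facet analysis in music", "sy wt")

def search(notation):
    # Retrieve every record containing the concept, regardless of what
    # other concepts it is combined with.
    rows = conn.execute(
        "SELECT r.title FROM records r "
        "JOIN components c ON c.record_id = r.id "
        "WHERE c.notation = ? ORDER BY r.id", (notation,))
    return [t for (t,) in rows]

print(search("wt"))  # both records share the concept 'wt'
```

A real freely faceted interface would additionally parse facet indicators out of the notation; this sketch only shows why indexing components separately makes every concept in a classmark an access point.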
  12. Foskett, D.J.: Classification and integrative levels (1985) 0.01
    
    Abstract
    Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject: systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible in that once a certain level was reached there was no going back. 
Foskett points out that with higher levels and greater complexity in structure the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
  13. Frické, M.: Logic and the organization of information (2012) 0.01
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects (books, ebooks, journals, articles, web pages, images, emails, podcasts and more) in the digital era. The book provides an in-depth technical background for digital librarianship and covers a broad range of theoretical and practical topics, including classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, Semantic Web ontologies and the Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
  14. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.01
    
    Date
    22. 4.1997 21:10:19
  15. Belayche, C.: ¬A propos de la classification de Dewey (1997) 0.01
    
    Source
    Bulletin d'informations de l'Association des Bibliothecaires Francais. 1997, no.175, S.22-23
  16. Lin, W.-Y.C.: ¬The concept and applications of faceted classifications (2006) 0.01
    
    Date
    27. 5.2007 22:19:35
  17. Lorenz, B.: Zur Theorie und Terminologie der bibliothekarischen Klassifikation (2018) 0.01
    
    Pages
    S.1-22
  18. Broughton, V.: Essential classification (2004) 0.01
    
    Footnote
    In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus on the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWK, etc.). If it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of the essentials of classification scheme application. The following five chapters present in turn each of the three major, currently used bibliographic classification schemes, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) receives a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on a par with the first seven chapters for its theoretical content. 
Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, the advantages and disadvantages of using print or electronic versions of classification schemes, and the choice between a general and a special scheme. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters on classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate, as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for the physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization and subject representation."
  19. Winske, E.: ¬The development and structure of an urban, regional, and local documents classification scheme (1996) 0.01
    
    Footnote
    Paper presented at conference on 'Local documents, a new classification scheme' at the Research Caucus of the Florida Library Association Annual Conference, Fort Lauderdale, Florida 22 Apr 95
  20. Olson, H.A.: Sameness and difference : a cultural foundation of classification (2001) 0.01
    
    Date
    10. 9.2000 17:38:22