Search (108 results, page 1 of 6)

  • Filter: theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Green, R.; Panzer, M.: ¬The ontological character of classes in the Dewey Decimal Classification 0.04
    0.035267882 = product of:
      0.14107153 = sum of:
        0.030953024 = weight(_text_:26 in 3530) [ClassicSimilarity], result of:
          0.030953024 = score(doc=3530,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.27312735 = fieldWeight in 3530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3530)
        0.031475607 = product of:
          0.062951215 = sum of:
            0.062951215 = weight(_text_:rules in 3530) [ClassicSimilarity], result of:
              0.062951215 = score(doc=3530,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.38950738 = fieldWeight in 3530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3530)
          0.5 = coord(1/2)
        0.062951215 = weight(_text_:rules in 3530) [ClassicSimilarity], result of:
          0.062951215 = score(doc=3530,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.38950738 = fieldWeight in 3530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3530)
        0.015691686 = product of:
          0.031383373 = sum of:
            0.031383373 = weight(_text_:ed in 3530) [ClassicSimilarity], result of:
              0.031383373 = score(doc=3530,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.27501947 = fieldWeight in 3530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3530)
          0.5 = coord(1/2)
      0.25 = coord(4/16)
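    The tree above is Lucene's ClassicSimilarity (TF-IDF) score explanation. Below is a minimal Python sketch re-deriving the "_text_:26" clause of this result from the components printed above; the formulas follow ClassicSimilarity conventions (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1), while queryNorm depends on the full query and is therefore taken as given. This is only an illustration of how the displayed numbers combine, not part of the retrieval system itself.

      from math import sqrt, log

      # Values printed in the explain tree for doc 3530, clause "_text_:26".
      freq, doc_freq, max_docs = 2.0, 3516, 44218
      query_norm, field_norm = 0.032090448, 0.0546875

      tf = sqrt(freq)                              # 1.4142135
      idf = log(max_docs / (doc_freq + 1)) + 1     # 3.5315237
      query_weight = idf * query_norm              # 0.113328174 (queryWeight)
      field_weight = tf * idf * field_norm         # 0.27312735  (fieldWeight)
      clause_score = query_weight * field_weight   # 0.030953024

      # The four matching clauses sum to 0.14107153; coord(4/16) rescales
      # that sum because only 4 of the 16 query clauses matched this document.
      total_score = 0.14107153 * (4 / 16)          # 0.035267882
      print(clause_score, total_score)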
    
    Abstract
    Classes in the Dewey Decimal Classification (DDC) system function as neighborhoods around focal topics in captions and notes. Topical neighborhoods are generated through specialization and instantiation, complex topic synthesis, index terms and mapped headings, hierarchical force, rules for choosing between numbers, development of the DDC over time, and use of the system in classifying resources. Implications of representation using a formal knowledge representation language are explored.
    Source
    Paradigms and conceptual systems in knowledge organization: Proceedings of the Eleventh International ISKO Conference, Rome, 23-26 February 2010. Ed.: Claudio Gnoli. Frankfurt/M.: Indeks
  2. Broughton, V.: Essential classification (2004) 0.03
    0.028066728 = product of:
      0.08981353 = sum of:
        0.023344096 = weight(_text_:author in 2824) [ClassicSimilarity], result of:
          0.023344096 = score(doc=2824,freq=4.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.15077372 = fieldWeight in 2824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.019825032 = weight(_text_:cataloguing in 2824) [ClassicSimilarity], result of:
          0.019825032 = score(doc=2824,freq=4.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.13894537 = fieldWeight in 2824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.019825032 = weight(_text_:cataloguing in 2824) [ClassicSimilarity], result of:
          0.019825032 = score(doc=2824,freq=4.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.13894537 = fieldWeight in 2824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.022336038 = weight(_text_:2nd in 2824) [ClassicSimilarity], result of:
          0.022336038 = score(doc=2824,freq=2.0), product of:
            0.18010403 = queryWeight, product of:
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.032090448 = queryNorm
            0.12401742 = fieldWeight in 2824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.0044833384 = product of:
          0.008966677 = sum of:
            0.008966677 = weight(_text_:ed in 2824) [ClassicSimilarity], result of:
              0.008966677 = score(doc=2824,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.07857699 = fieldWeight in 2824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.3125 = coord(5/16)
    
    Abstract
    Classification is a crucial skill for all information workers involved in organizing collections, but it is a difficult concept to grasp - and is even more difficult to put into practice. Essential Classification offers full guidance on how to go about classifying a document from scratch. This much-needed text leads the novice classifier step by step through the basics of subject cataloguing, with an emphasis on practical document analysis and classification. It deals with fundamental questions of the purpose of classification in different situations, and the needs and expectations of end users. The novice is introduced to the ways in which document content can be assessed, and how this can best be expressed for translation into the language of specific indexing and classification systems. The characteristics of the major general schemes of classification are discussed, together with their suitability for different classification needs.
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display is reader-friendly; they are in fact what make this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification", to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well-defined as a process and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation."
  3. Frické, M.: Logic and the organization of information (2012) 0.02
    0.023356954 = product of:
      0.093427815 = sum of:
        0.028886845 = weight(_text_:author in 1782) [ClassicSimilarity], result of:
          0.028886845 = score(doc=1782,freq=2.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.18657295 = fieldWeight in 1782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.015476512 = weight(_text_:26 in 1782) [ClassicSimilarity], result of:
          0.015476512 = score(doc=1782,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.13656367 = fieldWeight in 1782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.024532227 = weight(_text_:cataloguing in 1782) [ClassicSimilarity], result of:
          0.024532227 = score(doc=1782,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.17193612 = fieldWeight in 1782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.024532227 = weight(_text_:cataloguing in 1782) [ClassicSimilarity], result of:
          0.024532227 = score(doc=1782,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.17193612 = fieldWeight in 1782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.25 = coord(4/16)
    
    Date
    16. 3.2012 11:26:29
    Footnote
    Rez. in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic are a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius's book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
  4. Bliss, H.E.: ¬A bibliographic classification : principles and definitions (1985) 0.02
    0.020961102 = product of:
      0.11179255 = sum of:
        0.016484896 = weight(_text_:american in 3621) [ClassicSimilarity], result of:
          0.016484896 = score(doc=3621,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.15067379 = fieldWeight in 3621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.03125 = fieldNorm(doc=3621)
        0.0773743 = weight(_text_:2nd in 3621) [ClassicSimilarity], result of:
          0.0773743 = score(doc=3621,freq=6.0), product of:
            0.18010403 = queryWeight, product of:
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.032090448 = queryNorm
            0.42960894 = fieldWeight in 3621, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.03125 = fieldNorm(doc=3621)
        0.017933354 = product of:
          0.035866708 = sum of:
            0.035866708 = weight(_text_:ed in 3621) [ClassicSimilarity], result of:
              0.035866708 = score(doc=3621,freq=8.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.31430796 = fieldWeight in 3621, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3621)
          0.5 = coord(1/2)
      0.1875 = coord(3/16)
    
    Abstract
    Henry Evelyn Bliss (1870-1955) devoted several decades of his life to the study of classification and the development of the Bibliographic Classification scheme while serving as a librarian in the College of the City of New York. In the course of the development of the Bibliographic Classification, Bliss developed a body of classification theory published in a number of articles and books, among which the best known are The Organization of Knowledge and the System of the Sciences (1929), Organization of Knowledge in Libraries and the Subject Approach to Books (1933; 2nd ed., 1939), and the lengthy preface to A Bibliographic Classification (Volumes 1-2, 1940; 2nd ed., 1952). In developing the Bibliographic Classification, Bliss carefully established its philosophical and theoretical basis, more so than was attempted by the makers of other classification schemes, with the possible exception of S. R. Ranganathan (q.v.) and his Colon Classification. The basic principles established by Bliss for the Bibliographic Classification are: consensus, collocation of related subjects, subordination of special to general and gradation in specialty, and the relativity of classes and of classification (hence alternative location and alternative treatment). In the preface to the schedules of A Bibliographic Classification, Bliss spells out the general principles of classification as well as principles specifically related to his scheme. The first volume of the schedules appeared in 1940. In 1952, he issued a second edition of the volume with a rewritten preface, from which the following excerpt is taken, and with the addition of a "Concise Synopsis," which is also included here to illustrate the principles of classificatory structure. In the excerpt reprinted below, Bliss discusses the correlation between classes, concepts, and terms, as well as the hierarchical structure basic to his classification scheme. In his discussion of cross-classification, Bliss recognizes the "polydimensional" nature of classification and the difficulties inherent in the two-dimensional approach which is characteristic of linear classification. This is one of the earliest works in which the multidimensional nature of classification is recognized. The Bibliographic Classification did not meet with great success in the United States because the Dewey Decimal Classification and the Library of Congress Classification were already well ensconced in American libraries by then. Nonetheless, it attracted considerable attention in the British Commonwealth and elsewhere in the world. A committee was formed in Britain which later became the Bliss Classification Association. A faceted edition of the scheme has been in preparation under the direction of J. Mills and V. Broughton. Several parts of this new edition, entitled Bliss Bibliographic Classification, have been published.
    Footnote
    Original in: Bliss, H.E.: A bibliographic classification extended by systematic auxiliary schedules for composite specification and notation. Vols. 1-2. 2nd ed. New York: Wilson 1952. S.3-11.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  5. Ereshefsky, M.: ¬The poverty of the Linnaean hierarchy : a philosophical study of biological taxonomy (2007) 0.02
    0.020497862 = product of:
      0.10932194 = sum of:
        0.033013538 = weight(_text_:author in 2493) [ClassicSimilarity], result of:
          0.033013538 = score(doc=2493,freq=2.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.21322623 = fieldWeight in 2493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.03125 = fieldNorm(doc=2493)
        0.025436133 = product of:
          0.050872266 = sum of:
            0.050872266 = weight(_text_:rules in 2493) [ClassicSimilarity], result of:
              0.050872266 = score(doc=2493,freq=4.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.3147695 = fieldWeight in 2493, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2493)
          0.5 = coord(1/2)
        0.050872266 = weight(_text_:rules in 2493) [ClassicSimilarity], result of:
          0.050872266 = score(doc=2493,freq=4.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.3147695 = fieldWeight in 2493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.03125 = fieldNorm(doc=2493)
      0.1875 = coord(3/16)
    
    Footnote
    Rez. in: KO 35(2008) no.4, S.255-259 (B. Hjoerland): "This book was published in 2000 simultaneously in hardback and as an electronic resource, and, in 2007, as a paperback. The author is a professor of philosophy at the University of Calgary, Canada. He has an impressive list of contributions, mostly addressing issues in biological taxonomy such as units of evolution, natural kinds and the species concept. The book is a scholarly criticism of the famous classification system developed by the Swedish botanist Carl Linnaeus (1707-1778). This system consists of both a set of rules for the naming of living organisms (biological nomenclature) and principles of classification. Linné's system has been used and adapted by biologists over a period of almost 250 years. Under the current system of codes, it is now applied to more than two million species of organisms. Inherent in the Linnaean system is the indication of hierarchic relationships. The Linnaean system has been justified primarily on the basis of stability. Although it has been criticized and alternatives have been suggested, it still has its advocates (e.g., Schuh, 2003). One of the alternatives being developed is The International Code of Phylogenetic Nomenclature, known as the PhyloCode for short, a system that radically alters the current nomenclatural rules. The new proposals have provoked hot debate on nomenclatural issues in biology. . . ."
  6. Frické, M.: Logical division (2016) 0.02
    0.019986346 = product of:
      0.10659385 = sum of:
        0.031795166 = product of:
          0.06359033 = sum of:
            0.06359033 = weight(_text_:rules in 3183) [ClassicSimilarity], result of:
              0.06359033 = score(doc=3183,freq=4.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.39346188 = fieldWeight in 3183, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3183)
          0.5 = coord(1/2)
        0.06359033 = weight(_text_:rules in 3183) [ClassicSimilarity], result of:
          0.06359033 = score(doc=3183,freq=4.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.39346188 = fieldWeight in 3183, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3183)
        0.011208346 = product of:
          0.022416692 = sum of:
            0.022416692 = weight(_text_:ed in 3183) [ClassicSimilarity], result of:
              0.022416692 = score(doc=3183,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.19644247 = fieldWeight in 3183, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3183)
          0.5 = coord(1/2)
      0.1875 = coord(3/16)
    
    Content
    Contents: 1. Introduction: Kinds of Division; 2. The Basics of Logical Division; 3. History; 4. Formalization; 5. The Rules; 6. The Status of the Rules; 7. The Process of Logical Division; 8. Conclusion
    Source
    ISKO Encyclopedia of Knowledge Organization, ed. by B. Hjoerland. [http://www.isko.org/cyclo/logical_division]
  7. Gnoli, C.: Classifying phenomena : part 4: themes and rhemes (2018) 0.02
    0.017621385 = product of:
      0.09398072 = sum of:
        0.026979093 = product of:
          0.053958185 = sum of:
            0.053958185 = weight(_text_:rules in 4152) [ClassicSimilarity], result of:
              0.053958185 = score(doc=4152,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.33386347 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4152)
          0.5 = coord(1/2)
        0.053958185 = weight(_text_:rules in 4152) [ClassicSimilarity], result of:
          0.053958185 = score(doc=4152,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.33386347 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.046875 = fieldNorm(doc=4152)
        0.013043438 = product of:
          0.026086876 = sum of:
            0.026086876 = weight(_text_:22 in 4152) [ClassicSimilarity], result of:
              0.026086876 = score(doc=4152,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.23214069 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4152)
          0.5 = coord(1/2)
      0.1875 = coord(3/16)
    
    Abstract
    This is the fourth in a series of papers on classification based on phenomena instead of disciplines. Together with types, levels and facets, which have been discussed in the previous parts, themes and rhemes are further structural components of such a classification. In a statement or in a longer document, a base theme and several particular themes can be identified. The base theme should be cited first in a classmark, followed by particular themes, each with its own facets. In some cases, rhemes can also be expressed, that is, new information provided about a theme, converting an abstract statement ("wolves, affected by cervids") into a claim that something actually occurs ("wolves are affected by cervids"). In the Integrative Levels Classification, rhemes can be expressed by special deictic classes, including those for actual specimens, anaphoras, unknown values, conjunctions and spans, whole universe, anthropocentric favoured classes, and favoured host classes. These features, together with rules for pronunciation, make a classification of phenomena a true language that may be suitable for many uses.
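    A minimal sketch of the citation rule described above, using invented word tokens and bracket syntax rather than real Integrative Levels Classification notation: the base theme is rendered first with its facets, and particular themes follow, each carrying their own facets.

      # Toy classmark builder: base theme first, then particular themes, each
      # with its own facets. Tokens and bracket syntax are invented examples.
      def classmark(base, particulars):
          """base and each item in particulars are (theme, [facets...]) pairs."""
          def render(theme, facets):
              return theme + "".join(f"[{facet}]" for facet in facets)
          return " ".join([render(*base)] + [render(*p) for p in particulars])

      # "wolves, affected by cervids": 'wolves' as base theme with an
      # 'affected by' facet, and 'cervids' as a particular theme.
      print(classmark(("wolves", ["affected by: cervids"]), [("cervids", [])]))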
    Date
    17. 2.2018 18:22:25
  8. Pocock, H.: Classification schemes : development and survival (1997) 0.02
    0.017523019 = product of:
      0.14018415 = sum of:
        0.070092075 = weight(_text_:cataloguing in 762) [ClassicSimilarity], result of:
          0.070092075 = score(doc=762,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.49124604 = fieldWeight in 762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.078125 = fieldNorm(doc=762)
        0.070092075 = weight(_text_:cataloguing in 762) [ClassicSimilarity], result of:
          0.070092075 = score(doc=762,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.49124604 = fieldWeight in 762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.078125 = fieldNorm(doc=762)
      0.125 = coord(2/16)
    
    Source
    Cataloguing Australia. 23(1997) nos.1/2, S.10-16
  9. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.02
    0.017137174 = product of:
      0.09139826 = sum of:
        0.02803683 = weight(_text_:cataloguing in 2763) [ClassicSimilarity], result of:
          0.02803683 = score(doc=2763,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.19649842 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.02803683 = weight(_text_:cataloguing in 2763) [ClassicSimilarity], result of:
          0.02803683 = score(doc=2763,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.19649842 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.035324603 = sum of:
          0.017933354 = weight(_text_:ed in 2763) [ClassicSimilarity], result of:
            0.017933354 = score(doc=2763,freq=2.0), product of:
              0.11411327 = queryWeight, product of:
                3.5559888 = idf(docFreq=3431, maxDocs=44218)
                0.032090448 = queryNorm
              0.15715398 = fieldWeight in 2763, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5559888 = idf(docFreq=3431, maxDocs=44218)
                0.03125 = fieldNorm(doc=2763)
          0.017391251 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
            0.017391251 = score(doc=2763,freq=2.0), product of:
              0.11237528 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.032090448 = queryNorm
              0.15476047 = fieldWeight in 2763, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2763)
      0.1875 = coord(3/16)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction: Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also decides their difference in purpose: in AI, KR is mainly computer-application oriented, or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented, or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  10. Broughton, V.: Essential classification (2015) 0.02
    0.01676211 = product of:
      0.13409688 = sum of:
        0.11168019 = weight(_text_:2nd in 2098) [ClassicSimilarity], result of:
          0.11168019 = score(doc=2098,freq=2.0), product of:
            0.18010403 = queryWeight, product of:
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.032090448 = queryNorm
            0.6200871 = fieldWeight in 2098, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.078125 = fieldNorm(doc=2098)
        0.022416692 = product of:
          0.044833384 = sum of:
            0.044833384 = weight(_text_:ed in 2098) [ClassicSimilarity], result of:
              0.044833384 = score(doc=2098,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.39288494 = fieldWeight in 2098, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2098)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Issue
    2nd ed.
  11. Svenonius, E.: Facets as semantic categories (1979) 0.01
    0.014307825 = product of:
      0.1144626 = sum of:
        0.0381542 = product of:
          0.0763084 = sum of:
            0.0763084 = weight(_text_:rules in 1427) [ClassicSimilarity], result of:
              0.0763084 = score(doc=1427,freq=4.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.47215426 = fieldWeight in 1427, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1427)
          0.5 = coord(1/2)
        0.0763084 = weight(_text_:rules in 1427) [ClassicSimilarity], result of:
          0.0763084 = score(doc=1427,freq=4.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.47215426 = fieldWeight in 1427, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.046875 = fieldNorm(doc=1427)
      0.125 = coord(2/16)
    
    Abstract
    The paper looks at the semantic and syntactic components of facet definition. In synthetic classificatory languages, primitive terms are categorized into facets; facet information, then, is used in stating the syntactic rules for combining primitive terms into the acceptable (well-formed) complex expressions in the language. In other words, the structure of a synthetic classificatory language can be defined in terms of the facets recognized in the language and the syntactic rules employed by the language. Thus, facets are the "grammatical categories" of classificatory languages, and their definition is the first step in formulating structural descriptions of such languages. As well, the study of how facets are defined can give some insight into how language is used to embody information.
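    A minimal sketch of the idea above, under invented assumptions: primitive terms are typed by facet, and a purely syntactic rule (at most one term per facet, cited in a fixed order) decides whether a complex expression is well-formed. The facet names, vocabulary and citation order are hypothetical examples, not Svenonius's.

      # Toy model of facets as the "grammatical categories" of a synthetic
      # classificatory language; all names below are invented for illustration.
      FACETS = ["material", "process", "product"]        # fixed citation order
      VOCABULARY = {
          "wool": "material", "cotton": "material",
          "spinning": "process", "weaving": "process",
          "yarn": "product", "cloth": "product",
      }

      def well_formed(expression):
          """True iff every term is in the vocabulary, no facet is used twice,
          and the facets of the terms appear in citation order."""
          facets = [VOCABULARY.get(term) for term in expression]
          if None in facets or len(set(facets)) != len(facets):
              return False
          positions = [FACETS.index(f) for f in facets]
          return positions == sorted(positions)

      print(well_formed(["wool", "spinning", "yarn"]))    # True
      print(well_formed(["spinning", "wool"]))            # False: wrong citation order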
  12. Kaula, P.N.: Canons in analytico-synthetic classification (1979) 0.01
    0.013489546 = product of:
      0.10791637 = sum of:
        0.035972122 = product of:
          0.071944244 = sum of:
            0.071944244 = weight(_text_:rules in 1428) [ClassicSimilarity], result of:
              0.071944244 = score(doc=1428,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.4451513 = fieldWeight in 1428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1428)
          0.5 = coord(1/2)
        0.071944244 = weight(_text_:rules in 1428) [ClassicSimilarity], result of:
          0.071944244 = score(doc=1428,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.4451513 = fieldWeight in 1428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.0625 = fieldNorm(doc=1428)
      0.125 = coord(2/16)
    
    Abstract
    Presentation of the rules (canons) which S.R. Ranganathan laid down for the three planes of work (the idea plane, the verbal plane and the notational plane), with an explanation of each of these 34 canons, which are indispensable tools for the establishment of any classification system. An overall survey of the canons is given.
  13. Mills, J.; Broughton, V.: Bliss Bibliographic Classification : Introduction and auxiliary schedules (1992) 0.01
    0.013409688 = product of:
      0.107277505 = sum of:
        0.08934415 = weight(_text_:2nd in 821) [ClassicSimilarity], result of:
          0.08934415 = score(doc=821,freq=2.0), product of:
            0.18010403 = queryWeight, product of:
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.032090448 = queryNorm
            0.49606967 = fieldWeight in 821, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.0625 = fieldNorm(doc=821)
        0.017933354 = product of:
          0.035866708 = sum of:
            0.035866708 = weight(_text_:ed in 821) [ClassicSimilarity], result of:
              0.035866708 = score(doc=821,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.31430796 = fieldWeight in 821, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.0625 = fieldNorm(doc=821)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Issue
    2nd ed.
  14. Kochar, R.S.: Library classification systems (1998) 0.01
    0.012266113 = product of:
      0.09812891 = sum of:
        0.049064454 = weight(_text_:cataloguing in 931) [ClassicSimilarity], result of:
          0.049064454 = score(doc=931,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.34387225 = fieldWeight in 931, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.0546875 = fieldNorm(doc=931)
        0.049064454 = weight(_text_:cataloguing in 931) [ClassicSimilarity], result of:
          0.049064454 = score(doc=931,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.34387225 = fieldWeight in 931, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.0546875 = fieldNorm(doc=931)
      0.125 = coord(2/16)
    
    Abstract
    Library classification traces the origins of the subject and leads on to the latest developments in it. This user-friendly text explains concepts through analogies, diagrams, and tables. The fundamental but important topics and terminology of classification have been uniquely explained. The book deals with the recent trends in the use of computers in cataloguing, including online systems, artificial intelligence systems, etc. With its up-to-date and comprehensive coverage, the book will serve as a text for degree students of Library and Information Science and also prove to be invaluable reference material for professionals and researchers.
  15. Karamuftuoglu, M.: Need for a systemic theory of classification in information science (2007) 0.01
    0.011844954 = product of:
      0.094759636 = sum of:
        0.07003229 = weight(_text_:author in 615) [ClassicSimilarity], result of:
          0.07003229 = score(doc=615,freq=4.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.45232117 = fieldWeight in 615, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.046875 = fieldNorm(doc=615)
        0.024727343 = weight(_text_:american in 615) [ClassicSimilarity], result of:
          0.024727343 = score(doc=615,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.22601068 = fieldWeight in 615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.046875 = fieldNorm(doc=615)
      0.125 = coord(2/16)
    
    Abstract
    In the article, the author aims to clarify some of the issues surrounding the discussion regarding the usefulness of a substantive classification theory in information science (IS) by means of a broad perspective. By utilizing a concrete example from the High Accuracy Retrieval from Documents (HARD) track of a Text REtrieval Conference (TREC), the author suggests that the bag of words approach to information retrieval (IR) and techniques such as relevance feedback have significant limitations in expressing and resolving complex user information needs. He argues that a comprehensive analysis of information needs involves explicating often-implicit assumptions made by the authors of scholarly documents, as well as everyday texts such as news articles. He also argues that progress in IS can be furthered by developing general theories that are applicable to multiple domains. The concrete example of application of the domain-analytic approach to subject analysis in IS to the aesthetic evaluation of works of information arts is used to support this argument.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.13, S.1977-1987
  16. Hillman, D.J.: Mathematical classification techniques for nonstatic document collections, with particular reference to the problem of relevance (1965) 0.01
    0.011733476 = product of:
      0.09386781 = sum of:
        0.078176126 = weight(_text_:2nd in 5516) [ClassicSimilarity], result of:
          0.078176126 = score(doc=5516,freq=2.0), product of:
            0.18010403 = queryWeight, product of:
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.032090448 = queryNorm
            0.43406096 = fieldWeight in 5516, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5516)
        0.015691686 = product of:
          0.031383373 = sum of:
            0.031383373 = weight(_text_:ed in 5516) [ClassicSimilarity], result of:
              0.031383373 = score(doc=5516,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.27501947 = fieldWeight in 5516, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5516)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Source
    Classification research. Proc. of the 2nd Int. Study Conf. ... , Elsinore, 14.-18.9.1964. Ed.: P. Atherton
  17. Farradane, J.E.L.: ¬A scientific theory of classification and indexing and its practical applications (1950) 0.01
    0.01011716 = product of:
      0.08093728 = sum of:
        0.026979093 = product of:
          0.053958185 = sum of:
            0.053958185 = weight(_text_:rules in 1654) [ClassicSimilarity], result of:
              0.053958185 = score(doc=1654,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.33386347 = fieldWeight in 1654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1654)
          0.5 = coord(1/2)
        0.053958185 = weight(_text_:rules in 1654) [ClassicSimilarity], result of:
          0.053958185 = score(doc=1654,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.33386347 = fieldWeight in 1654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.046875 = fieldNorm(doc=1654)
      0.125 = coord(2/16)
    
    Abstract
    A classification is a theory of the structure of knowledge. From a discussion of the nature of truth, it is held that scientific knowledge is the only knowledge which can be regarded as true. The method of induction from empirical data is therefore applied to the construction of a classification. Items of knowledge are divided into uniquely definable terms, called isolates, and the relations between them, called operators. It is shown that only four basic operators exist, expressing appurtenance, equivalence, reaction and causation; using symbols for these operators, all subjects can be analysed in a linear form called an analet. With the addition of the permissible permutations of such analets, formed according to simple rules, alphabetical arrangement of the first terms provides a complete, logical subject index. Examples are given, and possible difficulties are considered. A classification can then be constructed by selection of deductive relations, arranged in hierarchical form. The nature of possible classifications is discussed. It is claimed that such an inductively constructed classification is the only true representation of the structure of knowledge, and that these principles provide a simple technique for accurately and fully indexing and classifying any given set of data, with complete flexibility.
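    A small sketch of the indexing mechanism described above, with loudly stated assumptions: the operator symbols and the permutation rule (each isolate is simply brought to the front as the filing term) are placeholders, since the abstract does not reproduce Farradane's actual notation or permutation rules; the point is only that an analet plus rule-governed permutations yields one alphabetically filed index entry per leading term.

      # Hypothetical illustration of Farradane-style analets: isolates joined by
      # relational operators. Operator symbols are placeholders, not Farradane's.
      OPERATORS = {"appurtenance": "/a", "equivalence": "/e",
                   "reaction": "/r", "causation": "/c"}

      def index_entries(isolates, operators):
          """Render the analet once, then file it under each isolate in turn
          and sort the entries alphabetically by filing term."""
          assert len(operators) == len(isolates) - 1
          analet = isolates[0]
          for op, term in zip(operators, isolates[1:]):
              analet += f" {op} {term}"
          return sorted(f"{lead}: {analet}" for lead in isolates)

      for entry in index_entries(["heat", "copper", "expansion"],
                                 [OPERATORS["reaction"], OPERATORS["causation"]]):
          print(entry)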
  18. Spiteri, L.: ¬A simplified model for facet analysis : Ranganathan 101 (1998) 0.01
    0.01011716 = product of:
      0.08093728 = sum of:
        0.026979093 = product of:
          0.053958185 = sum of:
            0.053958185 = weight(_text_:rules in 3842) [ClassicSimilarity], result of:
              0.053958185 = score(doc=3842,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.33386347 = fieldWeight in 3842, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3842)
          0.5 = coord(1/2)
        0.053958185 = weight(_text_:rules in 3842) [ClassicSimilarity], result of:
          0.053958185 = score(doc=3842,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.33386347 = fieldWeight in 3842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.046875 = fieldNorm(doc=3842)
      0.125 = coord(2/16)
    
    Abstract
    Ranganathan's canons, principles, and postulates can easily confuse readers, especially because he revised and added to them in various editions of his many books. The Classification Research Group, who drew on Ranganathan's work as their basis for classification theory but developed it in their own way, has never clearly organized all their equivalent canons and principles. In this article Spiteri gathers the fundamental rules from both systems and compares and contrasts them. She makes her own clearer set of principles for constructing facets, stating the subject of a document, and designing notation. Spiteri's "simplified model" is clear and understandable, but certainly not simplistic. The model does not include methods for making a faceted system, but will serve as a very useful guide in how to turn initial work into a rigorous classification. Highly recommended
  19. Loehrlein, A.J.; Lemieux, V.L.; Bennett, M.: ¬The classification of financial products (2014) 0.01
    0.01011716 = product of:
      0.08093728 = sum of:
        0.026979093 = product of:
          0.053958185 = sum of:
            0.053958185 = weight(_text_:rules in 1196) [ClassicSimilarity], result of:
              0.053958185 = score(doc=1196,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.33386347 = fieldWeight in 1196, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1196)
          0.5 = coord(1/2)
        0.053958185 = weight(_text_:rules in 1196) [ClassicSimilarity], result of:
          0.053958185 = score(doc=1196,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.33386347 = fieldWeight in 1196, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.046875 = fieldNorm(doc=1196)
      0.125 = coord(2/16)
    
    Abstract
    In the wake of the global financial crisis, the U.S. Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) was enacted to provide increased transparency in financial markets. In response to Dodd-Frank, a series of rules relating to swaps record keeping have been issued, and one such rule calls for the creation of a financial products classification system. The manner in which financial products are classified will have a profound effect on data integration and analysis in the financial industry. This article considers various approaches that can be taken when classifying financial products and recommends the use of facet analysis. The article argues that this type of analysis is flexible enough to accommodate multiple viewpoints and rigorous enough to facilitate inferences that are based on the hierarchical structure. Various use cases are examined that pertain to the organization of financial products. The use cases confirm the practical utility of taxonomies that are designed according to faceted principles.
  20. Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification" (2010) 0.01
    0.009939759 = product of:
      0.07951807 = sum of:
        0.026531162 = weight(_text_:26 in 2945) [ClassicSimilarity], result of:
          0.026531162 = score(doc=2945,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.23410915 = fieldWeight in 2945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.046875 = fieldNorm(doc=2945)
        0.05298691 = sum of:
          0.026900033 = weight(_text_:ed in 2945) [ClassicSimilarity], result of:
            0.026900033 = score(doc=2945,freq=2.0), product of:
              0.11411327 = queryWeight, product of:
                3.5559888 = idf(docFreq=3431, maxDocs=44218)
                0.032090448 = queryNorm
              0.23573098 = fieldWeight in 2945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5559888 = idf(docFreq=3431, maxDocs=44218)
                0.046875 = fieldNorm(doc=2945)
          0.026086876 = weight(_text_:22 in 2945) [ClassicSimilarity], result of:
            0.026086876 = score(doc=2945,freq=2.0), product of:
              0.11237528 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.032090448 = queryNorm
              0.23214069 = fieldWeight in 2945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2945)
      0.125 = coord(2/16)
    
    Abstract
    Argues that Beghtol's (2003) use of the terms "naive classification" and "professional classification" is valid because they are nominal definitions and that the distinction between these two types of classification points up the need for researchers in knowledge organization to broaden their scope beyond traditional classification systems intended for information retrieval. Argues that work by Beghtol (2003), Kwasnik (1999) and Bailey (1994) offers direction for the development of a classification of classifications based on the pragmatic dimensions of extant classification systems. With reference to: Beghtol, C.: Naïve classification systems and the global information society. In: Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference, 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine. Würzburg: Ergon Verlag 2004. S.19-22. (Advances in knowledge organization; vol.9)
    Date
    1. 6.2010 17:46:26

Languages

  • e 102
  • f 3
  • chi 1
  • d 1
  • sp 1

Types

  • a 97
  • m 9
  • el 4
  • s 1