Document (#39826)

Author
Broughton, V.
Title
Essential classification
Imprint
London : Facet Publ.
Year
2004
Pages
324 p.
Isbn
1-85604-514-5
Abstract
Classification is a crucial skill for all information workers involved in organizing collections, but it is a difficult concept to grasp - and is even more difficult to put into practice. Essential Classification offers full guidance on how to go about classifying a document from scratch. This much-needed text leads the novice classifier step by step through the basics of subject cataloguing, with an emphasis on practical document analysis and classification. It deals with fundamental questions of the purpose of classification in different situations, and the needs and expectations of end users. The novice is introduced to the ways in which document content can be assessed, and how this can best be expressed for translation into the language of specific indexing and classification systems. The characteristics of the major general schemes of classification are discussed, together with their suitability for different classification needs.
Content
Parallel edition: New York, NY: Neal-Schuman (ISBN 1-55570-507-3)
Footnote
Review in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display is reader-friendly; it is in fact what makes this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). 
This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus on the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWK, etc.). While it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of essentials of classification scheme application. The following five chapters present in turn each one of the three major and currently used bibliographic classification schemes, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) deserves only a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on par with the first seven chapters for its theoretical content. 
Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, advantages and disadvantages of using print versions or e-versions of classification schemes, choice of classification scheme, and general versus special schemes. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters on classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, versions of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization and subject representation."
Further review in: ZfBB 53(2006) H.2, S.111-113 (W. Gödert)
Theme
Classification theory: elements / structure
Fundamentals and introductions: general literature

Similar documents (author)

  1. Broughton, V.: Classification and subject organization and retrieval (2007) 5.02
  2. Broughton, V.: Meccano, molecules, and the organization of knowledge : the continuing contribution of S.R. Ranganathan (2007) 5.02
  3. Broughton, V.: Organizing a national humanities portal : a model for the classification and subject management of digital resources (2002) 5.02
  4. Broughton, V.: A new classification for the literature for religion (2000) 5.02
  5. Broughton, V.: The revision process in UDC : an examination of the systematic auxiliary of 'Point-of-View' using facet-analytical methods (1998) 5.02

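The 5.02 value shown next to each author match is a Lucene field weight, computed with the ClassicSimilarity tf-idf formula the engine names in its scoring output. A minimal sketch, assuming Lucene's classic formulas and plugging in the statistics the engine reports for this index (the term `broughton` occurs once per record, in 37 of 43,254 documents, with a field norm of 0.625):

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity inverse document frequency:
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(term_freq: float, idf: float, field_norm: float) -> float:
    # fieldWeight = tf * idf * fieldNorm, where tf = sqrt(raw term frequency)
    return math.sqrt(term_freq) * idf * field_norm

# author_txt:broughton, statistics as reported by the engine
idf = classic_idf(doc_freq=37, max_docs=43254)                  # ≈ 8.037259
score = field_weight(term_freq=1.0, idf=idf, field_norm=0.625)  # ≈ 5.023287
```

Every author match carries identical statistics (one occurrence, same docFreq and fieldNorm), which is why all five entries score the same 5.02.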
Similar documents (content)

  1. Li, Y.; Belkin, N.J.: A faceted approach to conceptualizing tasks in information seeking (2008) 0.14
  2. Szostak, R.: Classifying science : phenomena, data, theory, method, practice (2004) 0.13
  3. French, J.C.; Brown, D.E.; Kim, N.-H.: A classification approach to Boolean query reformulation (1997) 0.12
  4. Broughton, V.: Essential thesaurus construction (2006) 0.12
  5. Prabowo, R.; Jackson, M.; Burden, P.; Knoell, H.-D.: Ontology-based automatic classification for the Web pages : design, implementation and evaluation (2002) 0.12
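The content-similarity scores are built from multiple abstract terms: each matching term contributes queryWeight × fieldWeight, and the sum is scaled by a coordination factor for the fraction of query terms that matched. A sketch of how the 0.14 for the first match (Li & Belkin) arises, using per-term values taken directly from the engine's scoring trace rather than recomputed from index statistics:

```python
import math

QUERY_NORM = 0.021433545  # query normalization factor reported by the engine

def term_score(boost: float, idf: float, tf: float, field_norm: float) -> float:
    # ClassicSimilarity per-term score = queryWeight * fieldWeight
    query_weight = boost * idf * QUERY_NORM
    field_weight = math.sqrt(tf) * idf * field_norm
    return query_weight * field_weight

# e.g. the term "leads" in the matching abstract (one occurrence)
leads = term_score(boost=1.0130763, idf=6.4599094, tf=1.0,
                   field_norm=0.0625)  # ≈ 0.056633

# seven of the 25 query terms matched; coord(7/25) scales down the sum
matched = [0.056632932, 0.060909923, 0.020977585, 0.030453932,
           0.009974043, 0.07487899, 0.26305935]
total = (7 / 25) * sum(matched)  # ≈ 0.144728, the 0.14 shown for match 1
```

The coordination factor is what keeps a document that matches only a handful of rare terms from outranking one that matches the query broadly.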