Search (174 results, page 1 of 9)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Belayche, C.: ¬A propos de la classification de Dewey (1997) 0.11
    0.10791097 = product of:
      0.1798516 = sum of:
        0.12269233 = weight(_text_:section in 1171) [ClassicSimilarity], result of:
          0.12269233 = score(doc=1171,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.46641576 = fieldWeight in 1171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.0625 = fieldNorm(doc=1171)
        0.03014327 = weight(_text_:on in 1171) [ClassicSimilarity], result of:
          0.03014327 = score(doc=1171,freq=4.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.27492687 = fieldWeight in 1171, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=1171)
        0.027015999 = product of:
          0.054031998 = sum of:
            0.054031998 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.054031998 = score(doc=1171,freq=2.0), product of:
                0.17456654 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049850095 = queryNorm
                0.30952093 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
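The breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight × fieldWeight, the contributions are summed, and the sum is scaled by a coordination factor for the fraction of query clauses that matched. As a rough check, the displayed document score can be recomputed from the listed factors; the sketch below mirrors the formula only (constants copied from the tree above), not Lucene's actual implementation.

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """ClassicSimilarity per-term score:
    queryWeight * fieldWeight = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.049850095  # shared queryNorm from the explain tree above

# The three matching terms for result 1 (doc 1171), values copied from the tree
section = term_score(freq=2.0, idf=5.276892, query_norm=QUERY_NORM, field_norm=0.0625)
on = term_score(freq=4.0, idf=2.199415, query_norm=QUERY_NORM, field_norm=0.0625)
# the "22" clause sits in a nested boolean query, hence the inner coord(1/2)
t22 = term_score(freq=2.0, idf=3.5018296, query_norm=QUERY_NORM, field_norm=0.0625) * 0.5

# Outer coord(3/5): only 3 of the 5 query clauses matched this document
total = (section + on + t22) * 0.6
print(round(total, 4))  # 0.1079, consistent with the 0.11 shown for result 1
```

Summing the three term contributions (0.12269233 + 0.03014327 + 0.027016) and applying coord(3/5) reproduces the 0.10791097 at the top of the tree.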
    
    Abstract
    All classifications are based on ideologies, and Dewey is marked by its author's origins in 19th-century North America. Subsequent revisions indicate changed ways of understanding the world. Section 157 (psycho-pathology) is now included with 616.89 (mental troubles), reflecting the move to a genetics-based approach. Table 5 (racial, ethnic and national groups) is, however, unchanged, despite changing views on such categorisation.
    Source
    Bulletin d'informations de l'Association des Bibliothécaires Français. 1997, no.175, S.22-23
  2. Dahlberg, I.: DIN 32705: the German standard on classification systems : a critical appraisal (1992) 0.05
    
    Abstract
    The German standard on the construction and further development of classification systems is introduced together with its background, and the contents of its 8 chapters are described. A critical appraisal considers (1) the fact that the standard does not openly deal with the optimal form of CS, viz. faceted CS, but treats them as one possibility among others, although the authors seem to have had this kind in mind in the section on steps of CS development and in other sections of the standard; (2) that the standard does not give any recommendation on the computerization of the activities necessary in establishing CS; and (3) that a convergence of CS and thesauri, in the form of faceted CS and faceted thesauri, has not been taken into consideration. In conclusion, some doubts are raised as to whether a standard is the best medium for providing recommendations or guidelines for the construction of such systems; more adequate ways for this should be explored.
  3. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.05
    
    Abstract
    Purpose - The potential and benefits of classification schemes and thesauri in building organizational taxonomies are not yet fully utilized by organizations, and empirical data on building an organizational taxonomy by the top-down approach of using classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built by using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and the subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from the terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of the structures and term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of categories was made uniform according to a commonly used standard, and the consistency principle was employed to make the taxonomy structure and categories neater. Validation of the draft taxonomy through consultations with the stakeholders further refined it. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their efforts to develop taxonomies for organizing content and aiding navigation on organizational sites.
    Date
    7.11.2008 15:22:04
    Theme
    Information Resources Management
  4. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.04
    
    Abstract
    Reports results of a comparative study of 3 classification schemes - LCC, DDC and the NLM Classification - to determine their effectiveness in classifying materials on health insurance. Examined 2 hypotheses: that there would be no differences in the scatter of the 3 classification schemes; and that there would be overlap between all 3 schemes but no difference in the classes into which the subject was placed. There was subject scatter in all 3 classification schemes and little overlap between the 3 systems.
    Date
    22. 4.1997 21:10:19
  5. Broughton, V.: Essential classification (2004) 0.04
    
    Abstract
    Classification is a crucial skill for all information workers involved in organizing collections, but it is a difficult concept to grasp - and is even more difficult to put into practice. Essential Classification offers full guidance on how to go about classifying a document from scratch. This much-needed text leads the novice classifier step by step through the basics of subject cataloguing, with an emphasis on practical document analysis and classification. It deals with fundamental questions of the purpose of classification in different situations, and the needs and expectations of end users. The novice is introduced to the ways in which document content can be assessed, and how this can best be expressed for translation into the language of specific indexing and classification systems. The characteristics of the major general schemes of classification are discussed, together with their suitability for different classification needs.
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display is reader-friendly; it is in fact what makes this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137).
This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects seems a little advanced for a beginners' class.
    In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus on the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWK, etc.). While it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of the essentials of classification scheme application. The following five chapters present in turn each one of the three major and currently used bibliographic classification schemes, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) deserves only a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on par with the first seven chapters for its theoretical content.
Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, advantages and disadvantages of using print versions or e-versions of classification schemes, choice of classification scheme, and general versus special schemes. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters on classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate, as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for the physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization and subject representation."
  6. Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification" (2010) 0.04
    
    Abstract
    Argues that Beghtol's (2003) use of the terms "naive classification" and "professional classification" is valid because they are nominal definitions, and that the distinction between these two types of classification points up the need for researchers in knowledge organization to broaden their scope beyond traditional classification systems intended for information retrieval. Argues that work by Beghtol (2003), Kwasnik (1999) and Bailey (1994) offers direction for the development of a classification of classifications based on the pragmatic dimensions of extant classification systems. With reference to: Beghtol, C.: Naïve classification systems and the global information society. In: Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference, 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine. Würzburg: Ergon Verlag 2004. S.19-22. (Advances in knowledge organization; vol.9)
  7. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.04
    
    Abstract
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship, and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers" (p. 365), and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method - citations, authors' names, imprints, style, and vocabulary - rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future), as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
    Footnote
    Original in: Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, Ottawa, 1971. Ed.: Jerzy A Wojceichowski. Pullach: Verlag Dokumentation 1974. S.404-412.
  8. Zhang, J.; Zeng, M.L.: ¬A new similarity measure for subject hierarchical structures (2014) 0.03
    
    Abstract
    Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, the nodes of the two hierarchical structures are each projected onto a two-dimensional space, and both the structural similarity and the subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which the structural similarity impacts the overall similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts, suggesting that the similarity method achieved satisfactory results. Practical implications - Hierarchical structures found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which they are similar. Originality/value - Both the structural similarity and the subject similarity of nodes are considered in the proposed method, and the extent to which the structural similarity impacts the overall similarity can be adjusted. In addition, a new evaluation method for hierarchical structure similarity is presented.
    Date
    8. 4.2015 16:22:13
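    The measure described above blends structural and subject similarity of nodes under an adjustable parameter. A minimal sketch of that blending step, assuming invented 2-D node positions, a distance-based structural similarity, and token-overlap label similarity (none of these are the paper's actual formulas):

```python
import math

def structural_sim(pos_a, pos_b):
    """Similarity of two node positions projected onto a 2-D space."""
    return 1.0 / (1.0 + math.dist(pos_a, pos_b))

def subject_sim(label_a, label_b):
    """Token-overlap (Jaccard) similarity between two node labels."""
    a, b = set(label_a.lower().split()), set(label_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def node_sim(node_a, node_b, alpha=0.5):
    """Blend the two similarities; alpha controls the structural
    contribution, mirroring the paper's adjustable parameter."""
    return (alpha * structural_sim(node_a["pos"], node_b["pos"])
            + (1 - alpha) * subject_sim(node_a["label"], node_b["label"]))
```

    With alpha=1.0 only node placement matters; with alpha=0.0 only the subject labels do, which is the trade-off the parameter is there to control.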
  9. Gnoli, C.: Classifying phenomena : part 4: themes and rhemes (2018)
    Abstract
    This is the fourth in a series of papers on classification based on phenomena instead of disciplines. Together with types, levels and facets that have been discussed in the previous parts, themes and rhemes are further structural components of such a classification. In a statement or in a longer document, a base theme and several particular themes can be identified. The base theme should be cited first in a classmark, followed by the particular themes, each with its own facets. In some cases, rhemes can also be expressed, that is, new information provided about a theme, converting an abstract statement ("wolves, affected by cervids") into a claim that something actually occurs ("wolves are affected by cervids"). In the Integrative Levels Classification rhemes can be expressed by special deictic classes, including those for actual specimens, anaphoras, unknown values, conjunctions and spans, whole universe, anthropocentric favoured classes, and favoured host classes. These features, together with rules for pronunciation, make a classification of phenomena a true language that may be suitable for many uses.
    Date
    17. 2.2018 18:22:25
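    The citation order described above (base theme cited first in the classmark, followed by particular themes, each with its own facets) can be sketched as a string-assembly step. The codes below are invented placeholders, not actual Integrative Levels Classification notation:

```python
def build_classmark(base_theme, particular_themes=()):
    """Assemble a classmark: the base theme first, then each
    particular theme, each theme carrying its own facet values.

    A theme is a dict with a "code" and an optional "facets" list;
    both are illustrative, not real ILC symbols.
    """
    themes = [base_theme, *particular_themes]
    parts = [t["code"] + "".join(t.get("facets", [])) for t in themes]
    return " ".join(parts)
```

    For example, a hypothetical base theme with one facet plus one particular theme yields a two-part classmark, the base theme always leading.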
  10. Molholt, P.: Qualities of classification schemes for the Information Superhighway (1995)
    Abstract
    For my segment of this program I'd like to focus on some basic qualities of classification schemes. These qualities are critical to our ability to truly organize knowledge for access. As I see it, there are at least five qualities of note. The first of these properties that I want to talk about is "authoritative." By this I mean standardized, but more than standardized: standardized with a built-in consensus-building process. A classification scheme constructed by a collaborative, consensus-building process carries the approval, and the authority, of the discipline groups that contribute to it and that it affects... The next property of classification systems is "expandable": living, responsive, with a clear locus of responsibility for its continuous upkeep. The worst thing you can do with a thesaurus, or a classification scheme, is to finish it. You can't ever finish it because it reflects ongoing intellectual activity... The third property is "intuitive." That is, the system has to be approachable, it has to be transparent, or at least capable of being transparent. It has to have an underlying logic that supports the classification scheme but doesn't dominate it... The fourth property is "organized and logical." I advocate very strongly, and agree with Lois Chan, that classification must be based on a rule-based structure, on somebody's world-view of the syndetic structure... The fifth property is "universal," by which I mean the classification scheme needs to be usable by any specific system or application, and be available as a language for multiple purposes.
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
    Source
    Cataloging and classification quarterly. 21(1995) no.2, S.19-22
  11. Loehrlein, A.J.; Lemieux, V.L.; Bennett, M.: ¬The classification of financial products (2014)
    Abstract
    In the wake of the global financial crisis, the U.S. Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) was enacted to provide increased transparency in financial markets. In response to Dodd-Frank, a series of rules relating to swaps record keeping have been issued, and one such rule calls for the creation of a financial products classification system. The manner in which financial products are classified will have a profound effect on data integration and analysis in the financial industry. This article considers various approaches that can be taken when classifying financial products and recommends the use of facet analysis. The article argues that this type of analysis is flexible enough to accommodate multiple viewpoints and rigorous enough to facilitate inferences that are based on the hierarchical structure. Various use cases are examined that pertain to the organization of financial products. The use cases confirm the practical utility of taxonomies that are designed according to faceted principles.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.2, S.263-280
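    Faceted classification of the kind the abstract recommends supports the hierarchical inference mentioned there: a product matches a query for its own facet value or for any broader value on that value's path. A small sketch under invented facet names and values (not an actual financial-products taxonomy):

```python
# Each product is described by independent facets; values within a
# facet are hierarchical paths (broader -> narrower). All names and
# values here are illustrative placeholders.
PRODUCTS = {
    "interest-rate swap": {"asset_class": "rates/swap", "settlement": "cash"},
    "equity option":      {"asset_class": "equity/option", "settlement": "physical"},
    "equity swap":        {"asset_class": "equity/swap", "settlement": "cash"},
}

def matches(value, query):
    """A facet value matches a query for itself or for any broader
    value on its path -- the inference the hierarchy supports."""
    return value == query or value.startswith(query + "/")

def select(products, **criteria):
    """Names of products whose facets satisfy every criterion."""
    return sorted(
        name for name, facets in products.items()
        if all(matches(facets.get(f, ""), q) for f, q in criteria.items())
    )
```

    Because each facet is queried independently, the same product set accommodates multiple viewpoints (by asset class, by settlement type, or both at once) without privileging one hierarchy.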
  12. Beghtol, C.: Naïve classification systems and the global information society (2004)
    Abstract
    Classification is an activity that transcends time and space and that bridges the divisions between different languages and cultures, including the divisions between academic disciplines. Classificatory activity, however, serves different purposes in different situations. Classifications for information retrieval can be called "professional" classifications, and classifications in other fields can be called "naïve" classifications because they are developed by people who have no particular interest in classificatory issues. The general purpose of naïve classification systems is to discover new knowledge. In contrast, the general purpose of information retrieval classifications is to classify pre-existing knowledge. Different classificatory purposes may thus inform systems that are intended to span the cultural specifics of the globalized information society. This paper builds on previous research into the purposes and characteristics of naïve classifications. It describes some of the relationships between the purpose and context of a naïve classification, the units of analysis used in it, and the theory that the context and the units of analysis imply.
    Footnote
    Vgl.: Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification". In: Knowledge organization. 37(2010) no.2, S.111-120.
    Pages
    S.19-22
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  13. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007)
    Abstract
    Network-oriented standards for vocabulary publishing and exchange, together with proposals for terminological services and terminology registries, will improve the sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications, as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. As we speak, many classifications are being created for knowledge organization, and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
  14. Gopinath, M.A.: Ranganathan's theory of facet analysis and knowledge representation (1992)
    Source
    DESIDOC bulletin of information technology. 12(1992) no.5, S.16-20
  15. Foskett, D.J.: ¬The construction of a faceted classification for a special subject (1959)
    Source
    Proc. Int. Conf. on Scientific Information, Washington
  16. McLachlan, H.V.: Buchanan, Locke and Wittgenstein on classification (1981)
    Source
    Journal of information science. 3(1981), S.191-195
  17. Ranganathan, S.R.: Library classification as a discipline (1957)
    Source
    Proceedings of the International Study Conference on Classification for Information Retrieval, held at Beatrice Webb House,Dorking, England, 13.-17.5.1957
  18. Shera, J.H.: Pattern, structure, and conceptualization in classification for information retrieval (1957)
    Source
    Proceedings of the International Study Conference on Classification for Information Retrieval, held at Beatrice Webb House, Dorking, England, 13.-17.5.1957
  19. ¬The need for a faceted classification as the basis of all methods of information retrieval : Memorandum of the Classification Research Group (1997)
    Footnote
    Wiederabdruck aus: Proceedings of the International Study Conference on Classification for Information Retrieval, Dorking. London: Aslib 1957.
    Imprint
    The Hague : International Federation for Information and Documentation (FID)
    Source
    From classification to 'knowledge organization': Dorking revisited or 'past is prelude'. A collection of reprints to commemorate the firty year span between the Dorking Conference (First International Study Conference on Classification Research 1957) and the Sixth International Study Conference on Classification Research (London 1997). Ed.: A. Gilchrist
  20. Dousa, T.M.: Categories and the architectonics of system in Julius Otto Kaiser's method of systematic indexing (2014)
    Abstract
    Categories, or concepts of high generality representing the most basic kinds of entities in the world, have long been understood to be a fundamental element in the construction of knowledge organization systems (KOSs), particularly faceted ones. Commentators on facet analysis have tended to foreground the role of categories in the structuring of controlled vocabularies and the construction of compound index terms, and the implications of this for subject representation and information retrieval. Less attention has been paid to the variety of ways in which categories can shape the overall architectonic framework of a KOS. This case study explores the range of functions that categories took on in structuring various aspects of an early analytico-synthetic KOS, Julius Otto Kaiser's method of Systematic Indexing (SI). Within SI, categories not only functioned as mechanisms to partition an index vocabulary into smaller groupings of terms and as elements in the construction of compound index terms, but also served as means of defining the units of indexing, or index items, incorporated into an index; determining the organization of card index files and the articulation of the guide card system serving as a navigational aid thereto; and setting structural constraints on the establishment of cross-references between terms. In all these ways, Kaiser's system of categories contributed to the general systematicity of SI.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
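    Kaiser's compound index terms, his "statements," are commonly described as citing terms in a fixed category order: concretes, then countries, then processes. A minimal sketch of that citation-order rule; the separator, the data shape, and the error handling are our own simplifications of SI, not Kaiser's notation:

```python
# Kaiser's three categories, in the citation order his statements use
# (as commonly described in the literature on Systematic Indexing).
CATEGORY_ORDER = ("concrete", "country", "process")

def make_statement(terms):
    """Arrange index terms into Kaiser's fixed citation order.

    `terms` maps a category name to an index term; categories may be
    supplied in any order, and unknown categories are rejected.
    """
    for cat in terms:
        if cat not in CATEGORY_ORDER:
            raise ValueError(f"unknown category: {cat}")
    return "--".join(terms[cat] for cat in CATEGORY_ORDER if cat in terms)
```

    Fixing the citation order at the category level is what gives such a system its predictability: however the input arrives, "Wool--Argentina--Trade" always files the same way.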
