Search (234 results, page 1 of 12)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Olson, H.A.: ¬The ubiquitous hierarchy : an army to overcome the threat of a mob (2004) 0.08
    0.08110704 = product of:
      0.1351784 = sum of:
        0.02970992 = weight(_text_:of in 833) [ClassicSimilarity], result of:
          0.02970992 = score(doc=833,freq=16.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.39093933 = fieldWeight in 833, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=833)
        0.05494872 = weight(_text_:subject in 833) [ClassicSimilarity], result of:
          0.05494872 = score(doc=833,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.31612942 = fieldWeight in 833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=833)
        0.05051976 = product of:
          0.10103952 = sum of:
            0.10103952 = weight(_text_:headings in 833) [ClassicSimilarity], result of:
              0.10103952 = score(doc=833,freq=2.0), product of:
                0.23569997 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.04859849 = queryNorm
                0.42867854 = fieldWeight in 833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.0625 = fieldNorm(doc=833)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This article explores the connections between Melvil Dewey and Hegelianism, and between Charles Cutter and the Scottish Common Sense philosophers. It traces the practice of hierarchy from these philosophical influences to Dewey and Cutter and their legacy to today's Dewey Decimal Classification and Library of Congress Subject Headings. The ubiquity of hierarchy is linked to Dewey's and Cutter's metaphor of organizing the mob of information into an orderly army using the tool of logic.
    Footnote
    Article in a special issue: The philosophy of information
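    The relevance figure shown beside each title is unpacked in the indented breakdown above, which follows Lucene's ClassicSimilarity (TF-IDF) explain format: per term, tf = sqrt(termFreq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the matching clause scores are summed and scaled by a coordination factor. A minimal sketch that reproduces the figures for this first record from the numbers printed above (the helper and variable names are ours, not part of the search engine's output):

      import math

      QUERY_NORM = 0.04859849  # queryNorm printed in the breakdown above

      def term_score(freq, idf, field_norm, query_norm=QUERY_NORM):
          # ClassicSimilarity per-term clause score:
          # tf = sqrt(freq); queryWeight = idf * queryNorm;
          # fieldWeight = tf * idf * fieldNorm; score = queryWeight * fieldWeight
          tf = math.sqrt(freq)
          return (idf * query_norm) * (tf * idf * field_norm)

      w_of       = term_score(freq=16.0, idf=1.5637573, field_norm=0.0625)  # ~0.02970992
      w_subject  = term_score(freq=2.0,  idf=3.576596,  field_norm=0.0625)  # ~0.05494872
      w_headings = term_score(freq=2.0,  idf=4.849944,  field_norm=0.0625)  # ~0.10103952

      inner = w_headings * 0.5                   # coord(1/2): 1 of 2 nested clauses matched
      total = (w_of + w_subject + inner) * 0.6   # coord(3/5): 3 of 5 top-level clauses matched
      print(round(total, 8))                     # ~0.08110704, the 0.08 shown next to the title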
  2. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.08
    0.07503301 = product of:
      0.12505502 = sum of:
        0.021008085 = weight(_text_:of in 7242) [ClassicSimilarity], result of:
          0.021008085 = score(doc=7242,freq=8.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.27643585 = fieldWeight in 7242, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.07770923 = weight(_text_:subject in 7242) [ClassicSimilarity], result of:
          0.07770923 = score(doc=7242,freq=4.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.4470745 = fieldWeight in 7242, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.026337698 = product of:
          0.052675396 = sum of:
            0.052675396 = weight(_text_:22 in 7242) [ClassicSimilarity], result of:
              0.052675396 = score(doc=7242,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.30952093 = fieldWeight in 7242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7242)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Reports results of a comparative study of 3 classification schemes: LCC, DDC and the NLM Classification, to determine their effectiveness in classifying materials on health insurance. Examined 2 hypotheses: that there would be no differences in the scatter of the 3 classification schemes; and that there would be overlap between all 3 schemes but no difference in the classes into which the subject was placed. There was subject scatter in all 3 classification schemes and little overlap between the 3 systems.
    Date
    22. 4.1997 21:10:19
  3. Zhang, J.; Zeng, M.L.: ¬A new similarity measure for subject hierarchical structures (2014) 0.07
    0.069998845 = product of:
      0.11666474 = sum of:
        0.016080966 = weight(_text_:of in 1778) [ClassicSimilarity], result of:
          0.016080966 = score(doc=1778,freq=12.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.21160212 = fieldWeight in 1778, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.08412271 = weight(_text_:subject in 1778) [ClassicSimilarity], result of:
          0.08412271 = score(doc=1778,freq=12.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.48397237 = fieldWeight in 1778, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 1778) [ClassicSimilarity], result of:
              0.032922123 = score(doc=1778,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 1778, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1778)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, nodes on two hierarchical structures are projected onto a two-dimensional space, respectively, and both structural similarity and subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which the structural similarity contributes to the overall similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts. It suggests that the similarity method achieved satisfactory results. Practical implications - Hierarchical structures that are found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar. Originality/value - Both structural similarity and subject similarity of nodes were considered in the proposed similarity method, and the extent to which the structural similarity contributes to the overall similarity can be adjusted. In addition, a new evaluation method for hierarchical structure similarity was presented.
    Date
    8. 4.2015 16:22:13
    Source
    Journal of documentation. 70(2014) no.3, S.364-391
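    The measure described above combines the structural similarity of nodes projected onto a two-dimensional space with their subject similarity, weighted by an adjustable parameter. The paper's actual projection and formulas are not reproduced in the abstract, so the sketch below is only illustrative: the inverse-distance structural score, the Jaccard term overlap, and the linear combination are assumptions used to show the general shape of such a parameter-weighted measure.

      import math

      def structural_sim(pos_a, pos_b):
          # Closeness of two nodes in the projected 2-D space (assumed: inverse Euclidean distance)
          return 1.0 / (1.0 + math.dist(pos_a, pos_b))

      def subject_sim(terms_a, terms_b):
          # Overlap of the subject terms attached to two nodes (assumed: Jaccard coefficient)
          a, b = set(terms_a), set(terms_b)
          return len(a & b) / len(a | b) if a | b else 0.0

      def node_sim(pos_a, terms_a, pos_b, terms_b, alpha=0.5):
          # alpha controls how much structural similarity contributes (0 = subject only, 1 = structure only)
          return alpha * structural_sim(pos_a, pos_b) + (1 - alpha) * subject_sim(terms_a, terms_b)

      print(node_sim((1.0, 2.0), {"health", "insurance"},
                     (1.5, 2.5), {"insurance", "medicine"}, alpha=0.3))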
  4. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.07
    0.06605006 = product of:
      0.08256257 = sum of:
        0.036069524 = weight(_text_:list in 2467) [ClassicSimilarity], result of:
          0.036069524 = score(doc=2467,freq=2.0), product of:
            0.25191793 = queryWeight, product of:
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.04859849 = queryNorm
            0.14317966 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.01353415 = weight(_text_:of in 2467) [ClassicSimilarity], result of:
          0.01353415 = score(doc=2467,freq=34.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.17808972 = fieldWeight in 2467, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.017171476 = weight(_text_:subject in 2467) [ClassicSimilarity], result of:
          0.017171476 = score(doc=2467,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.098790444 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.015787425 = product of:
          0.03157485 = sum of:
            0.03157485 = weight(_text_:headings in 2467) [ClassicSimilarity], result of:
              0.03157485 = score(doc=2467,freq=2.0), product of:
                0.23569997 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.04859849 = queryNorm
                0.13396205 = fieldWeight in 2467, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2467)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desired. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful for those both new to or familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
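    The movie-listing passage above is essentially a how-to for faceted browsing: every record carries the same set of facets, and a browse along any facet is just a filter over the records. A small sketch of that idea (the listings data and field names are hypothetical, invented for illustration):

      # Each record carries the same facets; browsing = filtering on any of them.
      listings = [
          {"movie": "Die Another Day", "theatre": "Roxy",      "neighbourhood": "Little Finland", "showtime": "19:30"},
          {"movie": "Die Another Day", "theatre": "Paramount", "neighbourhood": "Downtown",       "showtime": "21:00"},
          {"movie": "Chicago",         "theatre": "Roxy",      "neighbourhood": "Little Finland", "showtime": "14:15"},
      ]

      def browse(records, **facets):
          # Keep the records that match every requested facet value.
          return [r for r in records if all(r.get(k) == v for k, v in facets.items())]

      print(browse(listings, theatre="Roxy"))                  # "What's showing at the Roxy tonight?"
      print(browse(listings, neighbourhood="Little Finland"))  # "Is anything playing in Little Finland?"
      print(browse(listings, movie="Die Another Day"))         # "Where is the new James Bond movie playing?"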
  5. Broughton, V.: Essential classification (2004) 0.07
    0.0650739 = product of:
      0.10845649 = sum of:
        0.021333812 = weight(_text_:of in 2824) [ClassicSimilarity], result of:
          0.021333812 = score(doc=2824,freq=132.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.28072193 = fieldWeight in 2824, product of:
              11.489125 = tf(freq=132.0), with freq of:
                132.0 = termFreq=132.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.051399823 = weight(_text_:subject in 2824) [ClassicSimilarity], result of:
          0.051399823 = score(doc=2824,freq=28.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.295712 = fieldWeight in 2824, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.035722863 = product of:
          0.071445726 = sum of:
            0.071445726 = weight(_text_:headings in 2824) [ClassicSimilarity], result of:
              0.071445726 = score(doc=2824,freq=16.0), product of:
                0.23569997 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.04859849 = queryNorm
                0.3031215 = fieldWeight in 2824, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Classification is a crucial skill for all information workers involved in organizing collections, but it is a difficult concept to grasp - and is even more difficult to put into practice. Essential Classification offers full guidance on how to go about classifying a document from scratch. This much-needed text leads the novice classifier step by step through the basics of subject cataloguing, with an emphasis on practical document analysis and classification. It deals with fundamental questions of the purpose of classification in different situations, and the needs and expectations of end users. The novice is introduced to the ways in which document content can be assessed, and how this can best be expressed for translation into the language of specific indexing and classification systems. The characteristics of the major general schemes of classification are discussed, together with their suitability for different classification needs.
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display are reader-friendly; they are in fact what make this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
    In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus on the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWK, etc.). If it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of essentials of classification scheme application. The following five chapters present in turn each one of the three major and currently used bibliographic classification schemes, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) deserves only a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on par with the first seven chapters for its theoretical content. Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, advantages and disadvantages of using print versions or e-versions of classification schemes, choice of classification scheme, general versus special scheme. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters on classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation."
  6. Vickery, B.C.: Systematic subject indexing (1985) 0.06
    0.06337852 = product of:
      0.10563086 = sum of:
        0.018936433 = weight(_text_:of in 3636) [ClassicSimilarity], result of:
          0.018936433 = score(doc=3636,freq=26.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.2491759 = fieldWeight in 3636, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3636)
        0.06143454 = weight(_text_:subject in 3636) [ClassicSimilarity], result of:
          0.06143454 = score(doc=3636,freq=10.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.35344344 = fieldWeight in 3636, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=3636)
        0.02525988 = product of:
          0.05051976 = sum of:
            0.05051976 = weight(_text_:headings in 3636) [ClassicSimilarity], result of:
              0.05051976 = score(doc=3636,freq=2.0), product of:
                0.23569997 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.04859849 = queryNorm
                0.21433927 = fieldWeight in 3636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3636)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Brian C. Vickery, Director and Professor, School of Library, Archive and Information Studies, University College, London, is a prolific writer on classification and information retrieval. This paper was one of the earliest to present initial efforts by the Classification Research Group (q.v.). In it he clearly outlined the need for classification in subject indexing, which, at the time he wrote, was not a commonplace understanding. In fact, some indexing systems were made in the first place specifically to avoid general classification systems which were out of date in all fast-moving disciplines, especially in the "hard" sciences. Vickery picked up Julia Pettee's work (q.v.) on the concealed classification in subject headings (1947) and added to it, mainly adopting concepts from the work of S. R. Ranganathan (q.v.). He had already published a paper on notation in classification, pointing out connections between notation, words, and the concepts which they represent. He was especially concerned about the structure of notational symbols as such symbols represented relationships among subjects. Vickery also emphasized that index terms cover all aspects of a subject so that, in addition to having a basis in classification, the ideal index system should also have standardized nomenclature, as well as show evidence of a systematic classing of elementary terms. The necessary linkage between system and terms should be one of a number of methods, notably:
    - adding a relational term ("operator") to identify and join terms;
    - indicating grammatical case with terms where this would help clarify relationships; and
    - analyzing elementary terms to reveal fundamental categories where needed.
    He further added that a standard order for showing relational factors was highly desirable. Eventually, some years later, he was able to suggest such an order. This was accepted by his peers in the Classification Research Group, and utilized by Derek Austin in PRECIS (q.v.). Vickery began where Farradane began - with perception (a sound base according to current cognitive psychology). From this came further recognition of properties, parts, constituents, organs, effects, reactions, operations (physical and mental), added to the original "identity," "difference," "class membership," and "species." By defining categories more carefully, Vickery arrived at six (in addition to space (geographic) and time):
    - personality, thing, substance (e.g., dog, bicycle, rose)
    - part (e.g., paw, wheel, leaf)
    - substance (e.g., copper, water, butter)
    - action (e.g., scattering)
    - property (e.g., length, velocity)
    - operation (e.g., analysis, measurement)
    Thus, as early as 1953, the foundations were already laid for research that ultimately produced very sophisticated systems, such as PRECIS.
    Footnote
    Original in: Journal of documentation 9(1953) S.48-57.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
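    The abstract above names Vickery's fundamental categories and notes that he later proposed a standard citation order for arranging them. As a rough illustration of how such an order sorts the terms of a compound subject, here is a small sketch; the simplified category list, the order itself, and the term-to-category assignments are ours (the example terms are drawn from those quoted in the abstract), not Vickery's published order.

      # Assumed, simplified citation order for illustration only.
      CITATION_ORDER = ["thing", "part", "substance", "action", "property", "operation"]

      def cite(terms):
          # Arrange the terms of a compound subject by the category order above.
          rank = {c: i for i, c in enumerate(CITATION_ORDER)}
          return [t for t, _ in sorted(terms.items(), key=lambda kv: rank[kv[1]])]

      # e.g. "measurement of the length of copper bicycle wheels"
      terms = {"bicycle": "thing", "wheel": "part", "copper": "substance",
               "length": "property", "measurement": "operation"}
      print(" : ".join(cite(terms)))  # bicycle : wheel : copper : length : measurement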
  7. Gnoli, C.: Metadata about what? : distinguishing between ontic, epistemic, and documental dimensions in knowledge organization (2012) 0.06
    0.06115011 = product of:
      0.10191685 = sum of:
        0.02177373 = weight(_text_:of in 323) [ClassicSimilarity], result of:
          0.02177373 = score(doc=323,freq=22.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.28651062 = fieldWeight in 323, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=323)
        0.048568267 = weight(_text_:subject in 323) [ClassicSimilarity], result of:
          0.048568267 = score(doc=323,freq=4.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.27942157 = fieldWeight in 323, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=323)
        0.03157485 = product of:
          0.0631497 = sum of:
            0.0631497 = weight(_text_:headings in 323) [ClassicSimilarity], result of:
              0.0631497 = score(doc=323,freq=2.0), product of:
                0.23569997 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.04859849 = queryNorm
                0.2679241 = fieldWeight in 323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=323)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The spread of many new media and formats is changing the scenario faced by knowledge organizers: as printed monographs are no longer the only standard form of knowledge carrier, the traditional kind of knowledge organization (KO) systems based on academic disciplines is put into question. A sounder foundation can be provided by an analysis of the different dimensions concurring to form the content of any knowledge item - what Brian Vickery described as the steps "from the world to the classifier." The ultimate referents of documents are the phenomena of the real world, that can be ordered by ontology, the study of what exists. Phenomena coexist in subjects with the perspectives by which they are considered, pertaining to epistemology, and with the formal features of knowledge carriers, adding a further, pragmatic layer. All these dimensions can be accounted for in metadata, but this is often done in mixed ways, making indexes less rigorous and less interoperable. For example, while facet analysis was originally developed for subject indexing, many "faceted" interfaces today mix subject facets with form facets, and schemes presented as "ontologies" for the "semantic Web" also code for non-semantic information. In bibliographic classifications, phenomena are often confused with the disciplines dealing with them, the latter being assumed to be the most useful starting point, for users will have either one or another perspective. A general citation order of dimensions - phenomena, perspective, carrier - is recommended, helping to concentrate the most relevant information at the beginning of headings.
  8. Winske, E.: ¬The development and structure of an urban, regional, and local documents classification scheme (1996) 0.06
    0.057265695 = product of:
      0.095442824 = sum of:
        0.024317201 = weight(_text_:of in 7241) [ClassicSimilarity], result of:
          0.024317201 = score(doc=7241,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.31997898 = fieldWeight in 7241, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7241)
        0.04808013 = weight(_text_:subject in 7241) [ClassicSimilarity], result of:
          0.04808013 = score(doc=7241,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.27661324 = fieldWeight in 7241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7241)
        0.023045486 = product of:
          0.04609097 = sum of:
            0.04609097 = weight(_text_:22 in 7241) [ClassicSimilarity], result of:
              0.04609097 = score(doc=7241,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.2708308 = fieldWeight in 7241, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7241)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Discusses the reasons for the decision, taken at Florida International University Library, to develop an in-house classification system for its local documents collections. Reviews the structures of existing classification systems, noting their strengths and weaknesses in relation to the development of an in-house system, and describes the 5 components of the new system: geography, subject categories, extensions for population group and/or function, extensions for type of publication, and title/series designator.
    Footnote
    Paper presented at conference on 'Local documents, a new classification scheme' at the Research Caucus of the Florida Library Association Annual Conference, Fort Lauderdale, Florida 22 Apr 95
    Source
    Journal of educational media and library sciences. 34(1996) no.1, S.19-34
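    The abstract above lists the five components of the in-house scheme: geography, subject categories, extensions for population group and/or function, extensions for type of publication, and title/series designator. A small sketch of how a class mark could be assembled from such components; the codes and the hyphen-separated layout are invented, since the article's actual notation is not reproduced here.

      def class_mark(geography, subject, population=None, pub_type=None, title_no=None):
          # Assemble the five components in order, skipping optional ones that are absent.
          parts = [geography, subject] + [p for p in (population, pub_type, title_no) if p]
          return "-".join(parts)

      # e.g. a hypothetical Miami-Dade budget report, series no. 3
      print(class_mark("MIA", "BUD", pub_type="REP", title_no="03"))  # MIA-BUD-REP-03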
  9. Green, R.: Relational aspects of subject authority control : the contributions of classificatory structure (2015) 0.05
    0.052081835 = product of:
      0.08680306 = sum of:
        0.02177373 = weight(_text_:of in 2282) [ClassicSimilarity], result of:
          0.02177373 = score(doc=2282,freq=22.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.28651062 = fieldWeight in 2282, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2282)
        0.048568267 = weight(_text_:subject in 2282) [ClassicSimilarity], result of:
          0.048568267 = score(doc=2282,freq=4.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.27942157 = fieldWeight in 2282, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2282)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 2282) [ClassicSimilarity], result of:
              0.032922123 = score(doc=2282,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 2282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2282)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The structure of a classification system contributes in a variety of ways to representing semantic relationships between its topics in the context of subject authority control. We explore this claim using the Dewey Decimal Classification (DDC) system as a case study. The DDC links its classes into a notational hierarchy, supplemented by a network of relationships between topics, expressed in class descriptions and in the Relative Index (RI). Topics/subjects are expressed both by the natural language text of the caption and notes (including Manual notes) in a class description and by the controlled vocabulary of the RI's alphabetic index, which shows where topics are treated in the classificatory structure. The expression of relationships between topics depends on paradigmatic and syntagmatic relationships between natural language terms in captions, notes, and RI terms; on the meaning of specific note types; and on references recorded between RI terms. The specific means used in the DDC for capturing hierarchical (including disciplinary), equivalence and associative relationships are surveyed.
    Date
    8.11.2015 21:27:22
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
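    The abstract above describes how the DDC couples a notational hierarchy with class descriptions (caption and notes) and with the controlled vocabulary of the Relative Index, whose terms point topics to the classes that treat them. A minimal sketch of that structure follows; the class records and index terms are simplified placeholders for illustration, not actual DDC data.

      from dataclasses import dataclass, field

      @dataclass
      class ClassEntry:
          notation: str                 # class number; hierarchy is implicit in the notation
          caption: str                  # natural-language caption
          notes: list = field(default_factory=list)

      schedule = {
          "636":   ClassEntry("636",   "Animal husbandry"),
          "636.7": ClassEntry("636.7", "Dogs", notes=["Including working dogs"]),
      }
      relative_index = {"Dogs--husbandry": "636.7", "Livestock": "636"}  # RI term -> class number

      topic = "Dogs--husbandry"
      entry = schedule[relative_index[topic]]
      broader = schedule[entry.notation.split(".")[0]]  # hierarchy recovered from the notation
      print(topic, "->", entry.notation, entry.caption, "| broader:", broader.notation, broader.caption)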
  10. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.05
    0.051081654 = product of:
      0.085136086 = sum of:
        0.036069524 = weight(_text_:list in 3262) [ClassicSimilarity], result of:
          0.036069524 = score(doc=3262,freq=2.0), product of:
            0.25191793 = queryWeight, product of:
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.04859849 = queryNorm
            0.14317966 = fieldWeight in 3262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.024782432 = weight(_text_:of in 3262) [ClassicSimilarity], result of:
          0.024782432 = score(doc=3262,freq=114.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.32610077 = fieldWeight in 3262, product of:
              10.677078 = tf(freq=114.0), with freq of:
                114.0 = termFreq=114.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.024284134 = weight(_text_:subject in 3262) [ClassicSimilarity], result of:
          0.024284134 = score(doc=3262,freq=4.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.13971078 = fieldWeight in 3262, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
      0.6 = coord(3/5)
    
    Footnote
    Rez. in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend of lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well developed research agendas such as Tudhope, and Priss; and relative newcomers such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of resources for those wishing to engage in further study, that now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address this first part of Gnoli's agenda. Priss provides detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited. Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
    Several of the papers are clearly written as primers and neatly address the second agenda item: attracting others to the study and use of facet analysis. The most valuable papers are written in clear, approachable language. Vickery's paper (p. 145-160) is a clarion call for faceted classification and facet analysis. The heart of the paper is a primer for central concepts and techniques. Vickery explains the value of using faceted classification in document retrieval. Also provided are potential solutions to thorny interface and display issues with facets. Vickery looks to complementary themes in knowledge organization, such as thesauri and ontologies as potential areas for extending the facet concept. Broughton (p. 193-210) describes a rigorous approach to the application of facet analysis in the creation of a compatible thesaurus from the schedules of the 2nd edition of the Bliss Classification (BC2). This discussion of exemplary faceted thesauri, recent standards work, and difficulties encountered in the project will provide valuable guidance for future research in this area. Slavic (p. 257-271) provides a challenge to make faceted classification come 'alive' through promoting the use of machine-readable formats for use and exchange in applications such as Topic Maps and SKOS (Simple Knowledge Organization Systems), and as supported by the standard BS8723 (2005) Structured Vocabulary for Information Retrieval. She also urges designers of faceted classifications to get involved in standards work. Cheti and Paradisi (p. 223-241) outline a basic approach to converting an existing subject indexing tool, the Nuovo Soggetario, into a faceted thesaurus through the use of facet analysis. This discussion, well grounded in the canonical literature, may well serve as a primer for future efforts. Also useful for those who wish to construct faceted thesauri is the article by Tudhope and Binding (p. 211-222). This contains an outline of basic elements to be found in exemplar faceted thesauri, and a discussion of project FACET (Faceted Access to Cultural heritage Terminology) with algorithmically-based semantic query expansion in a dataset composed of items from the National Museum of Science and Industry indexed with AAT (Art and Architecture Thesaurus). This paper looks to the future hybridization of ontologies and facets through standards developments such as SKOS because of the "lightweight semantics" inherent in facets.
    Two of the papers revisit the interaction of facets with the theory of integrative levels, which posits that the organization of the natural world reflects increasingly interdependent complexity. This approach was tested as a basis for the creation of faceted classifications in the 1960s. These contemporary treatments of integrative levels are not discipline-driven as were the early approaches, but instead are ontological and phenomenological in focus. Dahlberg (p. 161-172) outlines the creation of the ICC (Information Coding System) and the application of the Systematifier in the generation of facets and the creation of a fully faceted classification. Gnoli (p. 177-192) proposes the use of fundamental categories as a way to redefine facets and fundamental categories in "more universal and level-independent ways" (p. 192). Given that Axiomathes has a stated focus on "contemporary issues in cognition and ontology" and the following thesis: "that real advances in contemporary science may depend upon a consideration of the origins and intellectual history of ideas at the forefront of current research," this venue seems well suited for the implementation of the stated agenda, to illustrate complementary approaches and to stimulate research. As situated, this special issue may well serve as a bridge to a more interdisciplinary dialogue about facet analysis than has previously been the case."
  11. Dousa, T.M.: Categories and the architectonics of system in Julius Otto Kaiser's method of systematic indexing (2014) 0.05
    0.049779568 = product of:
      0.08296594 = sum of:
        0.032161932 = weight(_text_:of in 1418) [ClassicSimilarity], result of:
          0.032161932 = score(doc=1418,freq=48.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.42320424 = fieldWeight in 1418, product of:
              6.928203 = tf(freq=48.0), with freq of:
                48.0 = termFreq=48.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.034342952 = weight(_text_:subject in 1418) [ClassicSimilarity], result of:
          0.034342952 = score(doc=1418,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 1418) [ClassicSimilarity], result of:
              0.032922123 = score(doc=1418,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 1418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1418)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Categories, or concepts of high generality representing the most basic kinds of entities in the world, have long been understood to be a fundamental element in the construction of knowledge organization systems (KOSs), particularly faceted ones. Commentators on facet analysis have tended to foreground the role of categories in the structuring of controlled vocabularies and the construction of compound index terms, and the implications of this for subject representation and information retrieval. Less attention has been paid to the variety of ways in which categories can shape the overall architectonic framework of a KOS. This case study explores the range of functions that categories took in structuring various aspects of an early analytico-synthetic KOS, Julius Otto Kaiser's method of Systematic Indexing (SI). Within SI, categories not only functioned as mechanisms to partition an index vocabulary into smaller groupings of terms and as elements in the construction of compound index terms but also served as means of defining the units of indexing, or index items, incorporated into an index; determining the organization of card index files and the articulation of the guide card system serving as navigational aids thereto; and setting structural constraints on the establishment of cross-references between terms. In all these ways, Kaiser's system of categories contributed to the general systematicity of SI.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  12. Svenonius, E.: Ranganathan and classification science (1992) 0.05
    0.04918603 = product of:
      0.12296507 = sum of:
        0.027791087 = weight(_text_:of in 2654) [ClassicSimilarity], result of:
          0.027791087 = score(doc=2654,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.36569026 = fieldWeight in 2654, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2654)
        0.09517398 = weight(_text_:subject in 2654) [ClassicSimilarity], result of:
          0.09517398 = score(doc=2654,freq=6.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.5475522 = fieldWeight in 2654, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=2654)
      0.4 = coord(2/5)
    
    Abstract
    This article discusses some of Ranganathan's contributions to the productive, practical and theoretical aspects of classification science. These include: (1) a set of design criteria to guide the design of schemes for knowledge/subject classification; (2) a conceptual framework for organizing the universe of subjects; and (3) an understanding of the general principles underlying subject disciplines and classificatory languages. It concludes that Ranganathan has contributed significantly to laying the foundations for a science of subject classification.
  13. Gnoli, C.: Classifying phenomena : Part 1: dimensions (2016) 0.05
    0.049112182 = product of:
      0.12278045 = sum of:
        0.10202001 = weight(_text_:list in 3417) [ClassicSimilarity], result of:
          0.10202001 = score(doc=3417,freq=4.0), product of:
            0.25191793 = queryWeight, product of:
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.04859849 = queryNorm
            0.4049732 = fieldWeight in 3417, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3417)
        0.020760437 = weight(_text_:of in 3417) [ClassicSimilarity], result of:
          0.020760437 = score(doc=3417,freq=20.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.27317715 = fieldWeight in 3417, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3417)
      0.4 = coord(2/5)
    
    Abstract
    This is the first part of a study on the classification of phenomena. It starts by addressing the status of classification schemes among knowledge organization systems (KOSs), as some of their features have been overlooked in recent reviews of KOS types. It then considers the different dimensions implied in a KOS, which include: the observed phenomena; the cultural and disciplinary perspective under which they are treated; the features of the documents carrying such treatment; the collections of such documents as managed in libraries, archives or museums; the information needs that prompt people to search and use these collections; and the people experiencing those needs. Until now, most library classification schemes have given priority to the perspective dimension, as they first list disciplines. However, an increasing number of voices are now considering the possibility of classification schemes that give priority to phenomena, as advocated in the León Manifesto. Although these schemes first list phenomena as their main classes, they can also express perspective or the other relevant dimensions that occur in a classified item. The independence of a phenomenon-based classification from the institutional divisions into disciplines contributes to giving knowledge organization a more proactive and influential role.
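    As a purely illustrative sketch, the dimensions listed above can be thought of as fields of a classified item, so that a phenomenon-based main class can still carry perspective and the other dimensions; the field names and sample values below are invented and are not taken from the article.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class ClassifiedItem:
          phenomenon: str                       # the observed phenomenon (main class)
          perspective: Optional[str] = None     # disciplinary/cultural treatment
          carrier: Optional[str] = None         # features of the document
          collection: Optional[str] = None      # holding library/archive/museum
          need: Optional[str] = None            # information need being served
          audience: Optional[str] = None        # people experiencing that need

      item = ClassifiedItem(phenomenon="rivers", perspective="hydrology",
                            carrier="journal article")
      print(item.phenomenon, "-", item.perspective)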
  14. Holman, E.E.: Statistical properties of large published classifications (1992) 0.05
    0.047061887 = product of:
      0.11765471 = sum of:
        0.03523163 = weight(_text_:of in 4250) [ClassicSimilarity], result of:
          0.03523163 = score(doc=4250,freq=10.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.46359703 = fieldWeight in 4250, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=4250)
        0.082423076 = weight(_text_:subject in 4250) [ClassicSimilarity], result of:
          0.082423076 = score(doc=4250,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.4741941 = fieldWeight in 4250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.09375 = fieldNorm(doc=4250)
      0.4 = coord(2/5)
    
    Abstract
    Reports the results of a survey of 23 published classifications taken from a variety of subject fields
    Source
    Journal of classification. 9(1992) no.2, S.187-210
  15. Minnigh, L.D.: Chaos in informatie, onderwerpsontsluiting en kennisoverdracht : de rol van de wetenschappelijke bibliotheek (1993) 0.05
    0.046472825 = product of:
      0.11618206 = sum of:
        0.021008085 = weight(_text_:of in 6606) [ClassicSimilarity], result of:
          0.021008085 = score(doc=6606,freq=8.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.27643585 = fieldWeight in 6606, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6606)
        0.09517398 = weight(_text_:subject in 6606) [ClassicSimilarity], result of:
          0.09517398 = score(doc=6606,freq=6.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.5475522 = fieldWeight in 6606, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=6606)
      0.4 = coord(2/5)
    
    Abstract
    Existing classification systems require constant expansion to accommodate new subject fields, while subject indexing techniques fail to display the relationships between subjects. Relational databases are currently being developed which will guide users through the differing levels of subjects, using the 'cartography of science'. Such developments will enable librarians to play a more interactive role in information retrieval and will have far-reaching consequences for the design of subject-indexing systems
  16. Foskett, D.J.: ¬The construction of a faceted classification for a special subject (1959) 0.05
    0.045816936 = product of:
      0.114542335 = sum of:
        0.018382076 = weight(_text_:of in 551) [ClassicSimilarity], result of:
          0.018382076 = score(doc=551,freq=2.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.24188137 = fieldWeight in 551, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=551)
        0.09616026 = weight(_text_:subject in 551) [ClassicSimilarity], result of:
          0.09616026 = score(doc=551,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.5532265 = fieldWeight in 551, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.109375 = fieldNorm(doc=551)
      0.4 = coord(2/5)
    
  17. Gnoli, C.: Classifying phenomena : Part 2: Types and levels (2017) 0.05
    0.045542862 = product of:
      0.11385715 = sum of:
        0.08656685 = weight(_text_:list in 3177) [ClassicSimilarity], result of:
          0.08656685 = score(doc=3177,freq=2.0), product of:
            0.25191793 = queryWeight, product of:
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.04859849 = queryNorm
            0.34363115 = fieldWeight in 3177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.046875 = fieldNorm(doc=3177)
        0.027290303 = weight(_text_:of in 3177) [ClassicSimilarity], result of:
          0.027290303 = score(doc=3177,freq=24.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.3591007 = fieldWeight in 3177, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3177)
      0.4 = coord(2/5)
    
    Abstract
    Part 1 made the case that phenomena can be the primary unit of classification; this second part considers some basic principles for grouping and sorting phenomena. Entities can be grouped together on the basis of both their similarity (morphology) and their common origin (phylogeny). The resulting groups will form the classical hierarchical chains of types and subtypes. At every hierarchical degree, phenomena can form ordered sets (arrays), where their sorting can reflect levels of increasing organization, corresponding to an evolutionary order of appearance (emergence). The theory of levels of reality has been investigated by many philosophers and applied to knowledge organization systems by various authors, whose work is briefly reviewed. At the broadest degree, it allows us to identify some major strata of phenomena (forms, matter, life, minds, societies and culture), in turn divided into layers. A list of twenty-six layers is proposed to form the main classes of the Integrative Levels Classification system. A combination of morphology and phylogeny can determine whether a given phenomenon should be a type of an existing level or a level of its own.
  18. Tennis, J.T.: ¬The strange case of eugenics : a subject's ontogeny in a long-lived classification scheme and the question of collocative integrity (2012) 0.04
    0.04296766 = product of:
      0.10741915 = sum of:
        0.02970992 = weight(_text_:of in 275) [ClassicSimilarity], result of:
          0.02970992 = score(doc=275,freq=16.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.39093933 = fieldWeight in 275, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=275)
        0.07770923 = weight(_text_:subject in 275) [ClassicSimilarity], result of:
          0.07770923 = score(doc=275,freq=4.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.4470745 = fieldWeight in 275, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=275)
      0.4 = coord(2/5)
    
    Abstract
    This article introduces the problem of collocative integrity present in long-lived classification schemes that undergo several changes. A case study of the subject "eugenics" in the Dewey Decimal Classification is presented to illustrate this phenomenon. Eugenics is strange because of the kinds of changes it undergoes. The article closes with a discussion of subject ontogeny as the name for this phenomenon and describes implications for information searching and browsing.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1350-1359
  19. Szostak, R.: ¬A schema for unifying human science : interdisciplinary perspectives on culture (2003) 0.04
    0.042964067 = product of:
      0.10741016 = sum of:
        0.08656685 = weight(_text_:list in 803) [ClassicSimilarity], result of:
          0.08656685 = score(doc=803,freq=2.0), product of:
            0.25191793 = queryWeight, product of:
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.04859849 = queryNorm
            0.34363115 = fieldWeight in 803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.183657 = idf(docFreq=673, maxDocs=44218)
              0.046875 = fieldNorm(doc=803)
        0.020843314 = weight(_text_:of in 803) [ClassicSimilarity], result of:
          0.020843314 = score(doc=803,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.2742677 = fieldWeight in 803, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=803)
      0.4 = coord(2/5)
    
    Abstract
    This book develops a schema consisting of a hierarchically organized list of the phenomena of interest to human scientists and of the causal links (influences) that exist among them. This organizing device, and particularly the "unpacking" of "culture" into its constituent phenomena, allows the true complexity of culture to be captured. Unpacking also allows us to sail between the twin dangers of cultural bigotry and cultural relativism.
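    A minimal sketch of such a schema as a data structure: phenomena arranged in a hierarchy, with influences recorded as directed links between them. All names below are invented placeholders, not Szostak's actual categories or links.

      # Hypothetical fragment: a hierarchy of phenomena plus directed causal links.
      hierarchy = {
          "culture": ["values", "customs"],
          "economy": ["trade"],
      }
      influences = [("values", "trade"), ("trade", "customs")]

      def influenced_by(phenomenon):
          # All phenomena recorded as directly influencing the given one.
          return [src for src, dst in influences if dst == phenomenon]

      print(influenced_by("trade"))  # ['values']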
    Footnote
    Review in: KO 39(2012) no.4, S.300-303 (M.J. Fox). See also: Szostak, R.: Speaking truth to power in classification: response to Fox's review of my work; KO 39:4, 300. In: Knowledge organization. 40(2013) no.1, S.76-77.
  20. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.04
    0.04257594 = product of:
      0.070959896 = sum of:
        0.018936433 = weight(_text_:of in 2346) [ClassicSimilarity], result of:
          0.018936433 = score(doc=2346,freq=26.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.2491759 = fieldWeight in 2346, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.038854614 = weight(_text_:subject in 2346) [ClassicSimilarity], result of:
          0.038854614 = score(doc=2346,freq=4.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.22353725 = fieldWeight in 2346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.013168849 = product of:
          0.026337698 = sum of:
            0.026337698 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
              0.026337698 = score(doc=2346,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.15476047 = fieldWeight in 2346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Purpose - The potential and benefits of classification schemes and thesauri in building organizational taxonomies are not yet fully exploited by organizations, and empirical data on building an organizational taxonomy top-down from classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built by using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from the terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of the structures and term relationships found in classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of categories was made uniform according to a commonly used standard, and the consistency principle was employed to make the taxonomy structure and categories neater. Validation of the draft taxonomy through consultations with the stakeholders further refined the taxonomy. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their efforts to develop taxonomies for organizing content and aiding navigation on organizational sites.
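    The top-down construction described above lends itself to a simple tree of categories, each tagged with the vocabulary it was drawn from. A minimal sketch follows; the category labels and source notations are illustrative, not the actual taxonomy from the study.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Category:
          label: str
          source: str                     # e.g. "DDC", "LISA thesaurus", "stakeholder"
          children: List["Category"] = field(default_factory=list)

      # Hypothetical fragment of the subject facet for an information-studies division.
      root = Category("Information studies", "stakeholder", [
          Category("Information retrieval", "DDC 025.04", [
              Category("Indexing", "LISA thesaurus"),
          ]),
          Category("Knowledge organization", "Information Science Taxonomy"),
      ])

      def navigate(cat, depth=0):
          # Print the hierarchy as a navigation outline.
          print("  " * depth + f"{cat.label}  [{cat.source}]")
          for child in cat.children:
              navigate(child, depth + 1)

      navigate(root)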
    Date
    7.11.2008 15:22:04
    Source
    Journal of documentation. 64(2008) no.6, S.842-876

Types

  • a 206
  • m 21
  • el 10
  • s 4
  • b 2