Search (198 results, page 2 of 10)

  • × language_ss:"e"
  • × theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Svenonius, E.: Ranganathan and classification science (1992) 0.06
    0.058418572 = product of:
      0.20446499 = sum of:
        0.058800567 = weight(_text_:subject in 2654) [ClassicSimilarity], result of:
          0.058800567 = score(doc=2654,freq=6.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.5475522 = fieldWeight in 2654, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=2654)
        0.053833168 = weight(_text_:classification in 2654) [ClassicSimilarity], result of:
          0.053833168 = score(doc=2654,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 2654, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=2654)
        0.03799808 = product of:
          0.07599616 = sum of:
            0.07599616 = weight(_text_:schemes in 2654) [ClassicSimilarity], result of:
              0.07599616 = score(doc=2654,freq=2.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.4729882 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
        0.053833168 = weight(_text_:classification in 2654) [ClassicSimilarity], result of:
          0.053833168 = score(doc=2654,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 2654, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=2654)
      0.2857143 = coord(4/14)
    
    Abstract
    This article discusses some of Ranganathan's contributions to the productive, practical and theoretical aspects of classification science. These include: (1) a set of design criteria to guide the designing of schemes for knowledge / subject classification; (2) a conceptual framework for organizing the universe of subjects; and (3) an understanding of the general principles underlying subject disciplines and classificatory languages. It concludes that Ranganathan has contributed significantly to laying the foundations for a science of subject classification.
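    The explain trees in these results follow Lucene's ClassicSimilarity (TF-IDF) scoring: each term weight is queryWeight (idf × queryNorm) times fieldWeight (√freq × idf × fieldNorm), and the matching clauses are summed and scaled by the coord factor. As a minimal sketch, the tree for result 1 (doc 2654) can be re-computed by hand; all constants below are copied from the explain output above, and the helper name `term_score` is ours, not Lucene's:

    ```python
    import math

    def term_score(freq, idf, field_norm, query_norm=0.03002521):
        """ClassicSimilarity clause: queryWeight * fieldWeight, tf(freq) = sqrt(freq)."""
        query_weight = idf * query_norm                    # idf * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
        return query_weight * field_weight

    # The four clauses of the explain tree for doc 2654.
    subject = term_score(freq=6.0, idf=3.576596, field_norm=0.0625)
    classification = term_score(freq=8.0, idf=3.1847067, field_norm=0.0625)
    # "schemes" sits inside a nested sum with coord(1/2), hence the 0.5 factor.
    schemes = term_score(freq=2.0, idf=5.3512506, field_norm=0.0625) * 0.5

    # "classification" appears twice in the tree; 4 of 14 clauses matched -> coord(4/14).
    total = (subject + classification + schemes + classification) * (4 / 14)
    ```

    Evaluating this reproduces the reported figures (subject ≈ 0.058800567, total ≈ 0.058418572) to within float rounding.
    
    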
  2. Furner, J.; Dunbar, A.W.: ¬The treatment of topics relating to people of mixed race in bibliographic classification schemes : a critical race-theoretic approach (2004) 0.06
    0.057748508 = product of:
      0.20211977 = sum of:
        0.0526639 = weight(_text_:classification in 2640) [ClassicSimilarity], result of:
          0.0526639 = score(doc=2640,freq=10.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.55075383 = fieldWeight in 2640, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2640)
        0.047020227 = product of:
          0.09404045 = sum of:
            0.09404045 = weight(_text_:schemes in 2640) [ClassicSimilarity], result of:
              0.09404045 = score(doc=2640,freq=4.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.58529305 = fieldWeight in 2640, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
        0.04977173 = weight(_text_:bibliographic in 2640) [ClassicSimilarity], result of:
          0.04977173 = score(doc=2640,freq=4.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.4258017 = fieldWeight in 2640, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2640)
        0.0526639 = weight(_text_:classification in 2640) [ClassicSimilarity], result of:
          0.0526639 = score(doc=2640,freq=10.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.55075383 = fieldWeight in 2640, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2640)
      0.2857143 = coord(4/14)
    
    Abstract
    The classification of documents about topics relating to people of mixed race is problematic, partly because of the obscurity of racial categorization in general, and partly because of the limitations and inherent biases of bibliographic classification schemes designed primarily for use in non-digital environments. Critical race theory is an approach that may prove useful in determining how classification systems such as the Dewey Decimal Classification should most appropriately be structured.
  3. Quinlan, E.; Rafferty, P.: Astronomy classification : towards a faceted classification scheme (2019) 0.06
    0.056143455 = product of:
      0.19650209 = sum of:
        0.030006537 = weight(_text_:subject in 5313) [ClassicSimilarity], result of:
          0.030006537 = score(doc=5313,freq=4.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27942157 = fieldWeight in 5313, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5313)
        0.07137337 = weight(_text_:classification in 5313) [ClassicSimilarity], result of:
          0.07137337 = score(doc=5313,freq=36.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.7464156 = fieldWeight in 5313, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5313)
        0.0237488 = product of:
          0.0474976 = sum of:
            0.0474976 = weight(_text_:schemes in 5313) [ClassicSimilarity], result of:
              0.0474976 = score(doc=5313,freq=2.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.2956176 = fieldWeight in 5313, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5313)
          0.5 = coord(1/2)
        0.07137337 = weight(_text_:classification in 5313) [ClassicSimilarity], result of:
          0.07137337 = score(doc=5313,freq=36.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.7464156 = fieldWeight in 5313, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5313)
      0.2857143 = coord(4/14)
    
    Abstract
    Astronomy classification is often overlooked in classification discourse. Its rarity and obscurity, especially within UK librarianship, suggest it is an underdeveloped strand of classification research and is possibly undervalued in modern librarianship. The purpose of this research is to investigate the suitability and practicalities of the discipline of astronomy adopting a subject-specific faceted classification scheme, and to provide a provisional outline of a special faceted astronomy classification scheme. The research demonstrates that the application of universal schemes for astronomy classification has left the interdisciplinary subject ill catered for and outdated, making accurate classification difficult for specialist astronomy collections. A faceted approach to classification development is supported by two qualitative literature-based research methods: historical research into astronomy classification and an analytico-synthetic classification case study. The subsequent classification development follows a pragmatic and scholarly-scientific approach and is constructed by means of instruction from the faceted classification guides of Vickery (1960) and Batley (2005), and the faceted classification principles of Ranganathan (1937). This research fills a gap within classification discourse on specialist interdisciplinary subjects, specifically astronomy, and demonstrates the best means for their classification. It provides a means of assessing further the value of faceted classification within astronomy librarianship.
  4. McIlwaine, I.C.: Where have all the flowers gone? : An investigation into the fate of some special classification schemes (2003) 0.06
    0.055654615 = product of:
      0.15583292 = sum of:
        0.016974261 = weight(_text_:subject in 2764) [ClassicSimilarity], result of:
          0.016974261 = score(doc=2764,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.15806471 = fieldWeight in 2764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
        0.040374875 = weight(_text_:classification in 2764) [ClassicSimilarity], result of:
          0.040374875 = score(doc=2764,freq=18.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.4222364 = fieldWeight in 2764, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
        0.03799808 = product of:
          0.07599616 = sum of:
            0.07599616 = weight(_text_:schemes in 2764) [ClassicSimilarity], result of:
              0.07599616 = score(doc=2764,freq=8.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.4729882 = fieldWeight in 2764, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2764)
          0.5 = coord(1/2)
        0.020110816 = weight(_text_:bibliographic in 2764) [ClassicSimilarity], result of:
          0.020110816 = score(doc=2764,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.17204987 = fieldWeight in 2764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
        0.040374875 = weight(_text_:classification in 2764) [ClassicSimilarity], result of:
          0.040374875 = score(doc=2764,freq=18.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.4222364 = fieldWeight in 2764, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
      0.35714287 = coord(5/14)
    
    Abstract
    Prior to the OPAC, many institutions devised classifications to suit their special needs. Others expanded or altered general schemes to accommodate specific approaches. A driving force in the creation of these classifications was the Classification Research Group, which celebrated its golden jubilee in 2002 and whose work created a framework and body of principles that remain valid for the retrieval needs of today. The paper surveys some of these special schemes and highlights the fundamental principles which remain valid. 1. Introduction The distinction between a general and a special classification scheme is made frequently in the textbooks, but is one that is sometimes difficult to draw. The Library of Congress Classification could be described as the special classification par excellence. Normally, however, a special classification is taken to be one that is restricted to a specific subject, quite often used in one specific context only (either a library or a bibliographic listing) or for a specific purpose such as a search engine, and it is in this sense that I propose to examine some of these schemes. Today, there is a widespread preference for searching on words as a supplement to the use of a standard system, usually the Dewey Decimal Classification (DDC). This is enhanced by the ability to search documents full-text in a computerized environment, a situation that did not exist 20 or 30 years ago. Today's situation is a great improvement in many ways, but it does depend upon the words used by the author and the searcher corresponding, and often presupposes the use of English. In libraries, the use of co-operative services and pre-catalogued records already provided with classification data has also spelt the demise of the special scheme. In many instances, the survival of a special classification depends upon its creator and, with the passage of time, this becomes inevitably more precarious.
  5. Hjoerland, B.: Facet analysis : the logical approach to knowledge organization (2013) 0.06
    0.055603623 = product of:
      0.19461267 = sum of:
        0.03364573 = weight(_text_:classification in 2720) [ClassicSimilarity], result of:
          0.03364573 = score(doc=2720,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.35186368 = fieldWeight in 2720, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2720)
        0.042440403 = product of:
          0.08488081 = sum of:
            0.08488081 = weight(_text_:bliss in 2720) [ClassicSimilarity], result of:
              0.08488081 = score(doc=2720,freq=2.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.3951839 = fieldWeight in 2720, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2720)
          0.5 = coord(1/2)
        0.08488081 = weight(_text_:bliss in 2720) [ClassicSimilarity], result of:
          0.08488081 = score(doc=2720,freq=2.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.3951839 = fieldWeight in 2720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2720)
        0.03364573 = weight(_text_:classification in 2720) [ClassicSimilarity], result of:
          0.03364573 = score(doc=2720,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.35186368 = fieldWeight in 2720, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2720)
      0.2857143 = coord(4/14)
    
    Abstract
    The facet-analytic paradigm is probably the most distinct approach to knowledge organization within Library and Information Science, and in many ways it has dominated what has been termed "modern classification theory". It was mainly developed by S.R. Ranganathan and the British Classification Research Group, but it is mostly based on principles of logical division developed more than two millennia ago. Colon Classification (CC) and Bliss 2 (BC2) are among the most important systems developed on this theoretical basis, but it has also influenced the development of other systems, such as the Dewey Decimal Classification (DDC), and is also applied in many websites. It still has a strong position in the field and is the most explicit and "pure" theoretical approach to knowledge organization (KO), though not by implication necessarily also the most important one. The strength of this approach is its logical principles and the way it provides structures in knowledge organization systems (KOS). The main weaknesses are (1) its lack of empirical basis and (2) its speculative ordering of knowledge without basis in the development or influence of theories and socio-historical studies. It seems to be based on the problematic assumption that relations between concepts are a priori and not established by the development of models, theories and laws.
  6. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.05
    0.05353349 = product of:
      0.14989378 = sum of:
        0.024005229 = weight(_text_:subject in 2346) [ClassicSimilarity], result of:
          0.024005229 = score(doc=2346,freq=4.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.22353725 = fieldWeight in 2346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.035607297 = weight(_text_:classification in 2346) [ClassicSimilarity], result of:
          0.035607297 = score(doc=2346,freq=14.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.37237754 = fieldWeight in 2346, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.046537954 = product of:
          0.09307591 = sum of:
            0.09307591 = weight(_text_:schemes in 2346) [ClassicSimilarity], result of:
              0.09307591 = score(doc=2346,freq=12.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.57928985 = fieldWeight in 2346, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
        0.035607297 = weight(_text_:classification in 2346) [ClassicSimilarity], result of:
          0.035607297 = score(doc=2346,freq=14.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.37237754 = fieldWeight in 2346, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.008136002 = product of:
          0.016272005 = sum of:
            0.016272005 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
              0.016272005 = score(doc=2346,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.15476047 = fieldWeight in 2346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
      0.35714287 = coord(5/14)
    
    Abstract
    Purpose - The potential and benefits of classification schemes and thesauri in building organizational taxonomies cannot yet be fully utilized by organizations, and empirical data on building an organizational taxonomy by the top-down approach of using classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built by using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and the subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from the terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of the structures and term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of categories was made uniform according to a commonly used standard, and the consistency principle was employed to make the taxonomy structure and categories neater. Validation of the draft taxonomy through consultations with the stakeholders further refined the taxonomy. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their efforts to develop taxonomies for organizing content and aiding navigation on organizational sites.
    Date
    7.11.2008 15:22:04
  7. Tennis, J.T.: ¬The strange case of eugenics : a subject's ontogeny in a long-lived classification scheme and the question of collocative integrity (2012) 0.05
    0.05121438 = product of:
      0.17925031 = sum of:
        0.048010457 = weight(_text_:subject in 275) [ClassicSimilarity], result of:
          0.048010457 = score(doc=275,freq=4.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.4470745 = fieldWeight in 275, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=275)
        0.046620894 = weight(_text_:classification in 275) [ClassicSimilarity], result of:
          0.046620894 = score(doc=275,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.48755667 = fieldWeight in 275, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=275)
        0.03799808 = product of:
          0.07599616 = sum of:
            0.07599616 = weight(_text_:schemes in 275) [ClassicSimilarity], result of:
              0.07599616 = score(doc=275,freq=2.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.4729882 = fieldWeight in 275, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.0625 = fieldNorm(doc=275)
          0.5 = coord(1/2)
        0.046620894 = weight(_text_:classification in 275) [ClassicSimilarity], result of:
          0.046620894 = score(doc=275,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.48755667 = fieldWeight in 275, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=275)
      0.2857143 = coord(4/14)
    
    Abstract
    This article introduces the problem of collocative integrity present in long-lived classification schemes that undergo several changes. A case study of the subject "eugenics" in the Dewey Decimal Classification is presented to illustrate this phenomenon. Eugenics is strange because of the kinds of changes it undergoes. The article closes with a discussion of subject ontogeny as the name for this phenomenon and describes implications for information searching and browsing.
  8. Foskett, D.J.: Facet analysis (2009) 0.05
    0.04680501 = product of:
      0.16381752 = sum of:
        0.033948522 = weight(_text_:subject in 3754) [ClassicSimilarity], result of:
          0.033948522 = score(doc=3754,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.31612942 = fieldWeight in 3754, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=3754)
        0.0380658 = weight(_text_:classification in 3754) [ClassicSimilarity], result of:
          0.0380658 = score(doc=3754,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.39808834 = fieldWeight in 3754, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=3754)
        0.053737402 = product of:
          0.107474804 = sum of:
            0.107474804 = weight(_text_:schemes in 3754) [ClassicSimilarity], result of:
              0.107474804 = score(doc=3754,freq=4.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.66890633 = fieldWeight in 3754, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3754)
          0.5 = coord(1/2)
        0.0380658 = weight(_text_:classification in 3754) [ClassicSimilarity], result of:
          0.0380658 = score(doc=3754,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.39808834 = fieldWeight in 3754, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=3754)
      0.2857143 = coord(4/14)
    
    Abstract
    The brothers Foskett, Anthony and Douglas, have both made major contributions to the theory and practice of subject analysis and description. Here, Douglas Foskett explains facet analysis, a vital technique in the development of both classification schemes and thesauri. Foskett himself created faceted classification schemes for specific disciplines, drawing from the philosophy of the great Indian classificationist, S.R. Ranganathan.
  9. Pocock, H.: Classification schemes : development and survival (1997) 0.05
    Abstract
    Discusses the development of classification schemes and their ability to adapt to and accommodate changes in the information world in order to survive. Examines the revision plans for the major classification schemes and the future use of classification search facilities for OPACs.
  10. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.05
    Footnote
    Rez. in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend: those with lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers, such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of resources for those wishing to engage in further study; these now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address this first part of Gnoli's agenda. Priss provides detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited.
Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
    Several of the papers are clearly written as primers and neatly address the second agenda item: attracting others to the study and use of facet analysis. The most valuable papers are written in clear, approachable language. Vickery's paper (p. 145-160) is a clarion call for faceted classification and facet analysis. The heart of the paper is a primer for central concepts and techniques. Vickery explains the value of using faceted classification in document retrieval. Also provided are potential solutions to thorny interface and display issues with facets. Vickery looks to complementary themes in knowledge organization, such as thesauri and ontologies as potential areas for extending the facet concept. Broughton (p. 193-210) describes a rigorous approach to the application of facet analysis in the creation of a compatible thesaurus from the schedules of the 2nd edition of the Bliss Classification (BC2). This discussion of exemplary faceted thesauri, recent standards work, and difficulties encountered in the project will provide valuable guidance for future research in this area. Slavic (p. 257-271) provides a challenge to make faceted classification come 'alive' through promoting the use of machine-readable formats for use and exchange in applications such as Topic Maps and SKOS (Simple Knowledge Organization Systems), and as supported by the standard BS8723 (2005) Structured Vocabulary for Information Retrieval. She also urges designers of faceted classifications to get involved in standards work. Cheti and Paradisi (p. 223-241) outline a basic approach to converting an existing subject indexing tool, the Nuovo Soggetario, into a faceted thesaurus through the use of facet analysis. This discussion, well grounded in the canonical literature, may well serve as a primer for future efforts. Also useful for those who wish to construct faceted thesauri is the article by Tudhope and Binding (p. 211-222). 
This contains an outline of basic elements to be found in exemplar faceted thesauri, and a discussion of project FACET (Faceted Access to Cultural heritage Terminology) with algorithmically-based semantic query expansion in a dataset composed of items from the National Museum of Science and Industry indexed with AAT (Art and Architecture Thesaurus). This paper looks to the future hybridization of ontologies and facets through standards developments such as SKOS because of the "lightweight semantics" inherent in facets.
    Two of the papers revisit the interaction of facets with the theory of integrative levels, which posits that the organization of the natural world reflects increasingly interdependent complexity. This approach was tested as a basis for the creation of faceted classifications in the 1960s. These contemporary treatments of integrative levels are not discipline-driven as were the early approaches, but instead are ontological and phenomenological in focus. Dahlberg (p. 161-172) outlines the creation of the ICC (Information Coding Classification) and the application of the Systematifier in the generation of facets and the creation of a fully faceted classification. Gnoli (p. 177-192) proposes the use of fundamental categories as a way to redefine facets and fundamental categories in "more universal and level-independent ways" (p. 192). Given that Axiomathes has a stated focus on "contemporary issues in cognition and ontology" and the following thesis: "that real advances in contemporary science may depend upon a consideration of the origins and intellectual history of ideas at the forefront of current research," this venue seems well suited for the implementation of the stated agenda, to illustrate complementary approaches and to stimulate research. As situated, this special issue may well serve as a bridge to a more interdisciplinary dialogue about facet analysis than has previously been the case."
  11. Beghtol, C.: Semantic validity : concepts of warrants in bibliographic classification systems (1986) 0.04
    Abstract
    This paper argues that the semantic axis of bibliographic classification systems can be found in the various warrants that have been used to justify the utility of classification systems. Classificationists, theorists, and critics have emphasized the syntactic aspects of classification theories and systems, but a number of semantic warrants can be identified. The evolution of four semantic warrants is traced through the development of twentieth-century classification theory: literary warrant, scientific/philosophical warrant, educational warrant, and cultural warrant. It is concluded that further examination of semantic warrants might make possible a rationalized approach to the creation of classification systems for particular uses. The attention of scholars on faceted schemes and classificatory structures had heretofore pulled our attention to the syntactic aspects (e.g., concept division and citation order), with semantics being considered more or less a question of the terms and their relationships and somewhat taken for granted, or at least construed as a unitary aspect. Attention is on the choice of the classes and their meaning, as well as their connection to the world, and not so much on their syntactic relationship. This notion is developed by providing an historical and conceptual overview of the various kinds of warrant discernible in working with bibliographic systems. In Beghtol's definition, warrant concerns more than just the selection of terms, but rather the mapping of a classification system to the context and uses.
  12. Vickery, B.C.: Faceted classification : A guide to construction and use of special schemes (1986) 0.04
    Abstract
    A perfect little book, with just 63 pages of text. From chapter A, Introduction, to U, Mechanization, it covers everything about making a faceted classification: what they are, why they are needed, how to do facet analysis, examples from existing faceted schemes, orderings, common subdivisions, the contents of each facet, notation, filing order, how to perform classification with the created system, and indexing. Each chapter is brief but has full coverage of the subject. "The technique of constructing a special faceted classification is not a settled, automatic, codified procedure. Nothing so complex as the field of knowledge could be analysed and organized by rule-of-thumb. We can therefore offer no more than a guide, describing tested procedures and discussing some difficulties." Vickery was a member of the Classification Research Group and one of the foremost classificationists.
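The synthesis step Vickery's guide covers (facet analysis, notation, citation order, classifying with the created system) can be sketched in miniature. The following is an illustrative toy only; all facet names, vocabulary terms, and notation symbols are invented for the example, not taken from Vickery's schemes:

```python
# Toy faceted scheme: facets listed in citation order, each mapping
# vocabulary terms to invented notation symbols.
FACETS = [
    ("Personality", {"alloys": "B", "ceramics": "C"}),
    ("Process", {"casting": "p", "welding": "q"}),
    ("Agent", {"laser": "3", "furnace": "4"}),
]

def synthesize(terms):
    """Build a compound class mark: take at most one term per facet,
    concatenating their symbols in citation order; facets not present
    in the subject are simply skipped."""
    mark = ""
    for _facet_name, vocabulary in FACETS:
        for term in terms:
            if term in vocabulary:
                mark += vocabulary[term]
                break
    return mark

# "welding of alloys by laser" -> Personality + Process + Agent
print(synthesize({"alloys", "welding", "laser"}))  # Bq3
```

The fixed citation order is what makes compound subjects file predictably; a real scheme would add notation for phase relations, common subdivisions, and filing rules, which this sketch omits.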
  13. Blake, J.: Some issues in the classification of zoology (2011) 0.04
    
    Abstract
    This paper identifies and discusses features of the classification of mammals that are relevant to the bibliographic classification of the subject. The tendency of zoological classifications to change, the differing sizes of groups of species, the use zoologists make of groupings other than taxa, and the links in zoology between classification and nomenclature, are identified as key themes the bibliographic classificationist needs to be aware of. The impact of cladistics, a novel classificatory method and philosophy adopted by zoologists in the last few decades, is identified as the defining feature of the current, rather turbulent, state of zoological classification. However because zoologists still employ some non-cladistic classifications, because cladistic classifications are in some way unsuited to optimal information storage and retrieval, and because some of their consequences for zoological classification are as yet unknown, bibliographic classifications cannot be modelled entirely on them.
    Content
    This paper is based on a thesis of the same title, completed as part of an MA in Library and Information Studies at University College London in 2009, and available at http://62.32.98.6/elibsql2uk_Z10300UK_Documents/Catalogued_PDFs/Some_issues_in_the_classification_of_zoology.PDF. Thanks are due to Vanda Broughton, who supervised the MA thesis; and to Diane Tough of the Natural History Museum, London, and Ann Sylph of the Zoological Society of London, who both provided valuable insights into the classification of zoological literature.
  14. Molholt, P.: Qualities of classification schemes for the Information Superhighway (1995) 0.04
    Abstract
    For my segment of this program I'd like to focus on some basic qualities of classification schemes. These qualities are critical to our ability to truly organize knowledge for access. As I see it, there are at least five qualities of note. The first one of these properties that I want to talk about is "authoritative." By this I mean standardized, but I mean more than standardized: standardized with a built-in consensus-building process. A classification scheme constructed by a collaborative, consensus-building process carries the approval, and the authority, of the discipline groups that contribute to it and that it affects... The next property of classification systems is "expandable," living, responsive, with a clear locus of responsibility for its continuous upkeep. The worst thing you can do with a thesaurus, or a classification scheme, is to finish it. You can't ever finish it because it reflects ongoing intellectual activity... The third property is "intuitive." That is, the system has to be approachable, it has to be transparent, or at least capable of being transparent. It has to have an underlying logic that supports the classification scheme but doesn't dominate it... The fourth property is "organized and logical." I advocate very strongly, and agree with Lois Chan, that classification must be based on a rule-based structure, on somebody's world-view of the syndetic structure... The fifth property is "universal," by which I mean the classification scheme needs to be usable by any specific system or application, and be available as a language for multiple purposes.
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
    Source
    Cataloging and classification quarterly. 21(1995) no.2, S.19-22
  15. Hjoerland, B.: ¬The methodology of constructing classification schemes : a discussion of the state-of-the-art (2003) 0.04
    Abstract
    Special classifications have been somewhat neglected in KO compared to general classifications. The methodology of constructing special classifications is important, however, also for the methodology of constructing general classification schemes. The methodology of constructing special classifications can be regarded as one among about a dozen approaches to domain analysis. The methodology of (special) classification in LIS has been dominated by the rationalistic facet-analytic tradition, which, however, neglects the question of the empirical basis of classification. The empirical basis is much better grasped by, for example, bibliometric methods. Even the combination of rational and empirical methods is insufficient. This presentation will provide evidence for the necessity of historical and pragmatic methods for the methodology of classification and will point to the necessity of analyzing "paradigms". The presentation covers the methods of constructing classifications from Ranganathan to the design of ontologies in computer science and further to the recent "paradigm shift" in classification research. 1. Introduction Classification of a subject field is one among about eleven approaches to analyzing a domain that are specific to information science and in my opinion define the special competencies of information specialists (Hjoerland, 2002a). Classification and knowledge organization are commonly regarded as core qualifications of librarians and information specialists. Seen from this perspective, one expects a firm methodological basis for the field. This paper tries to explore the state-of-the-art concerning the methodology of classification. 2. Classification: Science or non-science? As it is part of the curriculum at universities and a subject of scientific journals and conferences like ISKO, one expects classification/knowledge organization to be a scientific or scholarly activity and a scientific field.
However, very often when information specialists classify or index documents and when they revise classification systems, the methods seem to be rather ad hoc. Research libraries or scientific databases may employ people with adequate subject knowledge. When information scientists construct or evaluate systems, they very often elicit the knowledge from "experts" (Hjoerland, 2002b, p. 260). Mostly, no specific arguments are provided for the specific decisions in these processes.
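The empirical, bibliometric route to class formation that Hjoerland contrasts with the rationalist facet-analytic tradition can be illustrated with a toy co-occurrence analysis. Everything here (the corpus, the threshold, the term lists) is an invented example, not taken from the paper; it shows only the general idea that classes can be derived from observed term behaviour rather than stipulated a priori:

```python
# Toy sketch of an empirical (bibliometric) basis for classification:
# terms that repeatedly co-occur across document descriptions are
# grouped as candidate classes.
from collections import Counter
from itertools import combinations

docs = [
    {"retrieval", "indexing", "thesaurus"},
    {"retrieval", "indexing"},
    {"taxonomy", "species", "nomenclature"},
    {"taxonomy", "species"},
]

# Count unordered term pairs that co-occur within the same document.
pairs = Counter()
for doc in docs:
    for a, b in combinations(sorted(doc), 2):
        pairs[(a, b)] += 1

# Pairs observed in more than one document suggest an empirical grouping.
clusters = [p for p, n in pairs.items() if n > 1]
print(sorted(clusters))  # [('indexing', 'retrieval'), ('species', 'taxonomy')]
```

A rationalist facet analyst would instead posit the categories first; Hjoerland's point is that neither route alone suffices without historical and pragmatic analysis of the domain.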
  16. Gnoli, C.: ¬The meaning of facets in non-disciplinary classifications (2006) 0.04
    Abstract
    Disciplines are felt by many to be a constraint in classification, though they are a structuring principle of most bibliographic classification schemes. A non-disciplinary approach has been explored by the Classification Research Group, and research in this direction has been resumed recently by the Integrative Level Classification project. This paper focuses on the role and the definition of facets in non-disciplinary schemes. A generalized definition of facets is suggested with reference to predicate logic, allowing for facets of phenomena as well as facets of disciplines. The general categories under which facets are often subsumed can be related ontologically to the evolutionary sequence of integrative levels. As a facet can be semantically connected with phenomena from any other part of a general scheme, its values can belong to three types, here called extra-defined foci (either special or general), and context-defined foci. Non-disciplinary freely faceted classification is being tested by applying it to small bibliographic samples stored in a MySQL database, and by developing Web search interfaces to demonstrate possible uses of the described techniques.
  17. Kumar, K.: Theoretical bases for universal classification systems (1982) 0.04
    
    Source
    Universal classification I: subject analysis and ordering systems. Proc. of the 4th Int. Study Conf. on Classification research, Augsburg, 28.6.-2.7.1982. Ed.: I. Dahlberg
  18. Husain, S.: Library classification : facets and analyses (1993) 0.04
    
    Content
    Contains the following chapters: (1) Definition, need and purpose of classification, (2) History of library classification, (3) Terminology of classification, (4) Development of a theory of classification, (5) Work of classification in three planes and their interrelationship, (6) Work of classification in the idea plane, (7) Verbal plane, (8) Notation: definition, need, functions, (9) Multidimensional nature of subjects, (10) Growing universe of subjects: problems and solutions, (11) Postulational approach to classification, (12) Formation and sharpening of isolates, (13) Species of classification schemes, (14) DDC, UDC and CC, (15) Designing the depth schedules of classification, (16) Recent trends in classification
  19. Araghi, G.F.: ¬A new scheme for library classification (2004) 0.04
    
    Abstract
    This proposed new classification scheme is based on two main elements: hierarchism and binary theory. Hence, it is called Universal Binary Classification (UBC). Some advantages of this classification are highlighted, including subject heading development, thesaurus construction, and the arrangement of all terms with meaningful features in tabular form, which can help researchers, through a semantic process, to find what they need. This classification scheme is fully consistent with the classification of knowledge, which is also based on hierarchism and the binary principle. Finally, a survey of randomly selected books in the McLennan Library of McGill University is presented to compare the codes of this new classification with the currently employed Library of Congress Classification (LCC) numbers in the discipline of Library and Information Sciences.
    Object
    Universal Binary Classification
    Source
    Cataloging and classification quarterly. 38(2004) no.2, S.xx-xx
  20. Winske, E.: ¬The development and structure of an urban, regional, and local documents classification scheme (1996) 0.04
    
    Abstract
    Discusses the reasons for the decision, taken at Florida International University Library, to develop an in-house classification system for their local documents collections. Reviews the structures of existing classification systems, noting their strengths and weaknesses in relation to the development of an in-house system, and describes the 5 components of the new system: geography, subject categories, extensions for population group and/or function, extensions for type of publication, and title/series designator
    Footnote
    Paper presented at conference on 'Local documents, a new classification scheme' at the Research Caucus of the Florida Library Association Annual Conference, Fort Lauderdale, Florida 22 Apr 95
