Search (74 results, page 1 of 4)

  • × theme_ss:"Universale Facettenklassifikationen"
  • × type_ss:"a"
  1. Coates, E.J.: BC2 and BSO : presentation at the 36th Allerton Institute, 1994 session on preparing traditional classifications for the future (1995) 0.27
    0.27004385 = product of:
      0.5400877 = sum of:
        0.029138058 = weight(_text_:classification in 5566) [ClassicSimilarity], result of:
          0.029138058 = score(doc=5566,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.3047229 = fieldWeight in 5566, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5566)
        0.07350893 = product of:
          0.14701787 = sum of:
            0.14701787 = weight(_text_:bliss in 5566) [ClassicSimilarity], result of:
              0.14701787 = score(doc=5566,freq=6.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.6844786 = fieldWeight in 5566, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5566)
          0.5 = coord(1/2)
        0.10212677 = weight(_text_:henry in 5566) [ClassicSimilarity], result of:
          0.10212677 = score(doc=5566,freq=2.0), product of:
            0.23560001 = queryWeight, product of:
              7.84674 = idf(docFreq=46, maxDocs=44218)
              0.03002521 = queryNorm
            0.43347523 = fieldWeight in 5566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.84674 = idf(docFreq=46, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5566)
        0.13401954 = weight(_text_:evelyn in 5566) [ClassicSimilarity], result of:
          0.13401954 = score(doc=5566,freq=2.0), product of:
            0.26989174 = queryWeight, product of:
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.03002521 = queryNorm
            0.4965678 = fieldWeight in 5566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5566)
        0.14701787 = weight(_text_:bliss in 5566) [ClassicSimilarity], result of:
          0.14701787 = score(doc=5566,freq=6.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.6844786 = fieldWeight in 5566, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5566)
        0.02513852 = weight(_text_:bibliographic in 5566) [ClassicSimilarity], result of:
          0.02513852 = score(doc=5566,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.21506234 = fieldWeight in 5566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5566)
        0.029138058 = weight(_text_:classification in 5566) [ClassicSimilarity], result of:
          0.029138058 = score(doc=5566,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.3047229 = fieldWeight in 5566, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5566)
      0.5 = coord(7/14)
    
    Abstract
    This article pertains to two further general classifications, which, in contrast to the reigning classifications just mentioned, incorporate in a thoroughgoing manner a modern view of the world. One of these was announced in 1910, to a chorus of disapproval, saw the light of day as a completed scheme in 1935, fell into suspended animation after the death of its author in the 1950s, and was revived, drastically revised and expanded in England by Jack Mills in 1967. A large part of the expanded scheme has appeared in the form of separately published fascicles; the remainder, mostly in the areas of science and technology, are in an advanced state of preparation. I refer of course to the Bliss Bibliographic Classification. I use the expression "of course" with some slight hesitation having once met a North American library school academic who thought that Henry Evelyn Bliss was an Englishman who lived in the London inner suburb of Islington. This was an unconscious tribute to Jack Mills, though perhaps unfair to Bliss himself, not to mention America, whose son he was.
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
    Source
    Cataloging and classification quarterly. 21(1995) no.2, S.59-67
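The score breakdowns shown for each result follow Lucene's ClassicSimilarity (TF-IDF) formula: tf is the square root of the term frequency, idf is 1 + ln(maxDocs/(docFreq+1)), queryWeight is idf times queryNorm, and fieldWeight is tf times idf times fieldNorm. A minimal sketch, reproducing the "classification in 5566" leaf from the first result above:

```python
import math

def tf(freq):
    # ClassicSimilarity term frequency: square root of the raw count
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Constants taken from the "classification in 5566" leaf above
query_norm = 0.03002521
field_norm = 0.0390625

i = idf(4974, 44218)                      # ≈ 3.1847067
query_weight = i * query_norm             # ≈ 0.09562149
field_weight = tf(6.0) * i * field_norm   # ≈ 0.3047229
score = query_weight * field_weight       # ≈ 0.029138058, matching the leaf
print(score)
```

The per-document total then sums such leaves and multiplies by the coord factor, e.g. coord(7/14) = 0.5 in the first result.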
  2. Thomas, A.R.: Bliss Bibliographic Classification 2nd Edition : principles features and applications (1992) 0.21
    0.21151799 = product of:
      0.493542 = sum of:
        0.029704956 = weight(_text_:subject in 541) [ClassicSimilarity], result of:
          0.029704956 = score(doc=541,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27661324 = fieldWeight in 541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=541)
        0.0526639 = weight(_text_:classification in 541) [ClassicSimilarity], result of:
          0.0526639 = score(doc=541,freq=10.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.55075383 = fieldWeight in 541, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=541)
        0.10291251 = product of:
          0.20582502 = sum of:
            0.20582502 = weight(_text_:bliss in 541) [ClassicSimilarity], result of:
              0.20582502 = score(doc=541,freq=6.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.95827 = fieldWeight in 541, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=541)
          0.5 = coord(1/2)
        0.20582502 = weight(_text_:bliss in 541) [ClassicSimilarity], result of:
          0.20582502 = score(doc=541,freq=6.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.95827 = fieldWeight in 541, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.0546875 = fieldNorm(doc=541)
        0.04977173 = weight(_text_:bibliographic in 541) [ClassicSimilarity], result of:
          0.04977173 = score(doc=541,freq=4.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.4258017 = fieldWeight in 541, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=541)
        0.0526639 = weight(_text_:classification in 541) [ClassicSimilarity], result of:
          0.0526639 = score(doc=541,freq=10.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.55075383 = fieldWeight in 541, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=541)
      0.42857143 = coord(6/14)
    
    Abstract
    Publication of the 2nd ed. of the Bliss Bibliographic Classification presents librarians with a fresh opportunity to reassess the nature and benefits of helpful order for their collections and records. Half the parts are now available, exhibiting major expansion, revision, and development of the scheme. The new edition is sponsored by the Bliss Classification Association, which welcomes the views and inputs of American librarians. It has been applied to libraries and information centers and used in thesaurus construction. This edition provides intensive subject specificity through detailed term listings and full synthetic capability. The notation is designed to be as brief as possible for the detail attainable. The classification allows a large measure of flexibility in arrangement and syntax.
    Source
    Cataloging and classification quarterly. 15(1992) no.4, S.3-17
  3. Broughton, V.: Bliss Bibliographic Classification Second Edition (2009) 0.16
    0.16165833 = product of:
      0.4526433 = sum of:
        0.053833168 = weight(_text_:classification in 3755) [ClassicSimilarity], result of:
          0.053833168 = score(doc=3755,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 3755, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=3755)
        0.09603167 = product of:
          0.19206335 = sum of:
            0.19206335 = weight(_text_:bliss in 3755) [ClassicSimilarity], result of:
              0.19206335 = score(doc=3755,freq=4.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.8941991 = fieldWeight in 3755, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3755)
          0.5 = coord(1/2)
        0.19206335 = weight(_text_:bliss in 3755) [ClassicSimilarity], result of:
          0.19206335 = score(doc=3755,freq=4.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.8941991 = fieldWeight in 3755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.0625 = fieldNorm(doc=3755)
        0.056881975 = weight(_text_:bibliographic in 3755) [ClassicSimilarity], result of:
          0.056881975 = score(doc=3755,freq=4.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.4866305 = fieldWeight in 3755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=3755)
        0.053833168 = weight(_text_:classification in 3755) [ClassicSimilarity], result of:
          0.053833168 = score(doc=3755,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 3755, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=3755)
      0.35714287 = coord(5/14)
    
    Abstract
    This entry looks at the origins of the Bliss Bibliographic Classification 2nd edition and the theory on which it is built. The reasons for the decision to revise the classification are examined, as are the influences on classification theory of the mid-twentieth century. The process of revision and construction of schedules using facet analysis is described. The use of BC2 is considered along with some recent development work on thesaural and digital formats.
  4. Broughton, V.: ¬A faceted classification as the basis of a faceted terminology : conversion of a classified structure to thesaurus format in the Bliss Bibliographic Classification, 2nd Edition (2008) 0.16
    0.16092443 = product of:
      0.37549034 = sum of:
        0.036007844 = weight(_text_:subject in 1857) [ClassicSimilarity], result of:
          0.036007844 = score(doc=1857,freq=4.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.33530587 = fieldWeight in 1857, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=1857)
        0.04037488 = weight(_text_:classification in 1857) [ClassicSimilarity], result of:
          0.04037488 = score(doc=1857,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.42223644 = fieldWeight in 1857, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=1857)
        0.07202375 = product of:
          0.1440475 = sum of:
            0.1440475 = weight(_text_:bliss in 1857) [ClassicSimilarity], result of:
              0.1440475 = score(doc=1857,freq=4.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.6706493 = fieldWeight in 1857, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1857)
          0.5 = coord(1/2)
        0.1440475 = weight(_text_:bliss in 1857) [ClassicSimilarity], result of:
          0.1440475 = score(doc=1857,freq=4.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.6706493 = fieldWeight in 1857, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.046875 = fieldNorm(doc=1857)
        0.042661484 = weight(_text_:bibliographic in 1857) [ClassicSimilarity], result of:
          0.042661484 = score(doc=1857,freq=4.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.3649729 = fieldWeight in 1857, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=1857)
        0.04037488 = weight(_text_:classification in 1857) [ClassicSimilarity], result of:
          0.04037488 = score(doc=1857,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.42223644 = fieldWeight in 1857, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=1857)
      0.42857143 = coord(6/14)
    
    Abstract
    Facet analysis is an established methodology for building classifications and subject indexing systems, but has been less rigorously applied to thesauri. The process of creating a compatible thesaurus from the schedules of the Bliss Bibliographic Classification 2nd edition highlights the ways in which the conceptual relationships in a subject field are handled in the two types of retrieval languages. An underlying uniformity of theory is established, and the way in which software can manage the relationships is discussed. The manner of displaying verbal expressions of concepts (vocabulary control) is also considered, but is found to be less well controlled in the classification than in the thesaurus. Nevertheless, there is good reason to think that facet analysis provides a sound basis for structuring a variety of knowledge organization tools.
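The conversion Broughton describes can be illustrated, in heavily simplified form, by deriving explicit thesaurus BT/NT (broader/narrower term) relationships from the hierarchy that a classified schedule leaves implicit in its notation. The classmarks and captions below are invented for illustration, not taken from BC2:

```python
# Hypothetical miniature schedule: classmark -> caption.
# In a classified schedule, hierarchy is implicit in the notation;
# a thesaurus makes it explicit as BT (broader term) / NT (narrower term).
schedule = {
    "Q":   "Social welfare",
    "QL":  "Social work",
    "QLM": "Casework",
}

def broader(mark, schedule):
    # The broader class is the longest proper prefix of the
    # classmark that is itself an enumerated class
    for i in range(len(mark) - 1, 0, -1):
        if mark[:i] in schedule:
            return mark[:i]
    return None

thesaurus = {}
for mark, caption in schedule.items():
    bt = broader(mark, schedule)
    entry = thesaurus.setdefault(caption, {"BT": [], "NT": []})
    if bt:
        entry["BT"].append(schedule[bt])
        thesaurus.setdefault(schedule[bt], {"BT": [], "NT": []})["NT"].append(caption)

print(thesaurus["Social work"])
```

A real conversion must also handle the associative (RT) and equivalence relationships the abstract mentions, which have no notational shortcut and need explicit editorial control.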
  5. Broughton, V.: Finding Bliss on the Web : some problems of representing faceted terminologies in digital environments 0.14
    0.14221488 = product of:
      0.3318347 = sum of:
        0.028549349 = weight(_text_:classification in 3532) [ClassicSimilarity], result of:
          0.028549349 = score(doc=3532,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.29856625 = fieldWeight in 3532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=3532)
        0.02849856 = product of:
          0.05699712 = sum of:
            0.05699712 = weight(_text_:schemes in 3532) [ClassicSimilarity], result of:
              0.05699712 = score(doc=3532,freq=2.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.35474116 = fieldWeight in 3532, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3532)
          0.5 = coord(1/2)
        0.07202375 = product of:
          0.1440475 = sum of:
            0.1440475 = weight(_text_:bliss in 3532) [ClassicSimilarity], result of:
              0.1440475 = score(doc=3532,freq=4.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.6706493 = fieldWeight in 3532, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3532)
          0.5 = coord(1/2)
        0.1440475 = weight(_text_:bliss in 3532) [ClassicSimilarity], result of:
          0.1440475 = score(doc=3532,freq=4.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.6706493 = fieldWeight in 3532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.046875 = fieldNorm(doc=3532)
        0.030166224 = weight(_text_:bibliographic in 3532) [ClassicSimilarity], result of:
          0.030166224 = score(doc=3532,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.2580748 = fieldWeight in 3532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3532)
        0.028549349 = weight(_text_:classification in 3532) [ClassicSimilarity], result of:
          0.028549349 = score(doc=3532,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.29856625 = fieldWeight in 3532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=3532)
      0.42857143 = coord(6/14)
    
    Abstract
    The Bliss Bibliographic Classification is the only example of a fully faceted general classification scheme in the Western world. Although it is the object of much interest as a model for other tools, it suffers from the lack of a web presence, and remedying this is an immediate objective for its editors. Understanding how this might be done presents some challenges, as the scheme is semantically very rich and complex in the range and nature of the relationships it contains. The automatic management of these is already in place using local software, but exporting this to a common data format needs careful thought and planning. Various encoding schemes, both for traditional classifications and for digital materials, represent variously: the concepts; their functional roles; and the relationships between them. Integrating these aspects in a coherent and interchangeable manner appears to be achievable, but the most appropriate format is as yet unclear.
  6. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.10
    0.10274685 = product of:
      0.28769118 = sum of:
        0.036007844 = weight(_text_:subject in 831) [ClassicSimilarity], result of:
          0.036007844 = score(doc=831,freq=4.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.33530587 = fieldWeight in 831, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.049448926 = weight(_text_:classification in 831) [ClassicSimilarity], result of:
          0.049448926 = score(doc=831,freq=12.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5171319 = fieldWeight in 831, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.050928485 = product of:
          0.10185697 = sum of:
            0.10185697 = weight(_text_:bliss in 831) [ClassicSimilarity], result of:
              0.10185697 = score(doc=831,freq=2.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.4742207 = fieldWeight in 831, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.046875 = fieldNorm(doc=831)
          0.5 = coord(1/2)
        0.10185697 = weight(_text_:bliss in 831) [ClassicSimilarity], result of:
          0.10185697 = score(doc=831,freq=2.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.4742207 = fieldWeight in 831, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.049448926 = weight(_text_:classification in 831) [ClassicSimilarity], result of:
          0.049448926 = score(doc=831,freq=12.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5171319 = fieldWeight in 831, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
      0.35714287 = coord(5/14)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but as the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. It also considers how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
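The central mechanism of the facet analysis Mills describes, combining one concept per facet into a compound classmark in a fixed citation order, can be sketched as follows. The facets, vocabulary, and notation here are invented for illustration and are not BC2's:

```python
# Hypothetical facets for a small domain, listed in citation order
# (most significant facet first), each with invented notation.
FACETS = [
    ("thing",   {"libraries": "L"}),
    ("process", {"cataloguing": "C", "classification": "K"}),
    ("agent",   {"software": "S"}),
]

def synthesize(**concepts):
    """Combine one concept per facet into a compound classmark,
    always citing facets in the fixed order above, so that every
    compound subject files in a predictable place."""
    mark = ""
    for facet, vocabulary in FACETS:
        if facet in concepts:
            mark += vocabulary[concepts[facet]]
    return mark

# "classification of libraries by software"
print(synthesize(thing="libraries", process="classification", agent="software"))
```

The fixed citation order is what makes location predictable: however the searcher phrases the compound subject, its classmark is assembled in the same facet sequence.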
  7. Broughton, V.: Facet analysis as a tool for modelling subject domains and terminologies (2011) 0.09
    0.09482564 = product of:
      0.22125982 = sum of:
        0.021217827 = weight(_text_:subject in 4826) [ClassicSimilarity], result of:
          0.021217827 = score(doc=4826,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.19758089 = fieldWeight in 4826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4826)
        0.023791125 = weight(_text_:classification in 4826) [ClassicSimilarity], result of:
          0.023791125 = score(doc=4826,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24880521 = fieldWeight in 4826, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4826)
        0.042440403 = product of:
          0.08488081 = sum of:
            0.08488081 = weight(_text_:bliss in 4826) [ClassicSimilarity], result of:
              0.08488081 = score(doc=4826,freq=2.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.3951839 = fieldWeight in 4826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4826)
          0.5 = coord(1/2)
        0.08488081 = weight(_text_:bliss in 4826) [ClassicSimilarity], result of:
          0.08488081 = score(doc=4826,freq=2.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.3951839 = fieldWeight in 4826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4826)
        0.02513852 = weight(_text_:bibliographic in 4826) [ClassicSimilarity], result of:
          0.02513852 = score(doc=4826,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.21506234 = fieldWeight in 4826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4826)
        0.023791125 = weight(_text_:classification in 4826) [ClassicSimilarity], result of:
          0.023791125 = score(doc=4826,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24880521 = fieldWeight in 4826, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4826)
      0.42857143 = coord(6/14)
    
    Abstract
    Facet analysis is proposed as a general theory of knowledge organization, with an associated methodology that may be applied to the development of terminology tools in a variety of contexts and formats. Faceted classifications originated as a means of representing complexity in semantic content that facilitates logical organization and effective retrieval in a physical environment. This is achieved through meticulous analysis of concepts, their structural and functional status (based on fundamental categories), and their inter-relationships. These features provide an excellent basis for the general conceptual modelling of domains, and for the generation of KOS other than systematic classifications. This is demonstrated by the adoption of a faceted approach to many web search and visualization tools, and by the emergence of a facet based methodology for the construction of thesauri. Current work on the Bliss Bibliographic Classification (Second Edition) is investigating the ways in which the full complexity of faceted structures may be represented through encoded data, capable of generating intellectually and mechanically compatible forms of indexing tools from a single source. It is suggested that a number of research questions relating to the Semantic Web could be tackled through the medium of facet analysis.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  8. Broughton, V.: Concepts and terms in the faceted classification : the case of UDC (2010) 0.08
    0.08131924 = product of:
      0.22769387 = sum of:
        0.03761707 = weight(_text_:classification in 4065) [ClassicSimilarity], result of:
          0.03761707 = score(doc=4065,freq=10.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.39339557 = fieldWeight in 4065, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4065)
        0.042440403 = product of:
          0.08488081 = sum of:
            0.08488081 = weight(_text_:bliss in 4065) [ClassicSimilarity], result of:
              0.08488081 = score(doc=4065,freq=2.0), product of:
                0.21478812 = queryWeight, product of:
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.03002521 = queryNorm
                0.3951839 = fieldWeight in 4065, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.1535926 = idf(docFreq=93, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4065)
          0.5 = coord(1/2)
        0.08488081 = weight(_text_:bliss in 4065) [ClassicSimilarity], result of:
          0.08488081 = score(doc=4065,freq=2.0), product of:
            0.21478812 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03002521 = queryNorm
            0.3951839 = fieldWeight in 4065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4065)
        0.02513852 = weight(_text_:bibliographic in 4065) [ClassicSimilarity], result of:
          0.02513852 = score(doc=4065,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.21506234 = fieldWeight in 4065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4065)
        0.03761707 = weight(_text_:classification in 4065) [ClassicSimilarity], result of:
          0.03761707 = score(doc=4065,freq=10.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.39339557 = fieldWeight in 4065, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4065)
      0.35714287 = coord(5/14)
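The explain tree above is Lucene ClassicSimilarity (TF-IDF) debug output. Its leaf values can be reproduced from the standard definitions tf = sqrt(freq) and idf = ln(maxDocs / (docFreq + 1)) + 1; the following minimal sketch, assuming those standard ClassicSimilarity formulas, checks the figures quoted for the term "classification" in doc 4065:

```python
import math

# ClassicSimilarity building blocks (standard Lucene TF-IDF definitions)
def classic_tf(freq):
    return math.sqrt(freq)                            # 3.1622777 for freq=10

def classic_idf(doc_freq, max_docs):
    return math.log(max_docs / (doc_freq + 1)) + 1    # ~3.1847067 here

# Leaf values quoted in the explain tree for doc 4065
freq, doc_freq, max_docs = 10.0, 4974, 44218
query_norm, field_norm = 0.03002521, 0.0390625

idf = classic_idf(doc_freq, max_docs)
query_weight = idf * query_norm                       # ~0.09562149
field_weight = classic_tf(freq) * idf * field_norm    # ~0.39339557
score = query_weight * field_weight                   # ~0.03761707
```

The per-entry score in the listing then multiplies the summed clause scores by the coordination factor, e.g. 0.22769387 x coord(5/14) = 0.08131924 for this entry.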
    
    Abstract
    Recent revision of UDC classes has aimed at implementing a more faceted approach. Many compound classes have been removed from the main tables, and more radical revisions of classes (particularly those for Medicine and Religion) have introduced a rigorous analysis, a clearer sense of citation order, and building of compound classes according to a more logical system syntax. The faceted approach provides a means of formalizing the relationships in the classification and making them explicit for machine recognition. In the Bliss Bibliographic Classification (BC2) (which has been a source for both UDC classes mentioned above), terminologies are encoded for automatic generation of hierarchical and associative relationships. Nevertheless, difficulties are encountered in vocabulary control, and a similar phenomenon is observed in UDC. Current work has revealed differences in the vocabulary of humanities and science, notably the way in which terms in the humanities should be handled when these are semantically complex. Achieving a balance between rigour in the structure of the classification and the complexity of natural language expression remains partially unresolved at present, but provides a fertile field for further research.
    Content
    Part of: Papers from Classification at a Crossroads: Multiple Directions to Usability: International UDC Seminar 2009, Part 2
  9. Broughton, V.: Language related problems in the construction of faceted terminologies and their automatic management (2008) 0.08
    Content
    The paper describes current work on the generation of a thesaurus format from the schedules of the Bliss Bibliographic Classification, 2nd edition (BC2). The practical problems that occur in moving from a concept-based approach to a terminological approach cluster around issues of vocabulary control that are not fully addressed in a systematic structure. These difficulties can be exacerbated within domains in the humanities, because large numbers of culture-specific terms may need to be accommodated in any thesaurus. The ways in which these problems can be resolved within the context of a semi-automated approach to thesaurus generation have consequences for the management of classification data in the source vocabulary. The way in which the vocabulary is marked up for the purpose of machine manipulation is described, and some of the implications for editorial policy are discussed and examples given. The value of the classification notation as a language-independent representation and mapping tool should not be sacrificed in such an exercise.
  10. Panigrahi, P.: Ranganathan and Dewey in hierarchical subject classification : some similarities (2015) 0.05
    Abstract
    S R Ranganathan and Melvil Dewey devised two types of classification schemes, viz. faceted and enumerative. Ranganathan's faceted classification scheme is based on postulates, principles, and canons, and rests on a strong body of theory. When the two schemes are worked with side by side, similarities can be observed. This paper tries to identify and present some of these relationships.
  11. Mills, J.: Bibliographic classification (1976) 0.05
    Source
    Classification in the 1970s: a second look. Rev. ed. Ed.: A. Maltby
  12. Tennis, J.T.: Facets and fugit tempus : considering time's effect on faceted classification schemes (2012) 0.04
    Abstract
    Describes the effect of scheme change on the semantics of faceted classification schemes. Two types of change are identified: ecological change and lexical change. Examples from different editions of the Colon Classification are used to illustrate both kinds of change.
    Date
    2.6.2013 18:33:22
  13. Gnoli, C.: ¬The meaning of facets in non-disciplinary classifications (2006) 0.04
    Abstract
    Disciplines are felt by many to be a constraint in classification, though they are a structuring principle of most bibliographic classification schemes. A non-disciplinary approach has been explored by the Classification Research Group, and research in this direction has been resumed recently by the Integrative Level Classification project. This paper focuses on the role and the definition of facets in non-disciplinary schemes. A generalized definition of facets is suggested with reference to predicate logic, allowing for facets of phenomena as well as facets of disciplines. The general categories under which facets are often subsumed can be related ontologically to the evolutionary sequence of integrative levels. As a facet can be semantically connected with phenomena from any other part of a general scheme, its values can belong to three types, here called extra-defined foci (either special or general) and context-defined foci. Non-disciplinary, freely faceted classification is being tested by applying it to small bibliographic samples stored in a MySQL database, and by developing Web search interfaces to demonstrate possible uses of the described techniques.
  14. Austin, D.: Basic concept classes and primitive relations (1982) 0.04
    Source
    Universal classification I: subject analysis and ordering systems. Proc. of the 4th Int. Study Conf. on Classification research, Augsburg, 28.6.-2.7.1982. Ed.: I. Dahlberg
  15. Szostak, R.: Basic Concepts Classification (BCC) (2020) 0.03
    Abstract
    The Basic Concepts Classification (BCC) is a "universal" scheme: it attempts to encompass all areas of human understanding. Whereas most universal schemes are organized around scholarly disciplines, the BCC is instead organized around phenomena (things), the relationships that exist among phenomena, and the properties that phenomena and relators may possess. This structure allows the BCC to apply facet analysis without requiring the use of "facet indicators." The main motivation for the BCC was a recognition that existing classifications organized around disciplines serve interdisciplinary scholarship poorly. Complex concepts that might be understood quite differently across groups and individuals can generally be broken into basic concepts for which there is enough shared understanding for the purposes of classification. Documents, ideas, and objects are classified synthetically by combining entries from the schedules of phenomena, relators, and properties. The inclusion of separate schedules of (generally verb-like) relators is one of the most unusual aspects of the BCC. This (and the schedules of properties that serve as adjectives or adverbs) allows the production of sentence-like subject strings. Documents can then be classified in terms of the main arguments made in the document. The BCC provides very precise descriptors of documents by combining phenomena, relators, and properties synthetically. The terminology employed in the BCC reduces terminological ambiguity. The BCC is still being developed and needs to be fleshed out in certain respects. Yet it also needs to be applied; only in application can the feasibility and desirability of the classification be adequately assessed.
    Object
    Basic Concepts Classification
  16. Sharada, B.A.: Ranganathan's Colon Classification : Kannada-English Version 'dwibindu vargiikaraNa' (2012) 0.03
    Abstract
    "dwibindu vargiikaraNa" is the Kannada rendering of the revised Colon Classification, 7th Edition, intended essentially for the classification of macro documents. This paper discusses the planning, preparation, and features of Colon Classification (CC) in Kannada, one of the major Indian languages as well as the Official Language of Karnataka, and uploading the CC on the web. Linguistic issues related to the Kannada rendering are discussed with possible solutions. It creates facilities in the field of Indexing Language (IL) to prepare products such as, Subject Heading List, Information Retrieval Thesaurus, and creation of subject glossaries or updating the available subject dictionaries in Kannada.
  17. Beghtol, C.: From the universe of knowledge to the universe of concepts : the structural revolution in classification for information retrieval (2008) 0.03
    Abstract
    During the twentieth century, bibliographic classification theory underwent a structural revolution. The first modern bibliographic classifications were top-down systems that started at the universe of knowledge and subdivided that universe downward to minute subclasses. After the invention of faceted classification by S.R. Ranganathan, the ideal was to build bottom-up classifications that started with the universe of concepts and built upward to larger and larger faceted classes. This ideal has not been achieved, and the two kinds of classification systems are not mutually exclusive. This paper examines the process by which this structural revolution was accomplished by looking at the spread of facet theory after 1924, when Ranganathan attended the School of Librarianship, London, through selected classification textbooks that were published after that date. To this end, the paper examines the role of W.C.B. Sayers as a teacher and author of three editions of The Manual of Classification for Librarians and Bibliographers. Sayers influenced both Ranganathan and the various members of the Classification Research Group (CRG) who were his students. Further, the paper contrasts the methods of evaluating classification systems that arose between Sayers's Canons of Classification in 1915-1916 and J. Mills's A Modern Outline of Library Classification in 1960 in order to demonstrate the speed with which one kind of classificatory structure was overtaken by another.
  18. Austin, D.: Prospects for a new general classification (1969) 0.03
    
    Abstract
    In traditional classification schemes, the universe of knowledge is broken down into self-contained disciplines which are further analysed to the point at which a particular concept is located. This leads to problems of: (a) currency: keeping the scheme in line with new discoveries; (b) hospitality: allowing room for the insertion of new subjects; (c) cross-classification: a concept may be considered in such a way that it fits as logically into one discipline as another. Machine retrieval is also hampered by the fact that any individual concept is notated differently depending on where in the scheme it appears. The approach now considered starts from an organized universe of concepts, every concept being set down only once in an appropriate vocabulary, where it acquires the notation which identifies it wherever it is used. It has been found that all the concepts present in any compound subject can be handled as though they belong to one of two basic concept types, being either Entities or Attributes. In classing, these concepts are identified, and notation is selected from the appropriate schedules. Subjects are then built according to formal rules, the final class number incorporating operators which convey the fundamental relationships between concepts. From this viewpoint, the Rules and Operators of the proposed system can be seen as the grammar of an IR language, and the schedules of Entities and Attributes as its vocabulary.
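    The synthesis mechanism described in this abstract can be sketched minimally as follows. Every concept keeps one constant notation, and a compound subject is built by joining notations with relational operators. All names, notations, and the "(p)" operator below are invented for illustration; they are not Austin's actual schedules or operators.

    ```python
    # Sketch of concept-based subject building: each concept has a fixed
    # notation, and compound subjects are synthesized by formal rules that
    # join notations with operators expressing relationships between concepts.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Concept:
        name: str
        kind: str      # "entity" or "attribute"
        notation: str  # constant notation, identical wherever the concept is used

    def build_subject(parts):
        """Join a head concept and (operator, concept) pairs into one class number."""
        head, *rest = parts
        classmark = head.notation
        for operator, concept in rest:
            classmark += operator + concept.notation
        return classmark

    steel = Concept("steel", "entity", "ent17")
    corrosion = Concept("corrosion", "attribute", "att42")

    # "(p)" is a hypothetical operator meaning "property of":
    print(build_subject([steel, ("(p)", corrosion)]))  # ent17(p)att42
    ```

    The point of the sketch is that the operator, not the position in a disciplinary tree, carries the relationship, so the same concept notation can recur in any compound.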
  19. Gnoli, C.; Merli, G.; Pavan, G.; Bernuzzi, E.; Priano, M.: Freely faceted classification for a Web-based bibliographic archive : the BioAcoustic Reference Database (2010) 0.03
    
    Abstract
    The Integrative Level Classification (ILC) research project is experimenting with a knowledge organization system based on phenomena rather than disciplines. Each phenomenon has a constant notation, which can be combined with that of any other phenomenon in a freely faceted structure. Citation order can express differential focality of the facets. Very specific subjects can have long classmarks, although their complexity is reduced by various devices. Freely faceted classification is being tested by indexing a corpus of about 3300 papers in the interdisciplinary domain of bioacoustics. The subjects of these papers often include phenomena from a wide variety of integrative levels (mechanical waves, animals, behaviour, vessels, fishing, law, ...) as well as information about the methods of study, as predicted in the León Manifesto. The archive is recorded in a MySQL database, and can be fed and searched through PHP Web interfaces. Indexers' work is made easier by mechanisms that suggest possible classes by matching title words with terms in the ILC schedules and that automatically synthesize the verbal caption corresponding to the classmark being edited. Users can search the archive by selecting and combining values in each facet. Search refinement should be improved, especially for the cases where no record, or too many records, match the faceted query. However, experience is being gained progressively, showing that freely faceted classification by phenomena, theories, and methods is feasible and successfully working.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
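    The facet-combination search described in the BioAcoustic Reference Database abstract can be illustrated with a minimal in-memory sketch: a query selects values in one or more facets, and a record matches only if it satisfies every selected facet (AND semantics). The real archive sits in a MySQL database behind PHP interfaces; the field names, notations, and records below are invented for illustration.

    ```python
    # Toy faceted retrieval: each record carries values in several facets
    # (here, a set of phenomenon notations and a study method), and a query
    # filters by any combination of facet selections.
    records = [
        {"id": 1, "phenomena": {"mq", "whale"}, "method": "field recording"},
        {"id": 2, "phenomena": {"mq", "bird"},  "method": "playback"},
        {"id": 3, "phenomena": {"vessel"},      "method": "field recording"},
    ]

    def facet_search(records, phenomena=None, method=None):
        """Return ids of records matching every selected facet value."""
        hits = []
        for rec in records:
            if phenomena and not phenomena <= rec["phenomena"]:
                continue  # record must contain all requested phenomena
            if method and rec["method"] != method:
                continue
            hits.append(rec["id"])
        return hits

    print(facet_search(records, phenomena={"mq"}))          # [1, 2]
    print(facet_search(records, method="field recording"))  # [1, 3]
    ```

    Because each phenomenon keeps a constant notation regardless of discipline, the same facet value retrieves a record whether it was filed under animal behaviour, acoustics, or fishing.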
  20. Frické, M.: Logical division (2016) 0.03
    
    Abstract
    Division is obviously important to Knowledge Organization. Typically, an organizational infrastructure might acknowledge three types of connecting relationships: class hierarchies, where some classes are subclasses of others, partitive hierarchies, where some items are parts of others, and instantiation, where some items are members of some classes (see Z39.19 ANSI/NISO 2005 as an example). The first two of these involve division (the third, instantiation, does not involve division). Logical division would usually be a part of hierarchical classification systems, which, in turn, are central to shelving in libraries, to subject classification schemes, to controlled vocabularies, and to thesauri. Partitive hierarchies, and partitive division, are often essential to controlled vocabularies, thesauri, and subject tagging systems. Partitive hierarchies also relate to the bearers of information; for example, a journal would typically have its component articles as parts and, in turn, they might have sections as their parts, and, of course, components might be arrived at by partitive division (see Tillett 2009 as an illustration). Finally, verbal division, disambiguating homographs, is basic to controlled vocabularies. Thus Division is a broad and relevant topic. This article, though, is going to focus on Logical Division.
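    The three connecting relationships named in this abstract can be modelled in a few lines: a subclass relation (class hierarchy), a part-of relation (partitive hierarchy), and an instance-of relation (membership), where only the first two involve division. The example names below are invented for illustration.

    ```python
    # Toy model of the three relationship types: subclass-of and part-of
    # form hierarchies arrived at by division; instance-of links items to
    # classes without dividing anything.
    subclass_of = {"sonata": "composition", "composition": "work"}
    part_of = {"movement": "sonata", "theme": "movement"}
    instance_of = {"Moonlight Sonata": "sonata"}

    def ancestors(relation, item):
        """Walk one hierarchy upward from an item to its root."""
        chain = []
        while item in relation:
            item = relation[item]
            chain.append(item)
        return chain

    print(ancestors(subclass_of, "sonata"))  # ['composition', 'work']
    print(ancestors(part_of, "theme"))       # ['movement', 'sonata']
    ```

    Keeping the three relations in separate structures mirrors the abstract's point that subclass chains, partitive chains, and membership must not be conflated in a classification scheme.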