Search (81 results, page 1 of 5)

  • year_i:[2000 TO 2010}
  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
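The two facet filters above restrict the result set to records published from 2000 up to (but not including) 2010 and tagged with the theme "Klassifikationstheorie: Elemente / Struktur". The field suffixes (year_i, theme_ss) and the range syntax [2000 TO 2010} suggest a Lucene/Solr-style index, so the following is a minimal sketch of how these filters could be sent as filter queries. The endpoint URL, core name and page size are illustrative assumptions, not taken from this page.

```python
import requests

# Hypothetical Solr request reproducing the two active filters shown above.
# Only the field names and filter values come from the page; the endpoint,
# core name and page size are assumptions for illustration.
params = {
    "q": "*:*",                     # match all records ...
    "fq": [                         # ... then narrow by the two facet filters
        "year_i:[2000 TO 2010}",    # 2000 <= year < 2010 (mixed-bracket range)
        'theme_ss:"Klassifikationstheorie: Elemente / Struktur"',
    ],
    "rows": 20,                     # assumed page size
    "start": 0,                     # first page of the 81 hits
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/literature/select", params=params)
print(resp.json()["response"]["numFound"])
```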
  1. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.07
    0.06961468 = product of:
      0.11138349 = sum of:
        0.025048172 = weight(_text_:retrieval in 780) [ClassicSimilarity], result of:
          0.025048172 = score(doc=780,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.20052543 = fieldWeight in 780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
        0.051335193 = weight(_text_:use in 780) [ClassicSimilarity], result of:
          0.051335193 = score(doc=780,freq=8.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.40597942 = fieldWeight in 780, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
        0.011594418 = weight(_text_:of in 780) [ClassicSimilarity], result of:
          0.011594418 = score(doc=780,freq=6.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.17955035 = fieldWeight in 780, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
        0.006621159 = product of:
          0.013242318 = sum of:
            0.013242318 = weight(_text_:on in 780) [ClassicSimilarity], result of:
              0.013242318 = score(doc=780,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.14580199 = fieldWeight in 780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.5 = coord(1/2)
        0.016784549 = product of:
          0.033569098 = sum of:
            0.033569098 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
              0.033569098 = score(doc=780,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.23214069 = fieldWeight in 780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.5 = coord(1/2)
      0.625 = coord(5/8)
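Each hit is followed by the Lucene score explanation that produced the figure next to its title. The tree reads bottom-up: for every matching term, tf(freq) = sqrt(freq) and idf are combined with the fieldNorm into a fieldWeight, which is multiplied by the queryWeight (idf x queryNorm); the per-term weights are summed and scaled by a coord factor for the fraction of query clauses that matched. A minimal Python sketch, recomposing the first result's score of 0.06961468 from the values in the tree above (the formulas are the standard Lucene ClassicSimilarity ones; nothing here is taken from this page's actual backend):

```python
import math

query_norm = 0.041294612        # queryNorm from the explanation
field_norm = 0.046875           # fieldNorm(doc=780)
max_docs = 44218

def term_weight(freq, doc_freq):
    """queryWeight * fieldWeight for one term, as in the explanation tree."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # e.g. 3.024915 for docFreq=5836
    tf = math.sqrt(freq)                              # e.g. 1.4142135 for freq=2
    query_weight = idf * query_norm                   # e.g. 0.124912694
    field_weight = tf * idf * field_norm              # e.g. 0.20052543
    return query_weight * field_weight

weights = [
    term_weight(2, 5836),           # _text_:retrieval          -> ~0.0250
    term_weight(8, 5623),           # _text_:use                -> ~0.0513
    term_weight(6, 25162),          # _text_:of                 -> ~0.0116
    term_weight(2, 13325) * 0.5,    # _text_:on,  coord(1/2)    -> ~0.0066
    term_weight(2, 3622) * 0.5,     # _text_:22,  coord(1/2)    -> ~0.0168
]
score = sum(weights) * 5 / 8        # coord(5/8)
print(round(score, 8))              # ~0.0696, matching the explanation
```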
    
    Abstract
    Network-orientated standards for vocabulary publishing and exchange and proposals for terminological services and terminology registries will improve sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. As we speak, many classifications are being created for knowledge organization and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
    Theme
    Klassifikationssysteme im Online-Retrieval
  2. Broughton, V.: The need for a faceted classification as the basis of all methods of information retrieval (2006) 0.06
    0.05646444 = product of:
      0.11292888 = sum of:
        0.046674512 = weight(_text_:retrieval in 2874) [ClassicSimilarity], result of:
          0.046674512 = score(doc=2874,freq=10.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.37365708 = fieldWeight in 2874, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
        0.030249555 = weight(_text_:use in 2874) [ClassicSimilarity], result of:
          0.030249555 = score(doc=2874,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23922569 = fieldWeight in 2874, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
        0.023667008 = weight(_text_:of in 2874) [ClassicSimilarity], result of:
          0.023667008 = score(doc=2874,freq=36.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.36650562 = fieldWeight in 2874, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
        0.012337802 = product of:
          0.024675604 = sum of:
            0.024675604 = weight(_text_:on in 2874) [ClassicSimilarity], result of:
              0.024675604 = score(doc=2874,freq=10.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.271686 = fieldWeight in 2874, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2874)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Purpose - The aim of this article is to estimate the impact of faceted classification and the faceted analytical method on the development of various information retrieval tools over the latter part of the twentieth and early twenty-first centuries. Design/methodology/approach - The article presents an examination of various subject access tools intended for retrieval of both print and digital materials to determine whether they exhibit features of faceted systems. Some attention is paid to use of the faceted approach as a means of structuring information on commercial web sites. The secondary and research literature is also surveyed for commentary on and evaluation of facet analysis as a basis for the building of vocabulary and conceptual tools. Findings - The study finds that faceted systems are now very common, with a major increase in their use over the last 15 years. Most LIS subject indexing tools (classifications, subject heading lists and thesauri) now demonstrate features of facet analysis to a greater or lesser degree. A faceted approach is frequently taken to the presentation of product information on commercial web sites, and there is an independent strand of theory and documentation related to this application. There is some significant research on semi-automatic indexing and retrieval (query expansion and query formulation) using facet analytical techniques. Originality/value - This article provides an overview of an important conceptual approach to information retrieval, and compares different understandings and applications of this methodology.
  3. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.05
    0.05079969 = product of:
      0.10159938 = sum of:
        0.04338471 = weight(_text_:retrieval in 831) [ClassicSimilarity], result of:
          0.04338471 = score(doc=831,freq=6.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.34732026 = fieldWeight in 831, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.025667597 = weight(_text_:use in 831) [ClassicSimilarity], result of:
          0.025667597 = score(doc=831,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 831, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.02592591 = weight(_text_:of in 831) [ClassicSimilarity], result of:
          0.02592591 = score(doc=831,freq=30.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.4014868 = fieldWeight in 831, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.006621159 = product of:
          0.013242318 = sum of:
            0.013242318 = weight(_text_:on in 831) [ClassicSimilarity], result of:
              0.013242318 = score(doc=831,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.14580199 = fieldWeight in 831, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=831)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but as the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. It also considers how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
    Footnote
    Article in a special issue: The philosophy of information
    Theme
    Klassifikationssysteme im Online-Retrieval
  4. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.04
    0.044956923 = product of:
      0.089913845 = sum of:
        0.035423465 = weight(_text_:retrieval in 2651) [ClassicSimilarity], result of:
          0.035423465 = score(doc=2651,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.2835858 = fieldWeight in 2651, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2651)
        0.025667597 = weight(_text_:use in 2651) [ClassicSimilarity], result of:
          0.025667597 = score(doc=2651,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 2651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=2651)
        0.022201622 = weight(_text_:of in 2651) [ClassicSimilarity], result of:
          0.022201622 = score(doc=2651,freq=22.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.34381276 = fieldWeight in 2651, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2651)
        0.006621159 = product of:
          0.013242318 = sum of:
            0.013242318 = weight(_text_:on in 2651) [ClassicSimilarity], result of:
              0.013242318 = score(doc=2651,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.14580199 = fieldWeight in 2651, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The paper analyses the importance of data presentation and modelling and its role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies, in this respect, hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2-Bliss Bibliographic Classification; BSO-Broad System of Ordering; UDC-Universal Decimal Classification) and put forward some core requirements for classification data representation
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.04
    0.043977253 = product of:
      0.117272675 = sum of:
        0.016090471 = weight(_text_:of in 2346) [ClassicSimilarity], result of:
          0.016090471 = score(doc=2346,freq=26.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2491759 = fieldWeight in 2346, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.08999251 = sum of:
          0.008828212 = weight(_text_:on in 2346) [ClassicSimilarity], result of:
            0.008828212 = score(doc=2346,freq=2.0), product of:
              0.090823986 = queryWeight, product of:
                2.199415 = idf(docFreq=13325, maxDocs=44218)
                0.041294612 = queryNorm
              0.097201325 = fieldWeight in 2346, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.199415 = idf(docFreq=13325, maxDocs=44218)
                0.03125 = fieldNorm(doc=2346)
          0.08116429 = weight(_text_:line in 2346) [ClassicSimilarity], result of:
            0.08116429 = score(doc=2346,freq=4.0), product of:
              0.23157367 = queryWeight, product of:
                5.6078424 = idf(docFreq=440, maxDocs=44218)
                0.041294612 = queryNorm
              0.35049015 = fieldWeight in 2346, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.6078424 = idf(docFreq=440, maxDocs=44218)
                0.03125 = fieldNorm(doc=2346)
        0.0111897 = product of:
          0.0223794 = sum of:
            0.0223794 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
              0.0223794 = score(doc=2346,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.15476047 = fieldWeight in 2346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Purpose - The potential and benefits of classification schemes and thesauri in building organizational taxonomies cannot be fully utilized by organizations. Empirical data on building an organizational taxonomy by the top-down approach of using classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built by using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and the subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of structures/term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of categories was made uniform according to a commonly used standard. The consistency principle was employed to make the taxonomy structure and categories neater. Validation of the draft taxonomy through consultations with the stakeholders further refined the taxonomy. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their effort to develop taxonomies for organizing content and aiding navigation on organizational sites.
    Date
    7.11.2008 15:22:04
    Source
    Journal of documentation. 64(2008) no.6, S.842-876
  6. Kublik, A.; Clevette, V.; Ward, D.; Olson, H.A.: Adapting dominant classifications to particular contexts (2003) 0.04
    0.04283978 = product of:
      0.08567956 = sum of:
        0.025048172 = weight(_text_:retrieval in 5516) [ClassicSimilarity], result of:
          0.025048172 = score(doc=5516,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.20052543 = fieldWeight in 5516, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5516)
        0.036299463 = weight(_text_:use in 5516) [ClassicSimilarity], result of:
          0.036299463 = score(doc=5516,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.2870708 = fieldWeight in 5516, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=5516)
        0.017710768 = weight(_text_:of in 5516) [ClassicSimilarity], result of:
          0.017710768 = score(doc=5516,freq=14.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2742677 = fieldWeight in 5516, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5516)
        0.006621159 = product of:
          0.013242318 = sum of:
            0.013242318 = weight(_text_:on in 5516) [ClassicSimilarity], result of:
              0.013242318 = score(doc=5516,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.14580199 = fieldWeight in 5516, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5516)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    This paper addresses the process of adapting to a particular culture or context a classification that has grown out of western culture to become a global standard. The authors use a project that adapts DDC for use in a feminist/women's issues context to demonstrate an approach that works. The project is particularly useful as an interdisciplinary example. Discussion consists of four parts: (1) definition of the problem indicating the need for adaptation and efforts to date; (2) description of the methodology developed for creating an expansion; (3) description of the interface developed for actually doing the work, with its potential for a distributed group to work on it together (could even be internationally distributed); and (4) generalization of how the methodology could be used for particular contexts by country, ethnicity, perspective or other defining factors.
    Content
    Contribution to a special issue "Knowledge organization and classification in international information retrieval"
  7. Ereshefsky, M.: The poverty of the Linnaean hierarchy : a philosophical study of biological taxonomy (2007) 0.04
    0.04098825 = product of:
      0.109302 = sum of:
        0.01711173 = weight(_text_:use in 2493) [ClassicSimilarity], result of:
          0.01711173 = score(doc=2493,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.13532647 = fieldWeight in 2493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=2493)
        0.02231347 = weight(_text_:of in 2493) [ClassicSimilarity], result of:
          0.02231347 = score(doc=2493,freq=50.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.34554482 = fieldWeight in 2493, product of:
              7.071068 = tf(freq=50.0), with freq of:
                50.0 = termFreq=50.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2493)
        0.0698768 = sum of:
          0.012484977 = weight(_text_:on in 2493) [ClassicSimilarity], result of:
            0.012484977 = score(doc=2493,freq=4.0), product of:
              0.090823986 = queryWeight, product of:
                2.199415 = idf(docFreq=13325, maxDocs=44218)
                0.041294612 = queryNorm
              0.13746344 = fieldWeight in 2493, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.199415 = idf(docFreq=13325, maxDocs=44218)
                0.03125 = fieldNorm(doc=2493)
          0.05739182 = weight(_text_:line in 2493) [ClassicSimilarity], result of:
            0.05739182 = score(doc=2493,freq=2.0), product of:
              0.23157367 = queryWeight, product of:
                5.6078424 = idf(docFreq=440, maxDocs=44218)
                0.041294612 = queryNorm
              0.24783395 = fieldWeight in 2493, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.6078424 = idf(docFreq=440, maxDocs=44218)
                0.03125 = fieldNorm(doc=2493)
      0.375 = coord(3/8)
    
    Abstract
    The question of whether biologists should continue to use the Linnaean hierarchy has been a hotly debated issue. Ereshefsky argues that biologists should abandon the Linnaean system and adopt an alternative that is in line with evolutionary theory. He then makes specific recommendations for a post-Linnaean method of classification.
    Content
    Part I: The historical turn - 1. The philosophy of classification; 2. A primer of biological taxonomy; 3. History and classification. Part II: The multiplicity of nature - 4. Species pluralism; 5. How to be a discerning pluralist. Part III: Hierarchies and nomenclature - 6. The evolution of the Linnaean hierarchy; 7. Post-Linnaean taxonomy; 8. The future of biological nomenclature
    Footnote
    Rez. in: KO 35(2008) no.4, S.255-259 (B. Hjoerland): "This book was published in 2000 simultaneously in hardback and as an electronic resource, and, in 2007, as a paperback. The author is a professor of philosophy at the University of Calgary, Canada. He has an impressive list of contributions, mostly addressing issues in biological taxonomy such as units of evolution, natural kinds and the species concept. The book is a scholarly criticism of the famous classification system developed by the Swedish botanist Carl Linnaeus (1707-1778). This system consists of both a set of rules for the naming of living organisms (biological nomenclature) and principles of classification. Linné's system has been used and adapted by biologists over a period of almost 250 years. Under the current system of codes, it is now applied to more than two million species of organisms. Inherent in the Linnaean system is the indication of hierarchic relationships. The Linnaean system has been justified primarily on the basis of stability. Although it has been criticized and alternatives have been suggested, it still has its advocates (e.g., Schuh, 2003). One of the alternatives being developed is The International Code of Phylogenetic Nomenclature, known as the PhyloCode for short, a system that radically alters the current nomenclatural rules. The new proposals have provoked hot debate on nomenclatural issues in biology. . . ."
  8. Mai, J.E.: ¬The future of general classification (2003) 0.04
    0.03634274 = product of:
      0.09691397 = sum of:
        0.047231287 = weight(_text_:retrieval in 5478) [ClassicSimilarity], result of:
          0.047231287 = score(doc=5478,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.37811437 = fieldWeight in 5478, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5478)
        0.03422346 = weight(_text_:use in 5478) [ClassicSimilarity], result of:
          0.03422346 = score(doc=5478,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.27065295 = fieldWeight in 5478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0625 = fieldNorm(doc=5478)
        0.0154592255 = weight(_text_:of in 5478) [ClassicSimilarity], result of:
          0.0154592255 = score(doc=5478,freq=6.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.23940048 = fieldWeight in 5478, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5478)
      0.375 = coord(3/8)
    
    Abstract
    Discusses problems related to accessing multiple collections using a single retrieval language. Surveys the concepts of interoperability and switching language. Finds that mapping between indexing languages will always be an approximation. Surveys the issues related to general classification and contrasts it with special classifications. Argues for the use of general classifications to provide access to collections nationally and internationally.
    Content
    Contribution to a special issue "Knowledge organization and classification in international information retrieval"
  9. Broughton, V.; Slavic, A.: Building a faceted classification for the humanities : principles and procedures (2007) 0.03
    0.033844307 = product of:
      0.067688614 = sum of:
        0.023615643 = weight(_text_:retrieval in 2875) [ClassicSimilarity], result of:
          0.023615643 = score(doc=2875,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.18905719 = fieldWeight in 2875, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2875)
        0.024199642 = weight(_text_:use in 2875) [ClassicSimilarity], result of:
          0.024199642 = score(doc=2875,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.19138055 = fieldWeight in 2875, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=2875)
        0.0154592255 = weight(_text_:of in 2875) [ClassicSimilarity], result of:
          0.0154592255 = score(doc=2875,freq=24.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.23940048 = fieldWeight in 2875, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2875)
        0.004414106 = product of:
          0.008828212 = sum of:
            0.008828212 = weight(_text_:on in 2875) [ClassicSimilarity], result of:
              0.008828212 = score(doc=2875,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.097201325 = fieldWeight in 2875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2875)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Purpose - This paper aims to provide an overview of principles and procedures involved in creating a faceted classification scheme for use in resource discovery in an online environment. Design/methodology/approach - Facet analysis provides an established rigorous methodology for the conceptual organization of a subject field, and the structuring of an associated classification or controlled vocabulary. This paper explains how that methodology was applied to the humanities in the FATKS project, where the objective was to explore the potential of facet analytical theory for creating a controlled vocabulary for the humanities, and to establish the requirements of a faceted classification appropriate to an online environment. A detailed faceted vocabulary was developed for two areas of the humanities within a broader facet framework for the whole of knowledge. Research issues included how to create a data model which made the faceted structure explicit and machine-readable and provided for its further development and use. Findings - In order to support easy facet combination in indexing, and facet searching and browsing on the interface, faceted classification requires a formalized data structure and an appropriate tool for its management. The conceptual framework of a faceted system proper can be applied satisfactorily to humanities, and fully integrated within a vocabulary management system. Research limitations/implications - The procedures described in this paper are concerned only with the structuring of the classification, and do not extend to indexing, retrieval and application issues. Practical implications - Many stakeholders in the domain of resource discovery consider developing their own classification system and supporting tools. The methods described in this paper may clarify the process of building a faceted classification and may provide some useful ideas with respect to the vocabulary maintenance tool. Originality/value - As far as the authors are aware there is no comparable research in this area.
    Source
    Journal of documentation. 63(2007) no.5, S.727-754
    Theme
    Klassifikationssysteme im Online-Retrieval
  10. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.03
    0.03339647 = product of:
      0.06679294 = sum of:
        0.014759776 = weight(_text_:retrieval in 3262) [ClassicSimilarity], result of:
          0.014759776 = score(doc=3262,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.11816074 = fieldWeight in 3262, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.026196882 = weight(_text_:use in 3262) [ClassicSimilarity], result of:
          0.026196882 = score(doc=3262,freq=12.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20717552 = fieldWeight in 3262, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.021057876 = weight(_text_:of in 3262) [ClassicSimilarity], result of:
          0.021057876 = score(doc=3262,freq=114.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.32610077 = fieldWeight in 3262, product of:
              10.677078 = tf(freq=114.0), with freq of:
                114.0 = termFreq=114.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.00477841 = product of:
          0.00955682 = sum of:
            0.00955682 = weight(_text_:on in 3262) [ClassicSimilarity], result of:
              0.00955682 = score(doc=3262,freq=6.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.10522352 = fieldWeight in 3262, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3262)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Footnote
    Rez. in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend of lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well developed research agendas such as Tudhope, and Priss; and relative newcomers such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of resources for those wishing to engage in further study, that now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico- synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly addresses this first part of Gnoli's agenda. Priss provides detailed discussion of facet-like structures in computer science (p. 245- 246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited. Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955) and a textbook written by Mills A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
    Several of the papers are clearly written as primers and neatly address the second agenda item: attracting others to the study and use of facet analysis. The most valuable papers are written in clear, approachable language. Vickery's paper (p. 145-160) is a clarion call for faceted classification and facet analysis. The heart of the paper is a primer for central concepts and techniques. Vickery explains the value of using faceted classification in document retrieval. Also provided are potential solutions to thorny interface and display issues with facets. Vickery looks to complementary themes in knowledge organization, such as thesauri and ontologies as potential areas for extending the facet concept. Broughton (p. 193-210) describes a rigorous approach to the application of facet analysis in the creation of a compatible thesaurus from the schedules of the 2nd edition of the Bliss Classification (BC2). This discussion of exemplary faceted thesauri, recent standards work, and difficulties encountered in the project will provide valuable guidance for future research in this area. Slavic (p. 257-271) provides a challenge to make faceted classification come 'alive' through promoting the use of machine-readable formats for use and exchange in applications such as Topic Maps and SKOS (Simple Knowledge Organization Systems), and as supported by the standard BS8723 (2005) Structured Vocabulary for Information Retrieval. She also urges designers of faceted classifications to get involved in standards work. Cheti and Paradisi (p. 223-241) outline a basic approach to converting an existing subject indexing tool, the Nuovo Soggetario, into a faceted thesaurus through the use of facet analysis. This discussion, well grounded in the canonical literature, may well serve as a primer for future efforts. Also useful for those who wish to construct faceted thesauri is the article by Tudhope and Binding (p. 211-222). This contains an outline of basic elements to be found in exemplar faceted thesauri, and a discussion of project FACET (Faceted Access to Cultural heritage Terminology) with algorithmically-based semantic query expansion in a dataset composed of items from the National Museum of Science and Industry indexed with AAT (Art and Architecture Thesaurus). This paper looks to the future hybridization of ontologies and facets through standards developments such as SKOS because of the "lightweight semantics" inherent in facets.
    Two of the papers revisit the interaction of facets with the theory of integrative levels, which posits that the organization of the natural world reflects increasingly interdependent complexity. This approach was tested as a basis for the creation of faceted classifications in the 1960s. These contemporary treatments of integrative levels are not discipline-driven as were the early approaches, but instead are ontological and phenomenological in focus. Dahlberg (p. 161-172) outlines the creation of the ICC (Information Coding System) and the application of the Systematifier in the generation of facets and the creation of a fully faceted classification. Gnoli (p. 177-192) proposes the use of fundamental categories as a way to redefine facets and fundamental categories in "more universal and level-independent ways" (p. 192). Given that Axiomathes has a stated focus on "contemporary issues in cognition and ontology" and the following thesis: "that real advances in contemporary science may depend upon a consideration of the origins and intellectual history of ideas at the forefront of current research," this venue seems well suited for the implementation of the stated agenda, to illustrate complementary approaches and to stimulate research. As situated, this special issue may well serve as a bridge to a more interdisciplinary dialogue about facet analysis than has previously been the case."
  11. Beghtol, C.: Naïve classification systems and the global information society (2004) 0.03
    0.033332326 = product of:
      0.06666465 = sum of:
        0.029519552 = weight(_text_:retrieval in 3483) [ClassicSimilarity], result of:
          0.029519552 = score(doc=3483,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.23632148 = fieldWeight in 3483, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3483)
        0.017640345 = weight(_text_:of in 3483) [ClassicSimilarity], result of:
          0.017640345 = score(doc=3483,freq=20.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.27317715 = fieldWeight in 3483, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3483)
        0.0055176322 = product of:
          0.0110352645 = sum of:
            0.0110352645 = weight(_text_:on in 3483) [ClassicSimilarity], result of:
              0.0110352645 = score(doc=3483,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.121501654 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
          0.5 = coord(1/2)
        0.013987125 = product of:
          0.02797425 = sum of:
            0.02797425 = weight(_text_:22 in 3483) [ClassicSimilarity], result of:
              0.02797425 = score(doc=3483,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.19345059 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Classification is an activity that transcends time and space and that bridges the divisions between different languages and cultures, including the divisions between academic disciplines. Classificatory activity, however, serves different purposes in different situations. Classifications for information retrieval can be called "professional" classifications and classifications in other fields can be called "naïve" classifications because they are developed by people who have no particular interest in classificatory issues. The general purpose of naïve classification systems is to discover new knowledge. In contrast, the general purpose of information retrieval classifications is to classify pre-existing knowledge. Different classificatory purposes may thus inform systems that are intended to span the cultural specifics of the globalized information society. This paper builds on previous research into the purposes and characteristics of naïve classifications. It describes some of the relationships between the purpose and context of a naïve classification, the units of analysis used in it, and the theory that the context and the units of analysis imply.
    Footnote
    Cf.: Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification". In: Knowledge organization. 37(2010) no.2, S.111-120.
    Pages
    S.19-22
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  12. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.03
    0.03222632 = product of:
      0.06445264 = sum of:
        0.016698781 = weight(_text_:retrieval in 2763) [ClassicSimilarity], result of:
          0.016698781 = score(doc=2763,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.13368362 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.01711173 = weight(_text_:use in 2763) [ClassicSimilarity], result of:
          0.01711173 = score(doc=2763,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.13532647 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.019452432 = weight(_text_:of in 2763) [ClassicSimilarity], result of:
          0.019452432 = score(doc=2763,freq=38.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.30123898 = fieldWeight in 2763, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.0111897 = product of:
          0.0223794 = sum of:
            0.0223794 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.0223794 = score(doc=2763,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also decides their difference in purpose: in AI, KR is mainly computer-application oriented or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS KR is conceptually oriented or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12.9.2004 17:22:35
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  13. Broughton, V.: Essential classification (2004) 0.03
    0.028888686 = product of:
      0.05777737 = sum of:
        0.016698781 = weight(_text_:retrieval in 2824) [ClassicSimilarity], result of:
          0.016698781 = score(doc=2824,freq=8.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.13368362 = fieldWeight in 2824, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.01711173 = weight(_text_:use in 2824) [ClassicSimilarity], result of:
          0.01711173 = score(doc=2824,freq=8.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.13532647 = fieldWeight in 2824, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.018127548 = weight(_text_:of in 2824) [ClassicSimilarity], result of:
          0.018127548 = score(doc=2824,freq=132.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.28072193 = fieldWeight in 2824, product of:
              11.489125 = tf(freq=132.0), with freq of:
                132.0 = termFreq=132.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.005839314 = product of:
          0.011678628 = sum of:
            0.011678628 = weight(_text_:on in 2824) [ClassicSimilarity], result of:
              0.011678628 = score(doc=2824,freq=14.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.12858528 = fieldWeight in 2824, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Classification is a crucial skill for all information workers involved in organizing collections, but it is a difficult concept to grasp - and is even more difficult to put into practice. Essential Classification offers full guidance on how to go about classifying a document from scratch. This much-needed text leads the novice classifier step by step through the basics of subject cataloguing, with an emphasis on practical document analysis and classification. It deals with fundamental questions of the purpose of classification in different situations, and the needs and expectations of end users. The novice is introduced to the ways in which document content can be assessed, and how this can best be expressed for translation into the language of specific indexing and classification systems. The characteristics of the major general schemes of classification are discussed, together with their suitability for different classification needs.
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display is reader-friendly; it is in fact what makes this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
    In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a Statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus an the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWK, etc.). If it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of essentials of classification scheme application. The following five chapters present in turn each one of the three major and currently used bibliographic classification schemes, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) deserves only a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on par with the first seven chapters for its theoretical content. Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, advantages and disadvantages of using print versions or e-versions of classification schemes, choice of classification scheme, general versus special scheme. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters an classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and not anymore on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation."
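    The review's potato example lends itself to a small worked illustration of UDC number building. The Python sketch below is not from the book or the review; the two-entry lookup table is illustrative, and only the colon combination itself (Potato - Sociology -> 635.21:316) is taken from the text above.

        # Illustrative sketch of UDC colon combination: concepts identified in
        # document analysis are looked up in a tiny, made-up notation table and
        # joined with ":" to form a compound class number.
        udc_notation = {
            "Potatoes": "635.21",    # notation cited in the review
            "Sociology": "316",      # notation cited in the review
        }

        def colon_combine(concepts, table=udc_notation):
            """Join the notations of the analysed concepts with the UDC colon."""
            return ":".join(table[concept] for concept in concepts)

        print(colon_combine(["Potatoes", "Sociology"]))   # -> 635.21:316

    As the reviewer asks, the "history" aspect of the example title is not expressed at all, which is exactly the kind of gap a novice would need the textbook to explain.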
  14. Gnoli, C.; Mei, H.: Freely faceted classification for Web-based information retrieval (2006) 0.03
    0.0258523 = product of:
      0.06893947 = sum of:
        0.04338471 = weight(_text_:retrieval in 534) [ClassicSimilarity], result of:
          0.04338471 = score(doc=534,freq=6.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.34732026 = fieldWeight in 534, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
        0.018933605 = weight(_text_:of in 534) [ClassicSimilarity], result of:
          0.018933605 = score(doc=534,freq=16.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2932045 = fieldWeight in 534, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
        0.006621159 = product of:
          0.013242318 = sum of:
            0.013242318 = weight(_text_:on in 534) [ClassicSimilarity], result of:
              0.013242318 = score(doc=534,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.14580199 = fieldWeight in 534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=534)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    In free classification, each concept is expressed by a constant notation, and classmarks are formed by free combinations of them, allowing the retrieval of records from a database by searching any of the component concepts. A refinement of free classification is freely faceted classification, where notation can include facets expressing the kind of relations held between the concepts. The Integrative Level Classification project aims at testing free and freely faceted classification by applying them to small bibliographical samples in various domains. A sample, called the Dandelion Bibliography of Facet Analysis, is described here. Experience was gained by using this system to classify 300 specialized papers dealing with facet analysis itself, recorded in a MySQL database, and by building a Web interface exploiting freely faceted notation. The interface is written in PHP and uses string functions to process the queries and to yield relevant results, selected and ordered according to the principles of integrative levels.
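    A minimal sketch of this component-concept retrieval, in Python under stated assumptions: the project itself used PHP string functions over a MySQL table, and the ";" and ":" separator conventions below are invented for illustration, not the ILC notation.

        # Freely faceted classmarks: free combinations of concept notations,
        # optionally prefixed by a facet indicator. Retrieval matches any
        # component concept, optionally restricted to a facet.
        def parse_classmark(classmark):
            """Split a classmark into (facet, concept) pairs."""
            components = []
            for part in classmark.split(";"):
                part = part.strip()
                if ":" in part:
                    facet, concept = (s.strip() for s in part.split(":", 1))
                else:
                    facet, concept = None, part
                components.append((facet, concept))
            return components

        def search(records, concept, facet=None):
            """Return records whose classmark contains the concept, optionally within a facet."""
            hits = []
            for record in records:
                for f, c in parse_classmark(record["classmark"]):
                    if c == concept and (facet is None or f == facet):
                        hits.append(record)
                        break
            return hits

        records = [
            {"title": "Facet analysis in botany", "classmark": "plants; method: facet analysis"},
            {"title": "Colon Classification history", "classmark": "classification; person: Ranganathan"},
        ]
        print([r["title"] for r in search(records, "facet analysis", facet="method")])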
    Source
    New review of hypermedia and multimedia. 12(2006) no.1, S.63-81
    Theme
    Klassifikationssysteme im Online-Retrieval
  15. McIlwaine, I.C.: Where have all the flowers gone? : An investigation into the fate of some special classification schemes (2003) 0.02
    0.023857918 = product of:
      0.06362111 = sum of:
        0.016698781 = weight(_text_:retrieval in 2764) [ClassicSimilarity], result of:
          0.016698781 = score(doc=2764,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.13368362 = fieldWeight in 2764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
        0.02963839 = weight(_text_:use in 2764) [ClassicSimilarity], result of:
          0.02963839 = score(doc=2764,freq=6.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23439234 = fieldWeight in 2764, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
        0.01728394 = weight(_text_:of in 2764) [ClassicSimilarity], result of:
          0.01728394 = score(doc=2764,freq=30.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.26765788 = fieldWeight in 2764, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
      0.375 = coord(3/8)
    
    Abstract
    Prior to the OPAC many institutions devised classifications to suit their special needs. Others expanded or altered general schemes to accommodate specific approaches. A driving force in the creation of these classifications was the Classification Research Group, celebrating its golden jubilee in 2002, whose work created a framework and body of principles that remain valid for the retrieval needs of today. The paper surveys some of these special schemes and highlights the fundamental principles which remain valid. 1. Introduction The distinction between a general and a special classification scheme is made frequently in the textbooks, but it is one that is sometimes difficult to draw. The Library of Congress Classification could be described as the special classification par excellence. Normally, however, a special classification is taken to be one that is restricted to a specific subject, and quite often used in one specific context only, either a library or a bibliographic listing, or for a specific purpose such as a search engine, and it is in this sense that I propose to examine some of these schemes. Today, there is a widespread preference for searching on words as a supplement to the use of a standard system, usually the Dewey Decimal Classification (DDC). This is enhanced by the ability to search documents full-text in a computerized environment, a situation that did not exist 20 or 30 years ago. Today's situation is a great improvement in many ways, but it does depend upon the words used by the author and the searcher corresponding, and often presupposes the use of English. In libraries, the use of co-operative services and precatalogued records already provided with classification data has also spelt the demise of the special scheme. In many instances, the survival of a special classification depends upon its creator and, with the passage of time, this becomes inevitably more precarious.
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  16. McIlwaine, I.C.: ¬A question of place (2004) 0.02
    0.02309519 = product of:
      0.061587177 = sum of:
        0.020873476 = weight(_text_:retrieval in 2650) [ClassicSimilarity], result of:
          0.020873476 = score(doc=2650,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.16710453 = fieldWeight in 2650, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2650)
        0.021389665 = weight(_text_:use in 2650) [ClassicSimilarity], result of:
          0.021389665 = score(doc=2650,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.1691581 = fieldWeight in 2650, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2650)
        0.019324033 = weight(_text_:of in 2650) [ClassicSimilarity], result of:
          0.019324033 = score(doc=2650,freq=24.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2992506 = fieldWeight in 2650, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2650)
      0.375 = coord(3/8)
    
    Abstract
    This paper looks at the problems raised by maintaining an Area Table in a general scheme of classification. It examines the tools available to assist in producing a standardized listing and demonstrates how recent developments in the Universal Decimal Classification enable users to have a retrieval tool suitable for use in a networked environment which acts as both a gazetteer and a classification.
    Content
    1. Introduction The representation of place in classification schemes presents a number of problems. This paper examines some of them and presents different ways in which a solution may be sought. Firstly, what is meant by place? The simple answer is a geographical area, large or small. The reality is not so simple. Place, or topos to Aristotle, was more than just an area; it was a state of mind. But even staying on the less philosophical plane, the way in which a place can be expressed is infinitely variable. Toponymy is a well defined field of study, comparable with taxonomy in the biological sciences. It comprehends the proper name by which any geographical entity is known: a part of the world, a feature of the earth's surface, an organic aggregate (reef, forest), an organizational unit (country, borough, diocese), the limits of the Earth (poles, hemispheres), parts of the Earth (oceans, continents), lakes, mountain passes, capital cities or parts of the sea.
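    A small sketch of the dual gazetteer/classification behaviour the abstract describes, assuming a notation-based area table in the UDC style; the codes below are illustrative approximations chosen for this note, not an extract of the published tables.

        # A place name resolves to an area notation (gazetteer use), and the
        # notation's prefixes recover the broader areas (classification use).
        area_table = {
            "(4)": "Europe",
            "(410)": "United Kingdom",
            "(44)": "France",
        }
        gazetteer = {name: code for code, name in area_table.items()}

        def broader(code, table=area_table):
            """Yield (code, name) of broader areas by shortening the notation."""
            digits = code.strip("()")
            while len(digits) > 1:
                digits = digits[:-1]
                parent = "(" + digits + ")"
                if parent in table:
                    yield parent, table[parent]

        code = gazetteer["United Kingdom"]       # gazetteer lookup
        print(code, list(broader(code)))         # hierarchical browse -> (410) [('(4)', 'Europe')]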
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  17. Giunchiglia, F.; Zaihrayeu, I.; Farazi, F.: Converting classifications into OWL ontologies (2009) 0.02
    0.019539041 = product of:
      0.05210411 = sum of:
        0.025667597 = weight(_text_:use in 4690) [ClassicSimilarity], result of:
          0.025667597 = score(doc=4690,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 4690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=4690)
        0.014968331 = weight(_text_:of in 4690) [ClassicSimilarity], result of:
          0.014968331 = score(doc=4690,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.23179851 = fieldWeight in 4690, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4690)
        0.011468184 = product of:
          0.022936368 = sum of:
            0.022936368 = weight(_text_:on in 4690) [ClassicSimilarity], result of:
              0.022936368 = score(doc=4690,freq=6.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.25253648 = fieldWeight in 4690, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4690)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Classification schemes, such as the DMoZ web directory, provide a convenient and intuitive way for humans to access classified contents. While classification schemes are easy for humans to deal with, they remain hard for automated software agents to reason about. Among other things, this hardness is conditioned by the ambiguous nature of the natural language used to describe classification categories. In this paper we describe how classification schemes can be converted into OWL ontologies, thus enabling reasoning on them by Semantic Web applications. The proposed solution is based on a two-phase approach in which category names are first encoded in a concept language and then, together with the structure of the classification scheme, are converted into an OWL ontology. We demonstrate the practical applicability of our approach by showing how the results of reasoning on these OWL ontologies can help improve the organization and use of web directories.
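    A hedged sketch of the second phase in Python, using rdflib (a library choice of this note, not of the paper), with the first phase reduced to simple naming: each category becomes an OWL class and each broader/narrower edge an rdfs:subClassOf axiom. The paper's actual first phase encodes category names in a concept language before this step.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        EX = Namespace("http://example.org/webdir#")   # hypothetical namespace

        # Toy fragment of a web-directory classification: category names and
        # (child, parent) edges of the hierarchy.
        categories = ["Computers", "Programming", "Java"]
        edges = [("Programming", "Computers"), ("Java", "Programming")]

        g = Graph()
        g.bind("owl", OWL)
        for name in categories:
            g.add((EX[name], RDF.type, OWL.Class))
            g.add((EX[name], RDFS.label, Literal(name)))
        for child, parent in edges:
            g.add((EX[child], RDFS.subClassOf, EX[parent]))

        print(g.serialize(format="turtle"))

    A reasoner over the resulting ontology can then infer, for example, that everything classed under Java is also classed under Computers.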
  18. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.02
    0.019400286 = product of:
      0.03880057 = sum of:
        0.010436738 = weight(_text_:retrieval in 2467) [ClassicSimilarity], result of:
          0.010436738 = score(doc=2467,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.08355226 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.010694833 = weight(_text_:use in 2467) [ClassicSimilarity], result of:
          0.010694833 = score(doc=2467,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.08457905 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.011500099 = weight(_text_:of in 2467) [ClassicSimilarity], result of:
          0.011500099 = score(doc=2467,freq=34.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.17808972 = fieldWeight in 2467, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.006168901 = product of:
          0.012337802 = sum of:
            0.012337802 = weight(_text_:on in 2467) [ClassicSimilarity], result of:
              0.012337802 = score(doc=2467,freq=10.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.135843 = fieldWeight in 2467, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2467)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desired. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
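    A minimal sketch of the movie-listing idea above; the showtime data and facet names are invented for illustration.

        # Each showing is described by independent facets; any combination of
        # facet values filters the list, so every browsing path reaches the
        # same records.
        showings = [
            {"movie": "James Bond", "theatre": "Roxy", "neighbourhood": "Downtown", "time": "19:00"},
            {"movie": "James Bond", "theatre": "Bijou", "neighbourhood": "Little Finland", "time": "14:30"},
            {"movie": "Short Film Festival", "theatre": "Roxy", "neighbourhood": "Downtown", "time": "21:15"},
        ]

        def browse(items, **facets):
            """Keep items matching every requested facet value."""
            return [item for item in items
                    if all(item.get(f) == v for f, v in facets.items())]

        print(browse(showings, movie="James Bond"))               # where is it playing?
        print(browse(showings, theatre="Roxy"))                   # what's at the Roxy?
        print(browse(showings, neighbourhood="Little Finland"))   # anything nearby this afternoon?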
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful for those both new to or familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
    Theme
    Klassifikationssysteme im Online-Retrieval
  19. Hjoerland, B.; Nicolaisen, J.: Scientific and scholarly classifications are not "naïve" : a comment to Begthol (2003) (2004) 0.02
    0.01925009 = product of:
      0.051333573 = sum of:
        0.036153924 = weight(_text_:retrieval in 3023) [ClassicSimilarity], result of:
          0.036153924 = score(doc=3023,freq=6.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.28943354 = fieldWeight in 3023, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3023)
        0.009662016 = weight(_text_:of in 3023) [ClassicSimilarity], result of:
          0.009662016 = score(doc=3023,freq=6.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.1496253 = fieldWeight in 3023, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3023)
        0.0055176322 = product of:
          0.0110352645 = sum of:
            0.0110352645 = weight(_text_:on in 3023) [ClassicSimilarity], result of:
              0.0110352645 = score(doc=3023,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.121501654 = fieldWeight in 3023, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3023)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Relationships between Knowledge Organization in LIS and Scientific & Scholarly Classifications. In her paper "Classification for Information Retrieval and Classification for Knowledge Discovery: Relationships between 'Professional' and 'Naive' Classifications" (KO v30, no.2, 2003), Beghtol outlines how scholarly activities and research lead to classification systems which subsequently are disseminated in publications, which are classified in information retrieval systems, retrieved by users and again used in scholarly activities, and so on. We think this model is correct and that its point is important. What we are reacting to is the fact that Beghtol describes the classifications developed by scholars as "naive" while she describes the classifications developed by librarians and information scientists as "professional." We fear that this unfortunate terminology is rooted in deeply anchored misjudgments about the relationships between scientific and scholarly classification on the one side and LIS classifications on the other. Only a correction of this misjudgment may give us in the field of knowledge organization a chance to do a job that is not totally disrespected and disregarded by the rest of the intellectual world.
    Footnote
    Bezugnahme auf: Beghtol, C.: Classification for information retrieval and classification for knowledge discovery: relationships between 'professional' and 'naive' classifications" in: Knowledge organization. 30(2003), no.2, S.64-73; vgl. dazu auch die Erwiderung von C. Beghtol in: Knowledge organization. 31(2004) no.1, S.62-63.
  20. Beghtol, C.: Response to Hjoerland and Nicolaisen (2004) 0.02
    0.018979685 = product of:
      0.050612494 = sum of:
        0.014611433 = weight(_text_:retrieval in 3536) [ClassicSimilarity], result of:
          0.014611433 = score(doc=3536,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.11697317 = fieldWeight in 3536, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3536)
        0.014972764 = weight(_text_:use in 3536) [ClassicSimilarity], result of:
          0.014972764 = score(doc=3536,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.11841066 = fieldWeight in 3536, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3536)
        0.021028299 = weight(_text_:of in 3536) [ClassicSimilarity], result of:
          0.021028299 = score(doc=3536,freq=58.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.32564276 = fieldWeight in 3536, product of:
              7.615773 = tf(freq=58.0), with freq of:
                58.0 = termFreq=58.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3536)
      0.375 = coord(3/8)
    
    Abstract
    I am writing to correct some of the misconceptions that Hjoerland and Nicolaisen appear to have about my paper in the previous issue of Knowledge Organization. I would like to address aspects of two of these misapprehensions. The first is the faulty interpretation they have given to my use of the term "naïve classification," and the second is the kinds of classification systems that they appear to believe are discussed in my paper as examples of "naïve classifications." First, the term "naïve classification" is directly analogous to the widely understood and widely accepted term "naïve indexing." It is not analogous to the terms to which Hjoerland and Nicolaisen compare it (i.e., "naïve physics", "naïve biology"). The term as I have defined it is not pejorative. It does not imply that the scholars who have developed naïve classifications have not given profoundly serious thought to their own scholarly work. My paper distinguishes between classifications for new knowledge developed by scholars in the various disciplines for the purposes of advancing disciplinary knowledge ("naïve classifications") and classifications for previously existing knowledge developed by information professionals for the purposes of creating access points in information retrieval systems ("professional classifications"). This distinction rests primarily on the purpose of the kind of classification system in question and only secondarily on the knowledge base of the scholars who have created it. Hjoerland and Nicolaisen appear to have misunderstood this point, which is made clearly and adequately in the title, in the abstract and throughout the text of my paper.
    Second, the paper posits that these different reasons for creating classification systems strongly influence the content and extent of the two kinds of classifications, but not necessarily their structures. By definition, naïve classifications for new knowledge have been developed for discrete areas of disciplinary inquiry in new areas of knowledge. These classifications do not attempt to classify the whole of that disciplinary area. That is, naïve classifications have an explicit purpose that is significantly different from the purpose of the major disciplinary classifications Hjoerland and Nicolaisen provide as examples of classifications they think I discuss under the rubric of "naïve classifications" (e.g., classifications for the entire field of archaeology, biology, linguistics, music, psychology, etc.). My paper is not concerned with these important classifications for major disciplinary areas. Instead, it is concerned solely and specifically with scholarly classifications for small areas of new knowledge within these major disciplines (e.g., cloth of aresta, double harpsichords, child-rearing practices, anomalous phenomena, etc.). Thus, I have nowhere suggested or implied that the broad disciplinary classifications mentioned by Hjoerland and Nicolaisen are appropriately categorized as "naïve classifications." For example, I have not associated the Periodic System of the Elements with naïve classifications, as Hjoerland and Nicolaisen state that I have done. Indeed, broad classifications of this type fall well outside the definition of naïve classifications set out in my paper. In this case, too, I believe that Hjoerland and Nicolaisen have misunderstood an important point in my paper. I agree with a number of points made in Hjoerland and Nicolaisen's paper. In particular, I agree that researchers in the knowledge organization field should adhere to the highest standards of scholarly and scientific precision. For that reason, I am glad to have had the opportunity to respond to their paper.

Languages

  • e 80
  • chi 1

Types

  • a 69
  • m 7
  • el 4
  • s 2
  • b 1