Search (194 results, page 1 of 10)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  • type_ss:"a"
  1. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.08
    0.08254091 = product of:
      0.13756818 = sum of:
        0.098406665 = weight(_text_:philosophy in 57) [ClassicSimilarity], result of:
          0.098406665 = score(doc=57,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.426834 = fieldWeight in 57, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
        0.01935205 = weight(_text_:of in 57) [ClassicSimilarity], result of:
          0.01935205 = score(doc=57,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.29624295 = fieldWeight in 57, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.039618924 = score(doc=57,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
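The explain tree above composes the entry score from per-term tf-idf weights under Lucene's ClassicSimilarity. As a check, a minimal sketch reproducing the numbers for the "philosophy" term, with all constants read off the tree (the formulas are ClassicSimilarity's: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))):

```python
import math

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one term's weight as shown in a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 5.5189433 for docFreq=481
    query_weight = idf * query_norm                   # 0.23055021
    field_weight = tf * idf * field_norm              # 0.426834
    return query_weight * field_weight                # 0.098406665

w = term_weight(freq=2.0, doc_freq=481, max_docs=44218,
                query_norm=0.04177434, field_norm=0.0546875)

# Entry score = (sum of the three term weights) * coord(3/5)
score = (w + 0.01935205 + 0.019809462) * 0.6          # 0.08254091
```

The same arithmetic, with different frequencies and field norms, accounts for every score in this result list.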
    
    Abstract
Conceptual Knowledge Markup Language (CKML), an application of XML, is a new standard being promoted for the specification of online conceptual knowledge (Kent and Shrivastava, 1998). CKML follows the philosophy of Conceptual Knowledge Processing (Wille, 1982), a principled approach to knowledge representation and data analysis, which advocates the development of methodologies and techniques to support people in their rational thinking, judgement and actions. CKML was developed and is being used in the WAVE networked information discovery and retrieval system (Kent and Neuss, 1994) as a standard for the specification of conceptual knowledge.
    Date
    30.12.2001 16:22:41
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
  2. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.04
    Abstract
The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but as the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. The paper also considers how alphabetically arranged subject indexes may use controlled categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
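The facet-analytic procedure Mills describes (sort terms into mutually exclusive facets, then synthesize a compound class by citing one focus per facet in a fixed citation order) can be illustrated with a minimal sketch. The facet names, citation order, and notation here are invented for illustration; they are not taken from BC2:

```python
# Hypothetical facets and citation order, for illustration only.
CITATION_ORDER = ["discipline", "object", "operation", "agent"]

FACETS = {
    "discipline": {"medicine": "M"},
    "object": {"heart": "H", "lung": "L"},
    "operation": {"surgery": "S"},
    "agent": {"robot": "R"},
}

def synthesize(terms):
    """Build a compound notation by citing one focus per facet, in citation order."""
    parts = []
    for facet in CITATION_ORDER:
        for term in terms:
            if term in FACETS[facet]:
                parts.append(FACETS[facet][term])
    return "".join(parts)

# "robot surgery of the heart in medicine" files at one predictable place:
print(synthesize({"medicine", "heart", "surgery", "robot"}))  # -> MHSR
```

Because the citation order is fixed, any combination of foci yields exactly one notation, which is the "optimally predictable" locating that the abstract claims for faceted schemes.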
    Footnote
Article in a special issue: The philosophy of information
  3. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.04
    Abstract
This paper documents the continuing relevance of facet analysis as a technique for searching and organising WWW-based materials. The two approaches underlying WWW searching and indexing - word-based and concept-based indexing - are outlined. It is argued that facet analysis, as an a posteriori approach to classification using words from the subject field as the concept terms in the derived classification, represents an excellent approach to searching and organising the results of WWW searches using either search engines or search directories. Finally it is argued that the underlying philosophy of facet analysis is better suited to the disparate nature of WWW resources and searchers than the assumptions of contemporary IR research.
    This article gives a cheerfully brief and undetailed account of how to make a faceted classification system, then describes information retrieval and searching on the web. It concludes by saying that facets would be excellent in helping users search and browse the web, but offers no real clues as to how this can be done.
  4. Hill, J.S.: Online classification number access : some practical considerations (1984) 0.03
    Source
    Journal of academic librarianship. 10(1984), S.17-22
  5. McIlwaine, I.C.: ¬The UDC and the World Wide Web (2003) 0.02
    Abstract
    The paper examines the potentiality of the Universal Decimal Classification as a means for retrieving subjects from the World Wide Web. The analytico-synthetic basis of the scheme provides the facility to link concepts at the input or search stage and to isolate concepts via the notation so as to retrieve the separate parts of a compound subject individually if required. Its notation permits hierarchical searching and overrides the shortcomings of natural language. Recent revisions have been constructed with this purpose in mind, the most recent being for Management. The use of the classification embedded in metadata, as in the GERHARD system or as a basis for subject trees is discussed. Its application as a gazetteer is another Web application to which it is put. The range of up to date editions in many languages and the availability of a Web-based version make its use as a switching language increasingly valuable.
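The two retrieval moves the abstract attributes to UDC notation (hierarchical searching, and isolating the parts of a compound subject built with the colon connector) can be sketched over a toy catalogue. The record IDs are hypothetical; the notations are illustrative UDC-style numbers:

```python
# Toy catalogue of UDC-style notations (illustrative values).
catalogue = {
    "rec1": "004.738.5:025.4",   # a colon-combined compound subject
    "rec2": "025.4",
    "rec3": "004.6",
}

def facets_of(notation):
    """Isolate the constituent concepts of a colon-combined compound."""
    return notation.split(":")

def hierarchical_search(prefix):
    """Retrieve every record classed at or below a notation, via prefix match."""
    return sorted(
        rec for rec, n in catalogue.items()
        if any(part.startswith(prefix) for part in facets_of(n))
    )

print(hierarchical_search("025.4"))   # -> ['rec1', 'rec2']
print(hierarchical_search("004"))     # -> ['rec1', 'rec3']
```

Splitting the compound lets rec1 be retrieved under either of its constituent concepts, which is the "isolate concepts via the notation" facility the paper describes.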
    Source
    Subject retrieval in a networked environment: Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC. Ed.: I.C. McIlwaine
  6. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    Date
    22. 6.2002 19:42:47
  7. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.02
    Date
    8. 1.2007 12:22:40
  8. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.02
    Abstract
This article gives an overview of the design and organisation of DutchESS, a Dutch information subject gateway created as a national collaborative effort of the National Library and a number of academic libraries. The combined centralised and distributed model of DutchESS is discussed, as well as its selection policy, metadata format, classification scheme and retrieval options. Some options for future collaboration at an international level are also explored.
    Date
    22. 6.2002 19:39:23
  9. Ellis, D.; Vasconcelos, A.: ¬The relevance of facet analysis for World Wide Web subject organization and searching (2000) 0.02
    Abstract
    Different forms of indexing and search facilities available on the Web are described. Use of facet analysis to structure hypertext concept structures is outlined in relation to work on (1) development of hypertext knowledge bases for designers of learning materials and (2) construction of knowledge based hypertext interfaces. The problem of lack of closeness between page designers and potential users is examined. Facet analysis is suggested as a way of alleviating some difficulties associated with this problem of designing for the unknown user.
    This is a revised version of the earlier article by Ellis and Vasconcelos (1999) (see Not Relevant, below), though that is not indicated, and much of it is identical, word for word. There is a new section covering the work of Elizabeth Duncan, which is useful and informative, but the reader is better advised to go to the originals if available.
    Source
    Journal of Internet cataloging. 2(2000) nos.3/4, S.97-114
  10. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.02
    Abstract
The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
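The abstract does not describe how the WWLib classifier (written in Java) actually assigns DDC classes, but the general idea of automatic classification against a scheme can be sketched as keyword overlap. Everything below is a hypothetical toy, not the WWLib method: the class captions, keyword sets, and scoring are invented for illustration:

```python
# Toy DDC fragment: class number -> illustrative keyword set.
DDC = {
    "004": {"computer", "internet", "web", "software"},
    "020": {"library", "cataloguing", "classification"},
    "540": {"chemistry", "molecule", "reaction"},
}

def classify(text):
    """Assign the DDC class whose keyword set best overlaps the document's words."""
    words = set(text.lower().split())
    return max(DDC, key=lambda c: len(DDC[c] & words))

print(classify("a web search engine for internet resources"))  # -> 004
```

A production classifier would need weighting, stemming, and the full schedules, but the core step (score each class against the document's vocabulary, keep the best) is the same shape.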
    Date
    1. 8.1996 22:08:06
    Footnote
Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; cf. also: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
  11. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1998) 0.01
    Abstract
The knowledge structures that form traditional library classification schemes hold great potential for improving resource description and discovery on the Internet and for organizing electronic document collections. The advantages of assigning subject tokens (classes) to documents from a scheme like the DDC system are well documented.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  12. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  13. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.01
    Abstract
In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented classification with multilingual thesaurus descriptors and implemented this solution in an automated way. The authors illustrate a solution implemented in a BiblioPhil library system. The standard formats used are UNIMARC for subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML support for data transfer. The multilingual thesaurus was built according to existing standards, the constituent parts of the classification notations being used as the basis for search terms in the multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are used to expand the number of concepts and are given in Romanian, English and French. This approach saves the time of the indexer and provides more user-friendly and easier access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users.
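The core mapping the article describes (each constituent UDC notation carries verbal equivalents in Romanian, English and French, so a query in any of the three languages can be resolved to the notation) can be sketched minimally. The notations and descriptors below are illustrative, not drawn from the BiblioPhil authority file:

```python
# Illustrative UDC-to-multilingual-descriptor mapping.
thesaurus = {
    "025.4": {"en": "classification", "fr": "classification", "ro": "clasificare"},
    "004.738.5": {"en": "Internet", "fr": "Internet", "ro": "Internet"},
}

def expand(term):
    """Resolve a search term in any language to the matching UDC notations."""
    term = term.lower()
    return sorted(
        notation for notation, langs in thesaurus.items()
        if term in (v.lower() for v in langs.values())
    )

print(expand("clasificare"))  # -> ['025.4']
```

Once the query is resolved to a notation, retrieval proceeds on the language-independent UDC number, which is what makes the catalogue searchable across the three languages.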
    Date
    22. 7.2010 20:40:56
  14. Dack, D.: Australian attends conference on Dewey (1989) 0.01
    Abstract
Edited version of a report to the Australian Library and Information Association on the Conference on classification theory in the computer age, Albany, New York, 18-19 Nov 88, and on the meeting of the Dewey Editorial Policy Committee which preceded it. The focus of the Editorial Policy Committee meeting lay in the following areas: browsing; potential for improved subject access; system design; potential conflict between shelf location and information retrieval; and users. At the Conference on classification theory in the computer age the following papers were presented: Applications of artificial intelligence to bibliographic classification, by Irene Travis; Automation and classification, by Elaine Svenonius; Subject classification and language processing for retrieval in large data bases, by Diana Scott; Implications for information processing, by Carol Mandel; and Implications for information science education, by Richard Halsey.
    Date
    8.11.1995 11:52:22
  15. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    Abstract
    This is a report on how Doyle and others made a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why they did it, how, their use of OPML and XFML, how they did research to find terms and categories, and they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
  16. Drabenstott, K.M.: Classification to the rescue : handling the problems of too many and too few retrievals (1996)
    Abstract
    The first studies of online catalog use demonstrated that the problems of too many and too few retrievals plagued the earliest online catalog users. Despite 15 years of system development, implementation, and evaluation, these problems still adversely affect the subject searches of today's online catalog users. In fact, the large-retrievals problem has grown more acute due to the growth of online catalog databases. This paper explores the use of library classifications for consolidating and summarizing high-posted subject searches and for handling subject searches that result in no or too few retrievals. Findings are presented in the form of generalizations about retrievals and library classifications, needed improvements to classification terminology, and suggestions for improved functionality to facilitate the display of retrieved titles in online catalogs
    Source
    Knowledge organization and change: Proceedings of the Fourth International ISKO Conference, 15-18 July 1996, Library of Congress, Washington, DC. Ed.: R. Green
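    The consolidation idea in the abstract above can be sketched in a few lines: when a subject search is high-posted, group the retrievals by the broad class of their classification number and present the counts as a browsable summary. The record structure and DDC numbers below are invented for illustration, not taken from Drabenstott's study.

    ```python
    # Hedged sketch: summarize a large retrieval set by broad DDC class.
    from collections import Counter

    # Invented sample retrievals (title + DDC number)
    retrievals = [
        {"title": "Intro to computing", "ddc": "004.16"},
        {"title": "Networks", "ddc": "004.6"},
        {"title": "Math methods", "ddc": "510.2"},
    ]

    def summarize_by_class(records, digits=3):
        """Count retrievals per broad class (first `digits` of the DDC number)."""
        return Counter(r["ddc"][:digits] for r in records)

    # summarize_by_class(retrievals) -> Counter({'004': 2, '510': 1})
    ```

    A catalog interface could then show each class caption with its posting count, letting the user narrow the search by class rather than scan the full result list.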
  17. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999)
    Abstract
    A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
    Source
    Library trends. 48(1999) no.1, S.22-47
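    The four steps Kwasnik names (choose facets, develop facets, analyze entities, make a citation order) can be sketched concretely. The facets and example domain below are hypothetical, chosen only to show the mechanics.

    ```python
    # Minimal sketch of faceted classification, assuming an invented
    # cookbook domain. Steps 1-2: choose facets and develop their terms.
    facets = {
        "cuisine": ["italian", "thai", "mexican"],
        "course": ["starter", "main", "dessert"],
        "technique": ["baking", "grilling", "raw"],
    }

    # Step 3: analyze an entity by assigning one term per applicable facet.
    # Facets are hospitable: an entity need not use every facet.
    def analyze(entity_terms):
        return {f: t for f, t in entity_terms.items()
                if t in facets.get(f, [])}

    # Step 4: the citation order fixes the sequence in which facet terms
    # are combined into a single class string.
    CITATION_ORDER = ["cuisine", "course", "technique"]

    def class_string(analysis):
        return "/".join(analysis[f] for f in CITATION_ORDER if f in analysis)

    # class_string(analyze({"cuisine": "thai", "course": "main"})) -> "thai/main"
    ```

    The separation of analysis from citation order is what makes facets flexible: the same analyzed entity can be re-cited in a different order, or browsed facet by facet, without reclassifying anything.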
  18. Chandler, A.; LeBlanc, J.: Exploring the potential of a virtual undergraduate library collection based on the hierarchical interface to LC Classification (2006)
    Abstract
    The Hierarchical Interface to Library of Congress Classification (HILCC) is a system developed by the Columbia University Library to leverage call number data from the MARC holdings records in Columbia's online catalog to create a structured, hierarchical menuing system that provides subject access to the library's electronic resources. In this paper, the authors describe a research initiative at the Cornell University Library to discover if the Columbia HILCC scheme can be used as developed or in modified form to create a virtual undergraduate print collection outside the context of the traditional online catalog. Their results indicate that, with certain adjustments, an HILCC model can indeed be used to represent the holdings of a large research library's undergraduate collection of approximately 150,000 titles, but that such a model is not infinitely scalable and may require a new approach to browsing such a large information space.
    Date
    10. 9.2000 17:38:22
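    The core HILCC mechanism described above, routing an LC call number into a hierarchical subject menu, amounts to a range lookup on the class letters and number. The category paths and ranges below are invented samples, not Columbia's actual mapping table.

    ```python
    # Illustrative sketch of call-number-to-menu routing, assuming a tiny
    # hypothetical mapping table (real HILCC tables are far larger).
    import re

    # (category path, class letters, low, high) -- invented sample rows
    RANGES = [
        (("Social Sciences", "Economics"), "HB", 1, 9999),
        (("Science", "Mathematics"), "QA", 1, 939),
    ]

    def categorize(call_number):
        """Return the hierarchical menu path for an LC call number, or None."""
        m = re.match(r"([A-Z]+)\s*(\d+)", call_number)
        if not m:
            return None
        letters, number = m.group(1), int(m.group(2))
        for path, cls, low, high in RANGES:
            if letters == cls and low <= number <= high:
                return path
        return None

    # categorize("QA76.73") -> ("Science", "Mathematics")
    ```

    Because the lookup needs only the call number from the holdings record, the same table can drive a menu for e-resources or, as Chandler and LeBlanc test, for a print collection outside the catalog.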
  19. Pollitt, A.S.; Tinker, A.J.: Enhanced view-based searching through the decomposition of Dewey Decimal Classification codes (2000)
    Abstract
    The scatter of items dealing with similar concepts through the physical library is a consequence of a classification process that produces a single notation to enable relative location. Compromises must be made to place an item where it is most appropriate for a given user community. No such compromise is needed with a digital library, where the item can be considered to occupy a very large number of relative locations, as befits the needs of the user. Interfaces to these digital libraries can reuse the knowledge structures of their physical counterparts yet still address the problem of scatter. View-based searching is an approach that takes advantage of the knowledge structures but addresses the problem of scatter by applying a faceted approach to information retrieval. This paper describes the most recent developments in the implementation of a view-based searching system for a University Library OPAC. The user interface exploits the knowledge structures in the Dewey Decimal Classification scheme (DDC) in navigable views with implicit Boolean searching. DDC classifies multifaceted items by building a single relative code from components. These codes may already have been combined in the schedules or be built according to well-documented instructions. Rules can be applied to decode these numbers to provide codes for each additional facet. To enhance the retrieval power of the view-based searching system, multiple facet codes are being extracted through decomposition from single Dewey Class Codes. This paper presents the results of applying automatic decomposition in respect of Geographic Area and the creation of a view (by Geographic Area) for the full collection of over 250,000 library items. This is the first step in demonstrating how the problem of scatter of subject matter across the disciplines of the Dewey Decimal Classification and the physical library collection can be addressed through the use of facets and view-based searching
    Source
    Dynamism and stability in knowledge organization: Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada. Ed.: C. Beghtol et al
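    The decomposition step described above, recovering a geographic facet from a built DDC number, can be sketched under one simplifying assumption: that digits following the standard-subdivision marker "09" denote a Table 2 area notation. Real DDC synthesis has many more cases, and this is not Pollitt and Tinker's implementation; the area table below is a tiny illustrative excerpt.

    ```python
    # Naive sketch of DDC facet decomposition: split a built number into a
    # subject facet and a geographic-area facet at the assumed "09" marker.

    AREA_NAMES = {  # tiny, illustrative excerpt of Table 2 area notations
        "4": "Europe",
        "7": "North America",
    }

    def decompose(ddc):
        """Split a built DDC number into subject and geographic-area facets."""
        digits = ddc.replace(".", "")
        marker = digits.find("09")  # naive: first "09" taken as the marker
        if marker == -1:
            return {"subject": digits, "area": None, "area_name": None}
        area = digits[marker + 2:] or None
        return {
            "subject": digits[:marker],
            "area": area,
            "area_name": AREA_NAMES.get(area[0]) if area else None,
        }

    # e.g. decompose("331.094") -> subject facet "331", area facet "4" (Europe)
    ```

    Once each record carries an explicit area facet, the OPAC can offer a Geographic Area view alongside the disciplinary one, which is exactly how the approach counters scatter across DDC's disciplines.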
  20. Sparck Jones, K.: Some thoughts on classification for retrieval (1970)
    Abstract
    The suggestion that classifications for retrieval should be constructed automatically raises some serious problems concerning the sorts of classification which are required, and the way in which formal classification theories should be exploited, given that a retrieval classification is required for a purpose. These difficulties have not been sufficiently considered, and the paper therefore attempts an analysis of them, though no solution of immediate application can be suggested. Starting with the illustrative proposition that a polythetic, multiple, unordered classification is required in automatic thesaurus construction, this is considered in the context of classification in general, where eight sorts of classification can be distinguished, each covering a range of class definitions and class-finding algorithms. The problem which follows is that since there is generally no natural or best classification of a set of objects as such, the evaluation of alternative classifications requires either formal criteria of goodness of fit, or, if a classification is required for a purpose, a precise statement of that purpose. In any case a substantive theory of classification is needed, which does not exist; and since sufficiently precise specifications of retrieval requirements are also lacking, the only currently available approach to automatic classification experiments for information retrieval is to do enough of them
    Footnote
    Reprinted in: Journal of documentation. 61(2005) no.5, S.571-581.
    Source
    Journal of documentation. 26(1970), S.89-101
