Search (229 results, page 1 of 12)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.08
    0.08254091 = product of:
      0.13756818 = sum of:
        0.098406665 = weight(_text_:philosophy in 57) [ClassicSimilarity], result of:
          0.098406665 = score(doc=57,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.426834 = fieldWeight in 57, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
        0.01935205 = weight(_text_:of in 57) [ClassicSimilarity], result of:
          0.01935205 = score(doc=57,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.29624295 = fieldWeight in 57, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.039618924 = score(doc=57,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
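    The explain tree above is Lucene's ClassicSimilarity breakdown for this hit: each term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the sum of the term contributions is scaled by the coordination factor coord. As a minimal sketch (assuming Lucene's standard ClassicSimilarity definitions of tf and idf; the helper functions are illustrative, not part of the retrieval system), the "philosophy" contribution and the final score can be reproduced from the numbers shown:

      import math

      # ClassicSimilarity building blocks, as named in the explain output above
      def tf(freq):                    # term-frequency factor
          return math.sqrt(freq)

      def idf(doc_freq, max_docs):     # inverse document frequency
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      query_norm = 0.04177434          # queryNorm from the explain output
      field_norm = 0.0546875           # fieldNorm stored for doc 57

      # term "philosophy" in doc 57: freq=2, docFreq=481, maxDocs=44218
      idf_phil = idf(481, 44218)                        # ~5.5189
      query_weight = idf_phil * query_norm              # ~0.23055
      field_weight = tf(2.0) * idf_phil * field_norm    # ~0.42683
      contribution = query_weight * field_weight        # ~0.098407

      # final score: the three term contributions summed, then scaled by coord(3/5)
      total = (0.098406665 + 0.01935205 + 0.019809462) * (3 / 5)
      print(round(contribution, 6), round(total, 8))    # 0.098407 0.08254091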
    
    Abstract
    Conceptual Knowledge Markup Language (CKML), an application of XML, is a new standard being promoted for the specification of online conceptual knowledge (Kent and Shrivastava, 1998). CKML follows the philosophy of Conceptual Knowledge Processing (Wille, 1982), a principled approach to knowledge representation and data analysis, which advocates the development of methodologies and techniques to support people in their rational thinking, judgement and actions. CKML was developed and is being used in the WAVE networked information discovery and retrieval system (Kent and Neuss, 1994) as a standard for the specification of conceptual knowledge.
    Date
    30.12.2001 16:22:41
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
  2. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.07
    0.06615092 = product of:
      0.11025153 = sum of:
        0.05623238 = weight(_text_:philosophy in 2191) [ClassicSimilarity], result of:
          0.05623238 = score(doc=2191,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.24390514 = fieldWeight in 2191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
        0.018058153 = weight(_text_:of in 2191) [ClassicSimilarity], result of:
          0.018058153 = score(doc=2191,freq=32.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27643585 = fieldWeight in 2191, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
        0.035961 = product of:
          0.071922 = sum of:
            0.071922 = weight(_text_:mind in 2191) [ClassicSimilarity], result of:
              0.071922 = score(doc=2191,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.27584085 = fieldWeight in 2191, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2191)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question is: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second question is: what are the automation requirements for the full exploitation of UDC and its exchange? The third question is: which areas are in need of improvement and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
    Content
    Thesis submitted for the Degree of Doctor of Philosophy at the University of London
  3. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.04
    0.04423027 = product of:
      0.110575676 = sum of:
        0.084348574 = weight(_text_:philosophy in 831) [ClassicSimilarity], result of:
          0.084348574 = score(doc=831,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.36585772 = fieldWeight in 831, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
        0.0262271 = weight(_text_:of in 831) [ClassicSimilarity], result of:
          0.0262271 = score(doc=831,freq=30.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.4014868 = fieldWeight in 831, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
      0.4 = coord(2/5)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but as the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. It also considers how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
    Footnote
    Article in a special issue: The philosophy of information
  4. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.04
    0.04037442 = product of:
      0.10093605 = sum of:
        0.084348574 = weight(_text_:philosophy in 726) [ClassicSimilarity], result of:
          0.084348574 = score(doc=726,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.36585772 = fieldWeight in 726, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
        0.016587472 = weight(_text_:of in 726) [ClassicSimilarity], result of:
          0.016587472 = score(doc=726,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.25392252 = fieldWeight in 726, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
      0.4 = coord(2/5)
    
    Abstract
    This paper documents the continuing relevance of facet analysis as a technique for searching and organising WWW-based materials. The two approaches underlying WWW searching and indexing - word-based and concept-based indexing - are outlined. It is argued that facet analysis, as an a posteriori approach to classification that uses words from the subject field as the concept terms in the derived classification, represents an excellent approach to searching and organising the results of WWW searches using either search engines or search directories. Finally, it is argued that the underlying philosophy of facet analysis is better suited to the disparate nature of WWW resources and searchers than the assumptions of contemporary IR research.
    This article gives a cheerfully brief and undetailed account of how to make a faceted classification system, then describes information retrieval and searching on the web. It concludes by saying that facets would be excellent in helping users search and browse the web, but offers no real clues as to how this can be done.
  5. Classification research for knowledge representation and organization : Proc. of the 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991 (1992) 0.03
    0.025750859 = product of:
      0.064377144 = sum of:
        0.042174287 = weight(_text_:philosophy in 2072) [ClassicSimilarity], result of:
          0.042174287 = score(doc=2072,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.18292886 = fieldWeight in 2072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
        0.022202855 = weight(_text_:of in 2072) [ClassicSimilarity], result of:
          0.022202855 = score(doc=2072,freq=86.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.33988333 = fieldWeight in 2072, product of:
              9.273619 = tf(freq=86.0), with freq of:
                86.0 = termFreq=86.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
      0.4 = coord(2/5)
    
    Abstract
    This volume deals with both theoretical and empirical research in classification and encompasses universal classification systems, special classification systems, thesauri and the place of classification in a broad spectrum of document and information systems. Papers fall into one of three major areas: 1) general principles and policies; 2) structure and logic in classification, and empirical investigation; 3) classification in the design of various types of document/information systems. The papers originate from the ISCCR '91 conference and have been selected according to the following criteria: relevance to the conference theme; importance of the topic in the representation and organization of knowledge; quality; and originality in terms of potential contribution to research and new knowledge.
    Content
    Contains the contributions: SVENONIUS, E.: Classification: prospects, problems, and possibilities; BEALL, J.: Editing the Dewey Decimal Classification online: the evolution of the DDC database; BEGHTOL, C.: Toward a theory of fiction analysis for information storage and retrieval; CRAVEN, T.C.: Concept relation structures and their graphic display; FUGMANN, R.: Illusory goals in information science research; GILCHRIST, A.: UDC: the 1990's and beyond; GREEN, R.: The expression of syntagmatic relationships in indexing: are frame-based index languages the answer?; HUMPHREY, S.M.: Use and management of classification systems for knowledge-based indexing; MIKSA, F.L.: The concept of the universe of knowledge and the purpose of LIS classification; SCOTT, M. and A.F. FONSECA: Methodology for functional appraisal of records and creation of a functional thesaurus; ALBRECHTSEN, H.: PRESS: a thesaurus-based information system for software reuse; AMAESHI, B.: A preliminary AAT compatible African art thesaurus; CHATTERJEE, A.: Structures of Indian classification systems of the pre-Ranganathan era and their impact on the Colon Classification; COCHRANE, P.A.: Indexing and searching thesauri, the Janus or Proteus of information retrieval; CRAVEN, T.C.: A general versus a special algorithm in the graphic display of thesauri; DAHLBERG, I.: The basis of a new universal classification system seen from a philosophy of science point of view; DRABENSTOTT, K.M., RIESTER, L.C. and B.A. DEDE: Shelflisting using expert systems; FIDEL, R.: Thesaurus requirements for an intermediary expert system; GREEN, R.: Insights into classification from the cognitive sciences: ramifications for index languages; GROLIER, E. de: Towards a syndetic information retrieval system; GUENTHER, R.: The USMARC format for classification data: development and implementation; HOWARTH, L.C.: Factors influencing policies for the adoption and integration of revisions to classification schedules; HUDON, M.: Term definitions in subject thesauri: the Canadian literacy thesaurus experience; HUSAIN, S.: Notational techniques for the accommodation of subjects in Colon Classification 7th edition: theoretical possibility vis-à-vis practical need; KWASNIK, B.H. and C. JORGERSEN: The exploration by means of repertory grids of semantic differences among names of official documents; MICCO, M.: Suggestions for automating the Library of Congress Classification schedules; PERREAULT, J.M.: An essay on the prehistory of general categories (II): G.W. Leibniz, Conrad Gesner; REES-POTTER, L.K.: How well do thesauri serve the social sciences?; REVIE, C.W. and G. SMART: The construction and the use of faceted classification schema in technical domains; ROCKMORE, M.: Structuring a flexible faceted thesaurus record for corporate information retrieval; ROULIN, C.: Sub-thesauri as part of a metathesaurus; SMITH, L.C.: UNISIST revisited: compatibility in the context of collaboratories; STILES, W.G.: Notes concerning the use of chain indexing as a possible means of simulating the inductive leap within artificial intelligence; SVENONIUS, E., LIU, S. and B. SUBRAHMANYAM: Automation in chain indexing; TURNER, J.: Structure in data in the Stockshot database at the National Film Board of Canada; VIZINE-GOETZ, D.: The Dewey Decimal Classification as an online classification tool; WILLIAMSON, N.J.: Restructuring UDC: problems and possibilities; WILSON, A.: The hierarchy of belief: ideological tendentiousness in universal classification; WILSON, B.F.: An evaluation of the systematic botany schedule of the Universal Decimal Classification (English full edition, 1979); ZENG, L.: Research and development of classification and thesauri in China; CONFERENCE SUMMARY AND CONCLUSIONS
    Footnote
    Review in: International classification 19(1992) no.4, S.228-229 (B.C. Vickery); Journal of classification 11(1994) no.2, S.255-256 (W. Gödert)
    LCSH
    Knowledge, Theory of / Congresses
    Subject
    Knowledge, Theory of / Congresses
  6. Hill, J.S.: Online classification number access : some practical considerations (1984) 0.03
    0.02533477 = product of:
      0.06333692 = sum of:
        0.018058153 = weight(_text_:of in 7684) [ClassicSimilarity], result of:
          0.018058153 = score(doc=7684,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27643585 = fieldWeight in 7684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=7684)
        0.045278773 = product of:
          0.090557545 = sum of:
            0.090557545 = weight(_text_:22 in 7684) [ClassicSimilarity], result of:
              0.090557545 = score(doc=7684,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.61904186 = fieldWeight in 7684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=7684)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of academic librarianship. 10(1984), S.17-22
  7. McIlwaine, I.C.: ¬The UDC and the World Wide Web (2003) 0.02
    0.02436502 = product of:
      0.06091255 = sum of:
        0.015961302 = weight(_text_:of in 3814) [ClassicSimilarity], result of:
          0.015961302 = score(doc=3814,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.24433708 = fieldWeight in 3814, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3814)
        0.04495125 = product of:
          0.0899025 = sum of:
            0.0899025 = weight(_text_:mind in 3814) [ClassicSimilarity], result of:
              0.0899025 = score(doc=3814,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.34480107 = fieldWeight in 3814, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3814)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper examines the potential of the Universal Decimal Classification as a means for retrieving subjects from the World Wide Web. The analytico-synthetic basis of the scheme provides the facility to link concepts at the input or search stage and to isolate concepts via the notation so as to retrieve the separate parts of a compound subject individually if required. Its notation permits hierarchical searching and overrides the shortcomings of natural language. Recent revisions have been constructed with this purpose in mind, the most recent being for Management. The use of the classification embedded in metadata, as in the GERHARD system or as a basis for subject trees, is discussed. Its use as a gazetteer is another Web application to which it is put. The range of up-to-date editions in many languages and the availability of a Web-based version make its use as a switching language increasingly valuable.
    Source
    Subject retrieval in a networked environment: Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC. Ed.: I.C. McIlwaine
  8. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    0.019001076 = product of:
      0.04750269 = sum of:
        0.013543615 = weight(_text_:of in 6040) [ClassicSimilarity], result of:
          0.013543615 = score(doc=6040,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.20732689 = fieldWeight in 6040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=6040)
        0.033959076 = product of:
          0.06791815 = sum of:
            0.06791815 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.06791815 = score(doc=6040,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 6.2002 19:42:47
  9. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.02
    0.017704215 = product of:
      0.044260535 = sum of:
        0.015961302 = weight(_text_:of in 3576) [ClassicSimilarity], result of:
          0.015961302 = score(doc=3576,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.24433708 = fieldWeight in 3576, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3576)
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 3576) [ClassicSimilarity], result of:
              0.056598466 = score(doc=3576,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 3576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3576)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    8. 1.2007 12:22:40
  10. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar an Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.02
    0.0176873 = product of:
      0.029478831 = sum of:
        0.0057608327 = product of:
          0.028804163 = sum of:
            0.028804163 = weight(_text_:problem in 2047) [ClassicSimilarity], result of:
              0.028804163 = score(doc=2047,freq=6.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.16245036 = fieldWeight in 2047, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.2 = coord(1/5)
        0.018058153 = weight(_text_:of in 2047) [ClassicSimilarity], result of:
          0.018058153 = score(doc=2047,freq=128.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27643585 = fieldWeight in 2047, product of:
              11.313708 = tf(freq=128.0), with freq of:
                128.0 = termFreq=128.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
        0.0056598466 = product of:
          0.011319693 = sum of:
            0.011319693 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
              0.011319693 = score(doc=2047,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.07738023 = fieldWeight in 2047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Date
    2. 1.2004 10:35:22
    Footnote
    Review in: Knowledge organization 30(2003) no.1, S.40-42 (J.-E. Mai): "Introduction: This is a collection of papers presented at the National Seminar on Classification in the Digital Environment held in Bangalore, India, on August 9-11, 2001. The collection contains 18 papers dealing with various issues related to knowledge organization and classification theory. The issue of transferring the knowledge, traditions, and theories of bibliographic classification to the digital environment is an important one, and I was excited to learn that proceedings from this seminar were available. Many of us experience frustration on a daily basis due to poorly constructed Web search mechanisms and Web directories. As a community devoted to making information easily accessible we have something to offer the Web community, and a seminar on the topic was indeed much needed. Below are brief summaries of the 18 papers presented at the seminar. The order of the summaries follows the order of the papers in the proceedings. The titles of the papers are given in parentheses after the author's name. AHUJA and WESLEY (From "Subject" to "Need": Shift in Approach to Classifying Information on the Internet/Web) argue that traditional bibliographic classification systems fail in the digital environment. One problem is that bibliographic classification systems have been developed to organize library books on shelves and as such are unidimensional and tied to the paper-based environment. Another problem is that they are "subject" oriented in the sense that they assume a relatively stable universe of knowledge containing basic and fixed compartments of knowledge that can be identified and represented. Ahuja and Wesley suggest that classification in the digital environment should be need-oriented instead of subject-oriented ("One important link that binds knowledge and human being is his societal need. ... Hence, it will be ideal to organise knowledge based upon need instead of subject." (p. 10)).
    AHUJA and SATIJA (Relevance of Ranganathan's Classification Theory in the Age of Digital Libraries) note that traditional bibliographic classification systems have been applied in the digital environment with only limited success. They find that the "inherent flexibility of electronic manipulation of documents or their surrogates should allow a more organic approach to allocation of new subjects and appropriate linkages between subject hierarchies." (p. 18). Ahuja and Satija also suggest that it is necessary to shift from a "subject" focus to a "need" focus when applying classification theory in the digital environment. They find Ranganathan's framework applicable in the digital environment. Although Ranganathan's focus is "subject oriented and hence emphasise the hierarchical and linear relationships" (p. 26), his framework "can be successfully adopted with certain modifications ... in the digital environment." (p. 26). SHAH and KUMAR (Model for System Unification of Geographical Schedules (Space Isolates)) report on a plan to develop a single schedule for geographical subdivision that could be used across all classification systems. The authors argue that this is needed in order to facilitate interoperability in the digital environment. SAN SEGUNDO MANUEL (The Representation of Knowledge as a Symbolization of Productive Electronic Information) distills different approaches and definitions of the term "representation" as it relates to representation of knowledge in the library and information science literature and field. SHARADA (Linguistic and Document Classification: Paradigmatic Merger Possibilities) suggests the development of a universal indexing language. The foundation for the universal indexing language is Chomsky's Minimalist Program and Ranganathan's analytico-synthetic classification theory; according to the author, based on these approaches, it "should not be a problem" (p. 62) to develop a universal indexing language.
    SELVI (Knowledge Classification of Digital Information Materials with Special Reference to Clustering Technique) finds that it is essential to classify digital material since the amount of material that is becoming available is growing. Selvi suggests using automated classification to "group together those digital information materials or documents that are most similar" (p. 65). This can be attained by using cluster analysis methods. PRADHAN and THULASI (A Study of the Use of Classification and Indexing Systems by Web Resource Directories) compare and contrast the classificatory structures of Google, Yahoo, and Looksmart's directories and compare the directories to Dewey Decimal Classification, Library of Congress Classification and Colon Classification's classificatory structures. They find differences between the directories' and the bibliographic classification systems' classificatory structures and principles. These differences stem from the fact that bibliographic classification systems are used to "classify academic resources for the research community" (p. 83) and directories "aim to categorize a wider breadth of information groups, entertainment, recreation, govt. information, commercial information" (p. 83). NEELAMEGHAN (Hierarchy, Hierarchical Relation and Hierarchical Arrangement) reviews the concept of hierarchy and the formation of hierarchical structures across a variety of domains. NEELAMEGHAN and PRADAD (Digitized Schemes for Subject Classification and Thesauri: Complementary Roles) demonstrate how thesaural relationships (NT, BT, and RT) can be applied to a classification scheme, the Colon Classification in this case. NEELAMEGHAN and ASUNDI (Metadata Framework for Describing Embodied Knowledge and Subject Content) propose to use the Generalized Facet Structure framework, which is based on Ranganathan's General Theory of Knowledge Classification, as a framework for describing the content of documents in a metadata element set for the representation of web documents. CHUDAMANI (Classified Catalogue as a Tool for Subject Based Information Retrieval in both Traditional and Electronic Library Environment) explains why the classified catalogue is superior to the alphabetic catalogue and argues that the same is true in the digital environment.
    PARAMESWARAN (Classification and Indexing: Impact of Classification Theory on PRECIS) reviews the PRECIS system and finds that "it could not escape from the impact of the theory of classification" (p. 131). The author further argues that the purpose of classification and subject indexing is the same and that both approaches depend on syntax. This leads to the conclusion that "there is an absolute syntax as the Indian theory of classification points out" (p. 131). SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 1. SA TSAN- A Computer Based Learning Package) and SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 2. Semi-Automatic Synthesis of CC Numbers) present an application to automate classification using a facet classification system, in this case, the Colon Classification system. GAIKAIWARI (An Interactive Application for Faceted Classification Systems) presents an application, called SRR, for managing and using a faceted classification scheme in a digital environment. IYER (Use of Instructional Technology to Support Traditional Classroom Learning: A Case Study) describes a course on "Information and Knowledge Organization" that she teaches at the University at Albany (SUNY). The course is a conceptual course that introduces the student to various aspects of knowledge organization. GOPINATH (Universal Classification: How can it be used?) lists fifteen uses of universal classifications and discusses the entities of a number of disciplines. GOPINATH (Knowledge Classification: The Theory of Classification) briefly reviews the foundations for research in automatic classification, summarizes the history of classification, and places Ranganathan's thought in the history of classification.
    Discussion: The proceedings of the National Seminar on Classification in the Digital Environment give some insights. However, the depth of analysis and discussion is very uneven across the papers. Some of the papers have substantive research content while others appear to be notes used in the oral presentation. The treatments of the topics are very general in nature. Some papers have a very limited list of references while others have no bibliography. No index has been provided. The transfer of bibliographic knowledge organization theory to the digital environment is an important topic. However, as the papers at this conference have shown, it is also a difficult task. Of the 18 papers presented at this seminar on classification in the digital environment, only 4-5 papers actually deal directly with this important topic. The remaining papers deal with issues that are more or less relevant to classification in the digital environment without explicitly discussing the relation. The reason could be that the authors take up issues in knowledge organization that still need to be investigated and clarified before their application in the digital environment can be considered. Nonetheless, one wishes that the knowledge organization community would discuss the application of classification theory in the digital environment in greater detail. It is obvious from the comparisons of the classificatory structures of bibliographic classification systems and Web directories that these are different and that they probably should be different, since they serve different purposes. Interesting questions in the transformation of bibliographic classification theories to the digital environment are: "Given the existing principles in bibliographic knowledge organization, what are the optimum principles for organization of information, irrespective of context?" and "What are the fundamental theoretical and practical principles for the construction of Web directories?" Unfortunately, the papers presented at this seminar do not attempt to answer or discuss these questions."
  11. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.02
    0.017131606 = product of:
      0.042829014 = sum of:
        0.02018963 = weight(_text_:of in 4869) [ClassicSimilarity], result of:
          0.02018963 = score(doc=4869,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.3090647 = fieldWeight in 4869, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4869)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
              0.045278773 = score(doc=4869,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 4869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4869)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article gives an overview of the design and organisation of DutchESS, a Dutch information subject gateway created as a national collaborative effort of the National Library and a number of academic libraries. The combined centralised and distributed model of DutchESS is discussed, as well as its selection policy, its metadata format, classification scheme and retrieval options. Some options for future collaboration at an international level are also explored.
    Date
    22. 6.2002 19:39:23
  12. Van Dijck, P.: Introduction to XFML (2003) 0.02
    0.016279016 = product of:
      0.040697537 = sum of:
        0.018058153 = weight(_text_:of in 2474) [ClassicSimilarity], result of:
          0.018058153 = score(doc=2474,freq=8.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27643585 = fieldWeight in 2474, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2474)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 2474) [ClassicSimilarity], result of:
              0.045278773 = score(doc=2474,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 2474, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2474)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Van Dijck builds up an example of actual XFML by showing how to organize tourist information about what restaurants in what cities feature which kind of music: <facet id="city">City</facet> and <topic id="ny" facetid="city"><name>New York</name></topic> combine to mean that New York is the name of a city internally represented as "ny". It is written in the usual clear and practical style of articles on xml.com. Highly recommended as an introduction for anyone interested in XFML.
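    To make the combination concrete, the sketch below assembles the two fragments quoted above into a single map with Python's standard library. Only the facet and topic markup comes from the abstract; the enclosing xfml root element name is an assumption for illustration.

      import xml.etree.ElementTree as ET

      # Root element name assumed for illustration; facet/topic follow the quoted fragments.
      xfml = ET.Element("xfml")

      facet = ET.SubElement(xfml, "facet", id="city")
      facet.text = "City"

      topic = ET.SubElement(xfml, "topic", id="ny", facetid="city")
      ET.SubElement(topic, "name").text = "New York"

      print(ET.tostring(xfml, encoding="unicode"))
      # <xfml><facet id="city">City</facet><topic id="ny" facetid="city"><name>New York</name></topic></xfml>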
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
  13. Ellis, D.; Vasconcelos, A.: ¬The relevance of facet analysis for World Wide Web subject organization and searching (2000) 0.02
    0.015779553 = product of:
      0.039448883 = sum of:
        0.014111101 = product of:
          0.0705555 = sum of:
            0.0705555 = weight(_text_:problem in 2477) [ClassicSimilarity], result of:
              0.0705555 = score(doc=2477,freq=4.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.39792046 = fieldWeight in 2477, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2477)
          0.2 = coord(1/5)
        0.025337784 = weight(_text_:of in 2477) [ClassicSimilarity], result of:
          0.025337784 = score(doc=2477,freq=28.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.38787308 = fieldWeight in 2477, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2477)
      0.4 = coord(2/5)
    
    Abstract
    Different forms of indexing and search facilities available on the Web are described. Use of facet analysis to structure hypertext concept structures is outlined in relation to work on (1) development of hypertext knowledge bases for designers of learning materials and (2) construction of knowledge based hypertext interfaces. The problem of lack of closeness between page designers and potential users is examined. Facet analysis is suggested as a way of alleviating some difficulties associated with this problem of designing for the unknown user.
    This is a revised version of the earlier article by Ellis and Vasconcelos (1999) (see Not Relevant, below), though that is not indicated, and much of it is identical, word for word. There is a new section covering the work of Elizabeth Duncan, which is useful and informative, but the reader is better advised to go to the originals if available.
    Source
    Journal of Internet cataloging. 2(2000) nos.3/4, S.97-114
  14. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.02
    0.015664605 = product of:
      0.03916151 = sum of:
        0.01935205 = weight(_text_:of in 1673) [ClassicSimilarity], result of:
          0.01935205 = score(doc=1673,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.29624295 = fieldWeight in 1673, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.039618924 = score(doc=1673,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. The paper discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; see also: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
  15. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1998) 0.01
    0.014990156 = product of:
      0.03747539 = sum of:
        0.017665926 = weight(_text_:of in 2342) [ClassicSimilarity], result of:
          0.017665926 = score(doc=2342,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2704316 = fieldWeight in 2342, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2342)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 2342) [ClassicSimilarity], result of:
              0.039618924 = score(doc=2342,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 2342, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2342)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The knowledge structures that form traditional library classification schemes hold great potential for improving resource description and discovery on the Internet and for organizing electronic document collections. The advantages of assigning subject tokens (classes) to documents from a scheme like the DDC system are well documented.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  16. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    0.014990156 = product of:
      0.03747539 = sum of:
        0.017665926 = weight(_text_:of in 88) [ClassicSimilarity], result of:
          0.017665926 = score(doc=88,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2704316 = fieldWeight in 88, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=88)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
              0.039618924 = score(doc=88,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  17. Tunkelang, D.: Dynamic category sets : an approach for faceted search (2006) 0.01
    0.014786437 = product of:
      0.036966093 = sum of:
        0.023282124 = product of:
          0.11641062 = sum of:
            0.11641062 = weight(_text_:problem in 3082) [ClassicSimilarity], result of:
              0.11641062 = score(doc=3082,freq=8.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.6565352 = fieldWeight in 3082, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3082)
          0.2 = coord(1/5)
        0.013683967 = weight(_text_:of in 3082) [ClassicSimilarity], result of:
          0.013683967 = score(doc=3082,freq=6.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.20947541 = fieldWeight in 3082, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3082)
      0.4 = coord(2/5)
    
    Abstract
    In this paper, we present Dynamic Category Sets, a novel approach that addresses the vocabulary problem for faceted data. In their paper on the vocabulary problem, Furnas et al. note that "the keywords that are assigned by indexers are often at odds with those tried by searchers." Faceted search systems exhibit an interesting aspect of this problem: users do not necessarily understand an information space in terms of the same facets as the indexers who designed it. Our approach addresses this problem by employing a data-driven approach to discover sets of values across multiple facets that best match the query. When there are multiple candidates, we offer a clarification dialog that allows the user to disambiguate them.
  18. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.01
    0.014453242 = product of:
      0.036133103 = sum of:
        0.019153563 = weight(_text_:of in 3697) [ClassicSimilarity], result of:
          0.019153563 = score(doc=3697,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2932045 = fieldWeight in 3697, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 3697) [ClassicSimilarity], result of:
              0.033959076 = score(doc=3697,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 3697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3697)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented classification with multilingual thesaurus descriptors and implemented this solution in an automated way. The authors illustrate a solution implemented in a BiblioPhil library system. The standard formats used are UNIMARC for subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML support for data transfer. The multilingual thesaurus was built according to existing standards, the constituent parts of the classification notations being used as the basis for search terms in the multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are used to expand the number of concepts and are given in Romanian, English and French. This approach saves the time of the indexer and provides more user-friendly and easier access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users
    Date
    22. 7.2010 20:40:56
  19. Dack, D.: Australian attends conference on Dewey (1989) 0.01
    0.014244139 = product of:
      0.035610348 = sum of:
        0.015800884 = weight(_text_:of in 2509) [ClassicSimilarity], result of:
          0.015800884 = score(doc=2509,freq=8.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.24188137 = fieldWeight in 2509, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2509)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
              0.039618924 = score(doc=2509,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 2509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2509)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Edited version of a report to the Australian Library and Information Association on the Conference on classification theory in the computer age, Albany, New York, 18-19 Nov 88, and on the meeting of the Dewey Editorial Policy Committee which preceded it. The focus of the Editorial Policy Committee Meeting lay in the following areas: browsing; potential for improved subject access; system design; potential conflict between shelf location and information retrieval; and users. At the Conference on classification theory in the computer age the following papers were presented: Applications of artificial intelligence to bibliographic classification, by Irene Travis; Automation and classification, by Elaine Svenonius; Subject classification and language processing for retrieval in large data bases, by Diana Scott; Implications for information processing, by Carol Mandel; and Implications for information science education, by Richard Halsey.
    Date
    8.11.1995 11:52:22
  20. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    0.014163372 = product of:
      0.03540843 = sum of:
        0.0127690425 = weight(_text_:of in 2871) [ClassicSimilarity], result of:
          0.0127690425 = score(doc=2871,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.19546966 = fieldWeight in 2871, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2871)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
              0.045278773 = score(doc=2871,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 2871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2871)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This is a report on how Doyle and others made a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why they did it, how, their use of OPML and XFML, how they did research to find terms and categories, and they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52

Types

  • a 194
  • el 28
  • m 8
  • s 7
  • d 1
  • p 1
  • r 1
  • x 1