Search (113 results, page 1 of 6)

  • × theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.20
    0.20477724 = product of:
      0.27303633 = sum of:
        0.07053544 = weight(_text_:web in 1673) [ClassicSimilarity], result of:
          0.07053544 = score(doc=1673,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 1673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.046190813 = weight(_text_:search in 1673) [ClassicSimilarity], result of:
          0.046190813 = score(doc=1673,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 1673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.15631008 = sum of:
          0.10942154 = weight(_text_:engine in 1673) [ClassicSimilarity], result of:
            0.10942154 = score(doc=1673,freq=2.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.41372913 = fieldWeight in 1673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1673)
          0.046888545 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
            0.046888545 = score(doc=1673,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.2708308 = fieldWeight in 1673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1673)
      0.75 = coord(3/4)
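
     The nested breakdown above is the Lucene ClassicSimilarity (tf-idf) explanation behind the 0.20 relevance score. As a minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and taking queryNorm and fieldNorm as given in the output rather than deriving them, the per-term weights and the total for this record can be recomputed as follows:

import math

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """One term's contribution under Lucene's ClassicSimilarity (tf-idf)."""
    tf = math.sqrt(freq)                             # tf(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight
    return query_weight * field_weight               # score = queryWeight * fieldWeight

QUERY_NORM = 0.049439456  # queryNorm, copied from the explanation above
FIELD_NORM = 0.0546875    # fieldNorm(doc=1673), copied from the explanation above

web    = term_score(6.0, 4597, 44218, FIELD_NORM, QUERY_NORM)  # ~0.07053544
search = term_score(2.0, 3718, 44218, FIELD_NORM, QUERY_NORM)  # ~0.04619081
engine = term_score(2.0,  570, 44218, FIELD_NORM, QUERY_NORM)  # ~0.10942154
t22    = term_score(2.0, 3622, 44218, FIELD_NORM, QUERY_NORM)  # ~0.04688854

# Three of the four query clauses matched, hence the coord(3/4) factor.
total = (web + search + (engine + t22)) * 3 / 4
print(round(total, 8))  # ~0.20477724
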
    
    Abstract
     The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
    Date
    1. 8.1996 22:08:06
    Footnote
     Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; see also: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
  2. Poynder, R.: Web research engines? (1996) 0.17
    0.1664457 = product of:
      0.22192761 = sum of:
        0.07805218 = weight(_text_:web in 5698) [ClassicSimilarity], result of:
          0.07805218 = score(doc=5698,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.48375595 = fieldWeight in 5698, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
        0.0969805 = weight(_text_:search in 5698) [ClassicSimilarity], result of:
          0.0969805 = score(doc=5698,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.5643796 = fieldWeight in 5698, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
        0.04689494 = product of:
          0.09378988 = sum of:
            0.09378988 = weight(_text_:engine in 5698) [ClassicSimilarity], result of:
              0.09378988 = score(doc=5698,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.35462496 = fieldWeight in 5698, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5698)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Describes the shortcomings of search engines for the WWW, comparing their current capabilities to those of the first-generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching. Few allow truncation, wild cards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that at best Web search engines can only offer free-text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may be redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases. Relevance ranking and automatic query expansion still use the same simple inverted indexes. Most Web search engines do nothing more than word counting. Further complications arise with foreign languages.
  3. Ardo, A.; Lundberg, S.: ¬A regional distributed WWW search and indexing service : the DESIRE way (1998) 0.10
    0.0971244 = product of:
      0.1294992 = sum of:
        0.06981198 = weight(_text_:web in 4190) [ClassicSimilarity], result of:
          0.06981198 = score(doc=4190,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 4190, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
        0.03959212 = weight(_text_:search in 4190) [ClassicSimilarity], result of:
          0.03959212 = score(doc=4190,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 4190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
              0.04019018 = score(doc=4190,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 4190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4190)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Creates an open, metadata-aware system for distributed, collaborative WWW indexing. The system has 3 main components: a harvester (for collecting information), a database (for making the collection searchable), and a user interface (for making the information available). All components can be distributed across networked computers, thus supporting scalability. The system is metadata-aware and thus allows searches on several fields including title, document author and URL. Nordic Web Index (NWI) is an application using this system to create a regional Nordic Web-indexing service. NWI is built using 5 collaborating service points within the Nordic countries. The NWI databases can be used to build additional services.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Object
    Nordic Web Index
  4. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.07
    0.069821596 = product of:
      0.13964319 = sum of:
        0.060458954 = weight(_text_:web in 726) [ClassicSimilarity], result of:
          0.060458954 = score(doc=726,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.37471575 = fieldWeight in 726, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
        0.07918424 = weight(_text_:search in 726) [ClassicSimilarity], result of:
          0.07918424 = score(doc=726,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.460814 = fieldWeight in 726, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
      0.5 = coord(2/4)
    
    Abstract
     This paper documents the continuing relevance of facet analysis as a technique for searching and organising WWW-based materials. The two approaches underlying WWW searching and indexing - word-based and concept-based indexing - are outlined. It is argued that facet analysis, as an a posteriori approach to classification that uses words from the subject field as the concept terms in the derived classification, represents an excellent approach to searching and organising the results of WWW searches using either search engines or search directories. Finally it is argued that the underlying philosophy of facet analysis is better suited to the disparate nature of WWW resources and searchers than the assumptions of contemporary IR research.
    This article gives a cheerfully brief and undetailed account of how to make a faceted classification system, then describes information retrieval and searching on the web. It concludes by saying that facets would be excellent in helping users search and browse the web, but offers no real clues as to how this can be done.
  5. Peereboom, M.: Dwerg tussen reuzen? : het Nederlandse basisclassificatie Web (1997) 0.06
    0.063819066 = product of:
      0.12763813 = sum of:
        0.08144732 = weight(_text_:web in 515) [ClassicSimilarity], result of:
          0.08144732 = score(doc=515,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.50479853 = fieldWeight in 515, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=515)
        0.046190813 = weight(_text_:search in 515) [ClassicSimilarity], result of:
          0.046190813 = score(doc=515,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 515, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=515)
      0.5 = coord(2/4)
    
    Abstract
     Developments in electronic communication technology have made online databases a normal part of library collections. To provide users with direct access to Internet resources, the Dutch Royal Library has cooperated with several university libraries in the Netherlands to develop the Nederlandse Basisclassificatie Web. Subject specialists select sources, add English summaries and NBW codes, and input them to the online database. A Web desk and training workshops have been provided to assist users, and improvements to the system will simplify search procedures.
    Footnote
     Translated title: A dwarf amongst giants?: the Dutch Basic classification of Web resources
  6. Tunkelang, D.: Faceted search (2009) 0.06
    0.06217189 = product of:
      0.12434378 = sum of:
        0.032909684 = weight(_text_:web in 26) [ClassicSimilarity], result of:
          0.032909684 = score(doc=26,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2039694 = fieldWeight in 26, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=26)
        0.0914341 = weight(_text_:search in 26) [ClassicSimilarity], result of:
          0.0914341 = score(doc=26,freq=24.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.5321022 = fieldWeight in 26, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=26)
      0.5 = coord(2/4)
    
    Abstract
    We live in an information age that requires us, more than ever, to represent, access, and use information. Over the last several decades, we have developed a modern science and technology for information retrieval, relentlessly pursuing the vision of a "memex" that Vannevar Bush proposed in his seminal article, "As We May Think." Faceted search plays a key role in this program. Faceted search addresses weaknesses of conventional search approaches and has emerged as a foundation for interactive information retrieval. User studies demonstrate that faceted search provides more effective information-seeking support to users than best-first search. Indeed, faceted search has become increasingly prevalent in online information access systems, particularly for e-commerce and site search. In this lecture, we explore the history, theory, and practice of faceted search. Although we cannot hope to be exhaustive, our aim is to provide sufficient depth and breadth to offer a useful resource to both researchers and practitioners. Because faceted search is an area of interest to computer scientists, information scientists, interface designers, and usability researchers, we do not assume that the reader is a specialist in any of these fields. Rather, we offer a self-contained treatment of the topic, with an extensive bibliography for those who would like to pursue particular aspects in more depth.
    LCSH
    Web search engines / Research
    Subject
    Web search engines / Research
  7. Dodd, D.G.: Grass-roots cataloging and classification : food for thought from World Wide Web subject-oriented hierarchical lists (1996) 0.06
    0.061457813 = product of:
      0.122915626 = sum of:
        0.05759195 = weight(_text_:web in 7270) [ClassicSimilarity], result of:
          0.05759195 = score(doc=7270,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 7270, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7270)
        0.06532367 = weight(_text_:search in 7270) [ClassicSimilarity], result of:
          0.06532367 = score(doc=7270,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.38015217 = fieldWeight in 7270, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7270)
      0.5 = coord(2/4)
    
    Abstract
     The explosion of the use of the Internet by the general public, particularly via the WWW, has given rise to the proliferation of semiprofessional attempts to give some subject-based access to Internet resources via hierarchical guides (hotlists) on Web search engines such as Yahoo and Magellan. Examines the structure and principles of various hierarchical lists, and compares them, when possible, to broad LCC and DDC schemes, and to LCSH. Explores the approaches taken by non-librarians in their efforts to organize and provide access to materials on the Internet. Focuses on the dichotomy between the hierarchical 'browse' and the analytical 'search' approaches to finding materials, as exemplified by these various attempts to organize the Internet.
  8. Chowdhury, S.; Chowdhury, G.G.: Using DDC to create a visual knowledge map as an aid to online information retrieval (2004) 0.06
    0.058188606 = product of:
      0.11637721 = sum of:
        0.032909684 = weight(_text_:web in 2643) [ClassicSimilarity], result of:
          0.032909684 = score(doc=2643,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2039694 = fieldWeight in 2643, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2643)
        0.08346753 = weight(_text_:search in 2643) [ClassicSimilarity], result of:
          0.08346753 = score(doc=2643,freq=20.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.48574063 = fieldWeight in 2643, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2643)
      0.5 = coord(2/4)
    
    Abstract
     Selection of search terms in an online search environment can be facilitated by the visual display of a knowledge map showing the various concepts and their links. This paper reports on preliminary research aimed at designing a prototype knowledge map using DDC and its visual display. The prototype knowledge map, created using the Protégé and TGViz freeware, is demonstrated, and further areas of research in this field are discussed.
    Content
     1. Introduction Web search engines and digital libraries usually expect the users to use search terms that most accurately represent their information needs. Finding the most appropriate search terms to represent an information need is an age-old problem in information retrieval. Keyword or phrase search may produce good search results as long as the search terms or phrase(s) match those used by the authors and have been chosen for indexing by the information retrieval system concerned. Since this does not always happen, a large number of false drops are produced by information retrieval systems. The retrieval results become worse in very large systems that deal with millions of records, such as the Web search engines and digital libraries. Vocabulary control tools are used to improve the performance of text retrieval systems. Thesauri, the most common type of vocabulary control tool used in information retrieval, appeared in the late fifties, designed for use with the emerging post-coordinate indexing systems of that time. They are used to exert terminology control in indexing, and to aid in searching by allowing the searcher to select appropriate search terms. A large volume of literature exists describing the design features, and experiments with the use, of thesauri in various types of information retrieval systems (see for example, Furnas et al., 1987; Bates, 1986, 1998; Milstead, 1997, and Shiri et al., 2002).
  9. Yu, N.: Readings & Web resources for faceted classification 0.05
    0.052678123 = product of:
      0.105356246 = sum of:
        0.049364526 = weight(_text_:web in 4394) [ClassicSimilarity], result of:
          0.049364526 = score(doc=4394,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 4394, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4394)
        0.055991717 = weight(_text_:search in 4394) [ClassicSimilarity], result of:
          0.055991717 = score(doc=4394,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3258447 = fieldWeight in 4394, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4394)
      0.5 = coord(2/4)
    
    Abstract
    The term "facet" has been used in various places, while in most cases it is just a buzz word to replace what is indeed "aspect" or "category". The references below either define and explain the original concept of facet or provide guidelines for building 'real' faceted search/browse. I was interested in faceted classification because it seems to be a natural and efficient way for organizing and browsing Web collections. However, to automatically generate facets and their isolates is extremely difficult since it involves concept extraction and concept grouping, both of which are difficult problems by themselves. And it is almost impossible to achieve mutually exclusive and jointly exhaustive 'true' facets without human judgment. Nowadays, faceted search/browse widely exists, implicitly or explicitly, on a majority of retail websites due to the multi-aspects nature of the data. However, it is still rarely seen on any digital library sites. (I could be wrong since I haven't kept myself updated with this field for a while.)
  10. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.05
    0.052445795 = product of:
      0.10489159 = sum of:
        0.08144732 = weight(_text_:web in 88) [ClassicSimilarity], result of:
          0.08144732 = score(doc=88,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.50479853 = fieldWeight in 88, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=88)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
              0.046888545 = score(doc=88,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  11. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.05
    0.048875913 = product of:
      0.097751826 = sum of:
        0.05203478 = weight(_text_:web in 3966) [ClassicSimilarity], result of:
          0.05203478 = score(doc=3966,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.32250395 = fieldWeight in 3966, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3966)
        0.04571705 = weight(_text_:search in 3966) [ClassicSimilarity], result of:
          0.04571705 = score(doc=3966,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2660511 = fieldWeight in 3966, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=3966)
      0.5 = coord(2/4)
    
    Abstract
     Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying Web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and URL of the Web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. Search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence. If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system developed in a Windows NT environment using ASP and a web server is under rigorous testing. The database and index management routines need further development.
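
     The chain-indexing step described in the abstract above can be illustrated with a small sketch. This is not the authors' DSIS code, and the faceted heading used below is a hypothetical example; the sketch only shows the basic chain procedure of deriving one index entry per link, qualified by its broader links in reverse order:

def chain_index_entries(chain):
    """chain: facet terms ordered from broadest to most specific."""
    entries = []
    for i in range(len(chain) - 1, -1, -1):
        lead = chain[i]                                       # the link being indexed
        qualifier = ". ".join(chain[i - 1::-1]) if i else ""  # broader links, reversed
        entries.append((lead, qualifier))
    return entries

# Hypothetical faceted subject heading, broadest facet first.
heading = ["Technology", "Engineering", "Civil engineering", "Bridges"]
for lead, qualifier in chain_index_entries(heading):
    print(f"{lead}: {qualifier}" if qualifier else lead)
# Bridges: Civil engineering. Engineering. Technology
# Civil engineering: Engineering. Technology
# Engineering: Technology
# Technology
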
  12. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.05
    0.048797183 = product of:
      0.097594365 = sum of:
        0.05759195 = weight(_text_:web in 97) [ClassicSimilarity], result of:
          0.05759195 = score(doc=97,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 97, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=97)
        0.04000242 = weight(_text_:search in 97) [ClassicSimilarity], result of:
          0.04000242 = score(doc=97,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.23279473 = fieldWeight in 97, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=97)
      0.5 = coord(2/4)
    
    Abstract
     Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying Web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence. If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system developed in a Windows NT environment using ASP and a web server is under rigorous testing. The database and index management routines need further development.
    An interesting but somewhat confusing article telling how the writers described web pages with Dublin Core metadata, including a faceted classification, and built a system that lets users browse the collection through the facets. They seem to want to cover too much in a short article, and unnecessary space is given over to screen shots showing how Dublin Core metadata was entered. The screen shots of the resulting browsable system are, unfortunately, not as enlightening as one would hope, and there is no discussion of how the system was actually written or the technology behind it. Still, it could be worth reading as an example of such a system and how it is treated in journals.
    Footnote
     See also: Devadason, F.J.: Facet analysis and Semantic Web: musings of a student of Ranganathan. At: http://www.geocities.com/devadason.geo/FASEMWEB.html#FacetedIndex.
  13. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.05
    0.04630641 = product of:
      0.09261282 = sum of:
        0.06581937 = weight(_text_:web in 2871) [ClassicSimilarity], result of:
          0.06581937 = score(doc=2871,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 2871, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2871)
        0.026793454 = product of:
          0.053586908 = sum of:
            0.053586908 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
              0.053586908 = score(doc=2871,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.30952093 = fieldWeight in 2871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2871)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This is a report on how Doyle and others made a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why they did it, how, their use of OPML and XFML, how they did research to find terms and categories, and they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
  14. McIlwaine, I.C.: ¬The UDC and the World Wide Web (2003) 0.05
    0.045585044 = product of:
      0.09117009 = sum of:
        0.05817665 = weight(_text_:web in 3814) [ClassicSimilarity], result of:
          0.05817665 = score(doc=3814,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.36057037 = fieldWeight in 3814, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3814)
        0.032993436 = weight(_text_:search in 3814) [ClassicSimilarity], result of:
          0.032993436 = score(doc=3814,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 3814, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3814)
      0.5 = coord(2/4)
    
    Abstract
     The paper examines the potential of the Universal Decimal Classification as a means of retrieving subjects from the World Wide Web. The analytico-synthetic basis of the scheme provides the facility to link concepts at the input or search stage and to isolate concepts via the notation, so that the separate parts of a compound subject can be retrieved individually if required. Its notation permits hierarchical searching and overrides the shortcomings of natural language. Recent revisions have been constructed with this purpose in mind, the most recent being for Management. The use of the classification embedded in metadata, as in the GERHARD system, or as a basis for subject trees is discussed. Its application as a gazetteer is another Web use to which it is put. The range of up-to-date editions in many languages and the availability of a Web-based version make its use as a switching language increasingly valuable.
  15. Ellis, D.; Vasconcelos, A.: ¬The relevance of facet analysis for World Wide Web subject organization and searching (2000) 0.04
    0.044478323 = product of:
      0.08895665 = sum of:
        0.049364526 = weight(_text_:web in 2477) [ClassicSimilarity], result of:
          0.049364526 = score(doc=2477,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 2477, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2477)
        0.03959212 = weight(_text_:search in 2477) [ClassicSimilarity], result of:
          0.03959212 = score(doc=2477,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 2477, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=2477)
      0.5 = coord(2/4)
    
    Abstract
     Different forms of indexing and search facilities available on the Web are described. The use of facet analysis to structure hypertext concept structures is outlined in relation to work on (1) the development of hypertext knowledge bases for designers of learning materials and (2) the construction of knowledge-based hypertext interfaces. The problem of the lack of closeness between page designers and potential users is examined. Facet analysis is suggested as a way of alleviating some difficulties associated with this problem of designing for the unknown user.
  16. Golub, K.; Lykke, M.: Automated classification of web pages in hierarchical browsing (2009) 0.04
    0.043898437 = product of:
      0.087796874 = sum of:
        0.041137107 = weight(_text_:web in 3614) [ClassicSimilarity], result of:
          0.041137107 = score(doc=3614,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 3614, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3614)
        0.046659768 = weight(_text_:search in 3614) [ClassicSimilarity], result of:
          0.046659768 = score(doc=3614,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 3614, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3614)
      0.5 = coord(2/4)
    
    Abstract
     Purpose - The purpose of this study is twofold: to investigate whether it is meaningful to use the Engineering Index (Ei) classification scheme for browsing, and then, if proven useful, to investigate the performance of an automated classification algorithm based on the Ei classification scheme. Design/methodology/approach - A user study was conducted in which users solved four controlled searching tasks. The users browsed the Ei classification scheme in order to examine the suitability of the classification system for browsing. The classification algorithm was evaluated by the users, who judged the correctness of the automatically assigned classes. Findings - The study showed that the Ei classification scheme is suited for browsing. Automatically assigned classes were on average partly correct, with some classes working better than others. Success of browsing was shown to be correlated with, and dependent on, classification correctness. Research limitations/implications - Further research should address the problem of disparate evaluations of one and the same web page. Additional reasons behind browsing failures in the Ei classification scheme also need further investigation. Practical implications - Improvements for browsing were identified: describing class captions and/or listing their subclasses from the start; allowing searches for words from class captions with synonym search (easily provided for Ei since the classes are mapped to thesaurus terms); and, when searching for class captions, returning the hierarchical tree expanded around the class in whose caption the search term is found. The need for improvements of classification schemes was also indicated. Originality/value - A user-based evaluation of automated subject classification in the context of browsing has not been conducted before; hence the study also presents new findings concerning methodology.
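
     One of the practical implications above (returning the hierarchical tree expanded around the class in whose caption the search term is found) can be sketched briefly. The miniature class tree and captions below are hypothetical, not the actual Ei scheme, and the code is only an illustration of the suggested behaviour:

# Hypothetical miniature classification: class code -> (caption, parent code).
TREE = {
    "400":   ("Civil engineering", None),
    "401":   ("Bridges and tunnels", "400"),
    "401.1": ("Bridge design", "401"),
    "402":   ("Highway engineering", "400"),
}

def expand_around_matches(term):
    """Classes whose captions contain the term, plus their ancestors and direct children."""
    shown = set()
    for code, (caption, _parent) in TREE.items():
        if term.lower() in caption.lower():
            node = code
            while node is not None:                 # walk up to the root
                shown.add(node)
                node = TREE[node][1]
            shown.update(c for c, (_, p) in TREE.items() if p == code)  # direct children
    return sorted(shown)

print(expand_around_matches("bridge"))  # ['400', '401', '401.1']
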
  17. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar an Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.04
    0.039604064 = product of:
      0.05280542 = sum of:
        0.032909684 = weight(_text_:web in 2047) [ClassicSimilarity], result of:
          0.032909684 = score(doc=2047,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2039694 = fieldWeight in 2047, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
        0.0131973745 = weight(_text_:search in 2047) [ClassicSimilarity], result of:
          0.0131973745 = score(doc=2047,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.076802336 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
        0.0066983635 = product of:
          0.013396727 = sum of:
            0.013396727 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
              0.013396727 = score(doc=2047,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.07738023 = fieldWeight in 2047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Date
    2. 1.2004 10:35:22
    Footnote
     Review in: Knowledge organization 30(2003) no.1, pp.40-42 (J.-E. Mai): "Introduction: This is a collection of papers presented at the National Seminar on Classification in the Digital Environment held in Bangalore, India, on August 9-11, 2001. The collection contains 18 papers dealing with various issues related to knowledge organization and classification theory. The issue of transferring the knowledge, traditions, and theories of bibliographic classification to the digital environment is an important one, and I was excited to learn that proceedings from this seminar were available. Many of us experience frustration on a daily basis due to poorly constructed Web search mechanisms and Web directories. As a community devoted to making information easily accessible we have something to offer the Web community, and a seminar on the topic was indeed much needed. Below are brief summaries of the 18 papers presented at the seminar. The order of the summaries follows the order of the papers in the proceedings. The titles of the papers are given in parentheses after the authors' names. AHUJA and WESLEY (From "Subject" to "Need": Shift in Approach to Classifying Information on the Internet/Web) argue that traditional bibliographic classification systems fail in the digital environment. One problem is that bibliographic classification systems have been developed to organize library books on shelves and as such are unidimensional and tied to the paper-based environment. Another problem is that they are "subject" oriented in the sense that they assume a relatively stable universe of knowledge containing basic and fixed compartments of knowledge that can be identified and represented. Ahuja and Wesley suggest that classification in the digital environment should be need-oriented instead of subject-oriented ("One important link that binds knowledge and human being is his societal need. ... Hence, it will be ideal to organise knowledge based upon need instead of subject." (p. 10)).
     SELVI (Knowledge Classification of Digital Information Materials with Special Reference to Clustering Technique) finds that it is essential to classify digital material since the amount of material becoming available is growing. Selvi suggests using automated classification to "group together those digital information materials or documents that are "most similar" (p. 65). This can be attained by using cluster analysis methods. PRADHAN and THULASI (A Study of the Use of Classification and Indexing Systems by Web Resource Directories) compare and contrast the classificatory structures of Google, Yahoo, and Looksmart's directories and compare the directories to the classificatory structures of the Dewey Decimal Classification, Library of Congress Classification and Colon Classification. They find differences between the directories' and the bibliographic classification systems' classificatory structures and principles. These differences stem from the fact that bibliographic classification systems are used to "classify academic resources for the research community" (p. 83) and directories "aim to categorize a wider breath of information groups, entertainment, recreation, govt. information, commercial information" (p. 83). NEELAMEGHAN (Hierarchy, Hierarchical Relation and Hierarchical Arrangement) reviews the concept of hierarchy and the formation of hierarchical structures across a variety of domains. NEELAMEGHAN and PRADAD (Digitized Schemes for Subject Classification and Thesauri: Complementary Roles) demonstrate how thesaural relationships (NT, BT, and RT) can be applied to a classification scheme, the Colon Classification in this case. NEELAMEGHAN and ASUNDI (Metadata Framework for Describing Embodied Knowledge and Subject Content) propose to use the Generalized Facet Structure framework, which is based on Ranganathan's General Theory of Knowledge Classification, as a framework for describing the content of documents in a metadata element set for the representation of web documents. CHUDAMANI (Classified Catalogue as a Tool for Subject Based Information Retrieval in both Traditional and Electronic Library Environment) explains why the classified catalogue is superior to the alphabetic catalogue and argues that the same is true in the digital environment.
     Discussion: The proceedings of the National Seminar on Classification in the Digital Environment give some insights. However, the depth of analysis and discussion is very uneven across the papers. Some of the papers have substantive research content while others appear to be notes used in the oral presentation. The treatments of the topics are very general in nature. Some papers have a very limited list of references while others have no bibliography. No index has been provided. The transfer of bibliographic knowledge organization theory to the digital environment is an important topic. However, as the papers at this conference have shown, it is also a difficult task. Of the 18 papers presented at this seminar on classification in the digital environment, only 4-5 papers actually deal directly with this important topic. The remaining papers deal with issues that are more or less relevant to classification in the digital environment without explicitly discussing the relation. The reason could be that the authors take up issues in knowledge organization that still need to be investigated and clarified before their application in the digital environment can be considered. Nonetheless, one wishes that the knowledge organization community would discuss the application of classification theory in the digital environment in greater detail. It is obvious from the comparisons of the classificatory structures of bibliographic classification systems and Web directories that these are different and that they probably should be different, since they serve different purposes. Interesting questions in the transformation of bibliographic classification theories to the digital environment are: "Given the existing principles in bibliographic knowledge organization, what are the optimum principles for organization of information, irrespective of context?" and "What are the fundamental theoretical and practical principles for the construction of Web directories?" Unfortunately, the papers presented at this seminar do not attempt to answer or discuss these questions."
  18. Lardera, M.; Gnoli, C.; Rolandi, C.; Trzmielewski, M.: Developing SciGator, a DDC-based library browsing tool (2017) 0.04
    0.038043402 = product of:
      0.076086804 = sum of:
        0.055991717 = weight(_text_:search in 4144) [ClassicSimilarity], result of:
          0.055991717 = score(doc=4144,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3258447 = fieldWeight in 4144, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4144)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 4144) [ClassicSimilarity], result of:
              0.04019018 = score(doc=4144,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 4144, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4144)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Exploring collections by their subject matter is an important functionality for library users. We developed an online tool called SciGator in order to allow users to browse the Dewey Decimal Classification (DDC) classes used in different libraries at the University of Pavia and to perform different types of search in the OPAC. Besides navigation of DDC hierarchies, SciGator suggests "see-also" relationships with related classes and maps equivalent classes in local shelving schemes, thus allowing the expansion of search queries to include subjects contiguous to the initial one. We are developing new features, including the possibility of expanding searches even further to national and international catalogues.
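
     A speculative sketch of the query-expansion idea described above: a DDC class chosen while browsing is expanded with its "see-also" classes and with the equivalent classes mapped in a local shelving scheme before the OPAC is queried. This is not SciGator's actual code; the mappings and shelf marks below are hypothetical:

# Hypothetical mappings; SciGator's real data and interfaces are not shown here.
SEE_ALSO = {"025.4": ["020"]}              # related DDC classes
LOCAL_EQUIVALENTS = {"025.4": ["INF 02"]}  # equivalent classes in a local shelving scheme

def expand_query(ddc_class):
    """Return the set of class marks to search for in the OPAC."""
    terms = {ddc_class}
    terms.update(SEE_ALSO.get(ddc_class, []))
    terms.update(LOCAL_EQUIVALENTS.get(ddc_class, []))
    return terms

print(sorted(expand_query("025.4")))  # ['020', '025.4', 'INF 02']
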
    Content
     Contribution to a special issue: ISKO-Italy: 8° Incontro ISKO Italia, Università di Bologna, 22 May 2017, Bologna, Italy.
  19. Kwasnik, B.H.: Commercial Web sites and the use of classification schemes : the case of Amazon.Com (2003) 0.04
    0.037249055 = product of:
      0.07449811 = sum of:
        0.03490599 = weight(_text_:web in 2696) [ClassicSimilarity], result of:
          0.03490599 = score(doc=2696,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 2696, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2696)
        0.03959212 = weight(_text_:search in 2696) [ClassicSimilarity], result of:
          0.03959212 = score(doc=2696,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 2696, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=2696)
      0.5 = coord(2/4)
    
    Abstract
    The structure and use of the classification for books on the amazon.com website are described and analyzed. The contents of this very large website are changing constantly and the access mechanisms have the main purpose of enabling searchers to find books for purchase. This includes finding books the searcher knows about at the start of the search, as well as those that might present themselves in the course of searching and that are related in some way. Underlying the many access paths to books is a classification scheme comprising a rich network of terms in an enumerative and multihierarchical structure.
  20. Broughton, V.; Lane, H.: Classification schemes revisited : applications to Web indexing and searching (2000) 0.04
    0.03706527 = product of:
      0.07413054 = sum of:
        0.041137107 = weight(_text_:web in 2476) [ClassicSimilarity], result of:
          0.041137107 = score(doc=2476,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 2476, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2476)
        0.032993436 = weight(_text_:search in 2476) [ClassicSimilarity], result of:
          0.032993436 = score(doc=2476,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 2476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2476)
      0.5 = coord(2/4)
    
    Abstract
     Basic skills of classification and subject indexing have been little taught in British library schools since automation was introduced into libraries. However, the development of the Internet as a major medium of publication has stretched the capability of search engines to cope with retrieval. Consequently, there has been interest in applying existing systems of knowledge organization to electronic resources. Unfortunately, the classification systems have been adopted without a full understanding of modern classification principles. Analytico-synthetic schemes have been used crudely, as in the case of the Universal Decimal Classification (UDC). The fully faceted Bliss Bibliographical Classification, 2nd edition (BC2), with its potential as a tool for electronic resource retrieval, is virtually unknown outside academic libraries.
    Content
    A short discussion of using classification systems to organize the web, one of many such. The authors are both involved with BC2 and naturally think it is the best system for organizing information online. They list reasons why faceted classifications are best (e.g. no theoretical limits to specificity or exhaustivity; easier to handle complex subjects; flexible enough to accommodate different user needs) and take a brief look at how BC2 works. They conclude with a discussion of how and why it should be applied to online resources, and a plea for recognition of the importance of classification and subject analysis skills, even when full-text searching is available and databases respond instantly.

Languages

  • e 98
  • d 12
  • nl 2
  • es 1

Types

  • a 96
  • el 18
  • m 4
  • s 3
  • p 1
  • x 1

Classifications