Search (36 results, page 1 of 2)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  • year_i:[1990 TO 2000}
  1. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.13
    0.1318309 = product of:
      0.19774634 = sum of:
        0.067437425 = weight(_text_:wide in 1673) [ClassicSimilarity], result of:
          0.067437425 = score(doc=1673,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 1673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.063368805 = weight(_text_:web in 1673) [ClassicSimilarity], result of:
          0.063368805 = score(doc=1673,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43716836 = fieldWeight in 1673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.04587784 = weight(_text_:computer in 1673) [ClassicSimilarity], result of:
          0.04587784 = score(doc=1673,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 1673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.04212451 = score(doc=1673,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
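    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scheme: each matched term contributes queryWeight × fieldWeight, an inner coord scales partially matched sub-clauses, and the outer coord scales for the fraction of query terms matched. The minimal Python sketch below (function and constant names invented for illustration) recomputes this record's 0.1318309 score from the factors listed above.

```python
from math import sqrt

def term_score(freq, idf, query_norm, field_norm, coord=1.0):
    """One weight(_text_:...) node: queryWeight * fieldWeight, times any inner coord."""
    query_weight = idf * query_norm               # e.g. 4.4307585 * 0.044416238 = 0.19679762
    field_weight = sqrt(freq) * idf * field_norm  # tf(freq) * idf * fieldNorm
    return query_weight * field_weight * coord

QUERY_NORM, FIELD_NORM = 0.044416238, 0.0546875

terms = [
    term_score(2.0, 4.4307585, QUERY_NORM, FIELD_NORM),             # _text_:wide
    term_score(6.0, 3.2635105, QUERY_NORM, FIELD_NORM),             # _text_:web
    term_score(2.0, 3.6545093, QUERY_NORM, FIELD_NORM),             # _text_:computer
    term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM, coord=0.5),  # _text_:22, coord(1/2)
]

score = sum(terms) * (4 / 6)   # outer coord(4/6): 4 of 6 query terms matched
print(round(score, 7))         # 0.1318309
```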
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; see also: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
    Source
    Computer networks and ISDN systems. 30(1998) nos.1/7, S.646-648
  2. Ardo, A.; Lundberg, S.: ¬A regional distributed WWW search and indexing service : the DESIRE way (1998) 0.12
    0.11859971 = product of:
      0.17789957 = sum of:
        0.057803504 = weight(_text_:wide in 4190) [ClassicSimilarity], result of:
          0.057803504 = score(doc=4190,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 4190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
        0.062718846 = weight(_text_:web in 4190) [ClassicSimilarity], result of:
          0.062718846 = score(doc=4190,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 4190, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
        0.039323866 = weight(_text_:computer in 4190) [ClassicSimilarity], result of:
          0.039323866 = score(doc=4190,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 4190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
              0.03610672 = score(doc=4190,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 4190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4190)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
    
    Abstract
    Creates an open, metadata-aware system for distributed, collaborative WWW indexing. The system has 3 main components: a harvester (for collecting information), a database (for making the collection searchable), and a user interface (for making the information available). All components can be distributed across networked computers, thus supporting scalability. The system is metadata-aware and thus allows searches on several fields, including title, document author and URL. The Nordic Web Index (NWI) is an application using this system to create a regional Nordic Web-indexing service. NWI is built using 5 collaborating service points within the Nordic countries. The NWI databases can be used to build additional services.
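    As an illustration only of the three-part architecture just described (harvester, database, user interface), here is a toy, single-process Python sketch; the class names and the tiny in-memory field index are hypothetical stand-ins for the distributed DESIRE/NWI components.

```python
from collections import defaultdict

class Harvester:
    """Collects documents; here a stub returning canned records instead of crawling."""
    def collect(self):
        return [
            {"url": "http://example.org/a", "title": "Nordic metadata", "author": "n.n."},
            {"url": "http://example.org/b", "title": "Web indexing",    "author": "n.n."},
        ]

class Database:
    """Makes the collection searchable, field by field (metadata-aware)."""
    def __init__(self):
        self.index = defaultdict(list)   # (field, term) -> [record, ...]

    def add(self, record):
        for field, value in record.items():
            for term in value.lower().split():
                self.index[(field, term)].append(record)

    def search(self, field, term):
        return self.index.get((field, term.lower()), [])

class UserInterface:
    """Makes the information available: formats search hits for display."""
    def __init__(self, db):
        self.db = db

    def query(self, field, term):
        return [f"{r['title']} <{r['url']}>" for r in self.db.search(field, term)]

db = Database()
for rec in Harvester().collect():
    db.add(rec)
print(UserInterface(db).query("title", "indexing"))   # ['Web indexing <http://example.org/b>']
```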
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Object
    Nordic Web Index
    Source
    Computer networks and ISDN systems. 30(1998) nos.1/7, S.149-159
  3. Robbins, F.: ¬An exploration of the application of classification systems as a method for resource delivery on the World Wide Web (1999) 0.06
    0.059441954 = product of:
      0.17832586 = sum of:
        0.11560701 = weight(_text_:wide in 400) [ClassicSimilarity], result of:
          0.11560701 = score(doc=400,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.5874411 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.09375 = fieldNorm(doc=400)
        0.062718846 = weight(_text_:web in 400) [ClassicSimilarity], result of:
          0.062718846 = score(doc=400,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=400)
      0.33333334 = coord(2/6)
    
  4. Wätjen, H.-J.: Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web : das DFG-Projekt GERHARD (1998) 0.05
    0.04953496 = product of:
      0.14860488 = sum of:
        0.09633918 = weight(_text_:wide in 3066) [ClassicSimilarity], result of:
          0.09633918 = score(doc=3066,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.48953426 = fieldWeight in 3066, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.078125 = fieldNorm(doc=3066)
        0.052265707 = weight(_text_:web in 3066) [ClassicSimilarity], result of:
          0.052265707 = score(doc=3066,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 3066, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=3066)
      0.33333334 = coord(2/6)
    
  5. Möller, G.: Automatic classification of the World Wide Web using Universal Decimal Classification (1999) 0.05
    0.04953496 = product of:
      0.14860488 = sum of:
        0.09633918 = weight(_text_:wide in 494) [ClassicSimilarity], result of:
          0.09633918 = score(doc=494,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.48953426 = fieldWeight in 494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.078125 = fieldNorm(doc=494)
        0.052265707 = weight(_text_:web in 494) [ClassicSimilarity], result of:
          0.052265707 = score(doc=494,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=494)
      0.33333334 = coord(2/6)
    
  6. McKiernan, G.: Parallel universe : the organization of information elements and access in a World Wide Web (WWW) Virtual Library (1996) 0.04
    0.04203181 = product of:
      0.12609543 = sum of:
        0.0817465 = weight(_text_:wide in 5184) [ClassicSimilarity], result of:
          0.0817465 = score(doc=5184,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.4153836 = fieldWeight in 5184, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5184)
        0.04434892 = weight(_text_:web in 5184) [ClassicSimilarity], result of:
          0.04434892 = score(doc=5184,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 5184, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5184)
      0.33333334 = coord(2/6)
    
    Abstract
    For generations, libraries have facilitated access to information sources by the development and use of a wide range of appropriate organizational processes. Within a Web-based demonstration prototype, we have applied several established library procedures, principles and practices to enhance access to selected Internet resources in science and technology. In seeking to manage these sources, we have established a defined collection, adopted an established library classification scheme as an organizational framework, and sought to simulate the features and functions of a physical library collection and conventional reference sources. This paper describes the key components of this prototype, reviews research which supports its approach, and profiles suggested enhancements which could further facilitate identification, access and use of significant Internet and WWW resources.
  7. Dodd, D.G.: Grass-roots cataloging and classification : food for thought from World Wide Web subject-oriented hierarchical lists (1996) 0.04
    0.039725944 = product of:
      0.11917783 = sum of:
        0.067437425 = weight(_text_:wide in 7270) [ClassicSimilarity], result of:
          0.067437425 = score(doc=7270,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 7270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7270)
        0.05174041 = weight(_text_:web in 7270) [ClassicSimilarity], result of:
          0.05174041 = score(doc=7270,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 7270, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7270)
      0.33333334 = coord(2/6)
    
    Abstract
    The explosion of the use of the Internet by the general public, particularly via the WWW, has given rise to the proliferation of semiprofessional attempts to give some subject-based access to Internet resources via hierarchical guides (hotlists) on Web search engines such as Yahoo and Magellan. Examines the structure and principles of various hierarchical lists, and compares them, when possible, to broad LCC and DDC schemes, and to LCSH. Explores the approaches taken by non-librarians in their efforts to organize and provide access to materials on the Internet. Focuses on the dichotomy between the hierarchical 'browse' and the analytical 'search' approaches to finding materials, as exemplified by these various attempts to organize the Internet.
  8. Wätjen, H.-J.: GERHARD : Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web (1998) 0.04
    0.039725944 = product of:
      0.11917783 = sum of:
        0.067437425 = weight(_text_:wide in 3064) [ClassicSimilarity], result of:
          0.067437425 = score(doc=3064,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 3064, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3064)
        0.05174041 = weight(_text_:web in 3064) [ClassicSimilarity], result of:
          0.05174041 = score(doc=3064,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 3064, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3064)
      0.33333334 = coord(2/6)
    
    Abstract
    The intellectual indexing of the Internet is in a crisis. Yahoo and other services cannot keep up with the growth of the Web. GERHARD is currently the only search and navigation service worldwide that also automatically and completely classifies robot-harvested Internet resources using computational-linguistic and statistical methods. As with other search engines, well over a million HTML documents from academically relevant servers in Germany can be searched in the database, but they can also be explored by navigating the trilingual Universal Decimal Classification (ETH-Bibliothek Zürich).
  9. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.04
    0.03737321 = product of:
      0.11211963 = sum of:
        0.057803504 = weight(_text_:wide in 726) [ClassicSimilarity], result of:
          0.057803504 = score(doc=726,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 726, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
        0.054316122 = weight(_text_:web in 726) [ClassicSimilarity], result of:
          0.054316122 = score(doc=726,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 726, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
      0.33333334 = coord(2/6)
    
    Abstract
    This article gives a cheerfully brief and undetailed account of how to make a faceted classification system, then describes information retrieval and searching on the web. It concludes by saying that facets would be excellent in helping users search and browse the web, but offers no real clues as to how this can be done.
  10. Allen, R.B.: ¬Two digital library interfaces that exploit hierarchical structure (1995) 0.03
    0.03237579 = product of:
      0.09712737 = sum of:
        0.057803504 = weight(_text_:wide in 2416) [ClassicSimilarity], result of:
          0.057803504 = score(doc=2416,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 2416, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2416)
        0.039323866 = weight(_text_:computer in 2416) [ClassicSimilarity], result of:
          0.039323866 = score(doc=2416,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 2416, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2416)
      0.33333334 = coord(2/6)
    
    Abstract
    Two library classification system interfaces have been implemented for navigating and searching large collections of document and book records. One interface allows the user to browse book records organized by the DDC hierarchy. A Book Shelf display reflects the facet position in the classification hierarchy during browsing, and it dynamically updates to reflect search hits and attribute selections. The other interface provides access to records describing computer science documents classified by the ACM Computing Reviews (CR) system. The CR classification system is a type of faceted classification in which documents can appear at several points in the hierarchy. These two interfaces demonstrate that classification structure can be effectively utilized for organizing digital libraries and, potentially, collections of Internet-wide information services.
  11. Gödert, W.: Facet classification in online retrieval (1991) 0.03
    0.028375676 = product of:
      0.085127026 = sum of:
        0.04816959 = weight(_text_:wide in 5825) [ClassicSimilarity], result of:
          0.04816959 = score(doc=5825,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 5825, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5825)
        0.036957435 = weight(_text_:web in 5825) [ClassicSimilarity], result of:
          0.036957435 = score(doc=5825,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 5825, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5825)
      0.33333334 = coord(2/6)
    
    Abstract
    "Online retrieval" conjures up a very different mental image now than in 1991, the year this article was written, and the year Tim Berners-Lee first revealed the new hypertext system he called the World Wide Web. Gödert shows that truncation and Boolean logic, combined with notation from a faceted classification system, will be a powerful way of searching for information. It undoubtedly is, but no system built now would require a user searching for material on "nervous systems of bone fish" to enter "Fdd$ and Leaa$". This is worth reading for someone interested in seeing how searching and facets can go together, but the web has made this article quite out of date.
  12. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999) 0.02
    0.016470928 = product of:
      0.049412783 = sum of:
        0.031359423 = weight(_text_:web in 2464) [ClassicSimilarity], result of:
          0.031359423 = score(doc=2464,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 2464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2464)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
              0.03610672 = score(doc=2464,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
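    To make the four steps above concrete (choose facets, develop them, analyze entities with them, fix a citation order), here is a toy Python sketch; the facets, terms and notation are invented for illustration and are not drawn from Kwasnik's article.

```python
# 1. Choose facets and 2. develop them (hypothetical scheme for cooking literature).
facets = {
    "Ingredient": {"fish": "I1", "vegetables": "I2"},
    "Process":    {"grilling": "P1", "steaming": "P2"},
}

# 4. Citation order: the sequence in which facets are combined into a notation.
citation_order = ["Ingredient", "Process"]

# 3. Analyze an entity by assigning it a term from each applicable facet.
def classify(analysis: dict) -> str:
    """Build a class notation by walking the facets in citation order."""
    return "".join(
        facets[facet][analysis[facet]]
        for facet in citation_order
        if facet in analysis
    )

print(classify({"Ingredient": "fish", "Process": "grilling"}))  # I1P1
print(classify({"Process": "steaming"}))                        # P2 (facets may be absent)
```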
    Source
    Library trends. 48(1999) no.1, S.22-47
  13. Mitchell, J.S.: Flexible structures in the Dewey Decimal Classification (1998) 0.02
    0.01605653 = product of:
      0.09633918 = sum of:
        0.09633918 = weight(_text_:wide in 4561) [ClassicSimilarity], result of:
          0.09633918 = score(doc=4561,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.48953426 = fieldWeight in 4561, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.078125 = fieldNorm(doc=4561)
      0.16666667 = coord(1/6)
    
    Abstract
    Discusses how a general library classification such as the DDC can be transformed into a general knowledge organisation tool for the world-wide electronic information environment
  14. Buxton, A.: Computer searching of UDC numbers (1993) 0.02
    0.015292614 = product of:
      0.09175568 = sum of:
        0.09175568 = weight(_text_:computer in 42) [ClassicSimilarity], result of:
          0.09175568 = score(doc=42,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.56527805 = fieldWeight in 42, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.109375 = fieldNorm(doc=42)
      0.16666667 = coord(1/6)
    
  15. Dhyani, P.: Library classification in computer age (1999) 0.02
    0.015135763 = product of:
      0.090814576 = sum of:
        0.090814576 = weight(_text_:computer in 3153) [ClassicSimilarity], result of:
          0.090814576 = score(doc=3153,freq=6.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.5594802 = fieldWeight in 3153, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=3153)
      0.16666667 = coord(1/6)
    
    Abstract
    Library classification is constantly being influenced by the multifaceted, multidimensional, and infinite growth of literature on the one hand and by users' needs on the other. Dewey pioneered the devising of a classification scheme for the documentation utility of organised knowledge. Subsequent classification schemes worked without any theoretical foundation, Colon Classification being the exception. With the emergence of computer technology, library classification is being metamorphosed. This paper attempts to present a state-of-the-art account of library classification in the new computer age.
  16. Buxton, A.B.: Computer searching of UDC numbers (1990) 0.01
    0.013107955 = product of:
      0.07864773 = sum of:
        0.07864773 = weight(_text_:computer in 5406) [ClassicSimilarity], result of:
          0.07864773 = score(doc=5406,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.48452407 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.09375 = fieldNorm(doc=5406)
      0.16666667 = coord(1/6)
    
  17. Peereboom, M.: Dwerg tussen reuzen? : het Nederlandse basisclassificatie Web (1997) 0.01
    0.012195333 = product of:
      0.073171996 = sum of:
        0.073171996 = weight(_text_:web in 515) [ClassicSimilarity], result of:
          0.073171996 = score(doc=515,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.50479853 = fieldWeight in 515, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=515)
      0.16666667 = coord(1/6)
    
    Abstract
    Developments in electronic communication technology have made online databases a normal part of library collections. To provide users with direct access to Internet resources, the Dutch Royal Library has cooperated with several university libraries in the Netherlands to develop the Nederlandse Basisclassificatie Web. Subject specialists select sources, add English summaries and NBW code, and input them to the online database. A Web desk and training workshops have been provided to assist users, and improvements to the system will simplify search procedures.
    Footnote
    Translation of the title: A dwarf amongst giants? : the Dutch Basic classification of Web resources.
  18. Poynder, R.: Web research engines? (1996) 0.01
    0.011686969 = product of:
      0.07012181 = sum of:
        0.07012181 = weight(_text_:web in 5698) [ClassicSimilarity], result of:
          0.07012181 = score(doc=5698,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.48375595 = fieldWeight in 5698, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
      0.16666667 = coord(1/6)
    
    Abstract
    Describes the shortcomings of search engines for the WWW, comparing their current capabilities to those of first-generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching. Few allow truncation, wild cards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that, at best, Web search engines can only offer free-text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may be redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases. Relevance ranking and automatic query expansion still use the same simple inverted indexes. Most Web search engines do nothing more than word counting. Further complications arise with foreign languages.
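    As an illustration of the 'word counting' point above, here is a minimal Python sketch of an inverted index that ranks pages purely by raw query-term counts; the sample pages are invented and no particular engine's implementation is implied.

```python
from collections import Counter, defaultdict

pages = {
    "p1": "web search engines count words on web pages",
    "p2": "boolean logic and truncation in cd-rom retrieval",
}

# Build the inverted index: term -> {page: term count}.
index = defaultdict(Counter)
for page_id, text in pages.items():
    for term in text.split():
        index[term][page_id] += 1

def rank(query: str):
    """Score each page by summing raw counts of the query terms it contains."""
    scores = Counter()
    for term in query.lower().split():
        scores.update(index[term])
    return scores.most_common()

print(rank("web search"))   # [('p1', 3)] - p1 contains 'web' twice and 'search' once
```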
  19. Guenther, R.S.: Bringing the Library of Congress into the computer age : converting LCC to machine-readable form (1996) 0.01
    0.010923296 = product of:
      0.06553978 = sum of:
        0.06553978 = weight(_text_:computer in 4578) [ClassicSimilarity], result of:
          0.06553978 = score(doc=4578,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 4578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=4578)
      0.16666667 = coord(1/6)
    
  20. Liu, S.: Decomposing DDC synthesized numbers (1997) 0.01
    0.009268723 = product of:
      0.05561234 = sum of:
        0.05561234 = weight(_text_:computer in 5968) [ClassicSimilarity], result of:
          0.05561234 = score(doc=5968,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.34261024 = fieldWeight in 5968, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=5968)
      0.16666667 = coord(1/6)
    
    Abstract
    Some empirical studies have explored the direct use of traditional classification schemes in the online environment; none has manipulated these manual classifications in such a way as to take full advantage of the power of both the classification and the computer. It has been suggested that this power could be realized if the individual components of synthesized DDC numbers could be identified and indexed. Looks at the feasibility of automatically decomposing DDC synthesized numbers and the implications of such decompositions for information retrieval. 1,701 synthesized numbers were decomposed by a computer system called DND (Dewey Number Decomposer); 600 were randomly selected for examination by 3 judges, each evaluating 200 numbers. The decomposition success rate was 100% and it was concluded that synthesized DDC numbers can be accurately decomposed automatically. The study has implications for information retrieval, expert systems for assigning DDC numbers, automatic indexing, switching language development and other important areas of cataloguing and classification.