Search (36 results, page 1 of 2)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Hill, J.S.: Online classification number access : some practical considerations (1984) 0.01
    0.012333291 = product of:
      0.049333163 = sum of:
        0.049333163 = product of:
          0.098666325 = sum of:
            0.098666325 = weight(_text_:22 in 7684) [ClassicSimilarity], result of:
              0.098666325 = score(doc=7684,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.61904186 = fieldWeight in 7684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=7684)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of academic librarianship. 10(1984), S.17-22
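The scoring tree above is Lucene's ClassicSimilarity (tf-idf) "explain" output. A minimal sketch of the same arithmetic in Python, using the constants shown in the tree (the function name is illustrative):

```python
from math import sqrt

# Reproduce the ClassicSimilarity explain tree for result 1
# (term "22" in doc 7684), using the constants printed above.
def classic_score(freq, idf, query_norm, field_norm, coord=0.5 * 0.25):
    tf = sqrt(freq)                       # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # 0.15938555
    field_weight = tf * idf * field_norm  # 0.61904186 = fieldWeight
    # coord(1/2) * coord(1/4) are the coordination factors in the tree
    return field_weight * query_weight * coord

score = classic_score(freq=2.0, idf=3.5018296,
                      query_norm=0.045514934, field_norm=0.125)
# score ≈ 0.0123333; the "0.01" next to the entry is this value, rounded
```

The same shape (tf × idf × fieldNorm, times queryWeight and the coord factors) underlies every score in this result list; only the constants differ.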
  2. XFML Core - eXchangeable Faceted Metadata Language (2003) 0.01
    Abstract
    The specification for XFML, a markup language designed to handle faceted classifications. Browsing the site (http://www.xfml.org/) will reveal news about XFML and links to related software and web sites. XFML is not an officially recognized Internet standard, but is the de facto standard.
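As a rough illustration of what such a faceted map looks like (the element and attribute names below follow the general shape of XFML Core but are a sketch, not copied from the specification; check xfml.org for the authoritative syntax):

```xml
<!-- Illustrative XFML-style facet map; names approximate the XFML Core spec -->
<xfml version="1.0" url="http://example.org/maps/topics.xfml">
  <facet id="subject">Subject</facet>
  <topic id="classification" facetid="subject">
    <name>Classification</name>
  </topic>
  <page url="http://example.org/articles/faceted-intro.html">
    <title>An introduction to faceted classification</title>
    <occurrence topicid="classification"/>
  </page>
</xfml>
```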
  3. Satyapal, B.G.; Satyapal, N.S.: SATSAN AUTOMATRIX Version 1 : a computer programme for synthesis of Colon class number according to the postulational approach (2006) 0.01
    Abstract
Describes the features and capabilities of the software SATSAN AUTOMATRIX version 1 for semi-automatic synthesis of the Colon Class Number (CCN) for a given subject according to the Postulational Approach formulated by S.R. Ranganathan. The present AutoMatrix version 1 gives the user more facilities to carry out facet analysis of a subject (simple, compound, or complex) preparatory to synthesizing the corresponding CCN. The software also enables searching for and using previously constructed class numbers automatically, and the maintenance and use of databases of the CC Index, facet formulae, and CC schedules for subjects going with different Basic Subjects. The paper begins with a brief account of the authors' consultations with, and directions received from, Prof. A. Neelameghan in the course of developing the software. Oracle 8 and VB6 have been used in writing the programmes, but for operating SATSAN it is not necessary for users to be proficient in VB6 or Oracle 8. Any computer-literate person with a basic knowledge of Microsoft Word will be able to use this application software.
  4. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.01
    Date
    22. 6.2002 19:42:47
  5. Buxton, A.B.: UDC in online systems (1991) 0.01
    Abstract
Examines how well UDC numbers perform as a subject retrieval device in online systems. Discusses truncation, coordination, UDC as a discipline-based scheme, ranges, and requirements in search software. Gives examples of UDC in pre-coordinated and post-coordinated working systems. Discusses the possible use of UDC as a thesaurus. Outlines improvements that would enable its use in online retrieval.
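Truncation is the key retrieval device here: because UDC notation is hierarchical, right-truncating a class number retrieves it together with all its subdivisions. A minimal sketch (the class numbers and captions are illustrative examples, not taken from the article):

```python
# Right-truncation on UDC-style notation: a truncated class number
# matches itself and every subordinate (longer) number sharing the stem.
docs = {
    "621.39":  "Telecommunications",
    "621.391": "General theory of communication",
    "622":     "Mining",
}

def truncated_search(stem):
    """Return all documents whose class number begins with the stem."""
    return {n: caption for n, caption in docs.items() if n.startswith(stem)}

truncated_search("621.39")  # matches 621.39 and 621.391, but not 622
```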
  6. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.01
    Date
    8. 1.2007 12:22:40
  7. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    Date
    22. 8.2009 12:54:24
  8. Guenther, R.S.: Automating the Library of Congress Classification Scheme : implementation of the USMARC format for classification data (1996) 0.01
    Abstract
Potential uses for classification data in machine-readable form, and the reasons for the development of a standard, the USMARC Format for Classification Data, which allows classification data to interact with other USMARC bibliographic and authority data, are discussed. The development, structure, content, and use of the standard are reviewed, with implementation decisions for the Library of Congress Classification scheme noted. The author examines the implementation of USMARC classification at LC, the conversion of the schedules, and the functionality of the software being used. Problems in the effort are explored, and enhancements desired for the online classification system are considered.
  9. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    Abstract
On 29 and 30 October 2009 the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library in The Hague. As with the first conference of this kind in 2007, it was organised by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search, and multilingual access also played a role. 135 participants from 35 countries came to The Hague. With 22 papers from 14 different countries, the programme covered a broad range; the United Kingdom was most strongly represented, with five contributions. On both conference days the thematic focus was set by the opening talks, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  10. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    Date
    30. 7.2004 12:22:52
  11. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.01
    Date
    22. 6.2002 19:39:23
  12. Van Dijck, P.: Introduction to XFML (2003) 0.01
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
  13. Neelameghan, A.: S.R. Ranganathan's general theory of knowledge classification in designing, indexing and retrieving from specialised databases (1997) 0.01
    Abstract
Summarizes some experiences of applying the principles and postulates of S.R. Ranganathan's General Theory of Knowledge Classification, incorporating the freely faceted approach and analytico-synthetic methods, to the design and development of specialized databases, including indexing, user interfaces, and retrieval. Enumerates some of the earlier instances of the facet method in machine-based systems, beginning with Hollerith's punched-card system for the data processing of the US Census. Elaborates on Ranganathan's holistic approach to information systems and services provided by his normative principles. Notes similarities between the design of databases and faceted classification systems. Examples from working systems are given to demonstrate the usefulness of selected canons and principles of classification, and of the analytico-synthetic methodology, to database design. The examples are mostly operational database systems developed using Unesco's Micro CDS/ISIS software.
  14. Broughton, V.: Finding Bliss on the Web : some problems of representing faceted terminologies in digital environments 0.01
    Abstract
    The Bliss Bibliographic Classification is the only example of a fully faceted general classification scheme in the Western world. Although it is the object of much interest as a model for other tools it suffers from the lack of a web presence, and remedying this is an immediate objective for its editors. Understanding how this might be done presents some challenges, as the scheme is semantically very rich and complex in the range and nature of the relationships it contains. The automatic management of these is already in place using local software, but exporting this to a common data format needs careful thought and planning. Various encoding schemes, both for traditional classifications, and for digital materials, represent variously: the concepts; their functional roles; and the relationships between them. Integrating these aspects in a coherent and interchangeable manner appears to be achievable, but the most appropriate format is as yet unclear.
  15. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.01
    Abstract
Organizing and providing access to resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification, and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core, and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record, depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. Search is carried out on index entries derived from the faceted subject headings using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the headings in a sorted sequence reflecting an organizing sequence.
If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs; using the URL, the original document on the web can be accessed. The prototype system, developed under the Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
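The two-step search described here (match a term against the faceted subject headings, then narrow the hit list with a second term) can be sketched as follows; the headings and URLs are invented placeholders, not data from the system:

```python
# Hypothetical sketch of the two-step heading search described above.
# Each faceted subject heading maps to the URLs of documents it describes.
headings = {
    "Internet resources. Organisation. Faceted indexing": ["http://example.org/1"],
    "Internet resources. Retrieval. Search engines":      ["http://example.org/2"],
}

def search(term, within=None):
    """Return headings containing the term, sorted into an organizing sequence.
    If `within` is given, search only those headings (combination search)."""
    pool = within if within is not None else headings.keys()
    return sorted(h for h in pool if term.lower() in h.lower())

first = search("Internet")             # all headings containing the first term
refined = search("Retrieval", first)   # narrow the hit list with a second term
urls = [u for h in refined for u in headings[h]]  # selecting a heading yields URLs
```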
  16. Dack, D.: Australian attends conference on Dewey (1989) 0.01
    Date
    8.11.1995 11:52:22
  17. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1998) 0.01
    Date
    22. 9.1997 19:16:05
  18. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.01
    Date
    30.12.2001 16:22:41
  19. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    Date
    1. 8.1996 22:08:06
  20. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.01
    Abstract
From January 2006 Die Deutsche Bibliothek will launch a new web offering named Melvil, a result of its commitment to the DDC and the DDC Deutsch project. The web service is based on the translation of the 22nd edition of the DDC, which appears as a print edition from K. G. Saur in October 2005. It offers features beyond the print edition, however, which support classifiers in their work and, for the first time, enable verbal (word-based) searching of DDC-indexed titles for end users. The Melvil web service comprises three applications: MelvilClass, MelvilSearch, and MelvilSoap.