Search (91 results, page 1 of 5)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.02
    0.019470131 = product of:
      0.05841039 = sum of:
        0.012374603 = product of:
          0.024749206 = sum of:
            0.024749206 = weight(_text_:web in 4379) [ClassicSimilarity], result of:
              0.024749206 = score(doc=4379,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.21634221 = fieldWeight in 4379, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.5 = coord(1/2)
        0.04603579 = product of:
          0.06905368 = sum of:
            0.028754493 = weight(_text_:29 in 4379) [ClassicSimilarity], result of:
              0.028754493 = score(doc=4379,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23319192 = fieldWeight in 4379, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
            0.04029919 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
              0.04029919 = score(doc=4379,freq=4.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.32829654 = fieldWeight in 4379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
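
     The indented tree above is a Lucene "explain" breakdown of the ClassicSimilarity (TF-IDF) score: each leaf scores one query term in one field, and the product-of/sum-of/coord lines combine the leaves. As a rough illustration, the following Python sketch recomputes the "web" leaf of this tree from the quantities shown (freq, docFreq, maxDocs, queryNorm, fieldNorm); the function and variable names are ours, not Lucene's.

       import math

       def tf(freq):
           # ClassicSimilarity term-frequency factor: sqrt(freq)
           return math.sqrt(freq)

       def idf(doc_freq, max_docs):
           # ClassicSimilarity inverse document frequency:
           # 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       # Quantities copied from the explain tree for term "web" in doc 4379
       freq, doc_freq, max_docs = 2.0, 4597, 44218
       query_norm, field_norm = 0.03505379, 0.046875

       term_idf = idf(doc_freq, max_docs)               # 3.2635105
       query_weight = term_idf * query_norm             # 0.11439841 = queryWeight
       field_weight = tf(freq) * term_idf * field_norm  # 0.21634221 = fieldWeight
       print(query_weight * field_weight)               # 0.024749206 = leaf score

     The coord(2/6) factor at the root then scales the summed leaf scores (0.05841039) by the fraction of query clauses that matched the document, giving the final 0.019470131.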
    
    Abstract
     On 29 and 30 October 2009 the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search, and multilingual access also played a part. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries the programme covered a broad range of topics, with the United Kingdom most strongly represented at five contributions. On both days of the conference the opening talks set the day's main themes, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  2. Lin, Z.Y.: Classification practice and implications for subject directories of the Chinese language Web-based digital library (2000) 0.01
    0.014639623 = product of:
      0.04391887 = sum of:
        0.024749206 = product of:
          0.049498413 = sum of:
            0.049498413 = weight(_text_:web in 3438) [ClassicSimilarity], result of:
              0.049498413 = score(doc=3438,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.43268442 = fieldWeight in 3438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3438)
          0.5 = coord(1/2)
        0.019169662 = product of:
          0.057508986 = sum of:
            0.057508986 = weight(_text_:29 in 3438) [ClassicSimilarity], result of:
              0.057508986 = score(doc=3438,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.46638384 = fieldWeight in 3438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3438)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Source
    Journal of Internet cataloging. 3(2000) no.4, S.29-50
  3. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    0.013318596 = product of:
      0.039955787 = sum of:
        0.028874075 = product of:
          0.05774815 = sum of:
            0.05774815 = weight(_text_:web in 88) [ClassicSimilarity], result of:
              0.05774815 = score(doc=88,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.50479853 = fieldWeight in 88, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
        0.01108171 = product of:
          0.03324513 = sum of:
            0.03324513 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
              0.03324513 = score(doc=88,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2708308 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  4. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.01
    0.012722294 = product of:
      0.07633376 = sum of:
        0.07633376 = product of:
          0.11450064 = sum of:
            0.057508986 = weight(_text_:29 in 6040) [ClassicSimilarity], result of:
              0.057508986 = score(doc=6040,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.46638384 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
            0.05699165 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.05699165 = score(doc=6040,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 6.2002 19:42:47
    Source
    International cataloguing and bibliographic control. 29(2000) no.3, S.45-48
  5. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.012029132 = product of:
      0.036087394 = sum of:
        0.025005683 = product of:
          0.050011367 = sum of:
            0.050011367 = weight(_text_:web in 1673) [ClassicSimilarity], result of:
              0.050011367 = score(doc=1673,freq=6.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.43716836 = fieldWeight in 1673, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
        0.01108171 = product of:
          0.03324513 = sum of:
            0.03324513 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.03324513 = score(doc=1673,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
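
     The abstract gives no implementation detail of the classifier itself (which was written in Java). As a purely hypothetical illustration of the general idea of keyword-based automatic DDC assignment, a naive version might score a page against a few top-level DDC captions; everything below (class captions, keyword sets) is invented, not the WWLib method.

       # Hypothetical keyword-based DDC assignment; not the WWLib classifier.
       DDC_KEYWORDS = {
           "000 Computer science": {"computer", "software", "internet", "data"},
           "020 Library science":  {"library", "catalogue", "classification"},
           "500 Natural sciences": {"physics", "chemistry", "biology"},
       }

       def classify(text):
           # Score each class by how many of its keywords occur in the text
           words = set(text.lower().split())
           scores = {cls: len(words & kws) for cls, kws in DDC_KEYWORDS.items()}
           best = max(scores, key=scores.get)
           return best if scores[best] > 0 else None

       print(classify("a library catalogue of classification schemes"))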
    Date
    1. 8.1996 22:08:06
    Footnote
     Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; see also: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
  6. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    0.011999531 = product of:
      0.03599859 = sum of:
        0.023333777 = product of:
          0.046667553 = sum of:
            0.046667553 = weight(_text_:web in 2871) [ClassicSimilarity], result of:
              0.046667553 = score(doc=2871,freq=4.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.4079388 = fieldWeight in 2871, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2871)
          0.5 = coord(1/2)
        0.012664813 = product of:
          0.037994437 = sum of:
            0.037994437 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
              0.037994437 = score(doc=2871,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.30952093 = fieldWeight in 2871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2871)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     This is a report on how Doyle and others developed a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why they did it, how, their use of OPML and XFML, how they did research to find terms and categories, and they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
  7. Ardo, A.; Lundberg, S.: ¬A regional distributed WWW search and indexing service : the DESIRE way (1998) 0.01
    0.011415939 = product of:
      0.034247816 = sum of:
        0.024749206 = product of:
          0.049498413 = sum of:
            0.049498413 = weight(_text_:web in 4190) [ClassicSimilarity], result of:
              0.049498413 = score(doc=4190,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.43268442 = fieldWeight in 4190, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4190)
          0.5 = coord(1/2)
        0.009498609 = product of:
          0.028495826 = sum of:
            0.028495826 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
              0.028495826 = score(doc=4190,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23214069 = fieldWeight in 4190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4190)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     Creates an open, metadata-aware system for distributed, collaborative WWW indexing. The system has 3 main components: a harvester (for collecting information), a database (for making the collection searchable), and a user interface (for making the information available). All components can be distributed across networked computers, thus supporting scalability. The system is metadata-aware and thus allows searches on several fields including title, document author and URL. Nordic Web Index (NWI) is an application using this system to create a regional Nordic Web-indexing service. NWI is built using 5 collaborating service points within the Nordic countries. The NWI databases can be used to build additional services.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Object
    Nordic Web Index
  8. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.01
    0.008669402 = product of:
      0.026008207 = sum of:
        0.020417055 = product of:
          0.04083411 = sum of:
            0.04083411 = weight(_text_:web in 97) [ClassicSimilarity], result of:
              0.04083411 = score(doc=97,freq=16.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.35694647 = fieldWeight in 97, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=97)
          0.5 = coord(1/2)
        0.0055911513 = product of:
          0.016773453 = sum of:
            0.016773453 = weight(_text_:29 in 97) [ClassicSimilarity], result of:
              0.016773453 = score(doc=97,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.13602862 = fieldWeight in 97, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=97)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence. If the number of retrieved headings is too large (running into more than a page) the user has the option of entering another search term to be searched in combination. The system searches subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system developed in a Windows NT environment using ASP and a web server is under rigorous testing. The database and index management routines need further development.
    An interesting but somewhat confusing article telling how the writers described web pages with Dublin Core metadata, including a faceted classification, and built a system that lets users browse the collection through the facets. They seem to want to cover too much in a short article, and unnecessary space is given over to screen shots showing how Dublin Core metadata was entered. The screen shots of the resulting browsable system are, unfortunately, not as enlightening as one would hope, and there is no discussion of how the system was actually written or the technology behind it. Still, it could be worth reading as an example of such a system and how it is treated in journals.
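
     The two-stage search described above (look up one term in the stored faceted subject headings, then optionally narrow the retrieved headings with a second term before following a heading to its records and URLs) can be sketched in a few lines. This is a hypothetical reconstruction for illustration only, with invented headings and URLs, not the authors' ASP/Windows NT implementation.

       # Invented sample data: faceted subject heading -> URLs of web documents
       records = {
           "Classification. Web resources. Faceted indexing": ["http://example.org/1"],
           "Indexing. Chain procedure. Web documents": ["http://example.org/2"],
           "Classification. Dewey Decimal. Libraries": ["http://example.org/3"],
       }

       def search(term, headings=None):
           # Return, in sorted (organizing) sequence, the headings containing term
           pool = records if headings is None else headings
           return sorted(h for h in pool if term.lower() in h.lower())

       hits = search("classification")       # first term: search all headings
       if len(hits) > 20:                    # too many headings to browse?
           hits = search("web", hits)        # a second term narrows the set
       for heading in hits:                  # the user picks a heading ...
           print(heading, records[heading])  # ... and gets the records/URLs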
    Footnote
    Vgl. auch: Devadason, F.J.: Facet analysis and Semantic Web: musings of a student of Ranganathan. Unter: http://www.geocities.com/devadason.geo/FASEMWEB.html#FacetedIndex.
    Source
    Knowledge organization. 29(2002) no.2, S.61-77
  9. LaBarre, K.: Adventures in faceted classification: a brave new world or a world of confusion? (2004) 0.01
    0.00853978 = product of:
      0.02561934 = sum of:
        0.0144370375 = product of:
          0.028874075 = sum of:
            0.028874075 = weight(_text_:web in 2634) [ClassicSimilarity], result of:
              0.028874075 = score(doc=2634,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.25239927 = fieldWeight in 2634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2634)
          0.5 = coord(1/2)
        0.011182303 = product of:
          0.033546906 = sum of:
            0.033546906 = weight(_text_:29 in 2634) [ClassicSimilarity], result of:
              0.033546906 = score(doc=2634,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.27205724 = fieldWeight in 2634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2634)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     A preliminary, purposive survey of definitions and current applications of facet analytical theory (FA) is used to develop a framework for the analysis of Websites. This set of guidelines may well serve to highlight commonalities and differences among FA applications on the Web. Rather than identifying FA as the terrain of a particular interest group, the goal is to explore current practices, uncover common misconceptions, extend understanding, and highlight developments that augment the traditional practice of FA and faceted classification (FC).
    Date
    29. 8.2004 9:42:50
  10. Ménard, E.; Mas, S.; Alberts, I.: Faceted classification for museum artefacts : a methodology to support web site development of large cultural organizations (2010) 0.01
    0.0076297866 = product of:
      0.022889359 = sum of:
        0.01649947 = product of:
          0.03299894 = sum of:
            0.03299894 = weight(_text_:web in 3945) [ClassicSimilarity], result of:
              0.03299894 = score(doc=3945,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2884563 = fieldWeight in 3945, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3945)
          0.5 = coord(1/2)
        0.0063898875 = product of:
          0.019169662 = sum of:
            0.019169662 = weight(_text_:29 in 3945) [ClassicSimilarity], result of:
              0.019169662 = score(doc=3945,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.15546128 = fieldWeight in 3945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3945)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     Purpose - This research project aims to provide a new visual representation of the Artefacts Canada digital collection, as well as a means for users to browse this content. Artefacts Canada Humanities is a database containing approximately 3.5 million records describing the different collections of Canadian museums. Design/methodology/approach - A four-step methodology was adopted for the development of the faceted taxonomy model. First, a best practice review consisting of an extensive analysis of existing terminology standards in museum communities and public web interfaces of large cultural organizations was performed. The second step of the methodology entailed a domain analysis; this involved extracting and comparing relevant concepts from authoritative terminological sources. The third step proceeded to term clustering and entity listing, which involved breaking up the taxonomy domains into potential facets. Incremental user testing was also carried out in order to validate and refine the taxonomy components (facets, values, and relationships). Findings - The project resulted in a bilingual and expandable vocabulary structure that will further be used to describe the Artefacts Canada database records. The new taxonomy simplifies the representation of complex content by grouping objects into similar facets to classify all records of the Artefacts Canada database. The user-friendly bilingual taxonomy provides worldwide visitors with the means to better access Canadian virtual museum collections. Originality/value - Few methodological tools are available for museums which wish to adopt a faceted approach in the development of their web sites. For practitioners, the methodology developed within this project is a direct contribution to support web site development of large cultural organizations.
    Date
    29. 8.2010 12:31:55
  11. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999) 0.01
    0.007291071 = product of:
      0.021873213 = sum of:
        0.012374603 = product of:
          0.024749206 = sum of:
            0.024749206 = weight(_text_:web in 2464) [ClassicSimilarity], result of:
              0.024749206 = score(doc=2464,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.21634221 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.5 = coord(1/2)
        0.009498609 = product of:
          0.028495826 = sum of:
            0.028495826 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
              0.028495826 = score(doc=2464,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23214069 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
    Source
    Library trends. 48(1999) no.1, S.22-47
  12. Sydler, J.-P.: UDC-Automatisierung und ihre Folgerungen (1978) 0.01
    0.0070875753 = product of:
      0.04252545 = sum of:
        0.04252545 = product of:
          0.0850509 = sum of:
            0.0850509 = weight(_text_:seite in 1414) [ClassicSimilarity], result of:
              0.0850509 = score(doc=1414,freq=2.0), product of:
                0.19633847 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03505379 = queryNorm
                0.4331851 = fieldWeight in 1414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1414)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
     The first automated documentation searches were based on the presence of search words in the bibliographic records. To simplify the users' work, synonyms were linked by a linguistic procedure and neighbouring concepts by a classification scheme. Thus thesauri emerged which allow searching by concepts rather than by words. The underlying classification should be decimal in order to support the decision process at the screen. The UDC, as a possible solution, could unify the heterogeneous classification schemes of the various thesauri. The user would then work only with the linguistic side, while the computer would use the systematic part.
  13. Chowdhury, S.; Chowdhury, G.G.: Using DDC to create a visual knowledge map as an aid to online information retrieval (2004) 0.01
    0.0060189255 = product of:
      0.018056776 = sum of:
        0.011666888 = product of:
          0.023333777 = sum of:
            0.023333777 = weight(_text_:web in 2643) [ClassicSimilarity], result of:
              0.023333777 = score(doc=2643,freq=4.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2039694 = fieldWeight in 2643, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2643)
          0.5 = coord(1/2)
        0.0063898875 = product of:
          0.019169662 = sum of:
            0.019169662 = weight(_text_:29 in 2643) [ClassicSimilarity], result of:
              0.019169662 = score(doc=2643,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.15546128 = fieldWeight in 2643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2643)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Content
     1. Introduction
     Web search engines and digital libraries usually expect the users to use search terms that most accurately represent their information needs. Finding the most appropriate search terms to represent an information need is an age-old problem in information retrieval. Keyword or phrase search may produce good search results as long as the search terms or phrase(s) match those used by the authors and have been chosen for indexing by the concerned information retrieval system. Since this does not always happen, a large number of false drops are produced by information retrieval systems. The retrieval results become worse in very large systems that deal with millions of records, such as the Web search engines and digital libraries. Vocabulary control tools are used to improve the performance of text retrieval systems. Thesauri, the most common type of vocabulary control tool used in information retrieval, appeared in the late fifties, designed for use with the emerging post-coordinate indexing systems of that time. They are used to exert terminology control in indexing, and to aid in searching by allowing the searcher to select appropriate search terms. A large volume of literature exists describing the design features, and experiments with the use, of thesauri in various types of information retrieval systems (see for example, Furnas et al., 1987; Bates, 1986, 1998; Milstead, 1997, and Shiri et al., 2002).
    Date
    29. 8.2004 13:37:50
  14. Ferris, A.M.: Results of an expanded survey on the use of Classification Web : they will use it, if you buy it! (2009) 0.01
    0.005380366 = product of:
      0.032282196 = sum of:
        0.032282196 = product of:
          0.06456439 = sum of:
            0.06456439 = weight(_text_:web in 2991) [ClassicSimilarity], result of:
              0.06456439 = score(doc=2991,freq=10.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.5643819 = fieldWeight in 2991, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2991)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper presents the results of a survey examining the extent to which working catalogers use Classification Web, the Library of Congress' online resource for subject heading and classification documentation. An earlier survey analyzed Class Web's usefulness on an institutional level. This broader survey expands on that analysis and provides information on such questions as: what types of institutions subscribe to Class Web; what are the reasons for using Class Web when performing original or copy cataloging; and what other resources do catalogers use for classification/subject heading analysis?
    Object
    Classification Web
  15. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.01
    0.0050627314 = product of:
      0.030376388 = sum of:
        0.030376388 = product of:
          0.04556458 = sum of:
            0.016773453 = weight(_text_:29 in 4406) [ClassicSimilarity], result of:
              0.016773453 = score(doc=4406,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.13602862 = fieldWeight in 4406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4406)
            0.028791128 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
              0.028791128 = score(doc=4406,freq=6.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23454636 = fieldWeight in 4406, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4406)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Content
    "Am 16. August 2005 fand in Oslo im Rahmen der heurigen IFLA-Konferenz das alljährliche Treffen der DDC-Übersetzer und der weltweiten DeweyAnwender-Institutionen (Nationalbibliotheken, Ersteller von Nationalbibliografien) statt. Die im Sommer 2005 bereits abgeschlossene deutsche Übersetzung wird in der Druckfassung Ende des Jahres in 4 Bänden vorliegen, beim K. G. Saur Verlag in München erscheinen (ISBN 3-598-11651-9) und 2006 vom ebenfalls erstmals ins Deutsche übersetzten DDC-Lehrbuch (ISBN 3-598-11748-5) begleitet. Pläne für neu startende Übersetzungen der DDC 22 gibt es für folgende Sprachen: Arabisch (mit der wachsenden Notwendigkeit, Klasse 200 Religion zu revidieren), Französisch (es erschien zuletzt eine neue Kurzausgabe 14, nun werden eine vierbändige Druckausgabe und eine frz. Webversion anvisiert), Schwedisch, Vietnamesisch (hierfür wird eine an die Sprache und Schrift angepasste Version des deutschen Übersetzungstools zum Einsatz kommen).
    Allgemein DDC 22 ist im Gegensatz zu den früheren Neuauflagen der Standard Edition eine Ausgabe ohne generelle Überarbeitung einer gesamten Klasse. Sie enthält jedoch zahlreiche Änderungen und Expansionen in fast allen Disziplinen und in vielen Hilfstafeln. Es erschien auch eine Sonderausgabe der Klasse 200, Religion. In der aktuellen Kurzausgabe der DDC 22 (14, aus 2004) sind all diese Neuerungen berücksichtigt. Auch die elektronische Version exisitiert in einer vollständigen (WebDewey) und in einer KurzVariante (Abridged WebDewey) und ist immer auf dem jüngsten Stand der Klassifikation. Ein Tutorial für die Nutzung von WebDewey steht unter www.oclc.org /dewey/ resourcesitutorial zur Verfügung. Der Index enthält in dieser elektronischen Fassung weit mehr zusammengesetzte Notationen und verbale Sucheinstiege (resultierend aus den Titeldaten des "WorldCat") als die Druckausgabe, sowie Mappings zu den aktuellsten Normdatensätzen aus LCSH und McSH. Aktuell Die personelle Zusammensetzung des EPC (Editorial Policy Committee) hat sich im letzten Jahr verändert. Dieses oberste Gremium der DDC hat Prioritäten für den aktuellen Arbeitsplan festgelegt. Es wurde vereinbart, größere Änderungsvorhaben via Dewey-Website künftig wie in einem Stellungnahmeverfahren zur fachlichen Diskussion zu stellen. www.oclc.org/dewey/discussion/."
    Date
    6.11.2005 12:27:29
  16. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar an Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.00
    0.004944364 = product of:
      0.014833092 = sum of:
        0.011666888 = product of:
          0.023333777 = sum of:
            0.023333777 = weight(_text_:web in 2047) [ClassicSimilarity], result of:
              0.023333777 = score(doc=2047,freq=16.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2039694 = fieldWeight in 2047, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
        0.0031662032 = product of:
          0.009498609 = sum of:
            0.009498609 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
              0.009498609 = score(doc=2047,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.07738023 = fieldWeight in 2047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    2. 1.2004 10:35:22
    Footnote
     Review in: Knowledge organization 30(2003) no.1, S.40-42 (J.-E. Mai): "Introduction: This is a collection of papers presented at the National Seminar on Classification in the Digital Environment held in Bangalore, India, on August 9-11 2001. The collection contains 18 papers dealing with various issues related to knowledge organization and classification theory. The issue of transferring the knowledge, traditions, and theories of bibliographic classification to the digital environment is an important one, and I was excited to learn that proceedings from this seminar were available. Many of us experience frustration on a daily basis due to poorly constructed Web search mechanisms and Web directories. As a community devoted to making information easily accessible we have something to offer the Web community and a seminar on the topic was indeed much needed. Below are brief summaries of the 18 papers presented at the seminar. The order of the summaries follows the order of the papers in the proceedings. The titles of the papers are given in parentheses after the author's name. AHUJA and WESLEY (From "Subject" to "Need": Shift in Approach to Classifying Information on the Internet/Web) argue that traditional bibliographic classification systems fail in the digital environment. One problem is that bibliographic classification systems have been developed to organize library books on shelves and as such are unidimensional and tied to the paper-based environment. Another problem is that they are "subject" oriented in the sense that they assume a relatively stable universe of knowledge containing basic and fixed compartments of knowledge that can be identified and represented. Ahuja and Wesley suggest that classification in the digital environment should be need-oriented instead of subject-oriented ("One important link that binds knowledge and human being is his societal need. ... Hence, it will be ideal to organise knowledge based upon need instead of subject." (p. 10)).
     SELVI (Knowledge Classification of Digital Information Materials with Special Reference to Clustering Technique) finds that it is essential to classify digital material since the amount of material that is becoming available is growing. Selvi suggests using automated classification to "group together those digital information materials or documents that are "most similar" (p. 65). This can be attained by using cluster analysis methods. PRADHAN and THULASI (A Study of the Use of Classification and Indexing Systems by Web Resource Directories) compare and contrast the classificatory structures of Google, Yahoo, and Looksmart's directories and compare the directories to the Dewey Decimal Classification, Library of Congress Classification and Colon Classification's classificatory structures. They find differences between the directories' and the bibliographic classification systems' classificatory structures and principles. These differences stem from the fact that bibliographic classification systems are used to "classify academic resources for the research community" (p. 83) and directories "aim to categorize a wider breadth of information groups, entertainment, recreation, govt. information, commercial information" (p. 83). NEELAMEGHAN (Hierarchy, Hierarchical Relation and Hierarchical Arrangement) reviews the concept of hierarchy and the formation of hierarchical structures across a variety of domains. NEELAMEGHAN and PRASAD (Digitized Schemes for Subject Classification and Thesauri: Complementary Roles) demonstrate how thesaural relationships (NT, BT, and RT) can be applied to a classification scheme, the Colon Classification in this case. NEELAMEGHAN and ASUNDI (Metadata Framework for Describing Embodied Knowledge and Subject Content) propose to use the Generalized Facet Structure framework, which is based on Ranganathan's General Theory of Knowledge Classification, as a framework for describing the content of documents in a metadata element set for the representation of web documents. CHUDAMANI (Classified Catalogue as a Tool for Subject Based Information Retrieval in both Traditional and Electronic Library Environment) explains why the classified catalogue is superior to the alphabetic catalogue and argues that the same is true in the digital environment.
     Discussion: The proceedings of the National Seminar on Classification in the Digital Environment give some insights. However, the depth of analysis and discussion is very uneven across the papers. Some of the papers have substantive research content while others appear to be notes used in the oral presentation. The treatments of the topics are very general in nature. Some papers have a very limited list of references while others have no bibliography. No index has been provided. The transfer of bibliographic knowledge organization theory to the digital environment is an important topic. However, as the papers at this conference have shown, it is also a difficult task. Of the 18 papers presented at this seminar on classification in the digital environment, only 4-5 papers actually deal directly with this important topic. The remaining papers deal with issues that are more or less relevant to classification in the digital environment without explicitly discussing the relation. The reason could be that the authors take up issues in knowledge organization that still need to be investigated and clarified before their application in the digital environment can be considered. Nonetheless, one wishes that the knowledge organization community would discuss the application of classification theory in the digital environment in greater detail. It is obvious from the comparisons of the classificatory structures of bibliographic classification systems and Web directories that these are different and that they probably should be different, since they serve different purposes. Interesting questions in the transformation of bibliographic classification theories to the digital environment are: "Given the existing principles in bibliographic knowledge organization, what are the optimum principles for the organization of information, irrespective of context?" and "What are the fundamental theoretical and practical principles for the construction of Web directories?" Unfortunately, the papers presented at this seminar do not attempt to answer or discuss these questions."
  17. Peereboom, M.: Dwerg tussen reuzen? : het Nederlandse basisclassificatie Web (1997) 0.00
    0.004812346 = product of:
      0.028874075 = sum of:
        0.028874075 = product of:
          0.05774815 = sum of:
            0.05774815 = weight(_text_:web in 515) [ClassicSimilarity], result of:
              0.05774815 = score(doc=515,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.50479853 = fieldWeight in 515, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=515)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
     Developments in electronic communication technology have made online databases a normal part of library collections. To provide users with direct access to Internet resources the Dutch Royal Library has cooperated with several university libraries in the Netherlands to develop the Nederlandse Basisclassificatie Web. Subject specialists select sources, add English summaries and NBW codes, and input them to the online database. A Web desk and training workshops have been provided to assist users, and improvements to the system will simplify search procedures.
    Footnote
     Translation of the title: A dwarf amongst giants? : the Dutch Basic classification of Web resources
  18. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.00
    0.004812346 = product of:
      0.028874075 = sum of:
        0.028874075 = product of:
          0.05774815 = sum of:
            0.05774815 = weight(_text_:web in 3061) [ClassicSimilarity], result of:
              0.05774815 = score(doc=3061,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.50479853 = fieldWeight in 3061, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3061)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
     This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the many existing classification efforts in the framework of the Semantic Web.
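
     As a minimal sketch of what the guide describes, the following Python fragment (using the rdflib library; the base URI, notations and labels are invented examples, not taken from the guide) expresses a tiny two-class scheme in SKOS and serializes it as RDF/Turtle.

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/scheme/")   # invented base URI
       g = Graph()
       g.bind("skos", SKOS)

       # The scheme itself, with a label
       scheme = EX["myScheme"]
       g.add((scheme, RDF.type, SKOS.ConceptScheme))
       g.add((scheme, SKOS.prefLabel, Literal("Example classification", lang="en")))

       # One class: notation, caption, scheme membership
       parent = EX["c1"]
       g.add((parent, RDF.type, SKOS.Concept))
       g.add((parent, SKOS.notation, Literal("1")))
       g.add((parent, SKOS.prefLabel, Literal("Knowledge organization", lang="en")))
       g.add((parent, SKOS.inScheme, scheme))

       # A subclass: the hierarchy is expressed with skos:broader
       child = EX["c1.1"]
       g.add((child, RDF.type, SKOS.Concept))
       g.add((child, SKOS.notation, Literal("1.1")))
       g.add((child, SKOS.broader, parent))
       g.add((child, SKOS.inScheme, scheme))

       print(g.serialize(format="turtle"))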
  19. Poynder, R.: Web research engines? (1996) 0.00
    0.0046117427 = product of:
      0.027670456 = sum of:
        0.027670456 = product of:
          0.055340912 = sum of:
            0.055340912 = weight(_text_:web in 5698) [ClassicSimilarity], result of:
              0.055340912 = score(doc=5698,freq=10.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.48375595 = fieldWeight in 5698, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5698)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
     Describes the shortcomings of search engines for the WWW, comparing their current capabilities to those of first-generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching. Few allow truncation, wild cards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that at best Web search engines can only offer free-text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may be redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases. Relevance ranking and automatic query expansion still use the same simple inverted indexes. Most Web search engines do nothing more than word counting.
  20. Hanke, M.: Bibliothekarische Klassifikationssysteme im semantischen Web : zu Chancen und Problemen von Linked-data-Repräsentationen ausgewählter Klassifikationssysteme (2014) 0.00
    0.0046117427 = product of:
      0.027670456 = sum of:
        0.027670456 = product of:
          0.055340912 = sum of:
            0.055340912 = weight(_text_:web in 2463) [ClassicSimilarity], result of:
              0.055340912 = score(doc=2463,freq=10.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.48375595 = fieldWeight in 2463, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2463)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
     Maintaining and applying classification systems for information resources has traditionally been a core competence of libraries. These systems have often grown historically, and in the past the various systems were typically published as printed manuals or in proprietary databases. Semantic web technologies make it possible to represent classification systems in a standardized and machine-readable way and to make them available for reuse as Linked (Open) Data. Using selected examples of classification systems that have already been published as Linked (Open) Data, this article discusses central semantic and technical questions and presents possible fields of application and opportunities. For example, the strong structuring of data required for machine readability in the semantic web can contribute to a better understanding of the classification systems and may provide positive impulses for their further development. Representations of classification systems prepared for the semantic web can be used, among other things, for catalogue enrichment or for the application-oriented creation of concordances between different classification and concept systems.
    Theme
    Semantic Web

Languages

  • e 75
  • d 12
  • nl 2
  • hu 1
  • i 1

Types

  • a 76
  • el 13
  • m 3
  • s 2
  • r 1
  • x 1