Search (25 results, page 1 of 2)

  • type_ss:"x"
  • year_i:[2000 TO 2010}
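  The two entries above are the active filter clauses for this result list: a facet restriction on type_ss and a Lucene range clause year_i:[2000 TO 2010} ("[" = inclusive lower bound, "}" = exclusive upper bound). The field suffixes (_ss, _i) and the per-hit score breakdowns below suggest a Solr/Lucene backend; the following is a minimal Python sketch of how such a filtered query with score explanations might be issued. The endpoint, core name, field names in the loop, and the query string are assumptions for illustration, not taken from this page.

      import requests

      # Hypothetical Solr endpoint and core name; the actual backend behind this
      # listing is not documented here.
      SOLR_URL = "http://localhost:8983/solr/literature/select"

      params = {
          # Query terms inferred from the weight(_text_:...) entries in the score
          # breakdowns below; the query string actually submitted is not shown.
          "q": "_text_:(29 22 3a network)",
          "fq": ['type_ss:"x"',              # active filters, as listed above
                 "year_i:[2000 TO 2010}"],
          "fl": "*,score",
          "rows": 20,
          "debugQuery": "true",              # ask Solr for per-document score explanations
          "wt": "json",
      }

      data = requests.get(SOLR_URL, params=params, timeout=10).json()
      for doc in data["response"]["docs"]:
          print(doc.get("id"), doc.get("score"))   # "id" is an illustrative field name
      # data["debug"]["explain"] holds ClassicSimilarity breakdowns of the kind
      # reproduced for each hit below.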
  1. Thielemann, A.: Sacherschließung für die Kunstgeschichte : Möglichkeiten und Grenzen von DDC 700: The Arts (2007) 0.02
    0.02108372 = product of:
      0.06325116 = sum of:
        0.06325116 = product of:
          0.09487674 = sum of:
            0.047652703 = weight(_text_:29 in 1409) [ClassicSimilarity], result of:
              0.047652703 = score(doc=1409,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.31092256 = fieldWeight in 1409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1409)
            0.047224034 = weight(_text_:22 in 1409) [ClassicSimilarity], result of:
              0.047224034 = score(doc=1409,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.30952093 = fieldWeight in 1409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1409)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Following the publication of a German translation of Dewey Decimal Classification 22 in October 2005 and its use for subject indexing in the Deutsche Nationalbibliographie since January 2006, the question arises, from the perspective of Germany's specialist art-history libraries, whether the DDC could be adopted and whether it is suitable in principle for the subject indexing of art-historical publications. This question is discussed against the background of the existing library structures for art history and with a view to the subject matter, the research methodology, and the publishing traditions of the discipline.
    Date
    14. 2.2008 19:56:29
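    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: for each matching term, fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq), queryWeight = idf * queryNorm, and the term's contribution is queryWeight * fieldWeight; the contributions are summed and multiplied by the coord factors (the fraction of query clauses that matched). A minimal sketch re-deriving entry 1's numbers from the values shown above (no Lucene dependency):

      from math import sqrt

      QUERY_NORM = 0.043569047  # queryNorm from the explanation above

      def term_score(freq, idf, field_norm):
          """ClassicSimilarity per-term contribution: queryWeight * fieldWeight."""
          tf = sqrt(freq)                       # 1.4142135 for freq=2.0
          query_weight = idf * QUERY_NORM       # 0.15326229 for idf=3.5176873
          field_weight = tf * idf * field_norm  # 0.31092256 for fieldNorm=0.0625
          return query_weight * field_weight

      s29 = term_score(2.0, 3.5176873, 0.0625)  # weight(_text_:29) = 0.047652703
      s22 = term_score(2.0, 3.5018296, 0.0625)  # weight(_text_:22) = 0.047224034

      score = (s29 + s22) * (2 / 3) * (1 / 3)   # coord(2/3), then coord(1/3)
      print(score)                              # ~0.02108372, displayed as 0.02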
  2. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.0153776 = product of:
      0.0461328 = sum of:
        0.0461328 = product of:
          0.1383984 = sum of:
            0.1383984 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1383984 = score(doc=701,freq=2.0), product of:
                0.36937886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043569047 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  3. Wunderlich, B.: ¬Die wissenschaftliche Erschließung von Bekleidung mit systematischen Ordnungssystemen im musealen Kontext : Wie bekommt man Hemd und Hose in die Datenbank? (2005) 0.01
    0.011231851 = product of:
      0.033695552 = sum of:
        0.033695552 = product of:
          0.101086654 = sum of:
            0.101086654 = weight(_text_:29 in 4173) [ClassicSimilarity], result of:
              0.101086654 = score(doc=4173,freq=4.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.6595664 = fieldWeight in 4173, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4173)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    28.11.1999 13:32:29
    27. 9.2005 14:29:52
  4. Krull, S.: Lesen im Informationszeitalter : Hypertext versus Buch (2000) 0.01
    0.009265803 = product of:
      0.027797408 = sum of:
        0.027797408 = product of:
          0.083392225 = sum of:
            0.083392225 = weight(_text_:29 in 4783) [ClassicSimilarity], result of:
              0.083392225 = score(doc=4783,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.5441145 = fieldWeight in 4783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4783)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    21. 5.2000 10:53:29
  5. Spies, B.: Website-Optimierung für Suchhilfen : Vorschläge für den Internetauftritt der Stiftung Naturschutz Berlin (2004) 0.01
    0.009265803 = product of:
      0.027797408 = sum of:
        0.027797408 = product of:
          0.083392225 = sum of:
            0.083392225 = weight(_text_:29 in 6791) [ClassicSimilarity], result of:
              0.083392225 = score(doc=6791,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.5441145 = fieldWeight in 6791, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6791)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    9.11.2005 10:43:29
  6. Schubert, P.: Revision von Aufbau und Anwendung des kontrollierten Vokabulars einer bibliografischen Datensammlung zum Thema Dramaturgie (2003) 0.01
    0.009265803 = product of:
      0.027797408 = sum of:
        0.027797408 = product of:
          0.083392225 = sum of:
            0.083392225 = weight(_text_:29 in 2518) [ClassicSimilarity], result of:
              0.083392225 = score(doc=2518,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.5441145 = fieldWeight in 2518, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2518)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Pages
    29 S
  7. Sperling, R.: Anlage von Literaturreferenzen für Onlineressourcen auf einer virtuellen Lernplattform (2004) 0.01
    0.009182452 = product of:
      0.027547356 = sum of:
        0.027547356 = product of:
          0.08264206 = sum of:
            0.08264206 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
              0.08264206 = score(doc=4635,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.5416616 = fieldWeight in 4635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4635)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    26.11.2005 18:39:22
  8. Semenova, E.: Erstellung einer Dokumentationssprache : Am Beispiel der Oberbegriffsdatei für die Sonderausstellungsdatenbank im Institut für Museumskunde, Berlin (2004) 0.01
    0.007942118 = product of:
      0.023826351 = sum of:
        0.023826351 = product of:
          0.07147905 = sum of:
            0.07147905 = weight(_text_:29 in 1734) [ClassicSimilarity], result of:
              0.07147905 = score(doc=1734,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.46638384 = fieldWeight in 1734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1734)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    28.11.1999 13:32:29
  9. Nicoletti, M.: Automatische Indexierung (2001) 0.01
    0.007942118 = product of:
      0.023826351 = sum of:
        0.023826351 = product of:
          0.07147905 = sum of:
            0.07147905 = weight(_text_:29 in 4326) [ClassicSimilarity], result of:
              0.07147905 = score(doc=4326,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.46638384 = fieldWeight in 4326, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4326)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    29. 9.2017 12:00:04
  10. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.01
    0.007870673 = product of:
      0.023612019 = sum of:
        0.023612019 = product of:
          0.07083605 = sum of:
            0.07083605 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.07083605 = score(doc=4865,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2002 19:41:59
  11. Kumpe, D.: Methoden zur automatischen Indexierung von Dokumenten (2006) 0.01
    0.007425352 = product of:
      0.022276055 = sum of:
        0.022276055 = product of:
          0.06682816 = sum of:
            0.06682816 = weight(_text_:network in 782) [ClassicSimilarity], result of:
              0.06682816 = score(doc=782,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.3444231 = fieldWeight in 782, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=782)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This diploma thesis deals with the indexing of unstructured, natural-language documents. The growing flood of information and the number of published scientific reports and books make automated subject indexing necessary. To better understand the requirements involved, problems of written natural-language communication are examined. Manual indexing techniques and documentation languages are presented, and indexing is placed in the broader context of subject analysis and information retrieval. Furthermore, the advantages and disadvantages of selected algorithms are examined, and information retrieval software products are evaluated with respect to how they work. The results of individual methods are presented using sample documents. Using the European Migration Network project, problems and basic requirements for carrying out subject indexing are identified and possible solutions are proposed.
  12. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.006558894 = product of:
      0.019676682 = sum of:
        0.019676682 = product of:
          0.059030045 = sum of:
            0.059030045 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.059030045 = score(doc=3406,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    30. 5.2010 16:22:35
  13. Hoffmann, R.: Mailinglisten für den bibliothekarischen Informationsdienst am Beispiel von RABE (2000) 0.01
    0.0055654063 = product of:
      0.016696218 = sum of:
        0.016696218 = product of:
          0.050088655 = sum of:
            0.050088655 = weight(_text_:22 in 4441) [ClassicSimilarity], result of:
              0.050088655 = score(doc=4441,freq=4.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.32829654 = fieldWeight in 4441, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4441)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.2000 10:25:05
    Series
    Kölner Arbeitspapiere zur Bibliotheks- und Informationswissenschaft; Bd.22
  14. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.00
    0.004243058 = product of:
      0.012729174 = sum of:
        0.012729174 = product of:
          0.038187522 = sum of:
            0.038187522 = weight(_text_:network in 4746) [ClassicSimilarity], result of:
              0.038187522 = score(doc=4746,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.1968132 = fieldWeight in 4746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4746)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  15. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002) 0.00
    0.004243058 = product of:
      0.012729174 = sum of:
        0.012729174 = product of:
          0.038187522 = sum of:
            0.038187522 = weight(_text_:network in 2281) [ClassicSimilarity], result of:
              0.038187522 = score(doc=2281,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.1968132 = fieldWeight in 2281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2281)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs who produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling the effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies, which are widely used and easy to grasp by ordinary Web users. The main results of this work are:
    - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited for preventing errors during the indexing process, which is especially important where the indexing is done collaboratively by many users, and for deriving "complete" navigation trees suitable for browsing the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and performed more efficiently).
    - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality. The proposed model can be used to provide users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
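    The faceted indexing idea sketched in this abstract (terms drawn from several facets, with an explicit specification of which cross-facet combinations are valid, checked at indexing time) can be illustrated with a toy example; the facets, terms, and validation rule below are invented for illustration and are not Tzitzikas' formal, model-theoretic scheme.

      # Toy sketch: a faceted scheme where only declared term combinations are valid,
      # so invalid descriptions are rejected during (collaborative) indexing.
      FACETS = {
          "material": {"paper", "vellum"},
          "period": {"medieval", "modern"},
      }

      VALID_COMBINATIONS = {          # explicitly declared valid cross-facet combinations
          frozenset({"paper", "modern"}),
          frozenset({"paper", "medieval"}),
          frozenset({"vellum", "medieval"}),
      }

      def index_object(obj_id, terms, index):
          """Accept a description only if its terms are known and the combination is declared valid."""
          known = set().union(*FACETS.values())
          unknown = set(terms) - known
          if unknown:
              raise ValueError(f"{obj_id}: unknown terms {sorted(unknown)}")
          if frozenset(terms) not in VALID_COMBINATIONS:
              raise ValueError(f"{obj_id}: invalid combination {sorted(terms)}")
          index[obj_id] = set(terms)

      catalogue = {}
      index_object("doc1", {"paper", "modern"}, catalogue)     # accepted
      # index_object("doc2", {"vellum", "modern"}, catalogue)  # rejected: not declared valid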
  16. Mair, M.: Increasing the value of meta data by using associative semantic networks (2002) 0.00
    0.003971059 = product of:
      0.011913176 = sum of:
        0.011913176 = product of:
          0.035739526 = sum of:
            0.035739526 = weight(_text_:29 in 4972) [ClassicSimilarity], result of:
              0.035739526 = score(doc=4972,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.23319192 = fieldWeight in 4972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4972)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    17. 3.2006 19:24:29
  17. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.00
    0.0039353366 = product of:
      0.011806009 = sum of:
        0.011806009 = product of:
          0.035418026 = sum of:
            0.035418026 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.035418026 = score(doc=1746,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2015 9:17:30
  18. Buß, M.: Unternehmenssprache in internationalen Unternehmen : Probleme des Informationstransfers in der internen Kommunikation (2005) 0.00
    0.003279447 = product of:
      0.009838341 = sum of:
        0.009838341 = product of:
          0.029515022 = sum of:
            0.029515022 = weight(_text_:22 in 1482) [ClassicSimilarity], result of:
              0.029515022 = score(doc=1482,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.19345059 = fieldWeight in 1482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1482)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 5.2005 18:25:26
  19. Düring, M.: ¬Die Dewey Decimal Classification : Entstehung, Aufbau und Ausblick auf eine Nutzung in deutschen Bibliotheken (2003) 0.00
    0.003279447 = product of:
      0.009838341 = sum of:
        0.009838341 = product of:
          0.029515022 = sum of:
            0.029515022 = weight(_text_:22 in 2460) [ClassicSimilarity], result of:
              0.029515022 = score(doc=2460,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.19345059 = fieldWeight in 2460, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2460)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The constantly growing amount of published information, appearing in ever new forms, demands ever more precise solutions for indexing this information and presenting it in a user-friendly way, especially from information and documentation institutions. Particularly in the current age of databases and online catalogues, a combination of verbal and classificatory subject indexing is required, without losing the connection to the older card catalogues still in use (at least additionally) in many places. A large number of different classifications are in use worldwide. The choice of a classification suitable for an institution depends on its thematic and informational orientation, the size and nature of its holdings and, not least, on technical and staffing conditions. On the side of the classification to be chosen, ease of handling for the librarian, comprehensibility for the user, the capacity to extend the classification as new fields of knowledge emerge, and integration into information networks with other institutions are of decisive importance. This thesis examines the Dewey Decimal Classification (DDC) with regard to these points. It is the most widely used classification in the world: about 200,000 libraries in 135 countries index their holdings with this system. It is currently available in its 22nd unabridged edition and has so far been translated into 30 languages; a complete German translation is due to appear in 2005. Despite at times heated standardization debates and plans to adopt American descriptive cataloguing rules, there is little agreement among German libraries with regard to subject indexing, and the DDC is scarcely used in Germany and other European countries, apart from Great Britain and its use in bibliographies. The thesis therefore looks into the historical reasons for this development and ventures a brief outlook on the future of the Decimal Classification.
  20. Westermeyer, D.: Adaptive Techniken zur Informationsgewinnung : der Webcrawler InfoSpiders (2005) 0.00
    0.003279447 = product of:
      0.009838341 = sum of:
        0.009838341 = product of:
          0.029515022 = sum of:
            0.029515022 = weight(_text_:22 in 4333) [ClassicSimilarity], result of:
              0.029515022 = score(doc=4333,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.19345059 = fieldWeight in 4333, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4333)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Pages
    22 S