Search (19 results, page 1 of 1)

  • year_i:[2000 TO 2010}
  • theme_ss:"Automatisches Klassifizieren"
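Two filters are active above. The mixed brackets in the year facet are Solr range syntax: [ includes the bound, } excludes it, so the filter matches 2000 <= year_i < 2010. A minimal sketch of the equivalent filter queries follows; the endpoint URL and collection name are assumptions, not taken from this page:

```python
# Minimal sketch: the two active facet filters expressed as Solr fq
# parameters. Endpoint and collection name ("literature") are hypothetical.
import requests

params = {
    "q": "*:*",
    "fq": [
        "year_i:[2000 TO 2010}",                    # 2000 inclusive, 2010 exclusive
        'theme_ss:"Automatisches Klassifizieren"',  # exact match on the subject field
    ],
    "wt": "json",
}
response = requests.get("http://localhost:8983/solr/literature/select", params=params)
print(response.json()["response"]["numFound"])  # 19 for the listing above
```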
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.23
    0.2307443 = product of:
      0.30765906 = sum of:
        0.07228978 = product of:
          0.21686934 = sum of:
            0.21686934 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.21686934 = score(doc=562,freq=2.0), product of:
                0.38587612 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045514934 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.21686934 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.21686934 = score(doc=562,freq=2.0), product of:
            0.38587612 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045514934 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.018499935 = product of:
          0.03699987 = sum of:
            0.03699987 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03699987 = score(doc=562,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
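The indented breakdown under each hit is Lucene explain output for ClassicSimilarity, i.e. classic TF-IDF scoring: every leaf is queryWeight * fieldWeight, and coord(k/n) scales a sum when only k of n query clauses matched. Below is a minimal sketch recomputing entry 1's score purely from the values shown in its tree, assuming only the standard ClassicSimilarity formulas:

```python
import math

# One leaf of the explain tree (ClassicSimilarity = classic TF-IDF):
#   leaf        = queryWeight * fieldWeight
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm,  with tf = sqrt(termFreq)
def leaf(freq, idf, query_norm, field_norm):
    tf = math.sqrt(freq)
    return (idf * query_norm) * (tf * idf * field_norm)

# The "_text_:3a" and "_text_:2f" leaves of doc 562 have identical inputs:
w_rare = leaf(freq=2.0, idf=8.478011, query_norm=0.045514934, field_norm=0.046875)
# The "_text_:22" leaf:
w_22 = leaf(freq=2.0, idf=3.5018296, query_norm=0.045514934, field_norm=0.046875)

# Combine exactly as the tree does: coord(1/3) on the first leaf,
# coord(1/2) on the last, then the outer coord(3/4) on the sum.
score = (w_rare / 3 + w_rare + w_22 / 2) * 0.75
print(w_rare, w_22, score)  # ~0.21686934, ~0.03699987, ~0.2307443
```

The otherwise meaningless terms "3a" and "2f" are URL-encoding fragments (%3A, %2F) from the redirect link stored in this record's Content field; with docFreq=24 they are extremely rare, so their idf of 8.478 dwarfs that of ordinary terms, which alone explains why this hit scores an order of magnitude above the rest.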
  2. Yao, H.; Etzkorn, L.H.; Virani, S.: Automated classification and retrieval of reusable software components (2008) 0.02
    0.017135125 = product of:
      0.0685405 = sum of:
        0.0685405 = product of:
          0.137081 = sum of:
            0.137081 = weight(_text_:software in 1382) [ClassicSimilarity], result of:
              0.137081 = score(doc=1382,freq=24.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.75917953 = fieldWeight in 1382, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1382)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The authors describe their research, which improves software reuse by using an automated approach to semantically search for and retrieve reusable software components in large software component repositories and on the World Wide Web (WWW). Using automation and smart (semantic) techniques, their approach speeds up the search and retrieval of reusable software components while retaining good accuracy, and therefore improves the affordability of software reuse. Program understanding of software components and natural-language understanding of user queries were employed. The software component descriptions were then compared by matching the resulting semantic representations of the user queries to the semantic representations of the software components, to find the components that best match each query. A proof-of-concept system was developed to test the authors' approach. Its results were compared to those of human experts, and statistical analysis was performed on the collected experimental data. The results from these experiments demonstrate that this automated semantic-based approach to classifying and retrieving reusable software components holds up well against the labor-intensive results from the experts, showing that it can significantly benefit software reuse classification and retrieval.
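The abstract stays high-level about how the matching works. As a generic stand-in for matching query representations against component descriptions, here is a TF-IDF/cosine-similarity sketch; it is illustrative only and not the authors' system, which used program understanding and natural-language analysis:

```python
# Illustrative stand-in only: the entry does not give the authors' algorithm,
# so this models "match queries to component descriptions" with plain TF-IDF
# vectors and cosine similarity. All data here is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

components = [
    "stack implementation with push and pop operations",
    "binary search over a sorted integer array",
    "HTTP client for fetching web resources",
]
query = "search a sorted list for a value"

matrix = TfidfVectorizer().fit_transform(components + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
best = scores.argmax()
print(components[best], scores[best])  # ranks the binary-search component first
```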
  3. Koch, T.; Ardö, A.: Automatic classification of full-text HTML-documents from one specific subject area : DESIRE II D3.6a, Working Paper 2 (2000) 0.01
    0.013708099 = product of:
      0.054832395 = sum of:
        0.054832395 = product of:
          0.10966479 = sum of:
            0.10966479 = weight(_text_:software in 1667) [ClassicSimilarity], result of:
              0.10966479 = score(doc=1667,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.6073436 = fieldWeight in 1667, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1667)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    1 Introduction / 2 Method overview / 3 Ei thesaurus preprocessing / 4 Automatic classification process: 4.1 Matching -- 4.2 Weighting -- 4.3 Preparation for display / 5 Results of the classification process / 6 Evaluations / 7 Software / 8 Other applications / 9 Experiments with universal classification systems / References / Appendix A: Ei classification service: Software / Appendix B: Use of the classification software as subject filter in a WWW harvester.
  4. Adams, K.C.: Word wranglers : Automatic classification tools transform enterprise documents from "bags of words" into knowledge resources (2003) 0.01
    0.011060675 = product of:
      0.0442427 = sum of:
        0.0442427 = product of:
          0.0884854 = sum of:
            0.0884854 = weight(_text_:software in 1665) [ClassicSimilarity], result of:
              0.0884854 = score(doc=1665,freq=10.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.49004826 = fieldWeight in 1665, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1665)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Taxonomies are an important part of any knowledge management (KM) system, and automatic classification software is emerging as a "killer app" for consumer and enterprise portals. A number of companies such as Inxight Software, Mohomine, Metacode, and others claim to interpret the semantic content of any textual document and automatically classify text on the fly. The promise that software could automatically produce a Yahoo-style directory is a siren call not many IT managers are able to resist. KM needs have grown more complex due to the increasing amount of digital information, the declining effectiveness of keyword searching, and heterogeneous document formats in corporate databases. This environment requires innovative KM tools, and automatic classification technology is an example of this new kind of software. These products can be divided into three categories according to their underlying technology: rules-based, catalog-by-example, and statistical clustering. Evolving trends in this market include framing classification as a cyborg (computer- and human-based) activity and the increasing use of extensible markup language (XML) and support vector machine (SVM) technology. In this article, we'll survey the rapidly changing automatic classification software market and examine the features and capabilities of leading classification products.
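To make the three technology categories named above concrete, here is a toy contrast; the data, rules, and labels are all invented for illustration and do not come from the article:

```python
# Toy versions of the three product categories Adams names.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["invoice due payment", "payment invoice overdue",
        "football match result", "league match score"]

# 1) Rules-based: hand-written keyword rules.
def classify_by_rule(text):
    return "finance" if "invoice" in text else "other"

# 2) Catalog-by-example: assign the label of the most similar labeled example.
catalog = {"invoice due payment": "finance", "football match result": "sport"}
def classify_by_example(text):
    best = max(catalog, key=lambda ex: len(set(ex.split()) & set(text.split())))
    return catalog[best]

# 3) Statistical clustering: unsupervised grouping; labels come afterwards.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    TfidfVectorizer().fit_transform(docs))

print(classify_by_rule(docs[1]))     # finance
print(classify_by_example(docs[3]))  # sport
print(clusters)                      # e.g. [0 0 1 1]: the two topics separate
```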
  5. Montesi, M.; Navarrete, T.: Classifying web genres in context : A case study documenting the web genres used by a software engineer (2008) 0.01
    0.010281074 = product of:
      0.041124295 = sum of:
        0.041124295 = product of:
          0.08224859 = sum of:
            0.08224859 = weight(_text_:software in 2100) [ClassicSimilarity], result of:
              0.08224859 = score(doc=2100,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.4555077 = fieldWeight in 2100, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2100)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the analysis of the web browser history. In the first interview, he spoke about his general information behavior; in the second, he commented on each web genre, explaining why and how he used them. As a result, three approaches allow us to describe the set of 23 web genres obtained: (a) the purposes they serve for the participant; (b) the role they play in the various work and search phases; and (c) the way they are used in combination with each other. Further observations concern the way the participant assesses the quality of web-based resources, and his information behavior as a software engineer.
  6. Sebastiani, F.: Classification of text, automatic (2006) 0.01
    0.009793539 = product of:
      0.039174154 = sum of:
        0.039174154 = product of:
          0.07834831 = sum of:
            0.07834831 = weight(_text_:software in 5003) [ClassicSimilarity], result of:
              0.07834831 = score(doc=5003,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.43390724 = fieldWeight in 5003, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5003)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Automatic text classification (ATC) is a discipline at the crossroads of information retrieval (IR), machine learning (ML), and computational linguistics (CL), and consists in the realization of text classifiers, i.e. software systems capable of assigning texts to one or more categories, or classes, from a predefined set. Applications range from the automated indexing of scientific articles, to e-mail routing, spam filtering, authorship attribution, and automated survey coding. This article will focus on the ML approach to ATC, whereby a software system (called the learner) automatically builds a classifier for the categories of interest by generalizing from a "training" set of pre-classified texts.
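A minimal sketch of the ML workflow the abstract describes, a learner generalizing from a training set of pre-classified texts; the library (scikit-learn), the linear SVM, and the toy corpus are illustrative assumptions, not taken from the article:

```python
# A "learner" builds a classifier from pre-classified texts, then the
# classifier assigns new texts to categories. All choices here are toy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "cheap pills buy now",      # spam
    "meeting agenda attached",  # legitimate
    "win money fast now",       # spam
    "quarterly report draft",   # legitimate
]
train_labels = ["spam", "ham", "spam", "ham"]

# The learner generalizes from the training set ...
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(train_texts, train_labels)

# ... and the resulting classifier labels unseen texts.
print(classifier.predict(["buy cheap pills fast"]))  # -> ['spam']
```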
  7. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.0092499675 = product of:
      0.03699987 = sum of:
        0.03699987 = product of:
          0.07399974 = sum of:
            0.07399974 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.07399974 = score(doc=1046,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 5.2003 14:17:22
  8. Brückner, T.; Dambeck, H.: Sortierautomaten : Grundlagen der Textklassifizierung (2003) 0.01
    0.007914375 = product of:
      0.0316575 = sum of:
        0.0316575 = product of:
          0.063315 = sum of:
            0.063315 = weight(_text_:software in 2398) [ClassicSimilarity], result of:
              0.063315 = score(doc=2398,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.35064998 = fieldWeight in 2398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2398)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Invoice, cancellation, or change of address? Incoming letters and e-mails are increasingly sorted by software rather than laboriously by human hand. These text classifiers work astonishingly accurately. They also track down similar texts, providing a quick overview. Their tools are linguistics, statistics, and logic.
  9. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.007708307 = product of:
      0.030833228 = sum of:
        0.030833228 = product of:
          0.061666455 = sum of:
            0.061666455 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.061666455 = score(doc=611,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 8.2009 12:54:24
  10. Wille, J.: Automatisches Klassifizieren bibliographischer Beschreibungsdaten : Vorgehensweise und Ergebnisse (2006) 0.01
    0.006925077 = product of:
      0.027700309 = sum of:
        0.027700309 = product of:
          0.055400617 = sum of:
            0.055400617 = weight(_text_:software in 6090) [ClassicSimilarity], result of:
              0.055400617 = score(doc=6090,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30681872 = fieldWeight in 6090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6090)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    http://www.fbi.fh-koeln.de/institut/papers/abschlussarbeiten/abschlussarbeiten_ausgabe.php See also: http://eprints.rclis.org/archive/00006659/01/wille_-_automatisches_klassifizieren_bibliographischer_beschreibungsdaten_(diplomarbeit).pdf. For the software, see: http://blackwinter.de/da/.
  11. Automatic classification research at OCLC (2002) 0.01
    0.005395815 = product of:
      0.02158326 = sum of:
        0.02158326 = product of:
          0.04316652 = sum of:
            0.04316652 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.04316652 = score(doc=1563,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 5.2003 9:22:09
  12. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.005395815 = product of:
      0.02158326 = sum of:
        0.02158326 = product of:
          0.04316652 = sum of:
            0.04316652 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.04316652 = score(doc=5273,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 16:24:52
  13. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.005395815 = product of:
      0.02158326 = sum of:
        0.02158326 = product of:
          0.04316652 = sum of:
            0.04316652 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.04316652 = score(doc=2560,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.2008 18:31:54
  14. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.00
    0.0046249838 = product of:
      0.018499935 = sum of:
        0.018499935 = product of:
          0.03699987 = sum of:
            0.03699987 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.03699987 = score(doc=2760,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2009 19:11:54
  15. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.00
    0.0046249838 = product of:
      0.018499935 = sum of:
        0.018499935 = product of:
          0.03699987 = sum of:
            0.03699987 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.03699987 = score(doc=3051,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 8.2009 19:51:28
  16. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.00
    0.0038541534 = product of:
      0.015416614 = sum of:
        0.015416614 = product of:
          0.030833228 = sum of:
            0.030833228 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
              0.030833228 = score(doc=2765,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.19345059 = fieldWeight in 2765, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2765)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2009 19:14:43
  17. Khoo, C.S.G.; Ng, K.; Ou, S.: ¬An exploratory study of human clustering of Web pages (2003) 0.00
    0.0030833227 = product of:
      0.012333291 = sum of:
        0.012333291 = product of:
          0.024666581 = sum of:
            0.024666581 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
              0.024666581 = score(doc=2741,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15476047 = fieldWeight in 2741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2741)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    12. 9.2004 9:56:22
  18. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.00
    0.0030833227 = product of:
      0.012333291 = sum of:
        0.012333291 = product of:
          0.024666581 = sum of:
            0.024666581 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.024666581 = score(doc=3284,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2010 14:41:24
  19. Oberhauser, O.: Automatisches Klassifizieren : Entwicklungsstand - Methodik - Anwendungsbereiche (2005) 0.00
    0.0024732419 = product of:
      0.0098929675 = sum of:
        0.0098929675 = product of:
          0.019785935 = sum of:
            0.019785935 = weight(_text_:software in 38) [ClassicSimilarity], result of:
              0.019785935 = score(doc=38,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.10957812 = fieldWeight in 38, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    On the content: a short introductory section is followed by an introduction to the basic methodology of automatic classification. Oberhauser explains concepts such as single versus multiple classification and class-centred versus document-centred approaches, and then covers the main applications of automatic classification of text documents, machine learning methods, and dimensionality-reduction techniques for indexing. Two further subchapters are devoted to building classifiers and to the methods for evaluating them. The chapter is rounded off by a short list of software products for automatic classification, covering both commercial software and open-source projects.

    The main part of the book is devoted to the large projects on automatic subject indexing of web documents carried out by OCLC (Scorpion) and at the universities of Lund (Nordic WAIS/WWW, DESIRE II), Wolverhampton (WWLib-TOS, WWLib-TNG, Old ACE, ACE) and Oldenburg (GERHARD, GERHARD II). The author describes each project's objectives, the classification scheme used, the methodological approach, and the evaluation methods and results in great detail, though the level of detail varies with what can be gleaned from the project documentation. Where cross-references to other projects exist, these are discussed as well. He examines important aspects such as vocabulary construction, text preparation, and weighting very closely, so the reader gets a good sense of each project's approach and its possible further development. A further chapter covers several smaller projects on the automatic classification of books, a topic of particular interest to libraries, and on patent literature, media documentation, and the use of classification in information services. The presentation is complemented by a bibliography of over 250 titles on the individual projects as well as lists of abbreviations and figures.

    The concluding discussion of the projects addresses each project's significance for methodological progress, but also voices some criticism, above all regarding the inadequate evaluation of project results and the lack of usable documentation. The project pages of GERHARD (www.gerhard.de/), for example, were frozen at their 1998 state and at present [11.07.06] cannot be reached at all. With some astonishment, Oberhauser also notes that, apart from Larsen's almost 15-year-old study, "no significant studies or applications from the library field exist" (p. 139). As the author himself adds, however, this is probably because bibliographic metadata, given their small amount of text, are poorly suited to automatic classification, and because, as earlier results have shown, the standard TF-IDF approach is not suitable for catalogue records (ibid.).