Search (9 results, page 1 of 1)

  • theme_ss:"Automatisches Klassifizieren"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
  1. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.03
    0.029335327 = product of:
      0.09533981 = sum of:
        0.029377282 = weight(_text_:world in 3284) [ClassicSimilarity], result of:
          0.029377282 = score(doc=3284,freq=4.0), product of:
            0.122288436 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.031815533 = queryNorm
            0.24022943 = fieldWeight in 3284, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.039036904 = weight(_text_:wide in 3284) [ClassicSimilarity], result of:
          0.039036904 = score(doc=3284,freq=4.0), product of:
            0.14096694 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.031815533 = queryNorm
            0.2769224 = fieldWeight in 3284, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.021178208 = weight(_text_:web in 3284) [ClassicSimilarity], result of:
          0.021178208 = score(doc=3284,freq=4.0), product of:
            0.10383032 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031815533 = queryNorm
            0.2039694 = fieldWeight in 3284, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.0057474207 = product of:
          0.017242262 = sum of:
            0.017242262 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.017242262 = score(doc=3284,freq=2.0), product of:
                0.11141258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031815533 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.33333334 = coord(1/3)
      0.30769232 = coord(4/13)
    
    Abstract
    At least since the advent of the World Wide Web, the number of publications to be classified has been growing faster than they can be subject-indexed intellectually. Methods are therefore sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and methods for automatic text classification (ATC: Automated Text Categorization) since 1992. As more and more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has also included work on the automatic DDC or RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems, not systems in continuous production use. The VZG project Colibri/DDC has also been concerned with automatic DDC classification, among other things, since 2006. The related investigations and developments serve to answer the research question: "Is it possible to achieve a substantively coherent DDC classification of all GVK-PLUS title records automatically?"
    Date
    22. 1.2010 14:41:24
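The indented breakdown under each hit is Lucene ClassicSimilarity "explain" output: each leaf score is queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm, and sums of leaves are scaled by a coord(matched/total) factor. A minimal sketch reproducing the `world` leaf of hit 1 (helper names are illustrative, not the Lucene API):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity inverse document frequency:
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    # score = queryWeight * fieldWeight
    #   queryWeight = idf * queryNorm
    #   fieldWeight = sqrt(freq) * idf * fieldNorm
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = math.sqrt(freq) * i * field_norm
    return query_weight * field_weight

# "world" leaf in doc 3284: freq=4.0, docFreq=2573, maxDocs=44218,
# queryNorm=0.031815533, fieldNorm=0.03125 (values taken from the
# explain output above)
s = term_score(4.0, 2573, 44218,
               query_norm=0.031815533, field_norm=0.03125)
print(s)  # ~ 0.029377282, matching the leaf score above
```

The per-hit total then multiplies the sum of such leaves by coord (here 4/13 = 0.30769232, since 4 of 13 query clauses matched).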
  2. Godby, C. J.; Stuler, J.: ¬The Library of Congress Classification as a knowledge base for automatic subject categorization (2001) 0.01
    0.008884234 = product of:
      0.11549504 = sum of:
        0.11549504 = weight(_text_:log in 1567) [ClassicSimilarity], result of:
          0.11549504 = score(doc=1567,freq=2.0), product of:
            0.20389368 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.031815533 = queryNorm
            0.5664474 = fieldWeight in 1567, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0625 = fieldNorm(doc=1567)
      0.07692308 = coord(1/13)
    
    Abstract
    This paper describes a set of experiments in adapting a subset of the Library of Congress Classification for use as a database for automatic classification. A high degree of concept integrity was obtained when subject headings were mapped from OCLC's WorldCat database and filtered using the log-likelihood statistic.
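The "log-likelihood statistic" mentioned above is commonly Dunning's G-squared test on a 2x2 contingency table of term/heading co-occurrence counts; a minimal sketch under that assumption (the paper's exact variant may differ):

```python
import math

def log_likelihood(k11: float, k12: float, k21: float, k22: float) -> float:
    """Dunning's G^2 = 2 * sum(O * ln(O/E)) over the four cells of a
    2x2 table: rows = heading present/absent, cols = term present/absent."""
    n = k11 + k12 + k21 + k22
    row1, row2 = k11 + k12, k21 + k22
    col1, col2 = k11 + k21, k12 + k22
    g2 = 0.0
    for obs, exp in [(k11, row1 * col1 / n), (k12, row1 * col2 / n),
                     (k21, row2 * col1 / n), (k22, row2 * col2 / n)]:
        if obs > 0:  # 0 * ln(0) is taken as 0
            g2 += obs * math.log(obs / exp)
    return 2.0 * g2

# A term-heading pair that co-occurs far more than chance predicts
# scores high; a statistically independent pairing scores ~0.
print(log_likelihood(10, 20, 30, 60))  # independent table -> 0.0
print(log_likelihood(50, 10, 10, 930))  # strong association -> large G^2
```

Filtering out low-G-squared pairs is what removes subject headings whose terms are only weakly associated with a class.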
  3. Koch, T.; Ardö, A.: Automatic classification of full-text HTML-documents from one specific subject area : DESIRE II D3.6a, Working Paper 2 (2000) 0.01
    0.005896702 = product of:
      0.076657124 = sum of:
        0.076657124 = weight(_text_:software in 1667) [ClassicSimilarity], result of:
          0.076657124 = score(doc=1667,freq=6.0), product of:
            0.12621705 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031815533 = queryNorm
            0.6073436 = fieldWeight in 1667, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=1667)
      0.07692308 = coord(1/13)
    
    Content
    1 Introduction / 2 Method overview / 3 Ei thesaurus preprocessing / 4 Automatic classification process: 4.1 Matching -- 4.2 Weighting -- 4.3 Preparation for display / 5 Results of the classification process / 6 Evaluations / 7 Software / 8 Other applications / 9 Experiments with universal classification systems / References / Appendix A: Ei classification service: Software / Appendix B: Use of the classification software as subject filter in a WWW harvester.
  4. Adams, K.C.: Word wranglers : Automatic classification tools transform enterprise documents from "bags of words" into knowledge resources (2003) 0.00
    0.0047578807 = product of:
      0.061852448 = sum of:
        0.061852448 = weight(_text_:software in 1665) [ClassicSimilarity], result of:
          0.061852448 = score(doc=1665,freq=10.0), product of:
            0.12621705 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031815533 = queryNorm
            0.49004826 = fieldWeight in 1665, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1665)
      0.07692308 = coord(1/13)
    
    Abstract
    Taxonomies are an important part of any knowledge management (KM) system, and automatic classification software is emerging as a "killer app" for consumer and enterprise portals. A number of companies such as Inxight Software, Mohomine, Metacode, and others claim to interpret the semantic content of any textual document and automatically classify text on the fly. The promise that software could automatically produce a Yahoo-style directory is a siren call not many IT managers are able to resist. KM needs have grown more complex due to the increasing amount of digital information, the declining effectiveness of keyword searching, and heterogeneous document formats in corporate databases. This environment requires innovative KM tools, and automatic classification technology is an example of this new kind of software. These products can be divided into three categories according to their underlying technology: rules-based, catalog-by-example, and statistical clustering. Evolving trends in this market include framing classification as a cyborg (computer- and human-based) activity and the increasing use of extensible markup language (XML) and support vector machine (SVM) technology. In this article, we survey the rapidly changing automatic classification software market and examine the features and capabilities of leading classification products.
  5. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.00
    0.0039904467 = product of:
      0.051875807 = sum of:
        0.051875807 = weight(_text_:web in 4088) [ClassicSimilarity], result of:
          0.051875807 = score(doc=4088,freq=6.0), product of:
            0.10383032 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031815533 = queryNorm
            0.49962097 = fieldWeight in 4088, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4088)
      0.07692308 = coord(1/13)
    
    Abstract
    The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means of selecting resources in engineering. Considerations in offering a robot-generated Web index as a successor to a manually indexed, quality-controlled subject gateway are also discussed.
  6. Prabowo, R.; Jackson, M.; Burden, P.; Knoell, H.-D.: Ontology-based automatic classification for the Web pages : design, implementation and evaluation (2002) 0.00
    0.0029928354 = product of:
      0.038906857 = sum of:
        0.038906857 = weight(_text_:web in 3383) [ClassicSimilarity], result of:
          0.038906857 = score(doc=3383,freq=6.0), product of:
            0.10383032 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031815533 = queryNorm
            0.37471575 = fieldWeight in 3383, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3383)
      0.07692308 = coord(1/13)
    
    Abstract
    In recent years we have witnessed continual growth in the use of ontologies to enable machine reasoning. This paper describes an automatic classifier that uses ontologies to classify Web pages with respect to the Dewey Decimal Classification (DDC) and Library of Congress Classification (LCC) schemes. Firstly, we explain how these ontologies can be built in a modular fashion and mapped into DDC and LCC. Secondly, we propose a formal definition of a DDC-LCC mapping and an ontology-classification-scheme mapping. Thirdly, we explain how the classifier uses these ontologies to assist classification. Finally, we present an experiment evaluating the classifier's accuracy. The experiment shows that our approach results in improved classification accuracy. This improvement, however, comes at the cost of a low coverage ratio due to the incompleteness of the ontologies used.
    Content
    Contribution to: The Third International Conference on Web Information Systems Engineering (WISE'02), Dec. 12-14, 2002, Singapore, p.182.
  7. Hagedorn, K.; Chapman, S.; Newman, D.: Enhancing search and browse using automated clustering of subject metadata (2007) 0.00
    0.0017279141 = product of:
      0.022462882 = sum of:
        0.022462882 = weight(_text_:web in 1168) [ClassicSimilarity], result of:
          0.022462882 = score(doc=1168,freq=2.0), product of:
            0.10383032 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031815533 = queryNorm
            0.21634221 = fieldWeight in 1168, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1168)
      0.07692308 = coord(1/13)
    
    Abstract
    The Web puzzle of online information resources often hinders end-users from effective and efficient access to these resources. Clustering resources into appropriate subject-based groupings may help alleviate these difficulties, but will it work with heterogeneous material? The University of Michigan and the University of California Irvine joined forces to test automatically enhancing metadata records using the Topic Modeling algorithm on the varied OAIster corpus. We created labels for the resulting clusters of metadata records, matched the clusters to an in-house classification system, and developed a prototype that would showcase methods for search and retrieval using the enhanced records. Results indicated that while the algorithm was somewhat time-intensive to run and using a local classification scheme had its drawbacks, precise clustering of records was achieved and the prototype interface proved that faceted classification could be powerful in helping end-users find resources.
  8. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.00
    0.0011052734 = product of:
      0.014368553 = sum of:
        0.014368553 = product of:
          0.04310566 = sum of:
            0.04310566 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.04310566 = score(doc=611,freq=2.0), product of:
                0.11141258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031815533 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.33333334 = coord(1/3)
      0.07692308 = coord(1/13)
    
    Date
    22. 8.2009 12:54:24
  9. Automatic classification research at OCLC (2002) 0.00
    7.7369134E-4 = product of:
      0.010057987 = sum of:
        0.010057987 = product of:
          0.03017396 = sum of:
            0.03017396 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.03017396 = score(doc=1563,freq=2.0), product of:
                0.11141258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031815533 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.33333334 = coord(1/3)
      0.07692308 = coord(1/13)
    
    Date
    5. 5.2003 9:22:09