Search (59 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  1. Bollmann, P.; Konrad, E.; Schneider, H.-J.; Zuse, H.: Anwendung automatischer Klassifikationsverfahren mit dem System FAKYR (1978) 0.05
    Source
    Kooperation in der Klassifikation I. Proc. der Sekt.1-3 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
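The relevance figure after each title is a Lucene ClassicSimilarity (TF-IDF) score. As a minimal sketch, the 0.05 for result 1 can be reproduced from the term statistics the engine reports in its score explanation (the idf, queryNorm, and fieldNorm values below are the engine's figures for doc 82; the formula is Lucene's classic TF-IDF with a coord factor for the fraction of query clauses matched):

```python
# Recomputing the ClassicSimilarity score for result 1 (doc 82).
# Per-term score = sqrt(freq) * idf^2 * queryNorm * fieldNorm.

def term_score(freq, idf, query_norm, field_norm):
    tf = freq ** 0.5                       # tf(freq) = sqrt(freq)
    return tf * idf * idf * query_norm * field_norm

QUERY_NORM = 0.033538654                   # queryNorm reported by the engine
FIELD_NORM = 0.0625                        # fieldNorm(doc=82)

schneider = term_score(2.0, 6.640641, QUERY_NORM, FIELD_NORM) * (1 / 5)  # inner coord(1/5)
p_term    = term_score(2.0, 3.5955126, QUERY_NORM, FIELD_NORM)
i_term    = term_score(2.0, 3.7717297, QUERY_NORM, FIELD_NORM)

score = (schneider + p_term + i_term) * (3 / 7)  # coord(3/7): 3 of 7 clauses matched
print(score)                                     # ~0.045702912, shown above as 0.05
```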
  2. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.02
  3. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.02
    Source
    Kooperation in der Klassifikation I. Proc. der Sekt.1-3 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
  4. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    Date
    22. 8.2009 12:54:24
  5. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.02
    Source
    Advances in classification research, vol.10: proceedings of the 10th ASIS SIG/CR Classification Research Workshop. Ed.: Albrechtsen, H. u. J.E. Mai
  6. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.01
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
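The abstract above describes the general recipe "extract passages, then let the text classifier classify them". Below is a minimal sketch of that idea, assuming a sliding-window extractor and a majority vote over passage predictions; neither heuristic is specified in the abstract, and PETC's actual method may differ:

```python
# Illustrative sketch only: sliding-window passages fed to a standard SVM
# text classifier, with the document labeled by a vote over its passages.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def passages(text, size=50, stride=25):
    """Sliding word windows as candidate passages (an assumed heuristic)."""
    words = text.split()
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - size, 0) + 1, stride)]

# A standard SVM text classifier, here trained on two toy aspect-labeled texts.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(["fever and cough are typical symptoms", "vaccination prevents infection"],
        ["symptoms", "prevention"])

def classify_by_passages(clf, text):
    """Label a document by a majority vote over its passages' predictions."""
    votes = Counter(clf.predict(passages(text)))
    return votes.most_common(1)[0][0]

print(classify_by_passages(clf, "patients often report fever, cough and fatigue"))
```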
  7. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.01
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  8. Schek, M.: Automatische Klassifizierung in Erschließung und Recherche eines Pressearchivs (2006) 0.01
    Object
    I-Views
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  9. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.01
    Abstract
    Classifying objects (e.g., fauna, flora, texts) is a process based on human intelligence. In computer science - particularly in the field of artificial intelligence (AI) - one question under investigation is to what extent processes that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving specialized problems such as building a chess computer; "Rybka" has been the reigning computer chess world champion since June 2007. To what extent everyday problems can be solved with AI methods is, for the general case, still an open question. Processing natural language, e.g., understanding it, plays an essential role in solving everyday problems. Realizing "common sense" as a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984. Regarding "Cyc", AI's flagship project, there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g., work titles, abstracts, prefaces, tables of contents) is also required for the intellectual classification of bibliographic title records and online publications, in order to classify these text objects correctly. Since 2007, the Deutsche Nationalbibliothek has intellectually classified nearly all publications using the Dewey Decimal Classification (DDC).
    At least since the advent of the World Wide Web, the number of publications to be classified has been growing faster than it can be subject-indexed intellectually. Methods are therefore sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and methods for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has also included work on the automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous operation. The VZG project Colibri/DDC has, among other things, also been working on automatic DDC classification since 2006. The related studies and developments serve to answer the research question: "Is it possible to achieve a substantively sound automatic DDC classification of all GVK-PLUS title records?"
    Date
    22. 1.2010 14:41:24
  10. Schek, M.: Automatische Klassifizierung und Visualisierung im Archiv der Süddeutschen Zeitung (2005) 0.01
    Object
    i-views
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  11. Shafer, K.E.: Automatic Subject Assignment via the Scorpion System (2001) 0.01
    Footnote
    Part of a special issue: OCLC and the Internet: An Historical Overview of Research Activities, 1990-1999 - Part I
  12. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.01
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n**2/p) time on p processors rather than the worst-case O(n**3/p) time. Furthermore, the O(n**2/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
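A minimal serial sketch of the group-average (UPGMA) clustering that the paper parallelizes follows; the random vectors and cluster count are placeholders, not the TREC setup, and the distributed-memory message passing is not shown:

```python
# Group-average hierarchical agglomerative clustering on stand-in data.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
docs = rng.random((100, 20))               # stand-in for 100 document vectors

dists = pdist(docs, metric="cosine")       # all pairwise distances: the O(n^2) work
tree = linkage(dists, method="average")    # group-average (UPGMA) merging
labels = fcluster(tree, t=5, criterion="maxclust")  # cut dendrogram into 5 clusters
print(labels[:10])
```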
  13. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.01
    Footnote
    Also a doctoral dissertation, U Saarbrücken 1985
  14. Reiner, U.: Automatic analysis of DDC notations (2007) 0.01
  15. Fangmeyer, H.; Gloden, R.: Bewertung und Vergleich von Klassifikationsergebnissen bei automatischen Verfahren (1978) 0.01
    Source
    Kooperation in der Klassifikation I. Proc. der Sekt.1-3 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
  16. Cheng, P.T.K.; Wu, A.K.W.: ACS: an automatic classification system (1995) 0.01
    Abstract
    In this paper, we introduce ACS, an automatic classification system for school libraries. First, various approaches towards automatic classification, namely (i) rule-based, (ii) browse and search, and (iii) partial match, are critically reviewed. The central issues of scheme selection, text analysis and similarity measures are discussed. A novel approach towards detecting book-class similarity with Modified Overlap Coefficient (MOC) is also proposed. Finally, the design and implementation of ACS is presented. The test result of over 80% correctness in automatic classification and a cost reduction of 75% compared to manual classification suggest that ACS is highly adoptable
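The abstract proposes a Modified Overlap Coefficient (MOC) for book-class similarity but does not give its formula; as a baseline, the classic overlap coefficient it modifies looks like this (the term sets are invented examples):

```python
# Classic overlap coefficient; the MOC modification is not specified in the abstract.
def overlap_coefficient(a: set, b: set) -> float:
    """|A ∩ B| / min(|A|, |B|) -- 1.0 when one term set contains the other."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

book_terms = {"algebra", "geometry", "proof"}
class_terms = {"algebra", "geometry", "calculus", "statistics"}
print(overlap_coefficient(book_terms, class_terms))  # 2/3 ≈ 0.667
```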
  17. Panyr, J.: Automatische Indexierung und Klassifikation (1983) 0.01
    Source
    Automatisierung in der Klassifikation. Proc. 7. Jahrestagung der Gesellschaft für Klassifikation (Teil 1), Königswinter, 5.-8.4.1983. Hrsg.: I. Dahlberg u.a
  18. Fuhr, N.: Klassifikationsverfahren bei der automatischen Indexierung (1983) 0.01
    Source
    Automatisierung in der Klassifikation. Proc. 7. Jahrestagung der Gesellschaft für Klassifikation (Teil 1), Königswinter, 5.-8.4.1983. Hrsg.: I. Dahlberg u.a
  19. Krauth, J.: Evaluation von Verfahren der automatischen Klassifikation (1983) 0.01
    Source
    Automatisierung in der Klassifikation. Proc. 7. Jahrestagung der Gesellschaft für Klassifikation (Teil 1), Königswinter, 5.-8.4.1983. Hrsg.: I. Dahlberg u.a
  20. Malo, P.; Sinha, A.; Wallenius, J.; Korhonen, P.: Concept-based document classification using Wikipedia and value function (2011) 0.01

Languages

  • e 40
  • d 19

Types

  • a 47
  • el 8
  • m 3
  • r 2
  • x 2
  • d 1
  • s 1