Search (59 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  1. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.04
    0.039384715 = product of:
      0.098461784 = sum of:
        0.048695378 = weight(_text_:u in 3065) [ClassicSimilarity], result of:
          0.048695378 = score(doc=3065,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.3617784 = fieldWeight in 3065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=3065)
        0.049766406 = weight(_text_:r in 3065) [ClassicSimilarity], result of:
          0.049766406 = score(doc=3065,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.36573532 = fieldWeight in 3065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.078125 = fieldNorm(doc=3065)
      0.4 = coord(2/5)
    
    Type
    r
  2. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.03
    0.030616803 = product of:
      0.076542005 = sum of:
        0.048695378 = weight(_text_:u in 611) [ClassicSimilarity], result of:
          0.048695378 = score(doc=611,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.3617784 = fieldWeight in 611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=611)
        0.027846623 = product of:
          0.055693246 = sum of:
            0.055693246 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.055693246 = score(doc=611,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 8.2009 12:54:24
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.03
    0.02626946 = product of:
      0.06567365 = sum of:
        0.048965674 = product of:
          0.1958627 = sum of:
            0.1958627 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.1958627 = score(doc=562,freq=2.0), product of:
                0.34849894 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041106213 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.016707974 = product of:
          0.033415947 = sum of:
            0.033415947 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.033415947 = score(doc=562,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  4. Oberhauser, O.: Automatisches Klassifizieren : Verfahren zur Erschließung elektronischer Dokumente (2004) 0.03
    0.026083654 = product of:
      0.065209135 = sum of:
        0.045730986 = weight(_text_:o in 2487) [ClassicSimilarity], result of:
          0.045730986 = score(doc=2487,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.2217349 = fieldWeight in 2487, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03125 = fieldNorm(doc=2487)
        0.019478152 = weight(_text_:u in 2487) [ClassicSimilarity], result of:
          0.019478152 = score(doc=2487,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.14471136 = fieldWeight in 2487, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=2487)
      0.4 = coord(2/5)
    
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  5. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.02
    0.022865495 = product of:
      0.11432747 = sum of:
        0.11432747 = weight(_text_:o in 448) [ClassicSimilarity], result of:
          0.11432747 = score(doc=448,freq=8.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.55433726 = fieldWeight in 448, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0390625 = fieldNorm(doc=448)
      0.2 = coord(1/5)
    
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n**2/p) time on p processors rather than the worst-case O(n**3/p) time. Furthermore, the O(n**2/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
  6. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    0.021731649 = product of:
      0.05432912 = sum of:
        0.034836486 = weight(_text_:r in 141) [ClassicSimilarity], result of:
          0.034836486 = score(doc=141,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.25601473 = fieldWeight in 141, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.0546875 = fieldNorm(doc=141)
        0.019492636 = product of:
          0.03898527 = sum of:
            0.03898527 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.03898527 = score(doc=141,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Pages
    S.1-22
    Source
    Klassifikation und Ordnung. Tagungsband 12. Jahrestagung der Gesellschaft für Klassifikation, Darmstadt 17.-19.3.1988. Hrsg.: R. Wille
  7. Reiner, U.: VZG-Projekt Colibri : Bewertung von automatisch DDC-klassifizierten Titeldatensätzen der Deutschen Nationalbibliothek (DNB) (2009) 0.02
    0.019692358 = product of:
      0.049230892 = sum of:
        0.024347689 = weight(_text_:u in 2675) [ClassicSimilarity], result of:
          0.024347689 = score(doc=2675,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.1808892 = fieldWeight in 2675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2675)
        0.024883203 = weight(_text_:r in 2675) [ClassicSimilarity], result of:
          0.024883203 = score(doc=2675,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.18286766 = fieldWeight in 2675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2675)
      0.4 = coord(2/5)
    
    Type
    r
  8. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.02
    0.018627128 = product of:
      0.04656782 = sum of:
        0.029859845 = weight(_text_:r in 2760) [ClassicSimilarity], result of:
          0.029859845 = score(doc=2760,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.2194412 = fieldWeight in 2760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.046875 = fieldNorm(doc=2760)
        0.016707974 = product of:
          0.033415947 = sum of:
            0.033415947 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.033415947 = score(doc=2760,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 3.2009 19:11:54
  9. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.02
    0.01795032 = product of:
      0.0448758 = sum of:
        0.03373715 = weight(_text_:u in 3284) [ClassicSimilarity], result of:
          0.03373715 = score(doc=3284,freq=6.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.25064746 = fieldWeight in 3284, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.01113865 = product of:
          0.0222773 = sum of:
            0.0222773 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.0222773 = score(doc=3284,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Classifying objects (e.g. fauna, flora, texts) is a process based on human intelligence. In computer science, and in particular in the field of Artificial Intelligence (AI), one question under investigation is the extent to which processes that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving specialist problems such as building a chess computer; "Rybka", for instance, has been the reigning computer-chess world champion since June 2007. To what extent everyday problems can be solved with AI methods remains, in the general case, an open question. In solving everyday problems, the processing of natural language, e.g. understanding it, plays an essential role. Realizing "common sense" as a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984; regarding the AI flagship project "Cyc" there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g. work titles, abstracts, prefaces, tables of contents) is also necessary when intellectually classifying bibliographic title records or online publications, in order to classify these text objects correctly. Since 2007, the Deutsche Nationalbibliothek has intellectually classified nearly all of its publications with the Dewey Decimal Classification (DDC).
    At the latest since the World Wide Web came into existence, the number of publications to be classified has been growing faster than they can be intellectually indexed. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (Information Retrieval, IR) have existed since 1968, and for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has also included work on the automatic DDC or RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental rather than systems in continuous operation. The VZG project Colibri/DDC has, among other things, also been concerned with automatic DDC classification since 2006. The related investigations and developments serve to answer the research question: "Is it possible to automatically achieve a substantively coherent DDC classification of all GVK-PLUS title records?"
    Date
    22. 1.2010 14:41:24
  10. Oberhauser, O.: Automatisches Klassifizieren : Entwicklungsstand - Methodik - Anwendungsbereiche (2005) 0.02
    0.016302286 = product of:
      0.04075571 = sum of:
        0.028581867 = weight(_text_:o in 38) [ClassicSimilarity], result of:
          0.028581867 = score(doc=38,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.13858432 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.01953125 = fieldNorm(doc=38)
        0.0121738445 = weight(_text_:u in 38) [ClassicSimilarity], result of:
          0.0121738445 = score(doc=38,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.0904446 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.01953125 = fieldNorm(doc=38)
      0.4 = coord(2/5)
    
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  11. Ruocco, A.S.; Frieder, O.: Clustering and classification of large document bases in a parallel environment (1997) 0.02
    0.016005846 = product of:
      0.08002923 = sum of:
        0.08002923 = weight(_text_:o in 1661) [ClassicSimilarity], result of:
          0.08002923 = score(doc=1661,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.38803607 = fieldWeight in 1661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1661)
      0.2 = coord(1/5)
    
  12. Oberhauser, O.: Automatisches Klassifizieren und Bibliothekskataloge (2005) 0.02
    0.016005846 = product of:
      0.08002923 = sum of:
        0.08002923 = weight(_text_:o in 4099) [ClassicSimilarity], result of:
          0.08002923 = score(doc=4099,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.38803607 = fieldWeight in 4099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4099)
      0.2 = coord(1/5)
    
  13. Bianchini, C.; Bargioni, S.: Automated classification using linked open data : a case study on faceted classification and Wikidata (2021) 0.02
    0.016005846 = product of:
      0.08002923 = sum of:
        0.08002923 = weight(_text_:o in 724) [ClassicSimilarity], result of:
          0.08002923 = score(doc=724,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.38803607 = fieldWeight in 724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=724)
      0.2 = coord(1/5)
    
    Abstract
    The Wikidata gadget, CCLitBox, for the automated classification of literary authors and works by a faceted classification and using Linked Open Data (LOD) is presented. The tool reproduces the classification algorithm of class O Literature of the Colon Classification and uses data freely available in Wikidata to create Colon Classification class numbers. CCLitBox is totally free and enables any user to classify literary authors and their works; it is easily accessible to everybody; it uses LOD from Wikidata but missing data for classification can be freely added if necessary; it is readymade for any cooperative and networked project.
  14. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.02
    0.015522606 = product of:
      0.038806513 = sum of:
        0.024883203 = weight(_text_:r in 1107) [ClassicSimilarity], result of:
          0.024883203 = score(doc=1107,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.18286766 = fieldWeight in 1107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1107)
        0.013923312 = product of:
          0.027846623 = sum of:
            0.027846623 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.027846623 = score(doc=1107,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    28.10.2013 19:22:57
  15. Wu, M.; Fuller, M.; Wilkinson, R.: Using clustering and classification approaches in interactive retrieval (2001) 0.01
    0.013934595 = product of:
      0.06967297 = sum of:
        0.06967297 = weight(_text_:r in 2666) [ClassicSimilarity], result of:
          0.06967297 = score(doc=2666,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.51202947 = fieldWeight in 2666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.109375 = fieldNorm(doc=2666)
      0.2 = coord(1/5)
    
  16. Drori, O.; Alon, N.: Using document classification for displaying search results (2003) 0.01
    0.013719295 = product of:
      0.068596475 = sum of:
        0.068596475 = weight(_text_:o in 1565) [ClassicSimilarity], result of:
          0.068596475 = score(doc=1565,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.33260235 = fieldWeight in 1565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.046875 = fieldNorm(doc=1565)
      0.2 = coord(1/5)
    
  17. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.01
    0.01168689 = product of:
      0.058434453 = sum of:
        0.058434453 = weight(_text_:u in 32) [ClassicSimilarity], result of:
          0.058434453 = score(doc=32,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.43413407 = fieldWeight in 32, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=32)
      0.2 = coord(1/5)
    
    Footnote
    Also a doctoral dissertation, U Saarbrücken 1985
  18. Reiner, U.: Automatic analysis of DDC notations (2007) 0.01
    0.01168689 = product of:
      0.058434453 = sum of:
        0.058434453 = weight(_text_:u in 118) [ClassicSimilarity], result of:
          0.058434453 = score(doc=118,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.43413407 = fieldWeight in 118, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=118)
      0.2 = coord(1/5)
    
  19. Sojka, P.; Lee, M.; Rehurek, R.; Hatlapatka, R.; Kucbel, M.; Bouche, T.; Goutorbe, C.; Anghelache, R.; Wojciechowski, K.: Toolset for entity and semantic associations : Final Release (2013) 0.01
    0.010343754 = product of:
      0.051718768 = sum of:
        0.051718768 = weight(_text_:r in 1057) [ClassicSimilarity], result of:
          0.051718768 = score(doc=1057,freq=6.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.38008332 = fieldWeight in 1057, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.046875 = fieldNorm(doc=1057)
      0.2 = coord(1/5)
    
  20. Fangmeyer, H.; Gloden, R.: Bewertung und Vergleich von Klassifikationsergebnissen bei automatischen Verfahren (1978) 0.01
    0.007962625 = product of:
      0.039813124 = sum of:
        0.039813124 = weight(_text_:r in 81) [ClassicSimilarity], result of:
          0.039813124 = score(doc=81,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.29258826 = fieldWeight in 81, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.0625 = fieldNorm(doc=81)
      0.2 = coord(1/5)
    

Languages

  • e 41
  • d 18

Types

  • a 44
  • el 11
  • r 4
  • m 3
  • x 3
  • d 1
  • s 1