Search (44 results, page 1 of 3)

  • × theme_ss:"Automatisches Klassifizieren"
  • × type_ss:"a"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.03
    0.02626946 = product of:
      0.06567365 = sum of:
        0.048965674 = product of:
          0.1958627 = sum of:
            0.1958627 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.1958627 = score(doc=562,freq=2.0), product of:
                0.34849894 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041106213 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.016707974 = product of:
          0.033415947 = sum of:
            0.033415947 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.033415947 = score(doc=562,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
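    The score tree above follows Lucene's ClassicSimilarity: each leaf is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing the `_text_:3a in 562` leaf from the numbers shown (function names are ours, not Lucene's):

    ```python
    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def leaf_score(freq, doc_freq, max_docs, query_norm, field_norm):
        """Score of one term in one field: queryWeight * fieldWeight."""
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm                     # idf * queryNorm
        field_weight = math.sqrt(freq) * i * field_norm   # tf * idf * fieldNorm
        return query_weight * field_weight

    # The "_text_:3a in 562" leaf of result 1:
    score = leaf_score(freq=2.0, doc_freq=24, max_docs=44218,
                       query_norm=0.041106213, field_norm=0.046875)
    print(score)  # ~0.1958627, matching the explain output
    ```

    The `coord(1/4)` and `coord(2/5)` factors further down the tree then scale this leaf by the fraction of query clauses the document actually matched.
    
    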
  2. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.02
    0.022865495 = product of:
      0.11432747 = sum of:
        0.11432747 = weight(_text_:o in 448) [ClassicSimilarity], result of:
          0.11432747 = score(doc=448,freq=8.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.55433726 = fieldWeight in 448, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0390625 = fieldNorm(doc=448)
      0.2 = coord(1/5)
    
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n**2/p) time on p processors rather than the worst-case O(n**3/p) time. Furthermore, the O(n**2/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
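    The sequential baseline the abstract starts from, group-average (UPGMA) hierarchical agglomerative clustering, can be sketched in a few lines; the paper's contribution is distributing the pairwise-distance work across p nodes via message passing, which this single-process sketch deliberately omits. The toy points and stopping criterion (merge until k clusters) are our assumptions:

    ```python
    import math

    def group_average_hac(points, k):
        """Naive group-average (UPGMA) agglomerative clustering:
        repeatedly merge the two clusters with the smallest average
        pairwise distance until k clusters remain. This is the
        O(n^3)-worst-case serial form the parallel algorithm improves on."""
        clusters = [[p] for p in points]

        def avg_dist(a, b):
            return sum(math.dist(x, y) for x in a for y in b) / (len(a) * len(b))

        while len(clusters) > k:
            i, j = min(((i, j) for i in range(len(clusters))
                        for j in range(i + 1, len(clusters))),
                       key=lambda ij: avg_dist(clusters[ij[0]], clusters[ij[1]]))
            clusters[i] += clusters.pop(j)  # merge the closest pair
        return clusters

    # Two obvious groups of toy "document vectors":
    docs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
    print(sorted(len(c) for c in group_average_hac(docs, 2)))  # → [2, 2]
    ```
    
    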
  3. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    0.021731649 = product of:
      0.05432912 = sum of:
        0.034836486 = weight(_text_:r in 141) [ClassicSimilarity], result of:
          0.034836486 = score(doc=141,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.25601473 = fieldWeight in 141, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.0546875 = fieldNorm(doc=141)
        0.019492636 = product of:
          0.03898527 = sum of:
            0.03898527 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.03898527 = score(doc=141,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Pages
    S.1-22
    Source
    Klassifikation und Ordnung. Tagungsband 12. Jahrestagung der Gesellschaft für Klassifikation, Darmstadt 17.-19.3.1988. Hrsg.: R. Wille
  4. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.02
    0.018627128 = product of:
      0.04656782 = sum of:
        0.029859845 = weight(_text_:r in 2760) [ClassicSimilarity], result of:
          0.029859845 = score(doc=2760,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.2194412 = fieldWeight in 2760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.046875 = fieldNorm(doc=2760)
        0.016707974 = product of:
          0.033415947 = sum of:
            0.033415947 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.033415947 = score(doc=2760,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 3.2009 19:11:54
  5. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.02
    0.01795032 = product of:
      0.0448758 = sum of:
        0.03373715 = weight(_text_:u in 3284) [ClassicSimilarity], result of:
          0.03373715 = score(doc=3284,freq=6.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.25064746 = fieldWeight in 3284, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.01113865 = product of:
          0.0222773 = sum of:
            0.0222773 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.0222773 = score(doc=3284,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Classifying objects (e.g., fauna, flora, texts) is a process based on human intelligence. In computer science, and particularly in the field of Artificial Intelligence (AI), one question under investigation is to what extent processes that require human intelligence can be automated. Here it has turned out that solving everyday problems poses a greater challenge than solving specialized problems such as building a chess computer: "Rybka" has been the reigning computer chess world champion since June 2007. To what extent everyday problems can be solved with AI methods remains, in the general case, an open question. In solving everyday problems, processing natural language, e.g. understanding it, plays an essential role. Realizing "common sense" as a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984; regarding the AI flagship project "Cyc" there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g., work titles, abstracts, prefaces, tables of contents) is also necessary when intellectually classifying bibliographic title records or online publications, in order to classify these text objects correctly. Since 2007, the Deutsche Nationalbibliothek has intellectually classified nearly all publications with the Dewey Decimal Classification (DDC).
    At least since the advent of the World Wide Web, the number of publications to be classified has been growing faster than they can be intellectually indexed by subject. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has also included work on the automatic DDC or RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous production use. The VZG project Colibri/DDC has, among other things, also been concerned with automatic DDC classification since 2006. The investigations and developments in this area serve to answer the research question: "Is it possible to automatically achieve a substantively sound DDC title classification of all GVK-PLUS title records?"
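    The kind of automatic text classification the abstract surveys can be illustrated with a minimal centroid (Rocchio-style) classifier over bag-of-words vectors. This is a generic sketch, not the method used in the Colibri/DDC project, and the DDC-like labels and training snippets below are invented for illustration:

    ```python
    from collections import Counter

    def vectorize(text):
        # Bag-of-words term-frequency vector.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = sum(v * v for v in a.values()) ** 0.5
        nb = sum(v * v for v in b.values()) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def train_centroids(labeled):
        """One centroid (summed term vector) per class label."""
        centroids = {}
        for label, text in labeled:
            centroids.setdefault(label, Counter()).update(vectorize(text))
        return centroids

    def classify(centroids, text):
        v = vectorize(text)
        return max(centroids, key=lambda lab: cosine(centroids[lab], v))

    # Toy training data with invented DDC-like class labels:
    train = [("004", "computer data processing software algorithms"),
             ("004", "programming computer systems software"),
             ("590", "animals zoology fauna species habitats"),
             ("590", "birds mammals animals species")]
    c = train_centroids(train)
    print(classify(c, "new software algorithms for data processing"))  # → 004
    ```

    Production systems replace raw term frequencies with tf-idf weighting and stronger learners, but the pipeline shape (vectorize, train per-class model, assign the best-scoring class) is the same.
    
    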
    Date
    22. 1.2010 14:41:24
  6. Ruocco, A.S.; Frieder, O.: Clustering and classification of large document bases in a parallel environment (1997) 0.02
    0.016005846 = product of:
      0.08002923 = sum of:
        0.08002923 = weight(_text_:o in 1661) [ClassicSimilarity], result of:
          0.08002923 = score(doc=1661,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.38803607 = fieldWeight in 1661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1661)
      0.2 = coord(1/5)
    
  7. Oberhauser, O.: Automatisches Klassifizieren und Bibliothekskataloge (2005) 0.02
    0.016005846 = product of:
      0.08002923 = sum of:
        0.08002923 = weight(_text_:o in 4099) [ClassicSimilarity], result of:
          0.08002923 = score(doc=4099,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.38803607 = fieldWeight in 4099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4099)
      0.2 = coord(1/5)
    
  8. Bianchini, C.; Bargioni, S.: Automated classification using linked open data : a case study on faceted classification and Wikidata (2021) 0.02
    0.016005846 = product of:
      0.08002923 = sum of:
        0.08002923 = weight(_text_:o in 724) [ClassicSimilarity], result of:
          0.08002923 = score(doc=724,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.38803607 = fieldWeight in 724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=724)
      0.2 = coord(1/5)
    
    Abstract
    This paper presents CCLitBox, a Wikidata gadget for the automated classification of literary authors and works with a faceted classification, using Linked Open Data (LOD). The tool reproduces the classification algorithm of class O Literature of the Colon Classification and uses data freely available in Wikidata to create Colon Classification class numbers. CCLitBox is completely free and enables any user to classify literary authors and their works; it is easily accessible to everybody; it uses LOD from Wikidata, but data missing for classification can be freely added if necessary; and it is ready-made for any cooperative and networked project.
  9. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.02
    0.015522606 = product of:
      0.038806513 = sum of:
        0.024883203 = weight(_text_:r in 1107) [ClassicSimilarity], result of:
          0.024883203 = score(doc=1107,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.18286766 = fieldWeight in 1107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1107)
        0.013923312 = product of:
          0.027846623 = sum of:
            0.027846623 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.027846623 = score(doc=1107,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    28.10.2013 19:22:57
  10. Wu, M.; Fuller, M.; Wilkinson, R.: Using clustering and classification approaches in interactive retrieval (2001) 0.01
    0.013934595 = product of:
      0.06967297 = sum of:
        0.06967297 = weight(_text_:r in 2666) [ClassicSimilarity], result of:
          0.06967297 = score(doc=2666,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.51202947 = fieldWeight in 2666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.109375 = fieldNorm(doc=2666)
      0.2 = coord(1/5)
    
  11. Drori, O.; Alon, N.: Using document classification for displaying search results (2003) 0.01
    0.013719295 = product of:
      0.068596475 = sum of:
        0.068596475 = weight(_text_:o in 1565) [ClassicSimilarity], result of:
          0.068596475 = score(doc=1565,freq=2.0), product of:
            0.20624171 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.041106213 = queryNorm
            0.33260235 = fieldWeight in 1565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.046875 = fieldNorm(doc=1565)
      0.2 = coord(1/5)
    
  12. Fangmeyer, H.; Gloden, R.: Bewertung und Vergleich von Klassifikationsergebnissen bei automatischen Verfahren (1978) 0.01
    0.007962625 = product of:
      0.039813124 = sum of:
        0.039813124 = weight(_text_:r in 81) [ClassicSimilarity], result of:
          0.039813124 = score(doc=81,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.29258826 = fieldWeight in 81, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.0625 = fieldNorm(doc=81)
      0.2 = coord(1/5)
    
  13. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.01
    0.0077912607 = product of:
      0.038956303 = sum of:
        0.038956303 = weight(_text_:u in 83) [ClassicSimilarity], result of:
          0.038956303 = score(doc=83,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.28942272 = fieldWeight in 83, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=83)
      0.2 = coord(1/5)
    
  14. Pfister, J.: Clustering von Patent-Dokumenten am Beispiel der Datenbanken des Fachinformationszentrums Karlsruhe (2006) 0.01
    0.0077912607 = product of:
      0.038956303 = sum of:
        0.038956303 = weight(_text_:u in 5976) [ClassicSimilarity], result of:
          0.038956303 = score(doc=5976,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.28942272 = fieldWeight in 5976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=5976)
      0.2 = coord(1/5)
    
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  15. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.01
    0.006817353 = product of:
      0.034086764 = sum of:
        0.034086764 = weight(_text_:u in 1595) [ClassicSimilarity], result of:
          0.034086764 = score(doc=1595,freq=2.0), product of:
            0.13460001 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.041106213 = queryNorm
            0.25324488 = fieldWeight in 1595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1595)
      0.2 = coord(1/5)
    
    Source
    Advances in classification research, vol.10: proceedings of the 10th ASIS SIG/CR Classification Research Workshop. Ed.: Albrechtsen, H. u. J.E. Mai
  16. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.0066831894 = product of:
      0.033415947 = sum of:
        0.033415947 = product of:
          0.066831894 = sum of:
            0.066831894 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.066831894 = score(doc=1046,freq=2.0), product of:
                0.14394696 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041106213 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    5. 5.2003 14:17:22
  17. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.01
    0.005971969 = product of:
      0.029859845 = sum of:
        0.029859845 = weight(_text_:r in 316) [ClassicSimilarity], result of:
          0.029859845 = score(doc=316,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.2194412 = fieldWeight in 316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.046875 = fieldNorm(doc=316)
      0.2 = coord(1/5)
    
  18. Mukhopadhyay, S.; Peng, S.; Raje, R.; Palakal, M.; Mostafa, J.: Multi-agent information classification using dynamic acquaintance lists (2003) 0.01
    0.005971969 = product of:
      0.029859845 = sum of:
        0.029859845 = weight(_text_:r in 1755) [ClassicSimilarity], result of:
          0.029859845 = score(doc=1755,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.2194412 = fieldWeight in 1755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.046875 = fieldNorm(doc=1755)
      0.2 = coord(1/5)
    
  19. Liu, R.-L.: Dynamic category profiling for text filtering and classification (2007) 0.01
    0.005971969 = product of:
      0.029859845 = sum of:
        0.029859845 = weight(_text_:r in 900) [ClassicSimilarity], result of:
          0.029859845 = score(doc=900,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.2194412 = fieldWeight in 900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.046875 = fieldNorm(doc=900)
      0.2 = coord(1/5)
    
  20. Cosh, K.J.; Burns, R.; Daniel, T.: Content clouds : classifying content in Web 2.0 (2008) 0.01
    0.005971969 = product of:
      0.029859845 = sum of:
        0.029859845 = weight(_text_:r in 2013) [ClassicSimilarity], result of:
          0.029859845 = score(doc=2013,freq=2.0), product of:
            0.13607219 = queryWeight, product of:
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.041106213 = queryNorm
            0.2194412 = fieldWeight in 2013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3102584 = idf(docFreq=4387, maxDocs=44218)
              0.046875 = fieldNorm(doc=2013)
      0.2 = coord(1/5)
    

Languages

  • e 35
  • d 9