Search (40 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.03
    0.02976569 = product of:
      0.07441422 = sum of:
        0.047341704 = weight(_text_:u in 611) [ClassicSimilarity], result of:
          0.047341704 = score(doc=611,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.3617784 = fieldWeight in 611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=611)
        0.02707252 = product of:
          0.05414504 = sum of:
            0.05414504 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.05414504 = score(doc=611,freq=2.0), product of:
                0.1399454 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03996351 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 8.2009 12:54:24
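Each indented block in these results is a Lucene ClassicSimilarity score explanation: a term's contribution is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(termFreq), and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal Python sketch, reproducing the `_text_:u` weight from result 1 with the constants (queryNorm, fieldNorm) copied from the explanation itself:

```python
import math

def classic_tfidf(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's score contribution under Lucene's ClassicSimilarity.

    queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm,
    with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)).
    """
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (tf * idf * field_norm)

# Constants copied from the first explanation tree (weight of _text_:u in doc 611):
w = classic_tfidf(freq=2.0, doc_freq=4547, max_docs=44218,
                  query_norm=0.03996351, field_norm=0.078125)
# w is approximately 0.047341704, matching the reported weight
```

Plugging in the `_text_:22` constants from the same tree (doc_freq=3622) reproduces 0.05414504 the same way; the outer `coord(...)` factors (e.g. `0.4 = coord(2/5)`) then down-weight documents that match only some of the query terms.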
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.03
    0.025539199 = product of:
      0.063848 = sum of:
        0.047604483 = product of:
          0.19041793 = sum of:
            0.19041793 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.19041793 = score(doc=562,freq=2.0), product of:
                0.33881107 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03996351 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.016243512 = product of:
          0.032487024 = sum of:
            0.032487024 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.032487024 = score(doc=562,freq=2.0), product of:
                0.1399454 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03996351 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Oberhauser, O.: Automatisches Klassifizieren : Verfahren zur Erschließung elektronischer Dokumente (2004) 0.03
    0.025358561 = product of:
      0.0633964 = sum of:
        0.01893668 = weight(_text_:u in 2487) [ClassicSimilarity], result of:
          0.01893668 = score(doc=2487,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.14471136 = fieldWeight in 2487, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=2487)
        0.04445972 = weight(_text_:o in 2487) [ClassicSimilarity], result of:
          0.04445972 = score(doc=2487,freq=2.0), product of:
            0.20050845 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03996351 = queryNorm
            0.2217349 = fieldWeight in 2487, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03125 = fieldNorm(doc=2487)
      0.4 = coord(2/5)
    
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  4. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.02
    0.022229861 = product of:
      0.1111493 = sum of:
        0.1111493 = weight(_text_:o in 448) [ClassicSimilarity], result of:
          0.1111493 = score(doc=448,freq=8.0), product of:
            0.20050845 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03996351 = queryNorm
            0.55433726 = fieldWeight in 448, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0390625 = fieldNorm(doc=448)
      0.2 = coord(1/5)
    
    Abstract
     A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n^2/p) time on p processors rather than the worst-case O(n^3/p) time. Furthermore, the O(n^2/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
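The group-average (average-link) merge rule the abstract builds on can be sketched serially; this naive version is the quadratic-to-cubic baseline whose distribution over p processors is the paper's contribution. The point data and the cluster-count cutoff `k` are illustrative, not from the paper:

```python
import math

def group_average_hac(points, k):
    """Naive serial group-average agglomerative clustering down to k clusters.

    Repeatedly merges the pair of clusters with the smallest average
    pairwise distance; the paper parallelizes this step across processors.
    """
    clusters = [[p] for p in points]

    def avg_link(c1, c2):
        # Group-average linkage: mean distance over all cross-cluster pairs.
        return sum(math.dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: avg_link(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters.pop(j)
    return clusters

# Two tight groups of 2-D points collapse into two clusters:
result = group_average_hac([(0, 0), (0, 1), (10, 0), (10, 1)], k=2)
```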
  5. Schaalje, G.B.; Blades, N.J.; Funai, T.: An open-set size-adjusted Bayesian classifier for authorship attribution (2013) 0.02
    0.019418191 = product of:
      0.09709095 = sum of:
        0.09709095 = product of:
          0.1941819 = sum of:
            0.1941819 = weight(_text_:madison in 1041) [ClassicSimilarity], result of:
              0.1941819 = score(doc=1041,freq=2.0), product of:
                0.3421433 = queryWeight, product of:
                  8.561393 = idf(docFreq=22, maxDocs=44218)
                  0.03996351 = queryNorm
                0.56754553 = fieldWeight in 1041, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.561393 = idf(docFreq=22, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1041)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Recent studies of authorship attribution have used machine-learning methods including regularized multinomial logistic regression, neural nets, support vector machines, and the nearest shrunken centroid classifier to identify likely authors of disputed texts. These methods are all limited by an inability to perform open-set classification and account for text and corpus size. We propose a customized Bayesian logit-normal-beta-binomial classification model for supervised authorship attribution. The model is based on the beta-binomial distribution with an explicit inverse relationship between extra-binomial variation and text size. The model internally estimates the relationship of extra-binomial variation to text size, and uses Markov Chain Monte Carlo (MCMC) to produce distributions of posterior authorship probabilities instead of point estimates. We illustrate the method by training the machine-learning methods as well as the open-set Bayesian classifier on undisputed papers of The Federalist, and testing the method on documents historically attributed to Alexander Hamilton, John Jay, and James Madison. The Bayesian classifier was the best classifier of these texts.
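The beta-binomial distribution at the core of the classifier can be written down directly. This sketch only evaluates its probability mass function via log-gamma functions; the paper's size-dependent link between text length and extra-binomial variation, and the MCMC fitting of posterior authorship probabilities, are not reproduced, and the alpha/beta values below are illustrative:

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, n, alpha, beta):
    """P(k occurrences in n trials) under a beta-binomial model.

    The extra-binomial variation this model adds over a plain binomial is
    the key ingredient of a size-adjusted classifier like the paper's;
    alpha and beta here are illustrative, not the paper's fitted values.
    """
    log_choose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(log_choose + log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta))
```

Summing the pmf over k = 0..n returns 1, and the mean is n * alpha / (alpha + beta), which is how text size n enters a size-adjusted word-count model.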
  6. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.02
    0.017451322 = product of:
      0.043628305 = sum of:
        0.032799296 = weight(_text_:u in 3284) [ClassicSimilarity], result of:
          0.032799296 = score(doc=3284,freq=6.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.25064746 = fieldWeight in 3284, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.010829008 = product of:
          0.021658016 = sum of:
            0.021658016 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.021658016 = score(doc=3284,freq=2.0), product of:
                0.1399454 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03996351 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Classifying objects (e.g. fauna, flora, texts) is a task based on human intelligence. Computer science, and artificial intelligence (AI) in particular, studies to what extent tasks that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving specialist problems such as building a chess computer ("Rybka" has been the reigning computer chess world champion since June 2007). To what extent everyday problems can be solved with AI methods remains, in the general case, an open question. Natural language processing, e.g. language understanding, plays an essential role in solving everyday problems. Realizing "common sense" as a machine (as facts and rules in the Cyc knowledge base) has been Lenat's goal since 1984; regarding the AI flagship project "Cyc" there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g. work titles, abstracts, prefaces, tables of contents) is also necessary for the intellectual classification of bibliographic title records or online publications, in order to classify these text objects correctly. Since 2007, the German National Library has classified nearly all of its publications intellectually with the Dewey Decimal Classification (DDC).
     At least since the advent of the World Wide Web, the number of publications to be classified has been growing faster than they can be subject-indexed intellectually. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has included work on automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous production use. The VZG project Colibri/DDC has also been concerned with automatic DDC classification, among other things, since 2006. The investigations and developments in this context serve to answer the research question: "Is it possible to automatically achieve a DDC title classification of all GVK-PLUS title records that is consistent in content?"
    Date
    22. 1.2010 14:41:24
  7. Oberhauser, O.: Automatisches Klassifizieren : Entwicklungsstand - Methodik - Anwendungsbereiche (2005) 0.02
    0.015849102 = product of:
      0.039622754 = sum of:
        0.011835426 = weight(_text_:u in 38) [ClassicSimilarity], result of:
          0.011835426 = score(doc=38,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.0904446 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.01953125 = fieldNorm(doc=38)
        0.027787326 = weight(_text_:o in 38) [ClassicSimilarity], result of:
          0.027787326 = score(doc=38,freq=2.0), product of:
            0.20050845 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03996351 = queryNorm
            0.13858432 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.01953125 = fieldNorm(doc=38)
      0.4 = coord(2/5)
    
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  8. Ruocco, A.S.; Frieder, O.: Clustering and classification of large document bases in a parallel environment (1997) 0.02
    0.015560903 = product of:
      0.07780451 = sum of:
        0.07780451 = weight(_text_:o in 1661) [ClassicSimilarity], result of:
          0.07780451 = score(doc=1661,freq=2.0), product of:
            0.20050845 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03996351 = queryNorm
            0.38803607 = fieldWeight in 1661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1661)
      0.2 = coord(1/5)
    
  9. Oberhauser, O.: Automatisches Klassifizieren und Bibliothekskataloge (2005) 0.02
    0.015560903 = product of:
      0.07780451 = sum of:
        0.07780451 = weight(_text_:o in 4099) [ClassicSimilarity], result of:
          0.07780451 = score(doc=4099,freq=2.0), product of:
            0.20050845 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03996351 = queryNorm
            0.38803607 = fieldWeight in 4099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4099)
      0.2 = coord(1/5)
    
  10. Bianchini, C.; Bargioni, S.: Automated classification using linked open data : a case study on faceted classification and Wikidata (2021) 0.02
    0.015560903 = product of:
      0.07780451 = sum of:
        0.07780451 = weight(_text_:o in 724) [ClassicSimilarity], result of:
          0.07780451 = score(doc=724,freq=2.0), product of:
            0.20050845 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03996351 = queryNorm
            0.38803607 = fieldWeight in 724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.0546875 = fieldNorm(doc=724)
      0.2 = coord(1/5)
    
    Abstract
     This paper presents CCLitBox, a Wikidata gadget for the automated classification of literary authors and works by a faceted classification, using Linked Open Data (LOD). The tool reproduces the classification algorithm of class O Literature of the Colon Classification and uses data freely available in Wikidata to create Colon Classification class numbers. CCLitBox is free and enables any user to classify literary authors and their works; it is easily accessible; it uses LOD from Wikidata, and data missing for classification can be freely added where necessary; and it is ready-made for any cooperative, networked project.
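A tool like CCLitBox can retrieve an author's works from Wikidata with a SPARQL query against the public query service. The sketch below only builds such a query string (Wikidata property P50 = author; the exact queries CCLitBox issues are an assumption, not taken from the paper):

```python
def works_of_author_query(author_qid):
    """Build a SPARQL query for the works of a given Wikidata author.

    Uses the standard Wikidata Query Service prefixes (wdt:, wd:) and the
    label service; author_qid is a Wikidata item ID such as "Q42".
    """
    return (
        "SELECT ?work ?workLabel WHERE {\n"
        f"  ?work wdt:P50 wd:{author_qid} .\n"
        '  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }\n'
        "}"
    )

# Q42 is Douglas Adams on Wikidata; the query can be pasted into
# https://query.wikidata.org/ to run it.
query = works_of_author_query("Q42")
```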
  11. Drori, O.; Alon, N.: Using document classification for displaying search results (2003) 0.01
    0.013337917 = product of:
      0.06668958 = sum of:
        0.06668958 = weight(_text_:o in 1565) [ClassicSimilarity], result of:
          0.06668958 = score(doc=1565,freq=2.0), product of:
            0.20050845 = queryWeight, product of:
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.03996351 = queryNorm
            0.33260235 = fieldWeight in 1565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.017288 = idf(docFreq=795, maxDocs=44218)
              0.046875 = fieldNorm(doc=1565)
      0.2 = coord(1/5)
    
  12. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.01
    0.011362008 = product of:
      0.05681004 = sum of:
        0.05681004 = weight(_text_:u in 32) [ClassicSimilarity], result of:
          0.05681004 = score(doc=32,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.43413407 = fieldWeight in 32, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=32)
      0.2 = coord(1/5)
    
    Footnote
     Also a doctoral dissertation, U Saarbrücken 1985
  13. Reiner, U.: Automatic analysis of DDC notations (2007) 0.01
    0.011362008 = product of:
      0.05681004 = sum of:
        0.05681004 = weight(_text_:u in 118) [ClassicSimilarity], result of:
          0.05681004 = score(doc=118,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.43413407 = fieldWeight in 118, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=118)
      0.2 = coord(1/5)
    
  14. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.01
    0.009468341 = product of:
      0.047341704 = sum of:
        0.047341704 = weight(_text_:u in 3065) [ClassicSimilarity], result of:
          0.047341704 = score(doc=3065,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.3617784 = fieldWeight in 3065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=3065)
      0.2 = coord(1/5)
    
  15. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.01
    0.0075746723 = product of:
      0.03787336 = sum of:
        0.03787336 = weight(_text_:u in 83) [ClassicSimilarity], result of:
          0.03787336 = score(doc=83,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.28942272 = fieldWeight in 83, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=83)
      0.2 = coord(1/5)
    
  16. Pfister, J.: Clustering von Patent-Dokumenten am Beispiel der Datenbanken des Fachinformationszentrums Karlsruhe (2006) 0.01
    0.0075746723 = product of:
      0.03787336 = sum of:
        0.03787336 = weight(_text_:u in 5976) [ClassicSimilarity], result of:
          0.03787336 = score(doc=5976,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.28942272 = fieldWeight in 5976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=5976)
      0.2 = coord(1/5)
    
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  17. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.01
    0.0066278386 = product of:
      0.03313919 = sum of:
        0.03313919 = weight(_text_:u in 1595) [ClassicSimilarity], result of:
          0.03313919 = score(doc=1595,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.25324488 = fieldWeight in 1595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1595)
      0.2 = coord(1/5)
    
    Source
    Advances in classification research, vol.10: proceedings of the 10th ASIS SIG/CR Classification Research Workshop. Ed.: Albrechtsen, H. u. J.E. Mai
  18. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.006497405 = product of:
      0.032487024 = sum of:
        0.032487024 = product of:
          0.06497405 = sum of:
            0.06497405 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.06497405 = score(doc=1046,freq=2.0), product of:
                0.1399454 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03996351 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    5. 5.2003 14:17:22
  19. Reiner, U.: DDC-based search in the data of the German National Bibliography (2008) 0.01
    0.005681004 = product of:
      0.02840502 = sum of:
        0.02840502 = weight(_text_:u in 2166) [ClassicSimilarity], result of:
          0.02840502 = score(doc=2166,freq=2.0), product of:
            0.13085829 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03996351 = queryNorm
            0.21706703 = fieldWeight in 2166, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=2166)
      0.2 = coord(1/5)
    
  20. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    0.005414504 = product of:
      0.02707252 = sum of:
        0.02707252 = product of:
          0.05414504 = sum of:
            0.05414504 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.05414504 = score(doc=2748,freq=2.0), product of:
                0.1399454 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03996351 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    1. 2.2016 18:25:22

Languages

  • e 26
  • d 14

Types

  • a 31
  • el 6
  • m 3
  • r 2
  • d 1
  • s 1
  • x 1