Search (68 results, page 1 of 4)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.04
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
  2. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.03
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
  3. Schaalje, G.B.; Blades, N.J.; Funai, T.: An open-set size-adjusted Bayesian classifier for authorship attribution (2013) 0.02
    Abstract
    Recent studies of authorship attribution have used machine-learning methods including regularized multinomial logistic regression, neural nets, support vector machines, and the nearest shrunken centroid classifier to identify likely authors of disputed texts. These methods are all limited by an inability to perform open-set classification and account for text and corpus size. We propose a customized Bayesian logit-normal-beta-binomial classification model for supervised authorship attribution. The model is based on the beta-binomial distribution with an explicit inverse relationship between extra-binomial variation and text size. The model internally estimates the relationship of extra-binomial variation to text size, and uses Markov Chain Monte Carlo (MCMC) to produce distributions of posterior authorship probabilities instead of point estimates. We illustrate the method by training the machine-learning methods as well as the open-set Bayesian classifier on undisputed papers of The Federalist, and testing the method on documents historically attributed to Alexander Hamilton, John Jay, and James Madison. The Bayesian classifier was the best classifier of these texts.
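    For orientation only, a rough sketch of the idea behind this record: the snippet below fits a per-word beta-binomial model for each candidate author by the method of moments and scores a disputed text under a uniform prior. This is a simplified plug-in illustration, not the authors' logit-normal-beta-binomial MCMC model; all function and variable names are hypothetical, and SciPy's betabinom distribution (available since SciPy 1.4) is assumed.

      import numpy as np
      from scipy.stats import betabinom

      def fit_beta_binomial(word_counts, doc_lengths):
          # Method-of-moments estimate of (alpha, beta) for one word across an
          # author's training documents, matching the observed per-document
          # relative frequencies to a beta distribution.
          p = np.asarray(word_counts) / np.asarray(doc_lengths)
          m = min(max(p.mean(), 1e-6), 1 - 1e-6)  # keep mean strictly inside (0, 1)
          v = max(p.var(), 1e-9)                  # clip zero variance
          s = max(m * (1.0 - m) / v - 1.0, 1e-3)  # implied concentration alpha+beta
          return m * s, (1.0 - m) * s

      def author_log_scores(author_params, k_new, n_new):
          # Unnormalized log posterior over candidate authors for a disputed
          # text (word counts k_new, total length n_new), assuming a uniform
          # prior and independent beta-binomial word models. author_params maps
          # each author name to a list of fitted (alpha, beta) pairs, one per word.
          return {name: sum(betabinom.logpmf(k, n_new, a, b)
                            for k, (a, b) in zip(k_new, params))
                  for name, params in author_params.items()}

    Exponentiating and normalizing the returned log scores gives plug-in posterior authorship probabilities; the paper instead integrates over parameter uncertainty with MCMC and models the extra-binomial variation as an explicit function of text size.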
  4. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    Pages
    S.1-22
    Source
    Klassifikation und Ordnung. Tagungsband 12. Jahrestagung der Gesellschaft für Klassifikation, Darmstadt 17.-19.3.1988. Hrsg.: R. Wille
  5. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    Date
    5. 5.2003 14:17:22
    Source
    Journal of library administration. 34(2001) nos.3/4, S.221-228
  6. Walther, R.: Möglichkeiten und Grenzen automatischer Klassifikationen von Web-Dokumenten (2001) 0.01
    Date
    4. 5.2003 19:55:46
  7. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.01
    Source
    D-Lib magazine. 4(1998) no.1
  8. Puzicha, J.: Informationen finden! : Intelligente Suchmaschinentechnologie & automatische Kategorisierung (2007) 0.01
    Abstract
    As explained in this text, the effectiveness of search and classification systems is determined by the following: 1) the task at hand, 2) the accuracy of the system, 3) the degree of automation to be achieved, and 4) the ease of integration into existing systems. These criteria assume that every system, regardless of its technology, is able to meet the product's basic requirements with respect to functionality, scalability, and input method. These product characteristics are described in more detail in the Recommind product literature. Building on these capabilities, however, the preceding discussion should have revealed some clear trends. It is not surprising that recent developments in machine learning and other areas of computer science provide a theoretical starting point for the development of search engine and classification technology. In particular, recent advances in statistical methods (PLSA) and other mathematical tools (SVMs) have achieved breakthrough-level result quality. Added to this are the flexibility in application that the self-training and category recognition of PLSA systems provide, as well as a new generation of previously unattained productivity gains.
    Type
    r
  9. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.01
    Date
    28.10.2013 19:22:57
  10. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.01
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective
    Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
    Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
    Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
    Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
    Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
    Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
    Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
    Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing
    Ev Brenner (New York, NY) - Chairman
    James Callan (University of Massachusetts, MA)
    Marc Krellenstein (Northern Light Technology, Cambridge, MA)
    Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
    Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
    Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
    Intelligent Agents - John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
    Text summarization - Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
    Cross language searching - Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
    Video search and retrieval - Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
    Speech recognition - Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
    Visualization - James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
    Text mining - David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  11. Wu, M.; Fuller, M.; Wilkinson, R.: Using clustering and classification approaches in interactive retrieval (2001) 0.01
  12. Reiner, U.: VZG-Projekt Colibri : Bewertung von automatisch DDC-klassifizierten Titeldatensätzen der Deutschen Nationalbibliothek (DNB) (2009) 0.01
    Abstract
    Since 2003, the VZG project Colibri/DDC has been concerned with automatic methods for Dewey Decimal Classification (DDC). The project's goal is uniform DDC indexing of bibliographic title records and support for both DDC experts and DDC laypersons, e.g. in the analysis and synthesis of DDC notations, their quality control, and DDC-based searching. The present report concentrates on the first larger automatic DDC classification run and the first automatic and intellectual evaluation with the classification component vc_dcl1. It is based on the 25,653 title records (12 weekly/monthly deliveries) of series A, B, and H of the German National Bibliography, made available by the German National Library (DNB) in November 2007. After the automatic DDC classification and automatic evaluation are explained in chapter 2, chapter 3 discusses the DNB report "Colibri_Auswertung_DDC_Endbericht_Sommer_2008". Facts are clarified and questions are raised whose answers will set the course for the further classification tests. Chapter 4 adds considerations, going beyond chapter 3, on continuing the automatic DDC classification. The report serves a deeper understanding of the automatic methods.
    Type
    r
  13. Dubin, D.: Dimensions and discriminability (1998) 0.01
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  14. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    Date
    22. 9.2008 18:31:54
    Source
    International cataloguing and bibliographic control. 36(2007) no.4, S.78-82
  15. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.01
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  16. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    Date
    4. 8.2015 19:22:04
  17. Sojka, P.; Lee, M.; Rehurek, R.; Hatlapatka, R.; Kucbel, M.; Bouche, T.; Goutorbe, C.; Anghelache, R.; Wojciechowski, K.: Toolset for entity and semantic associations : Final Release (2013) 0.01
  18. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.01
    Type
    r
  19. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.01
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
  20. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.01
    Source
    D-Lib magazine. 4(1998) no.1, xx S

Languages

  • e 54
  • d 14

Types

  • a 54
  • el 13
  • r 4
  • x 2
  • m 1