Search (53 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.19
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  2. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.02
    Abstract
    We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded in singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between the latent semantic indexing (LSI) term subspace and the LSI document subspace. LSISSM performs feature reduction and finds a low-rank approximation of scalable, sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution ranking mechanism in LSISSM also improves the initialization of standard K-means compared with the random seeding procedure, which sometimes causes low clustering efficiency and effectiveness. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means.
    Date
    23. 3.2013 13:22:36
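    The abstract above boils down to a truncated SVD (LSI) of a sparse term-document matrix followed by K-means clustering in the reduced space. A minimal sketch of that generic LSI-plus-K-means pipeline follows; it does not reproduce the authors' LSISSM signature matching or two-stage initialization, and the toy corpus, the number of latent dimensions, and the use of scikit-learn are assumptions made here for illustration.

    # Minimal sketch: LSI (truncated SVD) followed by K-means clustering.
    # Toy corpus and parameters are hypothetical; this is not the LSISSM method itself.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    docs = [
        "latent semantic indexing of text documents",
        "singular value decomposition for term document matrices",
        "k-means clustering of web pages",
        "self-organizing maps for document clustering",
    ]

    # Sparse term-document representation (documents x terms).
    tfidf = TfidfVectorizer().fit_transform(docs)

    # Low-rank approximation: keep only the top-ranking latent dimensions.
    lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

    # Cluster documents in the reduced LSI space instead of the raw term space.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsi)
    print(labels)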
  3. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.02
    Date
    22. 9.2008 18:31:54
  4. Li, T.; Zhu, S.; Ogihara, M.: Hierarchical document classification using automatically generated hierarchy (2007) 0.01
    Source
    Journal of intelligent information systems. 29(2007) no.2, S.211-230
  5. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20, 1999, Boston, Mass. (1999) 0.01
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective
    • Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
    • Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
    • Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
    • Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
    • Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
    • Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
    • Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
    • Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing
    • Ev Brenner (New York, NY), Chairman
    • James Callan (University of Massachusetts, MA)
    • Marc Krellenstein (Northern Light Technology, Cambridge, MA)
    • Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
    • Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
    • Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
    • Intelligent Agents: John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
    • Text summarization: Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
    • Cross-language searching: Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
    • Video search and retrieval: Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
    • Speech recognition: Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
    • Visualization: James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
    • Text mining: David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  6. Khoo, C.S.G.; Ng, K.; Ou, S.: ¬An exploratory study of human clustering of Web pages (2003) 0.01
    Date
    12. 9.2004 9:56:22
  7. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    Date
    5. 5.2003 14:17:22
  8. Sparck Jones, K.: Automatic classification (1976) 0.01
  9. Schek, M.: Automatische Klassifizierung in Erschließung und Recherche eines Pressearchivs (2006) 0.01
    Abstract
    Since its founding in 1945, the Süddeutsche Zeitung (SZ) has maintained a press archive that documents the texts of its own editors as well as numerous national and international publications and makes them available for research. The DIZ press database (www.medienport.de) offers browser-based searching for editors and external customers on the intranet and Internet, plus customer-specific content feeds for publishers, broadcasters and portals. The DIZ press database currently holds 7.8 million articles, each retrievable as HTML or PDF. Around 3,500 articles are added every day, of which about 1,000 are indexed intellectually by documentalists. At DIZ, subject indexing is done not by assigning descriptors to individual documents but by linking articles to "virtual folders", the dossiers. In total the DIZ press database contains about 90,000 dossiers, which are interlinked to form the "DIZ knowledge net". DIZ regards this knowledge net as its unique selling point and devotes considerable staff resources to keeping the dossiers current and assuring their quality. In the course of the media crisis, DIZ had to meet the challenge of preserving the quality of subject indexing on the input side while editorial indexing capacity was shrinking. On the output side, a demanding audience - including the editors of the Süddeutsche Zeitung - has to be supplied precisely and promptly with the information it needs for its daily work. Starting from this situation in the documentation department of the Süddeutsche Zeitung, DIZ identified three approaches for reducing the effort on the input side (indexing) while marketing the knowledge net more effectively on the output side (retrieval): (semi-)automatic classification of press texts (a suggestion system), visualization of the knowledge net, and new retrieval options (similarity search, clustering). For visualization, DIZ relies on the Net-Navigator from intelligent views, an interactive visualization of general graphs based on a physical model. For automatic classification, similarity search and clustering, DIZ has chosen the product nextBot from the company Brainbot.
  10. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    Date
    22. 8.2009 12:54:24
  11. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    Date
    1. 2.2016 18:25:22
  12. Yu, W.; Gong, Y.: Document clustering by concept factorization (2004) 0.01
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  13. Kwon, O.W.; Lee, J.H.: Text categorization based on k-nearest neighbor approach for web site classification (2003) 0.00
    Abstract
    Automatic categorization is a viable method for dealing with the scaling problem on the World Wide Web. For Web site classification, this paper proposes using Web pages linked with the home page, rather than the home page alone as in previous research. To implement our proposed method, we derive a scheme for Web site classification based on the k-nearest neighbor (k-NN) approach. It consists of three phases: Web page selection (connectivity analysis), Web page classification, and Web site classification. Given a Web site, the Web page selection chooses several representative Web pages using connectivity analysis. The k-NN classifier then classifies each of the selected Web pages. Finally, the classified Web pages are extended to a classification of the entire Web site. To improve performance, we supplement the k-NN approach with a feature selection method and a term weighting scheme using markup tags, and also reform its document-document similarity measure. In experiments on a Korean commercial Web directory, the proposed system, using both a home page and its linked pages, improved the micro-averaging breakeven point by 30.02% compared with an ordinary classification that uses the home page only.
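    The scheme in the abstract above ultimately rests on a k-nearest-neighbor classifier over weighted page text. The sketch below shows only that k-NN classification step; the toy training pages and labels, the plain TF-IDF weighting, and the use of scikit-learn are assumptions for illustration, and the paper's connectivity analysis, markup-tag weighting, and modified similarity measure are not reproduced.

    # Minimal sketch: k-NN text classification of (already selected) web pages.
    # Training pages and labels are hypothetical placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import KNeighborsClassifier

    train_pages = [
        "online shop electronics prices shipping cart",
        "buy books bestsellers discount order delivery",
        "university research department faculty courses",
        "lecture notes syllabus exam schedule campus",
    ]
    train_labels = ["commerce", "commerce", "education", "education"]

    # Plain TF-IDF features stand in for the paper's markup-tag weighting scheme.
    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(train_pages)

    # Cosine distance is a common choice for sparse text vectors.
    knn = KNeighborsClassifier(n_neighbors=3, metric="cosine").fit(X_train, train_labels)

    new_page = ["course registration and exam timetable"]
    print(knn.predict(vectorizer.transform(new_page)))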
  14. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    Pages
    S.1-22
  15. Dubin, D.: Dimensions and discriminability (1998) 0.00
    Date
    22. 9.1997 19:16:05
  16. Automatic classification research at OCLC (2002) 0.00
    Date
    5. 5.2003 9:22:09
  17. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.00
    Date
    1. 8.1996 22:08:06
  18. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.00
    Date
    22. 7.2006 16:24:52
  19. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.00
  20. Shen, D.; Chen, Z.; Yang, Q.; Zeng, H.J.; Zhang, B.; Lu, Y.; Ma, W.Y.: Web page classification through summarization (2004) 0.00
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.

Languages

  • e 45
  • d 7
  • a 1

Types

  • a 46
  • el 10
  • r 1