Search (71 results, page 1 of 4)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10417418 = sum of:
      0.082946934 = product of:
        0.2488408 = sum of:
          0.2488408 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2488408 = score(doc=562,freq=2.0), product of:
              0.442763 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052224867 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.021227246 = product of:
        0.042454492 = sum of:
          0.042454492 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042454492 = score(doc=562,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
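The indented numeric blocks under each result are Lucene relevance "explain" trees. As a minimal sketch, assuming Lucene's standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), coord = fraction of matching clauses), the top score can be reproduced from the values printed above:

```python
import math

# Minimal sketch of Lucene ClassicSimilarity scoring, reproducing result 1's
# score from the explain values above. Formulas assumed from Lucene's docs:
#   weight = (idf * queryNorm) * (tf * idf * fieldNorm),
#   coord(k/n) = fraction of a clause's subclauses that matched.

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(24, 44218) ~ 8.478011

def term_weight(freq, idf_value, query_norm, field_norm):
    tf = math.sqrt(freq)                        # tf(freq=2.0) = 1.4142135
    query_weight = idf_value * query_norm       # e.g. 8.478011 * 0.052224867 = 0.442763
    field_weight = tf * idf_value * field_norm  # e.g. 1.4142135 * 8.478011 * 0.046875
    return query_weight * field_weight

QUERY_NORM, FIELD_NORM = 0.052224867, 0.046875

w_3a = term_weight(2.0, idf(24, 44218), QUERY_NORM, FIELD_NORM)    # ~0.2488408
w_22 = term_weight(2.0, idf(3622, 44218), QUERY_NORM, FIELD_NORM)  # ~0.0424545

score = w_3a * (1 / 3) + w_22 * (1 / 2)  # coord(1/3) and coord(1/2)
print(round(score, 6))  # ~0.104174, the 0.10 shown next to result 1
```

The same arithmetic, with different idf, fieldNorm, and coord values, accounts for every score in the list below.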
  2. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.04
    0.04267995 = product of:
      0.0853599 = sum of:
        0.0853599 = sum of:
          0.03582966 = weight(_text_:technology in 5273) [ClassicSimilarity], result of:
            0.03582966 = score(doc=5273,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.23034787 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
          0.04953024 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
            0.04953024 = score(doc=5273,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.2708308 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.431-442
  3. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.04
    0.036582813 = product of:
      0.073165625 = sum of:
        0.073165625 = sum of:
          0.030711137 = weight(_text_:technology in 2760) [ClassicSimilarity], result of:
            0.030711137 = score(doc=2760,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.19744103 = fieldWeight in 2760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.046875 = fieldNorm(doc=2760)
          0.042454492 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
            0.042454492 = score(doc=2760,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.23214069 = fieldWeight in 2760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2760)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
  4. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.04
    0.036582813 = product of:
      0.073165625 = sum of:
        0.073165625 = sum of:
          0.030711137 = weight(_text_:technology in 690) [ClassicSimilarity], result of:
            0.030711137 = score(doc=690,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.19744103 = fieldWeight in 690, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.046875 = fieldNorm(doc=690)
          0.042454492 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
            0.042454492 = score(doc=690,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.23214069 = fieldWeight in 690, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=690)
      0.5 = coord(1/2)
    
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  5. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.04
    0.036582813 = product of:
      0.073165625 = sum of:
        0.073165625 = sum of:
          0.030711137 = weight(_text_:technology in 2158) [ClassicSimilarity], result of:
            0.030711137 = score(doc=2158,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.19744103 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
          0.042454492 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
            0.042454492 = score(doc=2158,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.23214069 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
      0.5 = coord(1/2)
    
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  6. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.03
    0.030485678 = product of:
      0.060971357 = sum of:
        0.060971357 = sum of:
          0.025592614 = weight(_text_:technology in 2765) [ClassicSimilarity], result of:
            0.025592614 = score(doc=2765,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.16453418 = fieldWeight in 2765, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
          0.035378743 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
            0.035378743 = score(doc=2765,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.19345059 = fieldWeight in 2765, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
  7. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.03
    0.030485678 = product of:
      0.060971357 = sum of:
        0.060971357 = sum of:
          0.025592614 = weight(_text_:technology in 1107) [ClassicSimilarity], result of:
            0.025592614 = score(doc=1107,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.16453418 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
          0.035378743 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
            0.035378743 = score(doc=1107,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.19345059 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
      0.5 = coord(1/2)
    
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
  8. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.021227246 = product of:
      0.042454492 = sum of:
        0.042454492 = product of:
          0.084908985 = sum of:
            0.084908985 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.084908985 = score(doc=1046,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  9. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.017689371 = product of:
      0.035378743 = sum of:
        0.035378743 = product of:
          0.070757486 = sum of:
            0.070757486 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.070757486 = score(doc=611,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  10. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.017689371 = product of:
      0.035378743 = sum of:
        0.035378743 = product of:
          0.070757486 = sum of:
            0.070757486 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.070757486 = score(doc=2748,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  11. Savic, D.: Designing an expert system for classifying office documents (1994) 0.01
    0.014477369 = product of:
      0.028954739 = sum of:
        0.028954739 = product of:
          0.057909478 = sum of:
            0.057909478 = weight(_text_:technology in 2655) [ClassicSimilarity], result of:
              0.057909478 = score(doc=2655,freq=4.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.3722984 = fieldWeight in 2655, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2655)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Can records management benefit from artificial intelligence technology, in particular from expert systems? Gives an answer to this question by presenting an example of a small-scale prototype project in the automatic classification of office documents. Project methodology and the basic elements of an expert-system approach are elaborated to give guidelines to potential users of this promising technology.
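A minimal sketch of the rule-based flavor of such an expert system, with an invented keyword knowledge base (the abstract does not describe the prototype's actual rules):

```python
import re

# Invented keyword rules standing in for the expert system's knowledge base.
RULES = [
    ({"invoice", "payment", "amount"}, "accounting"),
    ({"agenda", "minutes", "meeting"}, "meetings"),
    ({"contract", "clause", "party"}, "legal"),
]

def classify(text, threshold=2):
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    # Fire the rule whose keyword set overlaps the document most.
    score, label = max((len(kw & tokens), lab) for kw, lab in RULES)
    return label if score >= threshold else "unclassified"

print(classify("Please find the invoice; payment of the full amount is due."))
# -> accounting
```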
  12. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.01
    0.014477369 = product of:
      0.028954739 = sum of:
        0.028954739 = product of:
          0.057909478 = sum of:
            0.057909478 = weight(_text_:technology in 2822) [ClassicSimilarity], result of:
              0.057909478 = score(doc=2822,freq=4.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.3722984 = fieldWeight in 2822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2822)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The research report "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" by Xueying Zhang has been published in English in book form. In her work, Zhang developed a method based on rough set theory that establishes relationships between subject headings from different vocabularies. She was a staff member of the IZ from 2003 to 2005 and has been an Associate Professor at the Nanjing University of Science and Technology since October 2005.
    Footnote
    Nanjing University of Science and Technology, Diss.
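Zhang's method builds on rough set theory; as a minimal sketch of the underlying idea, assuming the standard definitions of lower and upper approximations (the documents and groupings here are invented for illustration):

```python
def lower_upper(partition, target):
    # partition: equivalence classes of documents indiscernible under vocabulary A
    # target: the set of documents indexed with a heading from vocabulary B
    lower = [b for b in partition if b <= target]  # classes certainly inside the heading
    upper = [b for b in partition if b & target]   # classes possibly inside the heading
    union = lambda blocks: set().union(*blocks) if blocks else set()
    return union(lower), union(upper)

# Invented data: d1..d5 are documents, grouped by shared descriptors in vocabulary A.
partition = [{"d1", "d2"}, {"d3"}, {"d4", "d5"}]
target = {"d1", "d2", "d4"}
print(lower_upper(partition, target))  # lower: {d1, d2}; upper: {d1, d2, d4, d5}
```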
  13. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.01238256 = product of:
      0.02476512 = sum of:
        0.02476512 = product of:
          0.04953024 = sum of:
            0.04953024 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.04953024 = score(doc=141,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
  14. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.01238256 = product of:
      0.02476512 = sum of:
        0.02476512 = product of:
          0.04953024 = sum of:
            0.04953024 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.04953024 = score(doc=2338,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  15. Automatic classification research at OCLC (2002) 0.01
    0.01238256 = product of:
      0.02476512 = sum of:
        0.02476512 = product of:
          0.04953024 = sum of:
            0.04953024 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.04953024 = score(doc=1563,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
  16. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.01238256 = product of:
      0.02476512 = sum of:
        0.02476512 = product of:
          0.04953024 = sum of:
            0.04953024 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.04953024 = score(doc=1673,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  17. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.01238256 = product of:
      0.02476512 = sum of:
        0.02476512 = product of:
          0.04953024 = sum of:
            0.04953024 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.04953024 = score(doc=2560,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
  18. Adams, K.C.: Word wranglers : Automatic classification tools transform enterprise documents from "bags of words" into knowledge resources (2003) 0.01
    0.011081928 = product of:
      0.022163857 = sum of:
        0.022163857 = product of:
          0.044327714 = sum of:
            0.044327714 = weight(_text_:technology in 1665) [ClassicSimilarity], result of:
              0.044327714 = score(doc=1665,freq=6.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2849816 = fieldWeight in 1665, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1665)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Taxonomies are an important part of any knowledge management (KM) system, and automatic classification software is emerging as a "killer app" for consumer and enterprise portals. A number of companies such as Inxight Software, Mohomine, Metacode, and others claim to interpret the semantic content of any textual document and automatically classify text on the fly. The promise that software could automatically produce a Yahoo-style directory is a siren call not many IT managers are able to resist. KM needs have grown more complex due to the increasing amount of digital information, the declining effectiveness of keyword searching, and heterogeneous document formats in corporate databases. This environment requires innovative KM tools, and automatic classification technology is an example of this new kind of software. These products can be divided into three categories according to their underlying technology - rules-based, catalog-by-example, and statistical clustering. Evolving trends in this market include framing classification as a cyborg (computer- and human-based) activity and the increasing use of extensible markup language (XML) and support vector machine (SVM) technology. In this article, we'll survey the rapidly changing automatic classification software market and examine the features and capabilities of leading classification products.
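Of the three product categories the article names, the statistical one is the easiest to illustrate in code. A minimal sketch of an SVM text classifier, using scikit-learn and invented toy data (neither appears in the article):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy corpus standing in for an enterprise document collection.
docs = ["invoice payment due", "quarterly revenue report",
        "server outage incident", "patch deployment failed"]
labels = ["finance", "finance", "it", "it"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())  # tf-idf features + linear SVM
clf.fit(docs, labels)
print(clf.predict(["database outage after deployment"]))  # expected: ['it']
```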
  19. Wang, J.: ¬An extensive study on automated Dewey Decimal Classification (2009) 0.01
    0.011081928 = product of:
      0.022163857 = sum of:
        0.022163857 = product of:
          0.044327714 = sum of:
            0.044327714 = weight(_text_:technology in 3172) [ClassicSimilarity], result of:
              0.044327714 = score(doc=3172,freq=6.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2849816 = fieldWeight in 3172, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3172)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we present a theoretical analysis and extensive experiments on the automated assignment of Dewey Decimal Classification (DDC) classes to bibliographic data with a supervised machine-learning approach. Library classification systems, such as the DDC, impose great obstacles on state-of-the-art text categorization (TC) technologies, including deep hierarchy, data sparseness, and skewed distribution. We first analyze statistically the document and category distributions over the DDC, and discuss the obstacles imposed by bibliographic corpora and library classification schemes on TC technology. To overcome these obstacles, we propose an innovative algorithm to reshape the DDC structure into a balanced virtual tree by balancing the category distribution and flattening the hierarchy. To improve the classification effectiveness to a level acceptable to real-world applications, we propose an interactive classification model that is able to predict a class of any depth within a limited number of user interactions. The experiments are conducted on a large bibliographic collection created by the Library of Congress within the science and technology domains over 10 years. With no more than three interactions, a classification accuracy of nearly 90% is achieved, thus providing a practical solution to the automatic bibliographic classification problem.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.11, S.2269-2286
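A minimal sketch of the flattening step the abstract describes: collapsing a deep DDC-like hierarchy into virtual classes capped at a target size. The tree fragment and cap are invented, and the paper's actual balancing algorithm is more involved:

```python
def size(node):
    # Total documents in a subtree.
    return node.get("docs", 0) + sum(size(c) for c in node.get("children", []))

def flatten(node, cap, prefix=""):
    # Collapse any subtree holding at most `cap` documents into a single
    # virtual class, yielding (class_label, document_count) pairs.
    label = prefix + node["id"]
    if size(node) <= cap or not node.get("children"):
        yield label, size(node)  # oversized leaves stay as-is (skewed distributions)
    else:
        if node.get("docs"):
            yield label, node["docs"]  # documents attached directly to this class
        for child in node["children"]:
            yield from flatten(child, cap, label + "/")

# Invented DDC-like fragment.
tree = {"id": "0", "docs": 5, "children": [
    {"id": "00", "docs": 40},
    {"id": "01", "docs": 10, "children": [{"id": "010", "docs": 200}]},
]}
print(list(flatten(tree, cap=50)))
# [('0', 5), ('0/00', 40), ('0/01', 10), ('0/01/010', 200)]
```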
  20. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.010613623 = product of:
      0.021227246 = sum of:
        0.021227246 = product of:
          0.042454492 = sum of:
            0.042454492 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.042454492 = score(doc=3051,freq=2.0), product of:
                0.18288259 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052224867 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 19:51:28

Languages

  • e (English) 67
  • d (German) 4

Types

  • a 65
  • el 6
  • m 1
  • r 1