Search (4 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Automatisches Klassifizieren"
  • year_i:[2010 TO 2020}
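
  The three active facet restrictions above are ordinary Solr/Lucene filter clauses; the curly brace in year_i:[2010 TO 2020} marks an exclusive upper bound. As an illustration only, the sketch below shows how such filters could be sent to a generic Solr select handler. The core URL and the main query string are placeholders (neither appears on this page), and debugQuery=true is the standard parameter that produces score explanations like the ones shown under each hit.

    import requests  # generic HTTP client; any Solr client library would do

    # Hypothetical endpoint -- the actual Solr core URL is not part of this page.
    SOLR_SELECT = "http://localhost:8983/solr/mycore/select"

    params = {
        "q": "*:*",  # placeholder; the original query terms are not shown here
        "fq": [
            'language_ss:"e"',
            'theme_ss:"Automatisches Klassifizieren"',
            "year_i:[2010 TO 2020}",  # 2010 inclusive, 2020 exclusive
        ],
        "debugQuery": "true",  # ask Solr to return per-document score explanations
        "wt": "json",
    }

    response = requests.get(SOLR_SELECT, params=params)
    print(response.json()["response"]["numFound"])  # would report 4 for this page
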
  1. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    0.0066457465 = product of:
      0.03987448 = sum of:
        0.02662841 = weight(_text_:internet in 2158) [ClassicSimilarity], result of:
          0.02662841 = score(doc=2158,freq=4.0), product of:
            0.09621047 = queryWeight, product of:
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.032588977 = queryNorm
            0.27677247 = fieldWeight in 2158, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.01324607 = product of:
          0.02649214 = sum of:
            0.02649214 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.02649214 = score(doc=2158,freq=2.0), product of:
                0.11412105 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032588977 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.16666667 = coord(2/12)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and to apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in two key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to use a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of the project, we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
    Theme
    Internet
  2. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.00
    0.001839732 = product of:
      0.022076784 = sum of:
        0.022076784 = product of:
          0.044153567 = sum of:
            0.044153567 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.044153567 = score(doc=2748,freq=2.0), product of:
                0.11412105 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032588977 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.083333336 = coord(1/12)
    
    Date
    1. 2.2016 18:25:22
  3. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.00
    0.0011038391 = product of:
      0.01324607 = sum of:
        0.01324607 = product of:
          0.02649214 = sum of:
            0.02649214 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.02649214 = score(doc=690,freq=2.0), product of:
                0.11412105 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032588977 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.083333336 = coord(1/12)
    
    Date
    23. 3.2013 13:22:36
  4. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.00
    9.19866E-4 = product of:
      0.011038392 = sum of:
        0.011038392 = product of:
          0.022076784 = sum of:
            0.022076784 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.022076784 = score(doc=1107,freq=2.0), product of:
                0.11412105 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032588977 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.083333336 = coord(1/12)
    
    Date
    28.10.2013 19:22:57
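
  The indented score breakdowns above are Lucene ClassicSimilarity "explain" trees: each matched term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(termFreq) x idf x fieldNorm), and coord(m/n) scales a clause by the fraction of its subclauses that matched. A minimal Python sketch, using only the factors printed for result 1 (doc 2158), re-derives its score; the function name is illustrative and not part of Lucene's API.

    import math

    def term_weight(freq, idf, query_norm, field_norm):
        """ClassicSimilarity per-term contribution as printed in the explain tree:
        queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm."""
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.032588977  # shared queryNorm from the explanations above

    # Result 1, doc 2158: the "internet" clause and the nested "22" clause.
    w_internet = term_weight(freq=4.0, idf=2.9522398,
                             query_norm=QUERY_NORM, field_norm=0.046875)
    w_22 = term_weight(freq=2.0, idf=3.5018296,
                       query_norm=QUERY_NORM, field_norm=0.046875)

    # The "22" clause is wrapped in coord(1/2); the document total in coord(2/12).
    score = (w_internet + 0.5 * w_22) * (2.0 / 12.0)
    print(f"{score:.10f}")  # ~0.0066457465, the value listed for result 1

  Results 2 through 4 follow the same pattern with only the single "22" clause (and smaller fieldNorm values for result 4), which is why their totals are roughly an order of magnitude lower and round to 0.00 in the list.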