Search (95 results, page 1 of 5)

  • theme_ss:"Automatisches Klassifizieren"
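The decimal value at the end of each entry heading is a Lucene ClassicSimilarity (TF-IDF) relevance score; entry 1 keeps a condensed breakdown showing how such a score decomposes into per-term weights. Those weights can be recomputed from the quantities in the breakdown. The sketch below uses the documented ClassicSimilarity formulas and is an illustration only, not this system's code:

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq: float, doc_freq: int, max_docs: int,
                    query_norm: float, field_norm: float) -> float:
        tf = math.sqrt(freq)                               # tf = sqrt(term frequency)
        query_weight = idf(doc_freq, max_docs) * query_norm
        field_weight = tf * idf(doc_freq, max_docs) * field_norm
        return query_weight * field_weight

    # Reproduce weight(_text_:k in 1566) from entry 1's breakdown:
    print(term_weight(freq=2.0, doc_freq=3384, max_docs=44218,
                      query_norm=0.036501996, field_norm=0.046875))
    # ~0.030835807; the entry score then sums such weights and applies coord()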
  1. Chung, Y.-M.; Noh, Y.-H.: Developing a specialized directory system by automatically classifying Web documents (2003) 0.04
    Score breakdown (Lucene ClassicSimilarity; queryNorm = 0.036501996; each term weight = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm)
    0.03716839 = 0.6666667 coord(2/3) × sum of:
      0.030835807 = weight(_text_:k in 1566): freq=2.0, idf=3.569778 (docFreq=3384, maxDocs=44218), fieldNorm=0.046875
      0.014935959 = weight(_text_:h in 1566): freq=2.0, idf=2.4844491 (docFreq=10020, maxDocs=44218), fieldNorm=0.046875
      0.009980817 = 0.33333334 coord(1/3) × 0.029942451 weight(_text_:29 in 1566): freq=2.0, idf=3.5176873 (docFreq=3565, maxDocs=44218), fieldNorm=0.046875
    Abstract
    This study developed a specialized directory system using an automatic classification technique. Economics was selected as the subject field for the classification experiments with Web documents. The classification scheme of the directory follows the DDC, and subject terms representing each class number or subject category were selected from the DDC table to construct a representative term dictionary. In collecting and classifying the Web documents, various strategies were tested in order to find the optimal thresholds. In the classification experiments, Web documents in economics were classified into a total of 757 hierarchical subject categories built from the DDC scheme. The first and second experiments using the representative term dictionary resulted in relatively high precision ratios of 77% and 60%, respectively. The third experiment, employing a machine learning-based k-nearest neighbours (kNN) classifier in a closed experimental setting, achieved a precision ratio of 96%. This implies that it is possible to enhance the classification performance by applying a hybrid method combining a dictionary-based technique and a kNN classifier.
    Source
    Journal of information science. 29(2003) no.2, pp.117-126
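    The representative-term-dictionary step described in the abstract above can be pictured with a small sketch. The DDC classes, term sets, threshold, and function names below are invented for illustration and are not taken from the paper:

      from collections import Counter

      # Hypothetical representative-term dictionary: DDC class -> subject terms
      TERM_DICT = {
          "330": {"economics", "economy", "market", "demand"},
          "332": {"finance", "banking", "credit", "interest"},
      }

      def classify(tokens: list[str], threshold: float = 0.05) -> str | None:
          counts = Counter(tokens)
          total = sum(counts.values())
          # Score each class by the share of document tokens matching its term set
          scores = {cls: sum(counts[t] for t in terms) / total
                    for cls, terms in TERM_DICT.items()}
          best = max(scores, key=scores.get)
          # Reject documents scoring below the threshold, mirroring the threshold tuning above
          return best if scores[best] >= threshold else None

      print(classify("the banking sector extends credit at high interest".split()))  # 332

    A kNN classifier, as in the paper's third experiment, could then rescore documents the dictionary assigns with low confidence.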
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.03
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.03
  4. Kwon, O.W.; Lee, J.H.: Text categorization based on k-nearest neighbor approach for web site classification (2003) 0.02
    Abstract
    Automatic categorization is a viable method to deal with the scaling problem on the World Wide Web. For Web site classification, this paper proposes using the Web pages linked from the home page, rather than the home page alone as in previous research. To implement the proposed method, we derive a scheme for Web site classification based on the k-nearest neighbor (k-NN) approach. It consists of three phases: Web page selection (connectivity analysis), Web page classification, and Web site classification. Given a Web site, the Web page selection chooses several representative Web pages using connectivity analysis. The k-NN classifier next classifies each of the selected Web pages. Finally, the classified Web pages are extended to a classification of the entire Web site. To improve performance, we supplement the k-NN approach with a feature selection method and a term weighting scheme using markup tags, and also revise its document-document similarity measure. In our experiments on a Korean commercial Web directory, the proposed system, using both a home page and its linked pages, improved the micro-averaged breakeven point by 30.02% compared with an ordinary classification that uses the home page only.
    Date
    27.12.2007 17:32:29
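    The three-phase scheme in the abstract above maps onto a small pipeline. In this sketch, phases 1 and 2 are crude stand-ins (the paper uses connectivity analysis and a k-NN classifier there); only the overall control flow is meant to be illustrative:

      from collections import Counter

      def select_pages(site: dict) -> list[str]:
          # Phase 1 (stub): connectivity analysis would pick representative pages;
          # here we simply take the home page plus its directly linked pages.
          return [site["home"]] + site["linked"]

      def classify_page(text: str) -> str:
          # Phase 2 (stub): the paper uses a k-NN classifier; a keyword rule stands in.
          return "shopping" if "buy" in text else "news"

      def classify_site(site: dict) -> str:
          # Phase 3: extend the page-level labels to one site-level label (majority vote).
          labels = [classify_page(p) for p in select_pages(site)]
          return Counter(labels).most_common(1)[0][0]

      site = {"home": "buy phones online", "linked": ["buy laptops cheap", "tech news today"]}
      print(classify_site(site))  # shopping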
  5. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.02
    Abstract
    We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded in singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between the latent semantic indexing (LSI) term subspace and the LSI document subspace. LSISSM performs feature reduction and finds a low-rank approximation of scalable and sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution-ranking mechanism in LSISSM also improves the initialization of standard K-means compared with the random seeding procedure, which sometimes causes inefficient and ineffective clustering. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means procedures.
    Date
    23. 3.2013 13:22:36
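    LSISSM itself matches term signatures against document signatures; the basic move it builds on, clustering documents in a truncated SVD (LSI) subspace rather than in the raw term space, can be sketched with standard tools. The matrix below is a toy example, and numpy and scikit-learn are assumed:

      import numpy as np
      from sklearn.cluster import KMeans

      # Toy term-document matrix (rows = terms, columns = documents)
      A = np.array([[2, 0, 1, 0],
                    [1, 0, 2, 0],
                    [0, 3, 0, 1],
                    [0, 1, 0, 2]], dtype=float)

      # Truncated SVD: keep the top-r latent concept dimensions (the LSI subspace)
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      r = 2
      doc_vectors = (np.diag(s[:r]) @ Vt[:r]).T  # documents projected into the subspace

      # Cluster in the reduced space instead of the raw term space
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_vectors)
      print(labels)  # e.g. [0 1 0 1]: documents 1 and 3 group together, as do 2 and 4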
  6. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.02
    Date
    22. 9.2008 18:31:54
  7. Pong, J.Y.-H.; Kwok, R.C.-W.; Lau, R.Y.-K.; Hao, J.-X.; Wong, P.C.-C.: A comparative study of two automatic document classification methods in a library setting (2008) 0.02
    Abstract
    In current library practice, trained human experts usually carry out document cataloguing and indexing based on a manual approach. With the explosive growth in the number of electronic documents available on the Internet and in digital libraries, it is increasingly difficult for library practitioners to categorize both electronic documents and traditional library materials using just a manual approach. To improve the effectiveness and efficiency of document categorization in the library setting, more in-depth studies of using automatic document classification methods to categorize library items are required. Machine learning research has advanced rapidly in recent years. However, applying machine learning techniques to improve library practice is still a relatively unexplored area. This paper illustrates the design and development of a machine learning-based automatic document classification system to alleviate the manual categorization problem encountered within the library setting. Two supervised machine learning algorithms have been tested. Our empirical tests show that supervised machine learning algorithms in general, and the k-nearest neighbours (KNN) algorithm in particular, can be used to develop an effective document classification system to enhance current library practice. Moreover, some concrete recommendations are made regarding how to practically apply the KNN algorithm to develop automatic document classification in a library setting. To the best of our knowledge, this is the first in-depth study applying the KNN algorithm to automatic document classification based on the widely used LCC classification scheme adopted by many large libraries.
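    The kNN setup the study recommends can be made concrete with scikit-learn. The toy records and LCC labels below are invented, and this is a sketch of the general technique, not the authors' system:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      # Invented stand-ins for catalogue records labelled with LCC main classes
      records = ["introduction to macroeconomics",
                 "principles of banking and finance",
                 "organic chemistry reactions",
                 "laboratory manual of inorganic chemistry"]
      labels = ["H", "H", "Q", "Q"]  # H = social sciences, Q = science

      # TF-IDF vectors + cosine-distance k-NN; k=1 suits this toy set (the paper tunes k)
      clf = make_pipeline(TfidfVectorizer(),
                          KNeighborsClassifier(n_neighbors=1, metric="cosine"))
      clf.fit(records, labels)
      print(clf.predict(["monetary economics and banking"]))  # ['H']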
  8. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    Pages
    pp.1-22
  9. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.01
    Abstract
    Because of Twitter's popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., naïve Bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features.
    Date
    25. 6.2013 19:05:29
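    Formulated as a classification task, the prediction step reduces to fitting a standard classifier over per-hashtag feature vectors. The three features below are invented stand-ins for the paper's 7 content and 11 contextual features:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Rows: new hashtags; columns: [tag length, tweets in first day, distinct adopters]
      X = np.array([[8, 120, 90],
                    [15, 3, 2],
                    [6, 200, 150],
                    [20, 5, 4]])
      y = np.array([1, 0, 1, 0])  # 1 = became popular, 0 = did not

      model = LogisticRegression().fit(X, y)
      print(model.predict([[7, 150, 100]]))        # expected: [1]
      print(model.predict_proba([[7, 150, 100]]))  # class membership probabilities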
  10. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.01
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. and M.I. Cordeiro
  11. Sparck Jones, K.: Automatic classification (1976) 0.01
  12. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.01
    Date
    11. 5.2003 18:29:44
    Source
    Advances in classification research, vol.10: proceedings of the 10th ASIS SIG/CR Classification Research Workshop. Eds.: Albrechtsen, H. and J.E. Mai
  13. Khoo, C.S.G.; Ng, K.; Ou, S.: An exploratory study of human clustering of Web pages (2003) 0.01
    Date
    12. 9.2004 9:56:22
  14. Yu, W.; Gong, Y.: Document clustering by concept factorization (2004) 0.01
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Eds.: K. Järvelin et al.
  15. Automatische Klassifikation und Extraktion in Documentum (2005) 0.01
    Footnote
    Contact: LCI GmbH, Freiburger Str. 16, 79199 Kirchzarten, Tel.: (0 76 61) 9 89 96 10, Fax: (01212) 5 37 48 29 36, info@lci-software.com, www.lci-software.com
    Source
    Information - Wissenschaft und Praxis. 56(2005) no.5/6, p.276
  16. Bock, H.-H.: Automatische Klassifikation : theoretische und praktische Methoden zur Gruppierung und Strukturierung von Daten (Cluster-Analyse) (1974) 0.01
  17. Borko, H.: Research in computer based classification systems (1985) 0.01
    Abstract
    The selection in this reader by R. M. Needham and K. Sparck Jones reports an early approach to automatic classification that was taken in England. The following selection reviews various approaches that were being pursued in the United States at about the same time. It then discusses a particular approach initiated in the early 1960s by Harold Borko, at that time Head of the Language Processing and Retrieval Research Staff at the System Development Corporation, Santa Monica, California and, since 1966, a member of the faculty at the Graduate School of Library and Information Science, University of California, Los Angeles. As was described earlier, there are two steps in automatic classification, the first being to identify pairs of terms that are similar by virtue of co-occurring as index terms in the same documents, and the second being to form equivalence classes of intersubstitutable terms. To compute similarities, Borko and his associates used a standard correlation formula; to derive classification categories, where Needham and Sparck Jones used clumping, the Borko team used the statistical technique of factor analysis. The fact that documents can be classified automatically, and in any number of ways, is worthy of passing notice. Worthy of serious attention would be a demonstration that a computer-based classification system was effective in the organization and retrieval of documents. One reason for the inclusion of the following selection in the reader is that it addresses the question of evaluation. To evaluate the effectiveness of their automatically derived classification, Borko and his team asked three questions. The first was: Is the classification reliable? In other words, could the categories derived from one sample of texts be used to classify other texts? Reliability was assessed by a case-study comparison of the classes derived from three different samples of abstracts. The not-so-surprising conclusion reached was that automatically derived classes were reliable only to the extent that the sample from which they were derived was representative of the total document collection. The second evaluation question asked whether the classification was reasonable, in the sense of adequately describing the content of the document collection. The answer was sought by comparing the automatically derived categories with categories in a related classification system that was manually constructed. Here the conclusion was that the automatic method yielded categories that fairly accurately reflected the major areas of interest in the sample collection of texts; however, since there were only eleven such categories and they were quite broad, they could not be regarded as suitable for use in a university or any large general library. The third evaluation question asked whether automatic classification was accurate, in the sense of producing results similar to those obtainable by human classifiers. When using human classification as a criterion, automatic classification was found to be 50 percent accurate.
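    The two-step procedure described above (term-term similarity from co-occurrence, then category formation) can be shown in miniature. The incidence matrix is invented, and the factor-analysis step is only indicated in a comment:

      import numpy as np

      terms = ["classification", "clustering", "retrieval", "indexing"]
      # Invented term-document incidence matrix (rows = terms, columns = documents)
      X = np.array([[1, 1, 0, 1, 0],
                    [1, 1, 0, 0, 0],
                    [0, 0, 1, 1, 1],
                    [0, 0, 1, 0, 1]], dtype=float)

      # Step 1: term-term similarity as correlation of co-occurrence profiles
      R = np.corrcoef(X)
      for i in range(len(terms)):
          for j in range(i + 1, len(terms)):
              print(f"{terms[i]:>14} ~ {terms[j]:<10} r = {R[i, j]:+.2f}")
      # Step 2 would apply factor analysis to R to derive classification categories;
      # strongly correlated terms (classification/clustering here) load on one factor.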
  18. Schek, M.: Automatische Klassifizierung und Visualisierung im Archiv der Süddeutschen Zeitung (2005) 0.01
    Object
    K-Infinity
    Source
    Medienwirtschaft. 2(2005) no.1, pp.20-24
  19. Panyr, J.: STEINADLER: ein Verfahren zur automatischen Deskribierung und zur automatischen thematischen Klassifikation (1978) 0.01
    Source
    Nachrichten für Dokumentation. 29(1978), pp.92-96
  20. Shen, D.; Chen, Z.; Yang, Q.; Zeng, H.J.; Zhang, B.; Lu, Y.; Ma, W.Y.: Web page classification through summarization (2004) 0.01
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Eds.: K. Järvelin et al.

Languages

  • e 70
  • d 24
  • a 1

Types

  • a 83
  • el 12
  • m 2
  • r 2
  • x 1