Search (169 results, page 1 of 9)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    Content
     Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
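
     The relevance figure after each title is Lucene's ClassicSimilarity (tf-idf) score. Per query term t and document d, the per-term weight recoverable from the index statistics is

         w(t,d) = \sqrt{freq(t,d)} \cdot idf(t)^2 \cdot queryNorm \cdot fieldNorm(d),
         idf(t) = 1 + \ln\!\left( maxDocs / (docFreq + 1) \right),

     and the document score is the coordination factor times the sum of the per-term weights. For entry 1, freq = 2 and docFreq = 24 in a 44,218-document index give idf = 1 + ln(44218/25) ≈ 8.478 and w ≈ 1.4142 · 8.478² · 0.047571 · 0.046875 ≈ 0.2267, which matches the largest per-term components behind the 0.24 total.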
  2. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.06
    Content
     Presentation for the talk given at the 98th Deutscher Bibliothekartag in Erfurt (Ein neuer Blick auf Bibliotheken), session TK10: Information erschließen und recherchieren - Inhalte erschließen mit neuen Tools
    Date
    22. 8.2009 12:54:24
    Theme
    Klassifikationssysteme im Online-Retrieval
  3. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.05
    Abstract
     Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents; that is, to hide information, hidden text is injected into a document's passages. Rather than matching query terms against passages to determine their relevance, the passages are classified using text-mining techniques. Documents with hidden passages are defined as infected. Simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages, i.e., the labeling of passages with one or more categories from a predetermined set. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly (99% confidence) outperforms the other document-splitting approaches by 12% to 18% on the passage-detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and training-data category distribution on passage-detection accuracy. (A minimal passage-detection sketch follows this entry.)
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
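
     A minimal sketch of the passage-detection idea described above: fixed-size document splitting plus an ordinary text classifier. This is not the authors' KDP method, and the training data, categories, and window size are placeholders (assumes scikit-learn):

        # Passage detection: split a document into passages, classify each one.
        # Illustrative only - KDP's dynamic, keyword-based splitting is not shown.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        def split_into_passages(text: str, size: int = 50) -> list[str]:
            """Fixed-size word windows - the simplest document-splitting approach."""
            words = text.split()
            return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

        # Hypothetical labeled passages for training the passage classifier.
        train_texts = ["quarterly revenue and audit figures", "dosage and patient symptoms"]
        train_labels = ["finance", "medical"]
        classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
        classifier.fit(train_texts, train_labels)

        # A passage whose predicted category differs from the document's own topic
        # is a candidate hidden ('infected') passage.
        document = "... text of a possibly infected document ..."
        for passage in split_into_passages(document):
            print(classifier.predict([passage])[0], "|", passage[:40])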
  4. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.05
    Abstract
     The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to the DDC. Discusses the advantages of classification and describes the automatic classifier being developed in Java as part of the new, fully automated WWLib.
    Date
    1. 8.1996 22:08:06
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.04
    Abstract
     Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers, and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages from medical texts for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in application domains in which a text to be classified may have several parts about different categories. (A rough passage-extraction sketch follows this entry.)
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
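
     The abstract does not specify how PETC selects passages, so the following is only a generic illustration of passage extraction before classification: score sliding word windows by their density of category-indicative seed terms (the seed lists here are invented) and hand only the best window to the downstream classifier.

        # Passage extraction before classification - a generic sketch, not PETC.
        ASPECT_SEEDS = {  # hypothetical seed terms per disease aspect
            "treatment": {"therapy", "drug", "dose", "surgery"},
            "symptom": {"pain", "fever", "fatigue", "nausea"},
        }

        def best_passage(text: str, seeds: set[str], size: int = 30) -> str:
            """Return the word window densest in seed terms for one aspect."""
            words = text.lower().split()
            starts = range(max(1, len(words) - size + 1))
            best = max(starts, key=lambda i: sum(w in seeds for w in words[i:i + size]))
            return " ".join(words[best:best + size])

        # Classify best_passage(doc, ASPECT_SEEDS["treatment"]) instead of the whole
        # document, so text about other aspects adds less noise to the classifier.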
  6. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.04
    Series
    Sprache und Information; Bd.12
  7. Schiminovich, S.: Automatic classification and retrieval of documents by means of a bibliographic pattern discovery algorithm (1971) 0.04
    Source
    Information storage and retrieval. 6(1971), S.417-435
  8. Rijsbergen, C.J. van: Automatic classification in information retrieval (1978) 0.03
  9. Ko, Y.: A new term-weighting scheme for text classification using the odds of positive and negative class probabilities (2015) 0.03
    Abstract
     Text classification (TC) is a core technique for text mining and information retrieval, applied in many research and industrial areas. Term-weighting schemes assign an appropriate weight to each term to obtain high TC performance. Although term weighting is a key module for TC, and TC differs in important ways from information retrieval, many term-weighting schemes from information retrieval, such as term frequency-inverse document frequency (tf-idf), have been carried over to TC unchanged. The characteristic of TC that most distinguishes it from information retrieval is the availability of class information. This article proposes a new term-weighting scheme that exploits class information via positive and negative class distributions. As a result, the proposed scheme, log tf-TRR, consistently performs better than other schemes using class information as well as traditional schemes such as tf-idf. (A sketch of a class-odds term weight follows this entry.)
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2553-2565
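
     The abstract names the scheme (log tf-TRR) but not its exact formula; the sketch below shows only the general shape of a term weight built from the odds of positive versus negative class probabilities. The smoothing constants and the form of the odds factor are assumptions.

        import math

        def class_odds_weight(tf: int, df_pos: int, n_pos: int,
                              df_neg: int, n_neg: int) -> float:
            """log(tf) times a smoothed positive-vs-negative class odds factor.
            Illustrative of class-information weighting, not the exact log tf-TRR."""
            p_pos = (df_pos + 0.5) / (n_pos + 1.0)  # smoothed P(term | positive class)
            p_neg = (df_neg + 0.5) / (n_neg + 1.0)  # smoothed P(term | negative class)
            return math.log(1.0 + tf) * math.log(2.0 + p_pos / p_neg)

        # A term in 80 of 100 positive but only 5 of 100 negative training documents:
        # class_odds_weight(3, 80, 100, 5, 100) ~ 1.386 * 2.812 ~ 3.9,
        # whereas a class-neutral term (50 vs 50) gets ~ 1.386 * 1.099 ~ 1.5.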
  10. Wu, M.; Fuller, M.; Wilkinson, R.: Using clustering and classification approaches in interactive retrieval (2001) 0.03
    Source
    Information processing and management. 37(2001) no.3, S.459-484
  11. Panyr, J.: Vektorraum-Modell und Clusteranalyse in Information-Retrieval-Systemen (1987) 0.03
    Abstract
     Starting from theoretical indexing approaches, the classical vector space model for automatic indexing (together with the term-discrimination model) is explained. Clustering in information retrieval systems is understood as a natural logical consequence of this model and is treated in all of its forms (i.e., as document, term, or combined document-and-term classification). Search strategies in pre-classified document collections (cluster search) are then described in detail. Finally, the sensible application of cluster analysis in information retrieval systems is briefly discussed. (A minimal cluster-search sketch follows this entry.)
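
     A minimal sketch of the cluster search described above: documents are clustered in advance, an incoming query is compared with the cluster centers first, and only the best-matching cluster is ranked. The toy corpus, the choice of k, and the use of k-means are illustrative (assumes scikit-learn):

        # Cluster search in a pre-classified collection - an illustrative sketch.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        docs = ["automatic indexing of documents", "the vector space retrieval model",
                "fuzzy clustering of index terms", "term discrimination value model"]
        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(docs)                     # document vectors
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

        def cluster_search(query: str, top: int = 3):
            """Compare the query with cluster centers, then rank only that cluster."""
            q = vectorizer.transform([query])
            cluster = int(cosine_similarity(q, km.cluster_centers_).argmax())
            members = np.flatnonzero(km.labels_ == cluster)
            sims = cosine_similarity(q, X[members]).ravel()
            order = sims.argsort()[::-1][:top]
            return [(docs[members[i]], round(float(sims[i]), 3)) for i in order]

        print(cluster_search("vector model for document retrieval"))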
  12. Möller, G.: Automatic classification of the World Wide Web using Universal Decimal Classification (1999) 0.03
    Imprint
     Hinksey Hill : Learned Information
    Source
     Online information 99: 23rd International Online Information Meeting, Proceedings, London, 7-9 December 1999. Ed.: D. Raitt et al.
    Theme
    Klassifikationssysteme im Online-Retrieval
  13. Miyamoto, S.: Information clustering based on fuzzy multisets (2003) 0.03
    Abstract
     A fuzzy multiset model for information clustering is proposed with application to information retrieval on the World Wide Web. Noting that a search engine retrieves multiple occurrences of the same subjects with possibly different degrees of relevance, we observe that fuzzy multisets provide an appropriate model of information retrieval on the WWW. Information clustering, which here means both term clustering and document clustering, is considered. Three methods are proposed: hard c-means, fuzzy c-means, and an agglomerative method using cluster centers. Two distances between fuzzy multisets and algorithms for calculating cluster centers are defined. Theoretical properties of the clustering algorithms are studied, and illustrative examples show how the algorithms work. (A textbook fuzzy c-means sketch follows this entry.)
    Source
    Information processing and management. 39(2003) no.2, S.195-213
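
     A textbook fuzzy c-means sketch over plain numeric vectors. The paper defines its own distances and cluster centers for fuzzy multisets; none of that is reproduced here - this shows only the basic alternating update of memberships and centers.

        # Textbook fuzzy c-means; the paper's fuzzy-multiset variant is not shown.
        import numpy as np

        def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-6, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((c, len(X)))
            U /= U.sum(axis=0)                        # memberships sum to 1 per point
            for _ in range(iters):
                Um = U ** m
                V = (Um @ X) / Um.sum(axis=1, keepdims=True)      # fuzzy centers
                D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
                U_new = D ** (-2.0 / (m - 1.0))       # standard membership update
                U_new /= U_new.sum(axis=0)
                done = np.abs(U_new - U).max() < eps
                U = U_new
                if done:
                    break
            return U, V

        X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
        U, V = fuzzy_c_means(X)    # U: soft memberships; hard c-means rounds to 0/1
        print(U.round(2), V.round(2))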
  14. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.03
    Abstract
     The number of publications to be classified has, at the latest since the advent of the World Wide Web, been growing faster than it can be subject-indexed intellectually. Methods are therefore sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and methods for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has included work on automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous operation. The VZG project Colibri/DDC has also, among other things, been concerned with automatic DDC classification since 2006. The related investigations and developments serve to answer the research question: "Is it possible to achieve a substantively consistent automatic DDC classification of all GVK-PLUS title records?"
    Date
    22. 1.2010 14:41:24
  15. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.02
    Abstract
     The classification of documents from a bibliographic database is a task linked to information retrieval processes based on partial matching. A method is described for vectorizing document records from LISA that permits their topological organization using Kohonen's algorithm. As an example, a map is generated of 202 documents from LISA, and the possibilities of this type of neural network for the development of information retrieval systems based on graphical browsing are analyzed. (A minimal self-organizing map sketch follows this entry.)
    Source
    Information processing and management. 38(2002) no.1, S.79-89
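
     A minimal Kohonen self-organizing map over document vectors. The vectorization of LISA records is the paper's own contribution and is not reproduced; the random 202x20 matrix below merely stands in for 202 vectorized documents.

        # A tiny Kohonen self-organizing map (SOM); inputs stand in for document vectors.
        import numpy as np

        def train_som(X, rows=4, cols=4, iters=500, lr0=0.5, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.random((rows, cols, X.shape[1]))         # grid of weight vectors
            grid = np.dstack(np.mgrid[0:rows, 0:cols])       # (row, col) coordinates
            sigma0 = max(rows, cols) / 2.0
            for t in range(iters):
                x = X[rng.integers(len(X))]                  # random training vector
                # best-matching unit (BMU): grid cell whose weights are closest to x
                bmu = np.unravel_index(np.linalg.norm(W - x, axis=2).argmin(),
                                       (rows, cols))
                lr = lr0 * np.exp(-t / iters)                # decaying learning rate
                sigma = sigma0 * np.exp(-t / iters)          # shrinking neighborhood
                d2 = ((grid - np.array(bmu)) ** 2).sum(axis=2)
                h = np.exp(-d2 / (2.0 * sigma ** 2))[:, :, None]
                W += lr * h * (x - W)                        # pull neighborhood toward x
            return W

        X = np.random.default_rng(1).random((202, 20))       # stand-in for 202 documents
        W = train_som(X)
        # Each document maps to its BMU cell; neighboring cells hold similar documents,
        # which is what enables graphical browsing of the map.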
  16. Yu, W.; Gong, Y.: Document clustering by concept factorization (2004) 0.02
    Source
     SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  17. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.02
    Abstract
     Examines Ranganathan's approach to knowledge organisation and its relevance to intellectual accessibility in libraries. Discusses the current and future development of his methodology and theories in knowledge-based systems. Topics covered include: semi-automatic classification and the structure of thesauri; user-intermediary interactions in information retrieval (IR); semantic value-theory and uncertainty principles in IR; and case grammar.
  18. Dubin, D.: Dimensions and discriminability (1998) 0.02
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  19. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.02
    Abstract
    Information retrieval over the Internet increasingly requires the filtering of thousands of heterogeneous information sources. Important sources of information include not only traditional databases with structured data and queries, but also increasing numbers of non-traditional, semi- or unstructured collections such as Web sites, FTP archives, etc. As the number and variability of sources increases, new ways of automatically summarizing, discovering, and selecting collections relevant to a user's query are needed. One such method involves the use of classification schemes, such as the Library of Congress Classification (LCC) [10], within which a collection may be represented based on its content, irrespective of the structure of the actual data or documents. For such a system to be useful in a large-scale distributed environment, it must be easy to use for both collection managers and users. As a result, it must be possible to classify documents automatically within a classification scheme. Furthermore, there must be a straightforward and intuitive interface with which the user may use the scheme to assist in information retrieval (IR).
  20. Shafer, K.E.: Evaluating Scorpion results (1998) 0.02
    Abstract
    Scorpion is a research project at OCLC that builds tools for automatic subject assignment by combining library science and information retrieval techniques. A thesis of Scorpion is that the Dewey Decimal Classification (Dewey) can be used to perform automatic subject assignment for electronic items.

Languages

  • e 144
  • d 23
  • a 1
  • chi 1

Types

  • a 147
  • el 20
  • x 4
  • m 3
  • r 2
  • s 2
  • d 1