Search (159 results, page 1 of 8)

  • Active filter: theme_ss:"Automatisches Klassifizieren" ("automatic classification")
  1. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.07
    Score 0.0698 = coord(2/3) × (information 0.0175 + systems 0.0380 + "22" 0.0493). (Lucene ClassicSimilarity explain output, condensed here and in the entries below; a worked recomputation follows this entry.)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 30(1998) nos.1/7, S.646-648
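Note on the score lines: this listing condenses Lucene's ClassicSimilarity explain trees into one line per hit. As a sanity check, here is a minimal recomputation of the first hit's 0.0698 score from the figures in its explain tree (freq, docFreq, maxDocs, queryNorm, fieldNorm); the helper names are ours, the constants are taken verbatim from the tree:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution: queryWeight * fieldWeight."""
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * i * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

MAX_DOCS, QUERY_NORM, FIELD_NORM = 44218, 0.051966466, 0.0546875
terms = {"information": (4.0, 20772),   # (freq in doc 1673, docFreq)
         "systems":     (2.0, 5561),
         "22":          (2.0, 3622)}

total = sum(term_score(freq, df, MAX_DOCS, QUERY_NORM, FIELD_NORM)
            for freq, df in terms.values())
print(2 / 3 * total)   # coord(2/3) * sum ≈ 0.06983921, matching the tree
```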
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    Score 0.0691 = coord(2/3) × ("3a" 0.0825 + "22" 0.0211)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  3. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.04
    Score 0.0353 = coord(2/3) × (information 0.0177 + "22" 0.0352)
    
    Content
    Presentation slides for the talk given at the 98th Deutscher Bibliothekartag in Erfurt ("Ein neuer Blick auf Bibliotheken"), session TK10: "Information erschließen und recherchieren - Inhalte erschließen mit neuen Tools"
    Date
    22. 8.2009 12:54:24
  4. Choi, B.; Peng, X.: Dynamic and hierarchical classification of Web pages (2004) 0.03
    Score 0.0310 = coord(2/3) × (information 0.0184 + systems 0.0282)
    
    Abstract
    Automatic classification of Web pages is an effective way to organise the vast amount of information and to assist in retrieving relevant information from the Internet. Although many automatic classification systems have been proposed, most of them ignore the conflict between the fixed number of categories and the growing number of Web pages being added into the systems. They also require searching through all existing categories to make any classification. This article proposes a dynamic and hierarchical classification system that is capable of adding new categories as required, organising the Web pages into a tree structure, and classifying Web pages by searching through only one path of the tree. The proposed single-path search technique reduces the search complexity from O(n) to O(log(n)). Test results show that the system improves the accuracy of classification by 6 percent in comparison to related systems. The dynamic-category expansion technique also achieves satisfactory results for adding new categories into the system as required. (A minimal sketch of the single-path idea follows this entry.)
    Source
    Online information review. 28(2004) no.2, S.139-147
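The single-path search in the entry above is the interesting bit: rather than scoring a page against all n categories, the classifier walks one branch of the category tree, giving O(log n) comparisons. A minimal sketch of that idea, assuming centroid-based categories and cosine similarity (the node layout and sample data are invented, not the authors' code):

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    centroid: dict                       # term -> weight
    children: list = field(default_factory=list)

def cosine(a: dict, b: dict) -> float:
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(page: dict, root: Node) -> Node:
    node = root
    while node.children:                 # descend one path, not all nodes
        node = max(node.children, key=lambda c: cosine(page, c.centroid))
    return node

root = Node("root", {}, [
    Node("science", {"experiment": 1.0, "theory": 0.8},
         [Node("physics", {"quantum": 1.0}), Node("biology", {"cell": 1.0})]),
    Node("sports", {"match": 1.0, "team": 0.9}),
])
print(classify({"quantum": 0.7, "experiment": 0.4}, root).name)  # physics
```

Adding a new category is then just appending a child node, which is what makes the dynamic-category expansion cheap.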
  5. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.03
    Score 0.0308 = coord(2/3) × (information 0.0245 + systems 0.0217)
    
    Abstract
    The classification of documents from a bibliographic database is a task that is linked to processes of information retrieval based on partial matching. A method is described for vectorizing reference documents from LISA which permits their topological organization using Kohonen's algorithm. As an example, a map is generated of 202 documents from LISA, and an analysis is made of the possibilities of this type of neural network with respect to the development of information retrieval systems based on graphical browsing. (A toy implementation follows this entry.)
    Source
    Information processing and management. 38(2002) no.1, S.79-89
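For readers unfamiliar with Kohonen's algorithm: a self-organizing map pulls grid cells toward document vectors so that similar documents end up on neighbouring cells, which is what enables the graphical browsing mentioned above. A toy version, assuming random vectors in place of the LISA data (grid size and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 8
weights = rng.random((grid_h, grid_w, dim))
docs = rng.random((202, dim))            # stand-in for 202 LISA vectors

epochs = 50
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)      # decaying learning rate
    radius = max(1.0, 2.0 * (1 - epoch / epochs))
    for d in docs:
        dist = np.linalg.norm(weights - d, axis=2)
        bi, bj = np.unravel_index(dist.argmin(), dist.shape)  # winning cell
        for i in range(grid_h):
            for j in range(grid_w):      # pull neighbours toward the doc
                g = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * radius ** 2))
                weights[i, j] += lr * g * (d - weights[i, j])

# Read the topology off: each document's winning cell on the 4x4 map.
cells = [tuple(np.unravel_index(np.linalg.norm(weights - d, axis=2).argmin(),
                                (grid_h, grid_w))) for d in docs]
print(cells[:5])
```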
  6. Dubin, D.: Dimensions and discriminability (1998) 0.03
    Score 0.0307 = coord(2/3) × (information 0.0215 + "22" 0.0246)
    
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  7. Ruocco, A.S.; Frieder, O.: Clustering and classification of large document bases in a parallel environment (1997) 0.03
    Score 0.0302 = coord(2/3) × (information 0.0124 + systems 0.0329)
    
    Abstract
    Proposes the use of parallel computing systems to overcome the computationally intensive clustering process. Examines two operations: clustering a document set and classifying the document set. Uses a subset of the TIPSTER corpus, specifically articles from the Wall Street Journal. Document set classification was performed without the large storage requirements for ancillary data matrices. The time performance of the parallel systems was an improvement over sequential systems' times, and produced the same clustering and classification scheme. Results show near-linear speed-up in higher-threshold clustering applications. (A parallel-assignment sketch follows this entry.)
    Source
    Journal of the American Society for Information Science. 48(1997) no.10, S.932-943
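The speed-up reported above comes from distributing the assignment step, which dominates clustering cost. A minimal sketch of parallel nearest-centroid assignment with Python's multiprocessing (centroids and document vectors are synthetic; the paper's actual parallel environment was different hardware entirely):

```python
from multiprocessing import Pool
import numpy as np

CENTROIDS = np.random.default_rng(1).random((10, 32))  # 10 cluster centroids

def nearest(vec):
    """Index of the centroid closest to one document vector."""
    return int(np.linalg.norm(CENTROIDS - vec, axis=1).argmin())

if __name__ == "__main__":
    docs = np.random.default_rng(2).random((10000, 32))
    with Pool() as pool:                 # fan the assignments out over cores
        labels = pool.map(nearest, docs)
    print(labels[:10])
```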
  8. Losee, R.M.; Haas, S.W.: Sublanguage terms : dictionaries, usage, and automatic classification (1995) 0.03
    Score 0.0278 = coord(2/3) × (information 0.0200 + systems 0.0217)
    
    Abstract
    The use of terms from natural and social science titles and abstracts is studied from the perspective of sublanguages and their specialized dictionaries. Explores different notions of sublanguage distinctiveness. Objective methods for separating hard and soft sciences are suggested based on measures of sublanguage use, dictionary characteristics, and sublanguage distinctiveness. Abstracts were automatically classified with a high degree of accuracy by using a formula that considers the degree of uniqueness of terms in each sublanguage. This may prove useful for text filtering in information retrieval systems. (An illustrative uniqueness scorer follows this entry.)
    Source
    Journal of the American Society for Information Science. 46(1995) no.7, S.519-529
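The classification formula is described above only as weighing the uniqueness of terms per sublanguage. A crude illustrative scorer along those lines (the dictionaries and the fraction-of-unique-terms measure are our stand-ins, not the paper's formula):

```python
physics = {"quark", "boson", "lattice", "spin"}
sociology = {"kinship", "cohort", "survey", "norms"}
dictionaries = {"physics": physics, "sociology": sociology}

def classify(tokens):
    scores = {}
    for name, vocab in dictionaries.items():
        others = set().union(*(v for n, v in dictionaries.items() if n != name))
        unique = [t for t in tokens if t in vocab and t not in others]
        scores[name] = len(unique) / max(len(tokens), 1)  # uniqueness share
    return max(scores, key=scores.get)

print(classify("spin states of the lattice boson model".split()))  # physics
```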
  9. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.03
    Score 0.0278 = coord(2/3) × (information 0.0200 + systems 0.0217)
    
    Abstract
    Examines Ranganathan's approach to knowledge organisation and its relevance to intellectual accessibility in libraries. Discusses the current and future developments of his methodology and theories in knowledge-based systems. Topics covered include: semi-automatic classification and structure of thesauri; user-intermediary interactions in information retrieval (IR); semantic value-theory and uncertainty principles in IR; and case grammar.
  10. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.03
    Score 0.0273 = coord(2/3) × (information 0.0234 + "22" 0.0176)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
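PETC itself is not published as code, so the sketch below only shows the general shape of the idea in the entry above: cut a medical text into passages and let a trained aspect classifier (an SVM, as in the paper) label each one. The training snippets are invented stand-ins:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["aspirin dosage twice daily", "caused by viral infection",
               "vaccination prevents spread", "fever and cough reported"]
train_aspects = ["treatment", "etiology", "prevention", "symptoms"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_aspects)

document = ("The infection is caused by a virus. "
            "Patients report fever. Treat with aspirin twice daily.")
passages = [p.strip() for p in document.split(".") if p.strip()]
for p in passages:                       # classify each passage separately
    print(clf.predict([p])[0], "<-", p)
```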
  11. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.03
    Score 0.0263 = coord(2/3) × (information 0.0184 + "22" 0.0211)
    
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate contextual backgrounds of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies.
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
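The "context of discussion" idea above can be pictured as a gate: a document enters a category only if it also matches the vocabulary of every ancestor category. A toy sketch under that reading (the hierarchy and the overlap test are our assumptions):

```python
hierarchy = {
    "science": {"terms": {"research", "study"}, "parent": None},
    "physics": {"terms": {"quantum", "particle"}, "parent": "science"},
}

def matches(tokens, category):
    """Require term overlap with the category and each of its ancestors."""
    node = category
    while node is not None:
        if not hierarchy[node]["terms"] & tokens:
            return False
        node = hierarchy[node]["parent"]
    return True

doc = set("a study of quantum particle interactions".split())
print([c for c in hierarchy if matches(doc, c)])  # ['science', 'physics']
```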
  12. Huang, Y.-L.: A theoretic and empirical research of cluster indexing for Mandarine Chinese full text document (1998) 0.03
    Score 0.0262 = coord(2/3) × (information 0.0124 + systems 0.0268)
    
    Abstract
    Since most popular commercialized systems for full-text retrieval are designed with full-text scanning and a Boolean logic query mode, these systems use an oversimplified relationship between the indexing form and the content of a document. Reports the use of Singular Value Decomposition (SVD) to develop a Cluster Indexing Model (CIM) based on a Vector Space Model (VSM) in order to explore the index theory of cluster indexing for Chinese full-text documents. From a series of experiments, it was found that the indexing performance of CIM is better than that of the traditional VSM, and has almost the same effectiveness as authority control of index terms. (A compact SVD sketch follows this entry.)
    Source
    Bulletin of library and information science. 1998, no.24, S.44-68
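The SVD step that CIM builds on can be shown compactly: decompose the term-document matrix, keep the top k singular dimensions, and compare documents in the reduced space. A numpy sketch with a toy matrix (the real experiments used Chinese full-text documents):

```python
import numpy as np

A = np.array([[2, 0, 1, 0],     # rows: terms, columns: documents
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the k-dim latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(docs_k[0], docs_k[1]))        # latent similarity of docs 0 and 1
```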
  13. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    Score 0.0247 = coord(2/3) × (information 0.0124 + "22" 0.0246)
    
    Pages
    S.1-22
  14. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.02
    Score 0.0247 = coord(2/3) × (information 0.0124 + "22" 0.0246)
    
    Date
    22. 7.2006 16:24:52
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.431-442
  15. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.02
    Score 0.0247 = coord(2/3) × (information 0.0124 + "22" 0.0246)
    
    Abstract
    The proliferation of digital resources and their integration into a traditional library setting have created a pressing need for an automated tool that organizes textual information based on library classification schemes. Automated text classification is a research field of developing tools, methods, and models to automate text classification. This article describes the current popular approach for text classification and major text classification projects and applications that are based on library classification schemes. Related issues and challenges are discussed, and a number of considerations for the challenges are examined.
    Date
    22. 9.2008 18:31:54
  16. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.02
    Score 0.0235 = coord(2/3) × (information 0.0177 + "22" 0.0176)
    
    Abstract
    Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents. That is, to hide information, documents are injected with hidden text into passages. Rather than matching query terms against passages to determine their relevance, the passages are classified using text-mining techniques. Those documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages. That is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly outperforms (99% confidence) the other document-splitting approaches by 12% to 18% in the passage detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and finally training-data category distribution on passage-detection accuracy. (A toy version of the dynamic-passage idea follows this entry.)
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
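KDP is described above only at a high level; the gist is that passages are cut dynamically around occurrences of category keywords instead of at fixed offsets. A toy rendering of that idea (keywords, categories, and window size are invented):

```python
KEYWORDS = {"launch": "aerospace", "encryption": "security"}
WINDOW = 4                      # tokens of context on each side of a hit

def detect(text):
    tokens = text.split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok in KEYWORDS:     # cut a passage around the keyword
            passage = " ".join(tokens[max(0, i - WINDOW): i + WINDOW + 1])
            hits.append((KEYWORDS[tok], passage))
    return hits

doc = "the memo hides details of the encryption keys inside routine text"
print(detect(doc))
# [('security', 'hides details of the encryption keys inside routine text')]
```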
  17. Li, T.; Zhu, S.; Ogihara, M.: Hierarchical document classification using automatically generated hierarchy (2007) 0.02
    Score 0.0231 = coord(2/3) × (information 0.0184 + systems 0.0163)
    
    Abstract
    Automated text categorization has witnessed booming interest with the exponential growth of information and the ever-increasing need for organization. The underlying hierarchical structure identifies the relationships of dependence between different categories and provides valuable sources of information for categorization. Although considerable research has been conducted in the field of hierarchical document categorization, little has been done on the automatic generation of topic hierarchies. In this paper, we propose the method of using linear discriminant projection to generate more meaningful intermediate levels of hierarchies in large flat sets of classes. The linear discriminant projection approach first transforms all documents onto a low-dimensional space and then clusters the categories into hierarchies accordingly. The paper also investigates the effect of using the generated hierarchical structure for text classification. Our experiments show that generated hierarchies improve classification performance in most cases. (A pipeline sketch follows this entry.)
    Source
    Journal of intelligent information systems. 29(2007) no.2, S.211-230
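The pipeline in the entry above (project documents with linear discriminant analysis, then cluster the category centroids into a hierarchy) can be sketched with off-the-shelf pieces; all shapes and the synthetic data here are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.random((120, 20))               # 120 documents, 20 features
y = np.arange(120) % 6                  # 6 flat categories

Z = LinearDiscriminantAnalysis(n_components=4).fit_transform(X, y)
centroids = np.vstack([Z[y == c].mean(axis=0) for c in range(6)])
tree = linkage(centroids, method="ward")  # merge order = topic hierarchy
print(tree)
```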
  18. Humphrey, S.M.; Névéol, A.; Browne, A.; Gobeil, J.; Ruch, P.; Darmoni, S.J.: Comparing a rule-based versus statistical system for automatic categorization of MEDLINE documents according to biomedical specialty (2009) 0.02
    Score 0.0230 = coord(2/3) × (information 0.0153 + systems 0.0192)
    
    Abstract
    Automatic document categorization is an important research problem in Information Science and Natural Language Processing. Many applications, including Word Sense Disambiguation and Information Retrieval in large collections, can benefit from such categorization. This paper focuses on automatic categorization of documents from the biomedical literature into broad discipline-based categories. Two different systems are described and contrasted: CISMeF, which uses rules based on human indexing of the documents by the Medical Subject Headings (MeSH) controlled vocabulary in order to assign metaterms (MTs), and Journal Descriptor Indexing (JDI), based on human categorization of about 4,000 journals and statistical associations between journal descriptors (JDs) and textwords in the documents. We evaluate and compare the performance of these systems against a gold standard of humanly assigned categories for 100 MEDLINE documents, using six measures selected from trec_eval. The results show that for five of the measures performance is comparable, and for one measure JDI is superior. We conclude that these results favor JDI, given the significantly greater intellectual overhead involved in human indexing and maintaining a rule base for mapping MeSH terms to MTs. We also note a JDI method that associates JDs with MeSH indexing rather than textwords, and it may be worthwhile to investigate whether this JDI method (statistical) and CISMeF (rule-based) might be combined and then evaluated to show that they are complementary to one another.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.12, S.2530-2539
  19. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.02
    Score 0.0212 = coord(2/3) × (information 0.0106 + "22" 0.0211)
    
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  20. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.02
    Score 0.0212 = coord(2/3) × (information 0.0106 + "22" 0.0211)
    
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831

Languages

  • e 141
  • d 15
  • a 1
  • chi 1

Types

  • a 141
  • el 15
  • m 3
  • x 3
  • r 2
  • s 2
  • d 1