Search (156 results, page 1 of 8)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    Score 0.0687: Lucene ClassicSimilarity (TF-IDF) breakdown over the matched terms "3a" (weight 0.2462716: freq 2.0, idf 8.478011 with docFreq=24, maxDocs=44218, fieldNorm 0.046875) and "22" (weight 0.04201616: freq 2.0, idf 3.5018296 with docFreq=3622), combined with queryNorm 0.05168566 and the coord factors.
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
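
    The relevance figure on each entry above is such a ClassicSimilarity (TF-IDF) score. A minimal sketch of the per-term weight computation, reproducing the "3a" weight from the breakdown kept with entry 1 (the full entry score additionally applies the sum and coord factors shown there):

      import math

      def classic_term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1)), tf = sqrt(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          query_weight = idf * query_norm                     # query-side factor
          field_weight = math.sqrt(freq) * idf * field_norm   # document-side factor
          return query_weight * field_weight

      # Term "3a" in doc 562 (entry 1), values taken from the breakdown above:
      w = classic_term_weight(freq=2.0, doc_freq=24, max_docs=44218,
                              query_norm=0.05168566, field_norm=0.046875)
      print(round(w, 7))  # -> 0.2462716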
  2. Mukhopadhyay, S.; Peng, S.; Raje, R.; Palakal, M.; Mostafa, J.: Multi-agent information classification using dynamic acquaintance lists (2003) 0.04
    
    Abstract
    There has been considerable interest in recent years in providing automated information services, such as information classification, by means of a society of collaborative agents. These agents augment each other's knowledge structures (e.g., the vocabularies) and assist each other in providing efficient information services to a human user. However, when the number of agents present in the society increases, exhaustive communication and collaboration among agents result in a large communication overhead and increased delays in response time. This paper introduces a method to achieve selective interaction with a relatively small number of potentially useful agents, based on simple agent modeling and acquaintance lists. The key idea presented here is that the acquaintance list of an agent, representing a small number of other agents to be collaborated with, is dynamically adjusted. The best acquaintances are automatically discovered using a learning algorithm, based on the past history of collaboration. Experimental results are presented to demonstrate that such dynamically learned acquaintance lists can lead to high quality of classification, while significantly reducing the delay in response time.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.10, S.966-975
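
    A minimal sketch of the dynamic acquaintance list described in the abstract, assuming a simple success-rate statistic as the learning signal; the paper's agent model and learning algorithm are richer:

      from collections import defaultdict

      class Agent:
          """Keeps a short acquaintance list of the k most useful collaborators,
          re-ranked after every collaboration instead of broadcasting to all agents."""

          def __init__(self, name, k=3):
              self.name, self.k = name, k
              self.successes = defaultdict(int)
              self.attempts = defaultdict(int)

          def record(self, other, helped):
              self.attempts[other] += 1
              self.successes[other] += int(helped)

          def acquaintances(self):
              # Rank past collaborators by observed success rate; keep the top k.
              by_rate = sorted(self.attempts,
                               key=lambda a: self.successes[a] / self.attempts[a],
                               reverse=True)
              return by_rate[:self.k]

      a = Agent("classifier-1", k=2)
      for other, helped in [("b", True), ("c", False), ("d", True), ("b", True)]:
          a.record(other, helped)
      print(a.acquaintances())  # ['b', 'd'] -> collaborate only with these two

    Routing classification requests only to the current top-k acquaintances is what bounds the communication overhead as the society grows.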
  3. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996) 0.04
    
    Source
    New review of information networking. 1996, no.2, S.15-40
  4. Koch, T.; Ardö, A.; Brümmer, A.: The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.04
    
    Abstract
    After a short outline of the problems, possibilities and difficulties of systematic information retrieval on the Internet and a description of development efforts in this area, a specification of the terminology for this report is required. Although the process of retrieval is generally seen as an iterative process of browsing and information retrieval, and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to be able to evaluate the differences, possibilities and restrictions of the different services, it is necessary to begin by organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses will be briefly compared and described for the most important services in the categories robot-based WWW catalogues of different types, list- or form-based catalogues, and simultaneous or collected search services respectively. It will, however, for various reasons not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The problems of the quality of the input, the technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts made in the area of further development of retrieval services will be mentioned in relation to descriptions of the contents of documents and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annex 1-3.
  5. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.04
    
    Content
    Presentation accompanying the talk given at the 98. Deutscher Bibliothekartag in Erfurt ("Ein neuer Blick auf Bibliotheken" - a new look at libraries), session TK10: "Information erschließen und recherchieren - Inhalte erschließen mit neuen Tools" (indexing and searching information - indexing content with new tools).
    Date
    22. 8.2009 12:54:24
  6. Dubin, D.: Dimensions and discriminability (1998) 0.03
    
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : University of Illinois at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  7. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.03
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
    Date
    1. 8.1996 22:08:06
  8. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.03
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
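
    A minimal sketch of the extract-then-classify idea, assuming fixed sliding-window passages and a scikit-learn SVM; PETC's actual passage scoring is more sophisticated, and the toy training data below is invented:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      def passages(text, size=6, step=6):
          # Cut the text into fixed-size word windows; each window is one passage.
          words = text.split()
          return [" ".join(words[i:i + size])
                  for i in range(0, max(1, len(words) - size + 1), step)]

      # Toy passages labeled with a disease aspect (hypothetical data).
      train = [("the drug is administered twice daily", "treatment"),
               ("vaccination greatly lowers the risk", "prevention"),
               ("surgery remains the standard treatment", "treatment"),
               ("regular hand washing prevents transmission", "prevention")]
      texts, labels = zip(*train)
      vec = TfidfVectorizer().fit(texts)
      clf = LinearSVC().fit(vec.transform(texts), labels)

      doc = "Patients were given the drug twice daily. Regular hand washing prevents spread."
      for p in passages(doc):
          print(clf.predict(vec.transform([p]))[0], "<-", p)

    Classifying the extracted passages, rather than the whole document, is what shields the classifier from the noise of a text that mixes several disease aspects.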
  9. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.03
    
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate contextual backgrounds of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies.
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
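
    A minimal sketch of the context-of-discussion gate, assuming TF-IDF cosine similarity against ancestor-category text and an illustrative threshold; the categories, texts, and threshold are hypothetical:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # category -> (text of its ancestor categories = its COD, its own text)
      hierarchy = {
          "neural networks": ("computer science machine learning models training",
                              "layers weights backpropagation gradient descent"),
          "team sports":     ("sports games athletes competition league",
                              "football basketball players goals season"),
      }

      def classify(doc, gate=0.05):
          labels = []
          for cat, (context, core) in hierarchy.items():
              X = TfidfVectorizer().fit_transform([doc, context, core])
              # Accept the category only if the document matches both the
              # category itself and the context set by its ancestors.
              if (cosine_similarity(X[0], X[1])[0, 0] > gate and
                      cosine_similarity(X[0], X[2])[0, 0] > gate):
                  labels.append(cat)
          return labels

      print(classify("training deep layers with gradient descent in machine learning"))
      # -> ['neural networks']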
  10. Golub, K.: Automated subject classification of textual Web pages, based on a controlled vocabulary : challenges and recommendations (2006) 0.03
    
    Abstract
    The primary objective of this study was to identify and address problems of applying a controlled vocabulary in automated subject classification of textual Web pages, in the area of engineering. Web pages have special characteristics such as structural information, but are at the same time rather heterogeneous. The classification approach used comprises string-to-string matching between words in a term list extracted from the Ei (Engineering Information) thesaurus and classification scheme, and words in the text to be classified. Based on a sample of 70 Web pages, a number of problems with the term list are identified. Reasons for those problems are discussed and improvements proposed. Methods for implementing the improvements are also specified, suggesting further research.
    Content
    Contribution to a special issue "Knowledge organization systems and services"
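
    A minimal sketch of the string-to-string matching approach, with a three-entry toy term list standing in for the Ei thesaurus and classification scheme:

      # Controlled term -> class caption (hypothetical toy entries).
      term_list = {
          "finite element method": "Mechanical engineering",
          "heat transfer":         "Thermodynamics",
          "signal processing":     "Electrical engineering",
      }

      def classify_page(text):
          """Count occurrences of controlled terms in the page text;
          matched terms vote for their classes."""
          text = text.lower()
          scores = {}
          for term, cls in term_list.items():
              hits = text.count(term)
              if hits:
                  scores[cls] = scores.get(cls, 0) + hits
          return scores

      page = "We model heat transfer in the rotor using the finite element method."
      print(classify_page(page))
      # -> {'Mechanical engineering': 1, 'Thermodynamics': 1}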
  11. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    
    Pages
    S.1-22
  12. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.02
    
    Date
    22. 7.2006 16:24:52
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.431-442
  13. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.02
    
    Abstract
    The proliferation of digital resources and their integration into a traditional library setting has created a pressing need for an automated tool that organizes textual information based on library classification schemes. Automated text classification is a research field of developing tools, methods, and models to automate text classification. This article describes the current popular approach for text classification and major text classification projects and applications that are based on library classification schemes. Related issues and challenges are discussed, and a number of considerations for the challenges are examined.
    Date
    22. 9.2008 18:31:54
  14. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.02
    
    Abstract
    Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents. That is, to hide information, documents are injected with hidden text into passages. Rather than matching query terms against passages to determine their relevance, using text-mining techniques, the passages are classified. Those documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages. That is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP outperforms the other document-splitting approaches by 12% to 18% in the passage detection and passage category-prediction tasks, a statistically significant margin (99% confidence). Furthermore, we evaluate the effects of the feature selection, passage length, ambiguous passages, and finally training-data category distribution on passage-detection accuracy.
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
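
    A minimal sketch of keyword-based dynamic passages, assuming each passage is a window centered on an occurrence of a category keyword; KDP's window construction and category prediction are more involved than this:

      def keyword_passages(text, keywords, radius=4):
          # Passage boundaries follow keyword hits instead of a fixed split.
          words = text.split()
          return [" ".join(words[max(0, i - radius):i + radius + 1])
                  for i, w in enumerate(words)
                  if w.lower().strip(".,") in keywords]

      text = ("The quarterly report covers revenue. Hidden inside, instructions "
              "describe the exploit payload and its delivery. Totals follow.")
      for p in keyword_passages(text, {"exploit", "payload"}):
          print(p)

    Each dynamic passage would then be classified; a document whose passages hit a disallowed category is flagged as infected.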
  15. Automatische Klassifikation und Extraktion in Documentum (2005) 0.02
    
    Content
    "LCI Comprend ist ab sofort als integriertes Modul für EMCs Content Management System Documentum verfügbar. LCI (Learning Computers International GmbH) hat mit Unterstützung von neeb & partner diese Technologie zur Dokumentenautomation transparent in Documentum integriert. Dies ist die erste bekannte Lösung für automatische, lernende Klassifikation und Extraktion, die direkt auf dem Documentum Datenbestand arbeitet und ohne zusätzliche externe Steuerung auskommt. Die LCI Information Capture Services (ICS) dienen dazu, jegliche Art von Dokument zu klassifizieren und Information daraus zu extrahieren. Das Dokument kann strukturiert, halbstrukturiert oder unstrukturiert sein. Somit können beispielsweise gescannte Formulare genauso verarbeitet werden wie Rechnungen oder E-Mails. Die Extraktions- und Klassifikationsvorschriften und die zu lernenden Beispieldokumente werden einfach interaktiv zusammengestellt und als XML-Struktur gespeichert. Zur Laufzeit wird das Projekt angewendet, um unbekannte Dokumente aufgrund von Regeln und gelernten Beispielen automatisch zu indexieren. Dokumente können damit entweder innerhalb von Documentum oder während des Imports verarbeitet werden. Der neue Server erlaubt das Einlesen von Dateien aus dem Dateisystem oder direkt von POPS-Konten, die Analyse der Dokumente und die automatische Erzeugung von Indexwerten bei der Speicherung in einer Documentum Ablageumgebung. Diese Indexwerte, die durch inhaltsbasierte, auch mehrthematische Klassifikation oder durch Extraktion gewonnen wurden, werden als vordefinierte Attribute mit dem Documentum-Objekt abgelegt. Handelt es sich um ein gescanntes Dokument oder ein Fax, wird automatisch die integrierte Volltext-Texterkennung durchgeführt."
    Source
    Information - Wissenschaft und Praxis. 56(2005) H.5/6, S.276
  16. AlQenaei, Z.M.; Monarchi, D.E.: The use of learning techniques to analyze the results of a manual classification system (2016) 0.02
    
    Abstract
    Classification is the process of assigning objects to pre-defined classes based on observations or characteristics of those objects, and there are many approaches to performing this task. The overall objective of this study is to demonstrate the use of two learning techniques to analyze the results of a manual classification system. Our sample consisted of 1,026 documents, from the ACM Computing Classification System, classified by their authors as belonging to one of the groups of the classification system: "H.3 Information Storage and Retrieval." A singular value decomposition of the documents' weighted term-frequency matrix was used to represent each document in a 50-dimensional vector space. The analysis of the representation using both supervised (decision tree) and unsupervised (clustering) techniques suggests that two pairs of the ACM classes are closely related to each other in the vector space. Class 1 (Content Analysis and Indexing) is closely related to Class 3 (Information Search and Retrieval), and Class 4 (Systems and Software) is closely related to Class 5 (Online Information Services). Further analysis was performed to test the diffusion of the words in the two classes using both cosine and Euclidean distance.
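
    A minimal sketch of the pipeline with scikit-learn, using a 2-dimensional SVD instead of the study's 50 dimensions so the toy corpus stays valid:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.cluster import KMeans
      from sklearn.tree import DecisionTreeClassifier

      docs = ["indexing terms content analysis", "search retrieval ranking queries",
              "software systems architecture", "online services digital platforms"]
      labels = ["H.3.1", "H.3.3", "H.3.4", "H.3.5"]  # toy stand-ins for ACM H.3 classes

      X = TfidfVectorizer().fit_transform(docs)          # weighted term-frequency matrix
      Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)  # SVD vector space

      print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z))   # unsupervised
      print(DecisionTreeClassifier(random_state=0).fit(Z, labels).predict(Z)) # supervised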
  17. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.02
    
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  18. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.02
    
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  19. Khoo, C.S.G.; Ng, K.; Ou, S.: An exploratory study of human clustering of Web pages (2003) 0.02
    
    Abstract
    This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories relate to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize Web pages retrieved into categories that are natural to users.
    1. Introduction
    The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
    Date
    12. 9.2004 9:56:22
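
    A minimal sketch of one standard way to score inter-subject similarity between two clusterings of the same pages, using the Jaccard overlap of co-clustered page pairs; the study's exact measure is not specified here:

      from itertools import combinations

      def pair_set(clustering):
          # All unordered page pairs a subject placed in the same cluster.
          return {frozenset(p) for cluster in clustering
                  for p in combinations(cluster, 2)}

      def clustering_similarity(a, b):
          pa, pb = pair_set(a), pair_set(b)
          return len(pa & pb) / len(pa | pb)

      subject1 = [["p1", "p2", "p3"], ["p4", "p5"]]
      subject2 = [["p1", "p2"], ["p3", "p4", "p5"]]
      print(round(clustering_similarity(subject1, subject2), 2))  # -> 0.33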
  20. Koch, T.; Vizine-Goetz, D.: Automatic classification and content navigation support for Web services : DESIRE II cooperates with OCLC (1998) 0.02
    
    Abstract
    Emerging standards in knowledge representation and organization are preparing the way for distributed vocabulary support in Internet search services. NetLab researchers are exploring several innovative solutions for searching and browsing in the subject-based Internet gateway, Electronic Engineering Library, Sweden (EELS). The implementation of the EELS service is described, specifically, the generation of the robot-gathered database 'All Engineering' and the automated application of the Ei thesaurus and classification scheme. NetLab and OCLC researchers are collaborating to investigate advanced solutions to automated classification in the DESIRE II context. A plan for furthering the development of distributed vocabulary support in Internet search services is offered.

Languages

  • e 141
  • d 13
  • a 1
  • chi 1

Types

  • a 137
  • el 17
  • m 3
  • x 3
  • s 2
  • d 1
  • r 1