Search (87 results, page 1 of 5)

  • year_i:[2000 TO 2010}
  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.06912195 = product of:
      0.10368292 = sum of:
        0.052187677 = product of:
          0.20875071 = sum of:
            0.20875071 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20875071 = score(doc=562,freq=2.0), product of:
                0.37143064 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043811057 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.051495243 = sum of:
          0.015880484 = weight(_text_:of in 562) [ClassicSimilarity], result of:
            0.015880484 = score(doc=562,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.23179851 = fieldWeight in 562, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
          0.03561476 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.03561476 = score(doc=562,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
      0.6666667 = coord(2/3)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
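For readers puzzled by the indented breakdown under each hit: it is Lucene explain() output for the ClassicSimilarity (TF-IDF) formula, in which each matching term contributes queryWeight (idf · queryNorm) times fieldWeight (√tf · idf · fieldNorm), and coord() scales by how many query clauses matched. A minimal Python sketch, using only the constants shown in the tree above, recomputes this first hit's score:

```python
import math

def term_score(freq, idf, field_norm, query_norm):
    # queryWeight = idf * queryNorm;  fieldWeight = sqrt(freq) * idf * fieldNorm
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.043811057   # queryNorm from the tree above
FIELD_NORM = 0.046875      # fieldNorm(doc=562)

s_3a = term_score(2.0, 8.478011, FIELD_NORM, QUERY_NORM) * 0.25   # coord(1/4)
s_of = term_score(10.0, 1.5637573, FIELD_NORM, QUERY_NORM)
s_22 = term_score(2.0, 3.5018296, FIELD_NORM, QUERY_NORM)

score = (s_3a + s_of + s_22) * (2.0 / 3.0)                        # coord(2/3)
print(f"{score:.8f}")   # ~0.06912195, the value displayed above (up to float32 rounding)
```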
  2. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.03
    0.030438973 = product of:
      0.091316916 = sum of:
        0.091316916 = sum of:
          0.020087399 = weight(_text_:of in 1046) [ClassicSimilarity], result of:
            0.020087399 = score(doc=1046,freq=4.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2932045 = fieldWeight in 1046, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.09375 = fieldNorm(doc=1046)
          0.07122952 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
            0.07122952 = score(doc=1046,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.46428138 = fieldWeight in 1046, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=1046)
      0.33333334 = coord(1/3)
    
    Date
    5. 5.2003 14:17:22
    Footnote
    Teil eines Themenheftes: OCLC and the Internet: An Historical Overview of Research Activities, 1990-1999 - Part II
    Source
    Journal of library administration. 34(2001) nos.3/4, S.221-228
  3. Automatic classification research at OCLC (2002) 0.02
    0.021157425 = product of:
      0.06347227 = sum of:
        0.06347227 = sum of:
          0.021921717 = weight(_text_:of in 1563) [ClassicSimilarity], result of:
            0.021921717 = score(doc=1563,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.31997898 = fieldWeight in 1563, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
          0.041550554 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
            0.041550554 = score(doc=1563,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.2708308 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
      0.33333334 = coord(1/3)
    
    Abstract
    OCLC enlists the cooperation of the world's libraries to make the written record of humankind's cultural heritage more accessible through electronic media. Part of this goal can be accomplished through the application of the principles of knowledge organization. We believe that cultural artifacts are effectively lost unless they are indexed, cataloged and classified. Accordingly, OCLC has developed products, sponsored research projects, and encouraged participation in international standards communities whose outcomes have been improved library classification schemes, cataloging productivity tools, and new proposals for the creation and maintenance of metadata. Though cataloging and classification require expert intellectual effort, we recognize that at least some of the work must be automated if we hope to keep pace with cultural change.
    Date
    5. 5.2003 9:22:09
  4. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.02
    0.02002593 = product of:
      0.060077786 = sum of:
        0.060077786 = sum of:
          0.018527232 = weight(_text_:of in 5273) [ClassicSimilarity], result of:
            0.018527232 = score(doc=5273,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2704316 = fieldWeight in 5273, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
          0.041550554 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
            0.041550554 = score(doc=5273,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.2708308 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
      0.33333334 = coord(1/3)
    
    Abstract
    In text categorization tasks, classification over a class hierarchy often gives better results than classification without one. Because a large number of documents can be divided into subgroups within a hierarchy, a hierarchical classification method can be applied appropriately. However, there has been no systematic method for building a hierarchical classification system that performs well with large collections of practical data. In this article, we introduce a new evaluation scheme for internal node classifiers, which can be used effectively to develop a hierarchical classification system. We also show that our method for constructing the hierarchical classification system is very effective, especially for the task of constructing classifiers applied to a hierarchy tree with many levels.
    Date
    22. 7.2006 16:24:52
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.431-442
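As context for entries like this one: a hierarchical text-classification system is typically built from one local classifier per internal node, each routing a document one level down the tree. A minimal sketch of that routing structure (illustrative only; the paper's contribution is the evaluation scheme for those node classifiers, which is not reproduced here):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class Node:
    name: str
    # internal nodes carry a classifier that names one child (or None to abstain)
    classifier: Optional[Callable[[str], Optional[str]]] = None
    children: Dict[str, "Node"] = field(default_factory=dict)

def classify(doc: str, node: Node) -> str:
    """Route a document from the root to a leaf, one local decision per level."""
    while node.classifier and node.children:
        choice = node.classifier(doc)
        if choice is None or choice not in node.children:
            break                     # abstain: assign the current internal node
        node = node.children[choice]
    return node.name

# toy two-level hierarchy; the lambda stands in for a trained node classifier
root = Node("root",
            classifier=lambda d: "sports" if "match" in d else "politics",
            children={"sports": Node("sports"), "politics": Node("politics")})
print(classify("the match ended 2-1", root))   # -> sports
```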
  5. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.02
    0.018633895 = product of:
      0.055901684 = sum of:
        0.055901684 = sum of:
          0.014351131 = weight(_text_:of in 2560) [ClassicSimilarity], result of:
            0.014351131 = score(doc=2560,freq=6.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.20947541 = fieldWeight in 2560, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2560)
          0.041550554 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
            0.041550554 = score(doc=2560,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.2708308 = fieldWeight in 2560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2560)
      0.33333334 = coord(1/3)
    
    Abstract
    The proliferation of digital resources and their integration into the traditional library setting have created a pressing need for automated tools that organize textual information based on library classification schemes. Automated text classification is a research field concerned with developing tools, methods, and models for this task. This article describes the currently popular approach to text classification and the major text classification projects and applications that are based on library classification schemes. Related issues and challenges are discussed, and a number of considerations for meeting these challenges are examined.
    Date
    22. 9.2008 18:31:54
  6. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.02
    0.017165082 = product of:
      0.051495243 = sum of:
        0.051495243 = sum of:
          0.015880484 = weight(_text_:of in 2760) [ClassicSimilarity], result of:
            0.015880484 = score(doc=2760,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.23179851 = fieldWeight in 2760, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=2760)
          0.03561476 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
            0.03561476 = score(doc=2760,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 2760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2760)
      0.33333334 = coord(1/3)
    
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate contextual backgrounds of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies.
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
  7. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.02
    0.01700591 = product of:
      0.051017724 = sum of:
        0.051017724 = sum of:
          0.021338759 = weight(_text_:of in 2765) [ClassicSimilarity], result of:
            0.021338759 = score(doc=2765,freq=26.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.31146988 = fieldWeight in 2765, product of:
                5.0990195 = tf(freq=26.0), with freq of:
                  26.0 = termFreq=26.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
          0.029678967 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
            0.029678967 = score(doc=2765,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 2765, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
      0.33333334 = coord(1/3)
    
    Abstract
    Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents. That is, to hide information, documents are injected with hidden text in passages. Rather than matching query terms against passages to determine their relevance, the passages are classified using text-mining techniques. Those documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages. That is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly outperforms the other document-splitting approaches (99% confidence) by 12% to 18% in the passage-detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and training-data category distribution on passage-detection accuracy.
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
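To make the retrieval-vs.-detection contrast concrete, here is a schematic split-then-classify pipeline in scikit-learn. It is a generic illustration, not the authors' KDP: the fixed-length windows below are exactly the kind of naive document splitting that KDP's keyword-based dynamic passages are reported to beat, and the training data is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# hypothetical labelled passages for training the passage classifier
train_texts = ["quarterly earnings rose", "the striker scored twice",
               "shares fell sharply", "the goalkeeper saved a penalty"]
train_labels = ["finance", "sports", "finance", "sports"]
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(train_texts, train_labels)

def passages(doc, size=20):
    """Naive fixed-length word windows (one of the document-splitting baselines)."""
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

report = ("shares fell sharply across the board today " * 4 +
          "meanwhile the striker scored twice in the derby")
for i, p in enumerate(passages(report)):
    print(i, clf.predict([p])[0])   # a category per passage, not per document
```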
  8. Khoo, C.S.G.; Ng, K.; Ou, S.: ¬An exploratory study of human clustering of Web pages (2003) 0.02
    0.01514668 = product of:
      0.04544004 = sum of:
        0.04544004 = sum of:
          0.02169687 = weight(_text_:of in 2741) [ClassicSimilarity], result of:
            0.02169687 = score(doc=2741,freq=42.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.31669703 = fieldWeight in 2741, product of:
                6.4807405 = tf(freq=42.0), with freq of:
                  42.0 = termFreq=42.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
          0.023743173 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
            0.023743173 = score(doc=2741,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.15476047 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
      0.33333334 = coord(1/3)
    
    Abstract
    This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories relate to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize Web pages retrieved into categories that are natural to users.
    1. Introduction. The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
    Date
    12. 9.2004 9:56:22
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  9. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.009892989 = product of:
      0.029678967 = sum of:
        0.029678967 = product of:
          0.059357934 = sum of:
            0.059357934 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.059357934 = score(doc=611,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 8.2009 12:54:24
  10. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.0059357933 = product of:
      0.01780738 = sum of:
        0.01780738 = product of:
          0.03561476 = sum of:
            0.03561476 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.03561476 = score(doc=3051,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 8.2009 19:51:28
  11. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.00
    0.004463867 = product of:
      0.0133916 = sum of:
        0.0133916 = product of:
          0.0267832 = sum of:
            0.0267832 = weight(_text_:of in 2564) [ClassicSimilarity], result of:
              0.0267832 = score(doc=2564,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.39093933 = fieldWeight in 2564, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The classification of documents from a bibliographic database is a task that is linked to processes of information retrieval based on partial matching. A method is described for vectorizing reference documents from LISA which permits their topological organization using Kohonen's algorithm. As an example, a map is generated of 202 documents from LISA, and an analysis is made of the possibilities of this type of neural network with respect to the development of information retrieval systems based on graphical browsing.
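For orientation, the core of Kohonen's algorithm is a grid of weight vectors that are repeatedly pulled towards the input documents, so that neighbouring cells end up representing similar documents. A bare-bones NumPy sketch of that update rule follows; the paper's LISA vectorization and its map of 202 documents are not reproduced here, and the grid size, learning rate, and decay schedule below are arbitrary choices:

```python
import numpy as np

def train_som(doc_vectors, grid=(4, 4), epochs=200, lr0=0.5, radius0=2.0, seed=0):
    """Kohonen self-organizing map over document vectors (rows of doc_vectors)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, doc_vectors.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                # both schedules decay linearly
        radius = radius0 * (1.0 - t / epochs) + 1e-9
        for x in doc_vectors:
            # best-matching unit: the cell whose weight vector is closest to x
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(weights - x, axis=2)), (h, w))
            # pull the BMU and its grid neighbourhood towards the document
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-grid_dist**2 / (2.0 * radius**2))
            weights += lr * influence[..., None] * (x - weights)
    return weights   # documents mapping to nearby cells are topologically close
```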
  12. Figuerola, C.G.; Rodríguez, A.F.Z.; Berrocal, J.L.A.: Automatic vs manual categorisation of documents in Spanish (2001) 0.00
    0.00436691 = product of:
      0.01310073 = sum of:
        0.01310073 = product of:
          0.02620146 = sum of:
            0.02620146 = weight(_text_:of in 4475) [ClassicSimilarity], result of:
              0.02620146 = score(doc=4475,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38244802 = fieldWeight in 4475, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4475)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Automatic categorisation can be understood as a learning process during which a program recognises the characteristics that distinguish each category or class from others, i.e. those characteristics which the documents should have in order to belong to that category. As yet few experiments have been carried out with documents in Spanish. Here we show the possibilities of elaborating pattern vectors that include the characteristics of different classes or categories of documents, using techniques based on those applied to the expansion of queries by relevance; likewise, the results of applying these techniques to a collection of documents in Spanish are given. The same collection of documents was categorised manually and the results of both procedures were compared.
    Source
    Journal of documentation. 57(2001) no.6, S.763-773
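The "pattern vectors ... using techniques based on those applied to the expansion of queries by relevance" suggest a Rocchio-style centroid per category. The sketch below shows that classical scheme under this reading; the β/γ weights and the toy Spanish documents are invented, and the authors' exact formulation may differ:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rocchio_centroids(X, y, beta=0.75, gamma=0.15):
    """One pattern vector per class: the mean of its own documents minus a
    penalty for the documents of all other classes (Rocchio's feedback form)."""
    y = np.asarray(y)
    cents = {}
    for c in np.unique(y):
        pos = np.asarray(X[y == c].mean(axis=0)).ravel()
        neg = np.asarray(X[y != c].mean(axis=0)).ravel()
        cents[c] = beta * pos - gamma * neg
    return cents

vec = TfidfVectorizer()
X = vec.fit_transform(["el gato come pescado", "el perro ladra fuerte",
                       "la bolsa sube hoy", "precios y mercados caen"])
cents = rocchio_centroids(X, ["animales", "animales", "economía", "economía"])
doc = vec.transform(["la bolsa y los mercados"]).toarray().ravel()
print(max(cents, key=lambda c: float(doc @ cents[c])))   # -> economía
```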
  13. Reiner, U.: DDC-based search in the data of the German National Bibliography (2008) 0.00
    0.0041003237 = product of:
      0.01230097 = sum of:
        0.01230097 = product of:
          0.02460194 = sum of:
            0.02460194 = weight(_text_:of in 2166) [ClassicSimilarity], result of:
              0.02460194 = score(doc=2166,freq=24.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.3591007 = fieldWeight in 2166, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2166)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In 2004, the German National Library began to classify title records of the German National Bibliography according to subject groups based on the divisions of the Dewey Decimal Classification (DDC). Since 2006, all titles of the main series of the German National Bibliography have been classified in strict compliance with the DDC. On this basis, an enhanced DDC-based search can be realized - e.g., searching the data of the German National Bibliography for title records using number components of synthesized classification numbers, or searching for DDC numbers using unclassified title records. This paper gives an account of the current research and development of the DDC-based search. The work is conducted in the VZG project Colibri, which focuses on the automatic analysis of DDC-synthesized numbers and the automatic classification of bibliographic title records.
    Source
    New pespectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Red.: K. Knull-Schlomann, u.a
  14. Lim, C.S.; Lee, K.J.; Kim, G.C.: Multiple sets of features for automatic genre classification of web documents (2005) 0.00
    0.0040669674 = product of:
      0.012200902 = sum of:
        0.012200902 = product of:
          0.024401804 = sum of:
            0.024401804 = weight(_text_:of in 1048) [ClassicSimilarity], result of:
              0.024401804 = score(doc=1048,freq=34.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.35617945 = fieldWeight in 1048, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1048)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    With the increase of information on the Web, it is difficult to quickly find desired information among the documents retrieved by a search engine. One way to solve this problem is to classify web documents according to various criteria. Most document classification has focused on the subject or topic of a document. A genre or style is another view of a document, different from its subject or topic, and is likewise a criterion by which to classify documents. In this paper, we suggest multiple sets of features for classifying the genres of web documents. The basic set of features, proposed in previous studies, is acquired from the textual properties of documents, such as the number of sentences or the number of occurrences of certain words. However, web documents differ from purely textual documents in that they contain URLs and HTML tags within the pages. We introduce new sets of features specific to web documents, extracted from URLs and HTML tags. The present work is an attempt to evaluate the performance of the proposed sets of features and to discuss their characteristics. Finally, we conclude which set of features is appropriate for automatic genre classification of web documents.
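A hedged illustration of what URL- and tag-derived features can look like; the specific features below are common examples from the genre-classification literature, not the paper's actual feature sets:

```python
import re
from urllib.parse import urlparse

def genre_features(url: str, html: str) -> dict:
    """Tiny illustration of URL- and HTML-tag features for genre classification."""
    path = urlparse(url).path
    text = re.sub(r"<[^>]+>", " ", html)             # crude tag stripping
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    return {
        "url_depth": path.count("/"),
        "url_has_tilde": "~" in url,                 # personal homepages often do
        "n_links": len(re.findall(r"<a\s", html, re.I)),
        "n_images": len(re.findall(r"<img\s", html, re.I)),
        "n_forms": len(re.findall(r"<form\s", html, re.I)),
        "n_sentences": len(sentences),
    }

print(genre_features("http://example.org/~alice/blog/post.html",
                     "<html><a href='x'>hi</a><img src='y'> Short note.</html>"))
```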
  15. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.00
    0.0039571957 = product of:
      0.011871587 = sum of:
        0.011871587 = product of:
          0.023743173 = sum of:
            0.023743173 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.023743173 = score(doc=3284,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2010 14:41:24
  16. Díaz, I.; Ranilla, J.; Montañes, E.; Fernández, J.; Combarro, E.F.: Improving performance of text categorization by combining filtering and support vector machines (2004) 0.00
    0.003925761 = product of:
      0.011777283 = sum of:
        0.011777283 = product of:
          0.023554565 = sum of:
            0.023554565 = weight(_text_:of in 2234) [ClassicSimilarity], result of:
              0.023554565 = score(doc=2234,freq=22.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34381276 = fieldWeight in 2234, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2234)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Text categorization is the process of assigning documents to a set of previously fixed categories. A lot of research is going on with the goal of automating this time-consuming task. Several different algorithms have been applied, and Support Vector Machines (SVM) have shown very good results. In this report, we try to show that a previous filtering of the words used by SVM in the classification can improve the overall performance. This hypothesis is systematically tested with three different measures of word relevance, on two different corpora (one of them considered in three different splits), and with both local and global vocabularies. The results show that filtering significantly improves the recall of the method, and that it also has the effect of significantly improving the overall performance.
    Source
    Journal of the American Society for Information Science and technology. 55(2004) no.7, S.579-592
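The pipeline this abstract describes, filter the vocabulary first and train the SVM on what survives, maps directly onto standard tooling. A minimal scikit-learn sketch, where χ² stands in for the paper's three relevance measures and the toy data and k are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["win the game tonight", "the team scored a goal",
         "stocks and bonds fell", "the market rallied today"]
labels = ["sports", "sports", "finance", "finance"]

model = make_pipeline(
    TfidfVectorizer(),
    SelectKBest(chi2, k=8),   # keep only the 8 most class-informative words
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["the goal was scored late"]))   # e.g. ['sports']
```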
  17. Ozmutlu, S.; Cosar, G.C.: Analyzing the results of automatic new topic identification (2008) 0.00
    0.003925761 = product of:
      0.011777283 = sum of:
        0.011777283 = product of:
          0.023554565 = sum of:
            0.023554565 = weight(_text_:of in 2604) [ClassicSimilarity], result of:
              0.023554565 = score(doc=2604,freq=22.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34381276 = fieldWeight in 2604, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2604)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - Identification of topic changes within a user search session is a key issue in content analysis of search engine user queries. Recently, various studies have focused on new topic identification/session identification of search engine transaction logs, and several problems regarding the estimation of topic shifts and continuations were observed in these studies. This study aims to analyze the reasons for the problems that were encountered as a result of applying automatic new topic identification. Design/methodology/approach - Measures, such as cleaning the data of common words and analyzing the errors of automatic new topic identification, are applied to eliminate the problems in estimating topic shifts and continuations. Findings - The findings show that the resulting errors of automatic new topic identification have a pattern, and further research is required to improve the performance of automatic new topic identification. Originality/value - Improving the performance of automatic new topic identification would be valuable to search engine designers, so that they can develop new clustering and query recommendation algorithms, as well as custom-tailored graphical user interfaces for search engine users.
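A caricature of the two signals such transaction-log studies typically combine, shared query terms and the time gap between consecutive queries, is sketched below; the rule and the threshold are invented placeholders for the statistical estimation the paper analyzes:

```python
def is_topic_shift(prev_query: str, query: str, gap_seconds: float,
                   max_gap: float = 600.0) -> bool:
    """Guess a topic shift when consecutive queries share no terms
    or are separated by a long pause (threshold is arbitrary here)."""
    prev_terms = set(prev_query.lower().split())
    terms = set(query.lower().split())
    return not (prev_terms & terms) or gap_seconds > max_gap

print(is_topic_shift("hotels in paris", "cheap paris hostels", 45.0))   # False
print(is_topic_shift("hotels in paris", "python decorators", 45.0))    # True
```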
  18. Sebastiani, F.: Classification of text, automatic (2006) 0.00
    0.0039058835 = product of:
      0.01171765 = sum of:
        0.01171765 = product of:
          0.0234353 = sum of:
            0.0234353 = weight(_text_:of in 5003) [ClassicSimilarity], result of:
              0.0234353 = score(doc=5003,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34207192 = fieldWeight in 5003, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5003)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Automatic text classification (ATC) is a discipline at the crossroads of information retrieval (IR), machine learning (ML), and computational linguistics (CL), and consists in the realization of text classifiers, i.e. software systems capable of assigning texts to one or more categories, or classes, from a predefined set. Applications range from the automated indexing of scientific articles, to e-mail routing, spam filtering, authorship attribution, and automated survey coding. This article will focus on the ML approach to ATC, whereby a software system (called the learner) automatically builds a classifier for the categories of interest by generalizing from a "training" set of pre-classified texts.
    Source
    Encyclopedia of language and linguistics. 2nd ed. Ed.: K. Brown. Vol. 14
  19. Chung, Y.M.; Lee, J.Y.: ¬A corpus-based approach to comparative evaluation of statistical term association measures (2001) 0.00
    0.0038202507 = product of:
      0.011460752 = sum of:
        0.011460752 = product of:
          0.022921504 = sum of:
            0.022921504 = weight(_text_:of in 5769) [ClassicSimilarity], result of:
              0.022921504 = score(doc=5769,freq=30.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.33457235 = fieldWeight in 5769, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5769)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Statistical association measures have been widely applied in information retrieval research, usually employing a clustering of documents or terms on the basis of their relationships. Applications of the association measures for term clustering include automatic thesaurus construction and query expansion. This research evaluates the similarity of six association measures by comparing the relationships and behavior they demonstrate in various analyses of a test corpus. Analysis techniques include comparisons of highly ranked term pairs and term clusters, analyses of the correlation among the association measures using Pearson's correlation coefficient and MDS mapping, and an analysis of the impact of term frequency on the association values by means of z-scores. The major findings of the study are as follows: First, the most similar association measures are mutual information and Yule's coefficient of colligation Y, whereas the cosine and Jaccard coefficients, as well as the χ² statistic and likelihood ratio, demonstrate quite similar behavior for terms with high frequency. Second, among all the measures, the χ² statistic is the least affected by the frequency of terms. Third, although the cosine and Jaccard coefficients tend to emphasize high-frequency terms, mutual information and Yule's Y seem to overestimate rare terms.
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.4, S.283-296
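For reference, the standard document-frequency forms of five of the six measures named above, as a minimal sketch (the likelihood ratio is omitted for brevity, and the paper's exact variants may differ):

```python
import math

def association_measures(n_xy, n_x, n_y, n_docs):
    """Association of terms x and y from document frequencies: n_x and n_y are
    the numbers of documents containing each term, n_xy the number containing
    both. Assumes all four cells of the 2x2 contingency table are non-zero."""
    a, b, c = n_xy, n_x - n_xy, n_y - n_xy
    d = n_docs - n_x - n_y + n_xy
    chi2 = n_docs * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return {
        "cosine":  n_xy / math.sqrt(n_x * n_y),
        "jaccard": n_xy / (n_x + n_y - n_xy),
        "mutual_information": math.log2(n_xy * n_docs / (n_x * n_y)),
        "yules_y": (math.sqrt(a * d) - math.sqrt(b * c))
                   / (math.sqrt(a * d) + math.sqrt(b * c)),
        "chi_square": chi2,
    }

print(association_measures(n_xy=30, n_x=100, n_y=60, n_docs=10_000))
```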
  20. Na, J.-C.; Sui, H.; Khoo, C.; Chan, S.; Zhou, Y.: Effectiveness of simple linguistic processing in automatic sentiment classification of product reviews (2004) 0.00
    0.0038202507 = product of:
      0.011460752 = sum of:
        0.011460752 = product of:
          0.022921504 = sum of:
            0.022921504 = weight(_text_:of in 2624) [ClassicSimilarity], result of:
              0.022921504 = score(doc=2624,freq=30.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.33457235 = fieldWeight in 2624, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2624)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports a study in automatic sentiment classification, i.e., automatically classifying documents as expressing positive or negative sentiments/opinions. The study investigates the effectiveness of using SVM (Support Vector Machine) and various text features to classify product reviews into recommended (positive sentiment) and not recommended (negative sentiment). Compared with traditional topical classification, it was hypothesized that syntactic and semantic processing of text would be more important for sentiment classification. In the first part of this study, several different approaches, unigrams (individual words), selected words (such as verbs, adjectives, and adverbs), and words labelled with part-of-speech tags, were investigated. A sample of 1,800 product reviews was retrieved from Review Centre (www.reviewcentre.com) for the study; 1,200 reviews were used for training, and 600 for testing. Using SVM, the baseline unigram approach obtained an accuracy rate of around 76%. The use of selected words obtained a marginally better result of 77.33%. Error analysis suggests various approaches for improving classification accuracy: use of negation phrases, making inferences from superficial words, and solving the problem of comments on parts. The second part of the study, which is in progress, investigates the use of negation phrases through simple linguistic processing to improve classification accuracy. This approach increased the accuracy rate to 79.33%.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
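The "simple linguistic processing" of negation phrases is commonly realized by marking the tokens in a negation's scope, so that the classifier sees 'good' and 'not good' as distinct features. A minimal sketch of that idea (the scope length and negation list are arbitrary; the paper's actual procedure is not specified here):

```python
NEGATIONS = {"not", "no", "never", "n't", "hardly"}

def mark_negation(tokens, scope=3):
    """Prefix up to `scope` tokens after a negation word with NOT_."""
    out, remaining = [], 0
    for tok in tokens:
        if tok.lower() in NEGATIONS:
            out.append(tok)
            remaining = scope
        elif remaining > 0:
            out.append("NOT_" + tok)
            remaining -= 1
        else:
            out.append(tok)
    return out

print(mark_negation("i would not recommend this phone".split()))
# -> ['i', 'would', 'not', 'NOT_recommend', 'NOT_this', 'NOT_phone']
```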

Languages

  • e 79
  • d 7
  • a 1

Types

  • a 73
  • el 11
  • m 3
  • x 3
  • s 1