Search (152 results, page 1 of 8)

  • Active filter: theme_ss:"Automatisches Klassifizieren" ("Automatic classification")
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.08244937 = product of:
      0.10993249 = sum of:
        0.07509089 = product of:
          0.22527267 = sum of:
            0.22527267 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.22527267 = score(doc=562,freq=2.0), product of:
                0.40082818 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047278564 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.015624823 = weight(_text_:for in 562) [ClassicSimilarity], result of:
          0.015624823 = score(doc=562,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 562, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.038433556 = score(doc=562,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
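    The nested breakdown above (and in every entry below) is standard Lucene "explain" output for the ClassicSimilarity (TF-IDF) model: each matched term contributes score = queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(m/n) scales a sum by the fraction of query clauses matched. As a reading aid, the minimal Python sketch below reproduces entry 1's total from the factors shown; tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) are the documented ClassicSimilarity defaults.

    ```python
    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity default: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                      # tf = sqrt(termFreq)
        w = idf(doc_freq, max_docs)
        query_weight = w * query_norm             # query-side factor
        field_weight = tf * w * field_norm        # document-side factor
        return query_weight * field_weight

    MAX_DOCS, QUERY_NORM, FIELD_NORM = 44218, 0.047278564, 0.046875

    s_3a  = term_score(2.0, 24,    MAX_DOCS, QUERY_NORM, FIELD_NORM) / 3   # coord(1/3)
    s_for = term_score(4.0, 18385, MAX_DOCS, QUERY_NORM, FIELD_NORM)
    s_22  = term_score(2.0, 3622,  MAX_DOCS, QUERY_NORM, FIELD_NORM) / 2   # coord(1/2)

    total = (s_3a + s_for + s_22) * 3 / 4         # coord(3/4): 3 of 4 clauses matched
    print(f"{total:.8f}")                          # ~0.08244937, matching entry 1
    ```

    The same arithmetic applies to every score tree in this listing, so they are not annotated again.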
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvements in the results.
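    The abstract describes the approach only at a high level. The following hypothetical sketch shows the general idea (boosting weak learners over a Bag-of-Words representation augmented with concept features); the concept columns are invented placeholders, not the authors' background-knowledge extraction step.

    ```python
    from scipy.sparse import csr_matrix, hstack
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["the bank approved the loan request",
            "the river bank was flooded in spring"]
    labels = ["finance", "nature"]

    bow = CountVectorizer().fit_transform(docs)      # classical Bag-of-Words features
    # Hypothetical concept features, e.g. from an ontology lookup:
    # column 0 = FinancialInstitution, column 1 = Riverside (placeholders).
    concepts = csr_matrix([[1, 0], [0, 1]])
    X = hstack([bow, concepts]).tocsr()              # word features + concept features

    # AdaBoost's default weak learner is a depth-1 decision stump.
    clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)
    print(clf.predict(X))                            # ['finance' 'nature']
    ```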
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  2. Ruocco, A.S.; Frieder, O.: Clustering and classification of large document bases in a parallel environment (1997) 0.07
    0.065053955 = product of:
      0.13010791 = sum of:
        0.01822896 = weight(_text_:for in 1661) [ClassicSimilarity], result of:
          0.01822896 = score(doc=1661,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20535621 = fieldWeight in 1661, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1661)
        0.111878954 = weight(_text_:computing in 1661) [ClassicSimilarity], result of:
          0.111878954 = score(doc=1661,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.42780277 = fieldWeight in 1661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1661)
      0.5 = coord(2/4)
    
    Abstract
    Proposes the use of parallel computing systems to overcome the computationally intensive clustering process. Examines two operations: clustering a document set and classifying the document set. Uses a subset of the TIPSTER corpus, specifically articles from the Wall Street Journal. Document set classification was performed without the large storage requirements for ancillary data matrices. The parallel systems improved on the running times of sequential systems while producing the same clustering and classification scheme. Results show near-linear speedup in higher-threshold clustering applications.
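    As a rough illustration of the strategy (not the authors' 1997 system), the sketch below parallelizes the pairwise-distance computation that dominates clustering cost by splitting the document set across worker processes; all sizes and data are invented.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def chunk_distances(args):
        chunk, data = args
        # Euclidean distance from every document in this chunk to every document.
        return np.linalg.norm(chunk[:, None, :] - data[None, :, :], axis=2)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        docs = rng.random((400, 64))                 # toy document vectors
        chunks = np.array_split(docs, 4)             # one slice per worker
        with ProcessPoolExecutor(max_workers=4) as ex:
            parts = list(ex.map(chunk_distances, [(c, docs) for c in chunks]))
        distances = np.vstack(parts)                 # full 400 x 400 matrix
        print(distances.shape)
    ```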
    Source
    Journal of the American Society for Information Science. 48(1997) no.10, S.932-943
  3. Barbu, E.: What kind of knowledge is in Wikipedia? : unsupervised extraction of properties for similar concepts (2014) 0.06
    0.06030063 = product of:
      0.12060126 = sum of:
        0.024705013 = weight(_text_:for in 1547) [ClassicSimilarity], result of:
          0.024705013 = score(doc=1547,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27831143 = fieldWeight in 1547, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=1547)
        0.095896244 = weight(_text_:computing in 1547) [ClassicSimilarity], result of:
          0.095896244 = score(doc=1547,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 1547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=1547)
      0.5 = coord(2/4)
    
    Abstract
    This article presents a novel method for extracting knowledge from Wikipedia and a classification schema for annotating the extracted knowledge. Unlike the majority of approaches in the literature, we use the raw Wikipedia text for knowledge acquisition. The main assumption made is that the concepts classified under the same node in a taxonomy are described in a comparable way in Wikipedia. The annotation of the extracted knowledge is done at two levels: ontological and logical. The extracted properties are evaluated in the traditional way, that is, by computing the precision of the extraction procedure and in a clustering task. The second method of evaluation is seldom used in the natural language processing community, but it is regularly employed in cognitive psychology.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.12, S.2489-2497
  4. Chae, G.; Park, J.; Park, J.; Yeo, W.S.; Shi, C.: Linking and clustering artworks using social tags : revitalizing crowd-sourced information on cultural collections (2016) 0.05
    0.04916378 = product of:
      0.09832756 = sum of:
        0.01841403 = weight(_text_:for in 2852) [ClassicSimilarity], result of:
          0.01841403 = score(doc=2852,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 2852, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2852)
        0.079913534 = weight(_text_:computing in 2852) [ClassicSimilarity], result of:
          0.079913534 = score(doc=2852,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 2852, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2852)
      0.5 = coord(2/4)
    
    Abstract
    Social tagging is one of the most popular methods for collecting crowd-sourced information in galleries, libraries, archives, and museums (GLAMs). However, when the number of social tags grows rapidly, using them becomes problematic and, as a result, they are often left as simply big data that cannot be used for practical purposes. To revitalize the use of this crowd-sourced information, we propose using social tags to link and cluster artworks based on an experimental study using an online collection at the Gyeonggi Museum of Modern Art (GMoMA). We view social tagging as a folksonomy, where artworks are classified by keywords of the crowd's various interpretations and one artwork can belong to several different categories simultaneously. To leverage this strength of social tags, we used a clustering method called "link communities" to detect overlapping communities in a network of artworks constructed by computing similarities between all artwork pairs. We used this framework to identify semantic relationships and clusters of similar artworks. By comparing the clustering results with curators' manual classification results, we demonstrated the potential of social tagging data for automatically clustering artworks in a way that reflects the dynamic perspectives of crowds.
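    The network-construction step the abstract mentions can be sketched briefly: cosine similarities between artworks' binary tag vectors, thresholded into edges. The tag data below is invented, and the "link communities" detection itself (a specialized overlapping-community algorithm) is not reproduced here.

    ```python
    import numpy as np

    tags = np.array([[1, 1, 0, 0],   # artwork A: tagged "abstract", "blue"
                     [1, 0, 1, 0],   # artwork B: "abstract", "portrait"
                     [0, 0, 1, 1]])  # artwork C: "portrait", "landscape"

    unit = tags / np.linalg.norm(tags, axis=1, keepdims=True)
    sim = unit @ unit.T                           # cosine similarity matrix

    threshold = 0.4                               # keep sufficiently similar pairs
    edges = [(i, j) for i in range(len(tags)) for j in range(i + 1, len(tags))
             if sim[i, j] >= threshold]
    print(edges)                                  # [(0, 1), (1, 2)]
    ```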
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.885-899
  5. AlQenaei, Z.M.; Monarchi, D.E.: ¬The use of learning techniques to analyze the results of a manual classification system (2016) 0.03
    0.028253702 = product of:
      0.11301481 = sum of:
        0.11301481 = weight(_text_:computing in 2836) [ClassicSimilarity], result of:
          0.11301481 = score(doc=2836,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.43214604 = fieldWeight in 2836, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2836)
      0.25 = coord(1/4)
    
    Abstract
    Classification is the process of assigning objects to pre-defined classes based on observations or characteristics of those objects, and there are many approaches to performing this task. The overall objective of this study is to demonstrate the use of two learning techniques to analyze the results of a manual classification system. Our sample consisted of 1,026 documents, from the ACM Computing Classification System, classified by their authors as belonging to one of the groups of the classification system: "H.3 Information Storage and Retrieval." A singular value decomposition of the documents' weighted term-frequency matrix was used to represent each document in a 50-dimensional vector space. The analysis of the representation using both supervised (decision tree) and unsupervised (clustering) techniques suggests that two pairs of the ACM classes are closely related to each other in the vector space. Class 1 (Content Analysis and Indexing) is closely related to Class 3 (Information Search and Retrieval), and Class 4 (Systems and Software) is closely related to Class 5 (Online Information Services). Further analysis was performed to test the diffusion of the words in the two classes using both cosine and Euclidean distance.
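    The representation step is conventional latent semantic analysis and can be sketched in a few lines: an SVD of a weighted term-frequency matrix, keeping the top k dimensions (the study used k = 50; the toy corpus below only supports a tiny k). The documents and k here are illustrative, not the study's data.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "indexing and content analysis of documents",
        "search and retrieval of information",
        "online information services and software systems",
    ]
    A = TfidfVectorizer().fit_transform(docs).toarray()   # docs x terms, tf-idf weighted

    k = 2                                                  # 50 in the study
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    doc_vectors = U[:, :k] * S[:k]                         # k-dimensional document vectors
    print(doc_vectors.shape)                               # (3, 2)
    ```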
    Object
    Computing Classification System
  6. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.03
    0.025621045 = product of:
      0.05124209 = sum of:
        0.028822517 = weight(_text_:for in 5273) [ClassicSimilarity], result of:
          0.028822517 = score(doc=5273,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.3246967 = fieldWeight in 5273, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5273)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.04483915 = score(doc=5273,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In text categorization tasks, classification that exploits a class hierarchy often gives better results than classification without one. Because a large number of documents are divided into several subgroups in a hierarchy, a hierarchical classification method can be used appropriately. However, there has been no systematic method for building a hierarchical classification system that performs well on large collections of practical data. In this article, we introduce a new evaluation scheme for internal node classifiers, which can be used effectively to develop a hierarchical classification system. We also show that our method for constructing the hierarchical classification system is very effective, especially for classifiers applied to a hierarchy tree with many levels.
    Date
    22. 7.2006 16:24:52
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.431-442
  7. Sebastiani, F.: Machine learning in automated text categorization (2002) 0.02
    0.023974061 = product of:
      0.095896244 = sum of:
        0.095896244 = weight(_text_:computing in 3389) [ClassicSimilarity], result of:
          0.095896244 = score(doc=3389,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 3389, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=3389)
      0.25 = coord(1/4)
    
    Source
    ACM computing surveys. 34(2002) no.1, S.1-47
  8. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.02
    0.0223727 = product of:
      0.0447454 = sum of:
        0.022325827 = weight(_text_:for in 2560) [ClassicSimilarity], result of:
          0.022325827 = score(doc=2560,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 2560, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2560)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.04483915 = score(doc=2560,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The proliferation of digital resources and their integration into a traditional library setting has created a pressing need for an automated tool that organizes textual information based on library classification schemes. Automated text classification is a field of research that develops tools, methods, and models to automate text classification. This article describes the currently popular approaches to text classification and the major text-classification projects and applications that are based on library classification schemes. Related issues and challenges are discussed, and a number of considerations for addressing those challenges are examined.
    Date
    22. 9.2008 18:31:54
  9. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.02
    0.020656807 = product of:
      0.041313615 = sum of:
        0.022096837 = weight(_text_:for in 2760) [ClassicSimilarity], result of:
          0.022096837 = score(doc=2760,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 2760, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2760)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.038433556 = score(doc=2760,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate contextual backgrounds of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies.
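    As a toy illustration of the top-down setting described here (a document enters a category only if it also matches that category's ancestors), consider the sketch below; the keyword test merely stands in for CRHTC's context-of-discussion matching and is purely illustrative.

    ```python
    hierarchy = {
        "science": {"physics": {}, "biology": {}},
        "arts": {"music": {}},
    }
    keywords = {"science": {"experiment", "theory"}, "physics": {"quantum"},
                "biology": {"cell"}, "arts": {"gallery"}, "music": {"melody"}}

    def classify(tokens, tree, path=()):
        """Return all hierarchy paths whose every node accepts the document."""
        results = []
        for node, children in tree.items():
            if keywords[node] & tokens:          # node-level acceptance test
                results.append(path + (node,))
                results += classify(tokens, children, path + (node,))
        return results

    doc = {"a", "quantum", "experiment"}
    print(classify(doc, hierarchy))              # [('science',), ('science', 'physics')]
    ```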
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
  10. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.02
    0.020656807 = product of:
      0.041313615 = sum of:
        0.022096837 = weight(_text_:for in 2158) [ClassicSimilarity], result of:
          0.022096837 = score(doc=2158,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 2158, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.038433556 = score(doc=2158,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  11. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.02
    0.019283235 = product of:
      0.03856647 = sum of:
        0.022552488 = weight(_text_:for in 1107) [ClassicSimilarity], result of:
          0.022552488 = score(doc=1107,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2540624 = fieldWeight in 1107, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1107)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.032027967 = score(doc=1107,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
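    A minimal sketch of the passage-based pipeline (split a text into passages, classify each, and keep those matching the aspect of interest) follows; PETC's actual extractor is considerably more sophisticated, and the training data here is invented.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    train_texts = ["fever and cough are common symptoms",
                   "treatment includes rest and antibiotics"]
    train_labels = ["symptoms", "treatment"]
    clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(train_texts, train_labels)

    document = ("patients report fever and cough. "
                "doctors recommend rest and antibiotics.")
    passages = [p.strip() for p in document.split(".") if p.strip()]
    for passage, label in zip(passages, clf.predict(passages)):
        print(label, "->", passage)              # likely: symptoms, then treatment
    ```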
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
  12. Dubin, D.: Dimensions and discriminability (1998) 0.02
    0.017654698 = product of:
      0.035309397 = sum of:
        0.012889821 = weight(_text_:for in 2338) [ClassicSimilarity], result of:
          0.012889821 = score(doc=2338,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14520876 = fieldWeight in 2338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2338)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.04483915 = score(doc=2338,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  13. Automatic classification research at OCLC (2002) 0.02
    0.017654698 = product of:
      0.035309397 = sum of:
        0.012889821 = weight(_text_:for in 1563) [ClassicSimilarity], result of:
          0.012889821 = score(doc=1563,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14520876 = fieldWeight in 1563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1563)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.04483915 = score(doc=1563,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    OCLC enlists the cooperation of the world's libraries to make the written record of humankind's cultural heritage more accessible through electronic media. Part of this goal can be accomplished through the application of the principles of knowledge organization. We believe that cultural artifacts are effectively lost unless they are indexed, cataloged, and classified. Accordingly, OCLC has developed products, sponsored research projects, and encouraged participation in international standards communities whose outcomes have been improved library classification schemes, cataloging productivity tools, and new proposals for the creation and maintenance of metadata. Though cataloging and classification require expert intellectual effort, we recognize that at least some of the work must be automated if we hope to keep pace with cultural change.
    Date
    5. 5.2003 9:22:09
  14. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.02
    0.017654698 = product of:
      0.035309397 = sum of:
        0.012889821 = weight(_text_:for in 1673) [ClassicSimilarity], result of:
          0.012889821 = score(doc=1673,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14520876 = fieldWeight in 1673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.04483915 = score(doc=1673,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
    Date
    1. 8.1996 22:08:06
  15. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.02
    0.0174208 = product of:
      0.0348416 = sum of:
        0.015624823 = weight(_text_:for in 690) [ClassicSimilarity], result of:
          0.015624823 = score(doc=690,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 690, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=690)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.038433556 = score(doc=690,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded in singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between the latent semantic indexing (LSI) term subspace and the LSI document subspace. LSISSM performs feature reduction and finds a low-rank approximation of scalable, sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution-ranking mechanism in LSISSM also improves the initialization of standard K-means compared with the random seeding procedure, which sometimes causes low efficiency and effectiveness of clustering. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means procedures.
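    In the same spirit, though not the authors' exact signature mechanism, the sketch below derives LSI document vectors by SVD and seeds K-means with the documents that load most strongly on the top latent dimensions, instead of random initialization; the data and the selection rule are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    A = rng.random((20, 100))                    # toy document-term matrix (docs x terms)

    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    k, r = 3, 5
    docs_lsi = U[:, :r] * S[:r]                  # rank-r LSI document vectors

    # Seed K-means with the documents loading most strongly on the top k
    # latent dimensions (a stand-in for the paper's contribution ranking).
    top = np.argsort(-np.abs(U[:, :k]).max(axis=1))[:k]
    km = KMeans(n_clusters=k, init=docs_lsi[top], n_init=1).fit(docs_lsi)
    print(km.labels_)
    ```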
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  16. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.02
    0.0159805 = product of:
      0.031961 = sum of:
        0.01594702 = weight(_text_:for in 2765) [ClassicSimilarity], result of:
          0.01594702 = score(doc=2765,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 2765, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2765)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
              0.032027967 = score(doc=2765,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 2765, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2765)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents. That is, to hide information, passages of hidden text are injected into documents. Rather than matching query terms against passages to determine their relevance, the passages are classified using text-mining techniques. Those documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages. That is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly (99% confidence) outperforms the other document-splitting approaches by 12% to 18% in the passage-detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and training-data category distribution on passage-detection accuracy.
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
  17. Khoo, C.S.G.; Ng, K.; Ou, S.: ¬An exploratory study of human clustering of Web pages (2003) 0.01
    0.014640598 = product of:
      0.029281195 = sum of:
        0.01647001 = weight(_text_:for in 2741) [ClassicSimilarity], result of:
          0.01647001 = score(doc=2741,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.18554096 = fieldWeight in 2741, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=2741)
        0.012811186 = product of:
          0.025622372 = sum of:
            0.025622372 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
              0.025622372 = score(doc=2741,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.15476047 = fieldWeight in 2741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2741)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories relate to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness, and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize retrieved Web pages into categories that are natural to users.
    1. Introduction
    The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity, and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
    Date
    12. 9.2004 9:56:22
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  18. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.009608389 = product of:
      0.038433556 = sum of:
        0.038433556 = product of:
          0.07686711 = sum of:
            0.07686711 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.07686711 = score(doc=1046,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 5.2003 14:17:22
  19. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen [Automatic DDC classification of bibliographic title records] (2009) 0.01
    0.008006992 = product of:
      0.032027967 = sum of:
        0.032027967 = product of:
          0.064055935 = sum of:
            0.064055935 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.064055935 = score(doc=611,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 8.2009 12:54:24
  20. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    0.008006992 = product of:
      0.032027967 = sum of:
        0.032027967 = product of:
          0.064055935 = sum of:
            0.064055935 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.064055935 = score(doc=2748,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 2.2016 18:25:22

Languages

  • e 145
  • d 5
  • a 1
  • chi 1

Types

  • a 133
  • el 22
  • s 2
  • m 1
  • r 1
  • x 1