Search (45 results, page 1 of 3)

  • language_ss:"e"
  • theme_ss:"Automatisches Klassifizieren"
  • year_i:[2000 TO 2010}
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.25
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
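The decimal after each hit is the Lucene relevance score for the active filters, assembled from per-term tf-idf weights (the engine reports ClassicSimilarity). A minimal sketch of how one term's weight is formed, using the statistics the engine reported for this first hit; queryNorm and fieldNorm are taken as given constants:

```python
import math

def classic_term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    """One term's score contribution under Lucene ClassicSimilarity."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24
    query_weight = idf * query_norm                  # 0.26973245
    field_weight = tf * idf * field_norm             # 0.56201804
    return query_weight * field_weight               # ~0.1515945

print(classic_term_weight(freq=2.0, doc_freq=24, max_docs=44218,
                          field_norm=0.046875, query_norm=0.031815533))
```

Summing such weights over the matched terms and multiplying by Lucene's coordination factor (matched terms / query terms) yields the displayed score.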
  2. Yao, H.; Etzkorn, L.H.; Virani, S.: Automated classification and retrieval of reusable software components (2008) 0.05
    
    Abstract
    The authors describe their research, which improves software reuse by using an automated approach to semantically search for and retrieve reusable software components in large software component repositories and on the World Wide Web (WWW). Using automation and smart (semantic) techniques, their approach speeds up the search and retrieval of reusable software components while retaining good accuracy, and therefore improves the affordability of software reuse. Program understanding of software components and natural language understanding of user queries were employed; the software component descriptions were then compared by matching the semantic representations of the user queries to the semantic representations of the software components, in order to find the components that best match each query. A proof-of-concept system was developed to test the approach, its results were compared to those of human experts, and statistical analysis was performed on the collected experimental data. The results from these experiments demonstrate that this automated semantic-based approach to classifying and retrieving reusable software components holds up well against the labor-intensive results from the experts, and thus can significantly benefit software reuse classification and retrieval.
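The abstract leaves the form of the semantic representations open; as a hedged illustration (names and structures hypothetical), ranking components by overlap between concept sets extracted from the query and from each component description could look like this:

```python
def jaccard(a: set, b: set) -> float:
    """Set overlap in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_components(query_concepts, components, top=5):
    """Rank components by concept overlap with the query; a crude
    stand-in for matching richer semantic representations."""
    return sorted(components.items(),
                  key=lambda item: jaccard(query_concepts, item[1]),
                  reverse=True)[:top]

# Example: best_components({"sort", "array"},
#     {"quicksort.c": {"sort", "array", "pivot"},
#      "stack.c": {"stack", "push", "pop"}})
```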
  3. Kwon, O.W.; Lee, J.H.: Text categorization based on k-nearest neighbor approach for web site classification (2003) 0.04
    
    Abstract
    Automatic categorization is a viable method to deal with the scaling problem on the World Wide Web. For Web site classification, this paper proposes using the Web pages linked with the home page, in contrast to the sole use of home pages in previous research. To implement our proposed method, we derive a scheme for Web site classification based on the k-nearest neighbor (k-NN) approach. It consists of three phases: Web page selection (connectivity analysis), Web page classification, and Web site classification. Given a Web site, the Web page selection chooses several representative Web pages using connectivity analysis. The k-NN classifier next classifies each of the selected Web pages. Finally, the classified Web pages are extended to a classification of the entire Web site. To improve performance, we supplement the k-NN approach with a feature selection method and a term weighting scheme using markup tags, and also rework its document-document similarity measure. In our experiments on a Korean commercial Web directory, the proposed system, using both a home page and its linked pages, improved the micro-averaged breakeven point by 30.02% compared with an ordinary classification that uses the home page only.
    Date
    27.12.2007 17:32:29
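A minimal sketch of phases two and three of the scheme above, assuming term-frequency vectors, cosine similarity, and majority voting; the connectivity-analysis phase, feature selection, and tag weighting are omitted, and all names are illustrative:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(w * b[t] for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_label(page: Counter, training, k=5):
    """Phase 2: label one page by majority vote of its k nearest
    training examples. `training` is a list of (vector, label) pairs."""
    nearest = sorted(training, key=lambda ex: cosine(page, ex[0]),
                     reverse=True)[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def classify_site(selected_pages, training):
    """Phase 3: extend the page-level decisions to one site label."""
    votes = Counter(knn_label(p, training) for p in selected_pages)
    return votes.most_common(1)[0][0]
```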
  4. Khoo, C.S.G.; Ng, K.; Ou, S.: ¬An exploratory study of human clustering of Web pages (2003) 0.03
    
    Abstract
    This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories relate to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize Web pages retrieved into categories that are natural to users.
    1. Introduction: The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
    Date
    12. 9.2004 9:56:22
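The abstract does not give the formula behind the 0.27 inter-subject figure; one plausible stand-in (an assumption, not the study's measure) is the average best-match Jaccard overlap between two subjects' clusterings:

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def partition_similarity(p1, p2):
    """Symmetric average best-match overlap between two clusterings,
    each a list of non-empty sets of page ids."""
    def directed(src, dst):
        return sum(max(jaccard(c, d) for d in dst) for c in src) / len(src)
    return (directed(p1, p2) + directed(p2, p1)) / 2
```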
  5. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.03
    
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
    RSWK
    World Wide Web / Wissensorganisation / Kongress / Passau <2000>
    Subject
    World Wide Web / Wissensorganisation / Kongress / Passau <2000>
  6. Miyamoto, S.: Information clustering based on fuzzy multisets (2003) 0.03
    
    Abstract
    A fuzzy multiset model for information clustering is proposed, with application to information retrieval on the World Wide Web. Noting that a search engine retrieves multiple occurrences of the same subjects with possibly different degrees of relevance, we observe that fuzzy multisets provide an appropriate model of information retrieval on the WWW. Information clustering, meaning both term clustering and document clustering, is considered. Three methods are proposed: hard c-means, fuzzy c-means, and an agglomerative method using cluster centers. Two distances between fuzzy multisets and algorithms for calculating cluster centers are defined. Theoretical properties concerning the clustering algorithms are studied, and illustrative examples are given to show how the algorithms work.
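For orientation, a compact fuzzy c-means over plain feature vectors; this is a simplification, since the paper defines its distances and cluster centers over fuzzy multisets rather than vectors:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Standard FCM. X: (n, d) array; m > 1 is the fuzzifier.
    Returns the (n, c) membership matrix U and the c cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                              axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1))             # closer => higher membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```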
  7. Choi, B.; Peng, X.: Dynamic and hierarchical classification of Web pages (2004) 0.02
    
    Abstract
    Automatic classification of Web pages is an effective way to organise the vast amount of information and to assist in retrieving relevant information from the Internet. Although many automatic classification systems have been proposed, most of them ignore the conflict between the fixed number of categories and the growing number of Web pages being added into the systems. They also require searching through all existing categories to make any classification. This article proposes a dynamic and hierarchical classification system that is capable of adding new categories as required, organising the Web pages into a tree structure, and classifying Web pages by searching through only one path of the tree. The proposed single-path search technique reduces the search complexity from O(n) to O(log n). Test results show that the system improves the accuracy of classification by 6 percent in comparison to related systems. The dynamic-category expansion technique also achieves satisfying results for adding new categories into the system as required.
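The single-path idea can be sketched as a greedy descent of the category tree, so only one root-to-leaf path is scored instead of all categories; `score` stands in for whatever document-to-category similarity the system uses (illustrative, not the authors' code):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Category:
    name: str
    children: List["Category"] = field(default_factory=list)

def classify_single_path(doc, root, score):
    """Follow the best-matching child at each level; for a balanced
    tree this touches O(log n) of the n categories."""
    path = [root]
    node = root
    while node.children:
        node = max(node.children, key=lambda ch: score(doc, ch))
        path.append(node)
    return path  # root-to-leaf chain; the leaf is the assigned category
```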
  8. Montesi, M.; Navarrete, T.: Classifying web genres in context : A case study documenting the web genres used by a software engineer (2008) 0.02
    
    Abstract
    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the analysis of the web browser history. In the first interview, he spoke about his general information behavior; in the second, he commented on each web genre, explaining why and how he used them. As a result, three approaches allow us to describe the set of 23 web genres obtained: (a) the purposes they serve for the participant; (b) the role they play in the various work and search phases; and (c) the way they are used in combination with each other. Further observations concern the way the participant assesses the quality of web-based resources, and his information behavior as a software engineer.
  9. Cosh, K.J.; Burns, R.; Daniel, T.: Content clouds : classifying content in Web 2.0 (2008) 0.02
    
    Abstract
    Purpose - With increasing amounts of user-generated content being produced electronically in the form of wikis, blogs, forums etc., the purpose of this paper is to investigate a new approach to classifying ad hoc content. Design/methodology/approach - The approach applies natural language processing (NLP) tools to automatically extract the content of some text, visualizing the results in a content cloud. Findings - Content clouds share the visual simplicity of a tag cloud, but display the details of an article at a different level of abstraction, providing a complementary classification. Research limitations/implications - Provides the general approach to creating a content cloud. In the future, the process can be refined and enhanced by further evaluation of results. Further work is also required to better identify closely related articles. Practical implications - Being able to automatically classify the content generated by web users will enable others to find more appropriate content. Originality/value - The approach is original. Other researchers have produced a cloud simply by using stoplists to filter unwanted words; this paper's approach improves on this by applying appropriate NLP techniques.
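A deliberately crude sketch of the stoplist baseline the paper improves on, i.e. frequency-ranked terms after filtering, which the proposed NLP step (e.g. extracting noun phrases) would refine:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for",
             "on", "that", "with", "as", "by", "this"}

def content_cloud(text, top=30):
    """Return (term, weight) pairs to render as a sized word cloud."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words
                     if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top)
```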
  10. Godby, C. J.; Stuler, J.: ¬The Library of Congress Classification as a knowledge base for automatic subject categorization (2001) 0.01
    
    Abstract
    This paper describes a set of experiments in adapting a subset of the Library of Congress Classification for use as a database for automatic classification. A high degree of concept integrity was obtained when subject headings were mapped from OCLC's WorldCat database and filtered using the log-likelihood statistic.
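The statistic referred to is presumably Dunning's log-likelihood ratio (G²); a sketch over a 2x2 contingency table of co-occurrence counts (how the counts are populated here is an assumption):

```python
import math

def log_likelihood(k11, k12, k21, k22):
    """G^2 for a 2x2 table, e.g. k11 = records where a subject heading
    and an LCC class co-occur. Larger values = stronger association."""
    total = k11 + k12 + k21 + k22
    def cell(obs, row, col):
        expected = row * col / total
        return obs * math.log(obs / expected) if obs else 0.0
    return 2 * (cell(k11, k11 + k12, k11 + k21)
                + cell(k12, k11 + k12, k12 + k22)
                + cell(k21, k21 + k22, k11 + k21)
                + cell(k22, k21 + k22, k12 + k22))
```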
  11. Chung, Y.-M.; Noh, Y.-H.: Developing a specialized directory system by automatically classifying Web documents (2003) 0.01
    
    Abstract
    This study developed a specialized directory system using an automatic classification technique. Economics was selected as the subject field for the classification experiments with Web documents. The classification scheme of the directory follows the DDC, and subject terms representing each class number or subject category were selected from the DDC table to construct a representative term dictionary. In collecting and classifying the Web documents, various strategies were tested in order to find the optimal thresholds. In the classification experiments, Web documents in economics were classified into a total of 757 hierarchical subject categories built from the DDC scheme. The first and second experiments using the representative term dictionary resulted in relatively high precision ratios of 77 and 60%, respectively. The third experiment employing a machine learning-based k-nearest neighbours (kNN) classifier in a closed experimental setting achieved a precision ratio of 96%. This implies that it is possible to enhance the classification performance by applying a hybrid method combining a dictionary-based technique and a kNN classifier.
    Source
    Journal of information science. 29(2003) no.2, S.117-126
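A minimal sketch of the dictionary-based side described above: score each DDC category by how many of its representative terms a document contains (dictionary structure and threshold are illustrative):

```python
def dictionary_classify(doc_terms, term_dictionary, threshold=2):
    """term_dictionary maps a DDC class to its representative terms,
    e.g. {"330": {"economics", "market", "inflation"}, ...}.
    Returns the classes whose term overlap meets the threshold."""
    terms = set(doc_terms)
    scores = {cls: len(terms & reps)
              for cls, reps in term_dictionary.items()}
    return sorted((c for c, s in scores.items() if s >= threshold),
                  key=scores.get, reverse=True)
```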
  12. Godby, C.J.; Stuler, J.: ¬The Library of Congress Classification as a knowledge base for automatic subject categorization : subject access issues (2003) 0.01
    
    Abstract
    This paper describes a set of experiments in adapting a subset of the Library of Congress Classification for use as a database for automatic classification. A high degree of concept integrity was obtained when subject headings were mapped from OCLC's WorldCat database and filtered using the log-likelihood statistic.
  13. Koch, T.; Ardö, A.: Automatic classification of full-text HTML-documents from one specific subject area : DESIRE II D3.6a, Working Paper 2 (2000) 0.01
    
    Content
    1 Introduction / 2 Method overview / 3 Ei thesaurus preprocessing / 4 Automatic classification process: 4.1 Matching -- 4.2 Weighting -- 4.3 Preparation for display / 5 Results of the classification process / 6 Evaluations / 7 Software / 8 Other applications / 9 Experiments with universal classification systems / References / Appendix A: Ei classification service: Software / Appendix B: Use of the classification software as subject filter in a WWW harvester.
  14. Adams, K.C.: Word wranglers : Automatic classification tools transform enterprise documents from "bags of words" into knowledge resources (2003) 0.00
    
    Abstract
    Taxonomies are an important part of any knowledge management (KM) system, and automatic classification software is emerging as a "killer app" for consumer and enterprise portals. A number of companies such as Inxight Software, Mohomine, Metacode, and others claim to interpret the semantic content of any textual document and automatically classify text on the fly. The promise that software could automatically produce a Yahoo-style directory is a siren call not many IT managers are able to resist. KM needs have grown more complex due to the increasing amount of digital information, the declining effectiveness of keyword searching, and heterogeneous document formats in corporate databases. This environment requires innovative KM tools, and automatic classification technology is an example of this new kind of software. These products can be divided into three categories according to their underlying technology - rules-based, catalog-by-example, and statistical clustering. Evolving trends in this market include framing classification as a cyborg (computer- and human-based) activity and the increasing use of extensible markup language (XML) and support vector machine (SVM) technology. In this article, we'll survey the rapidly changing automatic classification software market and examine the features and capabilities of leading classification products.
  15. Calado, P.; Cristo, M.; Gonçalves, M.A.; Moura, E.S. de; Ribeiro-Neto, B.; Ziviani, N.: Link-based similarity measures for the classification of Web documents (2006) 0.00
    
    Abstract
    Traditional text-based document classifiers tend to perform poorly on the Web. Text in Web documents is usually noisy and often does not contain enough information to determine their topic. However, the Web provides a different source that can be useful to document classification: its hyperlink structure. In this work, the authors evaluate how the link structure of the Web can be used to determine a measure of similarity appropriate for document classification. They experiment with five different similarity measures and determine their adequacy for predicting the topic of a Web page. Tests performed on a Web directory show that link information alone allows classifying documents with an average precision of 86%. Further, when combined with a traditional text-based classifier, precision increases to values of up to 90%, representing gains that range from 63 to 132% over the use of text-based classification alone. Because the measures proposed in this article are straightforward to compute, they provide a practical and effective solution for Web classification and related information retrieval tasks. Further, the authors provide an important set of guidelines on how link structure can be used effectively to classify Web documents.
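The five measures are not named in the abstract; co-citation and bibliographic coupling are two standard link-based similarities of the kind evaluated, sketched here over in-link and out-link sets:

```python
def cocitation(a, b, in_links):
    """Overlap of the pages that link to a and to b."""
    union = in_links[a] | in_links[b]
    return len(in_links[a] & in_links[b]) / len(union) if union else 0.0

def bibliographic_coupling(a, b, out_links):
    """Overlap of the pages that a and b themselves link to."""
    union = out_links[a] | out_links[b]
    return len(out_links[a] & out_links[b]) / len(union) if union else 0.0
```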
  16. Sebastiani, F.: Classification of text, automatic (2006) 0.00
    
    Abstract
    Automatic text classification (ATC) is a discipline at the crossroads of information retrieval (IR), machine learning (ML), and computational linguistics (CL), and consists in the realization of text classifiers, i.e. software systems capable of assigning texts to one or more categories, or classes, from a predefined set. Applications range from the automated indexing of scientific articles, to e-mail routing, spam filtering, authorship attribution, and automated survey coding. This article will focus on the ML approach to ATC, whereby a software system (called the learner) automatically builds a classifier for the categories of interest by generalizing from a "training" set of pre-classified texts.
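A minimal example of such a learner, here a multinomial Naive Bayes with add-one smoothing; this is one of many possible ML choices, not the article's prescription:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    def fit(self, texts, labels):
        """Generalize from pre-classified training texts to a classifier."""
        self.priors = Counter(labels)
        self.words = defaultdict(Counter)
        for text, label in zip(texts, labels):
            self.words[label].update(text.lower().split())
        self.vocab = {w for c in self.words.values() for w in c}
        return self

    def predict(self, text):
        def log_prob(label):
            lp = math.log(self.priors[label] / sum(self.priors.values()))
            denom = sum(self.words[label].values()) + len(self.vocab)
            return lp + sum(math.log((self.words[label][w] + 1) / denom)
                            for w in text.lower().split())
        return max(self.priors, key=log_prob)
```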
  17. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.00
    
  18. Lindholm, J.; Schönthal, T.; Jansson , K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.00
    
    Abstract
    The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means of selecting resources in engineering. Considerations in offering a robot-generated Web index as a successor to a manually indexed, quality-controlled subject gateway are also discussed.
  19. Lim, C.S.; Lee, K.J.; Kim, G.C.: Multiple sets of features for automatic genre classification of web documents (2005) 0.00
    
    Abstract
    With the increase of information on the Web, it is difficult to quickly find desired information among the documents retrieved by a search engine. One way to solve this problem is to classify web documents according to various criteria. Most document classification has been focused on a subject or a topic of a document. A genre or a style is another view of a document, different from a subject or a topic, and is also a criterion by which to classify documents. In this paper, we suggest multiple sets of features to classify genres of web documents. The basic set of features, which has been proposed in previous studies, is acquired from the textual properties of documents, such as the number of sentences, the number of a certain word, etc. However, web documents are different from textual documents in that they contain URLs and HTML tags within the pages. We introduce new sets of features specific to web documents, which are extracted from URLs and HTML tags. The present work is an attempt to evaluate the performance of the proposed sets of features, and to discuss their characteristics. Finally, we conclude which is an appropriate set of features in automatic genre classification of web documents.
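A few URL- and tag-derived genre features of the kind proposed might be extracted as below; the concrete feature set is illustrative, not the paper's:

```python
import re
from urllib.parse import urlparse

def genre_features(url, html):
    """Structural features beyond plain text: URL shape and tag counts."""
    path = urlparse(url).path
    return {
        "url_depth": path.count("/"),
        "url_has_date": bool(re.search(r"/20\d\d/", path)),
        "n_links": len(re.findall(r"<a\s", html, re.I)),
        "n_images": len(re.findall(r"<img\s", html, re.I)),
        "n_forms": len(re.findall(r"<form\b", html, re.I)),
        "n_tables": len(re.findall(r"<table\b", html, re.I)),
    }
```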
  20. Chan, L.M.; Lin, X.; Zeng, M.L.: Structural and multilingual approaches to subject access on the Web (2000) 0.00
    

Types

  • a 39
  • el 7
  • m 1
  • s 1