Search (45 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  1. Chan, L.M.; Lin, X.; Zeng, M.L.: Structural and multilingual approaches to subject access on the Web (2000) 0.06
    Source
    IFLA journal. 26(2000) no.3, S.187-197
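The relevance figure shown beside each entry (0.06, 0.05, …) comes from Lucene's ClassicSimilarity (tf-idf) explain output. As a minimal sketch, assuming Lucene's standard formulas (the function name is ours, the numeric inputs are the ones reported for the first entry), each per-term contribution is queryWeight × fieldWeight:

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term score as in a ClassicSimilarity explain tree:
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (tf * idf * fieldNorm)
    """
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # e.g. 3.5315 for docFreq=3516, maxDocs=44218
    tf = math.sqrt(freq)                               # 1.4142 for freq=2.0
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Term "26" in the first entry: freq=2.0, docFreq=3516,
# queryNorm=0.04979191, fieldNorm=0.09375
s = classic_similarity_term_score(2.0, 3516, 44218, 0.04979191, 0.09375)
# ~ 0.08233, matching the weight reported in the listing
```

The top-level figure then sums such term weights and scales by a coordination factor (coord = matched clauses / total clauses), which is why longer queries with few matching terms score lower.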
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.05
    Content
Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.05
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
  4. Hagedorn, K.; Chapman, S.; Newman, D.: Enhancing search and browse using automated clustering of subject metadata (2007) 0.03
    Abstract
    The Web puzzle of online information resources often hinders end-users from effective and efficient access to these resources. Clustering resources into appropriate subject-based groupings may help alleviate these difficulties, but will it work with heterogeneous material? The University of Michigan and the University of California Irvine joined forces to test automatically enhancing metadata records using the Topic Modeling algorithm on the varied OAIster corpus. We created labels for the resulting clusters of metadata records, matched the clusters to an in-house classification system, and developed a prototype that would showcase methods for search and retrieval using the enhanced records. Results indicated that while the algorithm was somewhat time-intensive to run and using a local classification scheme had its drawbacks, precise clustering of records was achieved and the prototype interface proved that faceted classification could be powerful in helping end-users find resources.
    Date
    26.12.2011 14:10:26
  5. Subramanian, S.; Shafer, K.E.: Clustering (1998) 0.03
    Abstract
    This article presents our exploration of computer science clustering algorithms as they relate to the Scorpion system. Scorpion is a research project at OCLC that explores the indexing and cataloging of electronic resources. For a more complete description of the Scorpion, please visit the Scorpion Web site at <http://purl.oclc.org/scorpion>
  6. Dubin, D.: Dimensions and discriminability (1998) 0.03
    Abstract
    Visualization interfaces can improve subject access by highlighting the inclusion of document representation components in similarity and discrimination relationships. Within a set of retrieved documents, what kinds of groupings can index terms and subject headings make explicit? The role of controlled vocabulary in classifying search output is examined
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  7. Kleinoeder, H.H.; Puzicha, J.: Automatische Katalogisierung am Beispiel einer Pilotanwendung (2002) 0.02
    Date
    26. 2.1996 17:51:49
  8. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.02
    Abstract
The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
    Date
    1. 8.1996 22:08:06
  9. Cortez, E.; Herrera, M.R.; Silva, A.S. da; Moura, E.S. de; Neubert, M.: Lightweight methods for large-scale product categorization (2011) 0.02
    Abstract
    In this article, we present a study about classification methods for large-scale categorization of product offers on e-shopping web sites. We present a study about the performance of previously proposed approaches and deployed a probabilistic approach to model the classification problem. We also studied an alternative way of modeling information about the description of product offers and investigated the usage of price and store of product offers as features adopted in the classification process. Our experiments used two collections of over a million product offers previously categorized by human editors and taxonomies of hundreds of categories from a real e-shopping web site. In these experiments, our method achieved an improvement of up to 9% in the quality of the categorization in comparison with the best baseline we have found.
  10. Sojka, P.; Lee, M.; Rehurek, R.; Hatlapatka, R.; Kucbel, M.; Bouche, T.; Goutorbe, C.; Anghelache, R.; Wojciechowski, K.: Toolset for entity and semantic associations : Final Release (2013) 0.02
    Abstract
    In this document we describe the final release of the toolset for entity and semantic associations, integrating two versions (language dependent and language independent) of Unsupervised Document Similarity implemented by MU (using gensim tool) and Citation Indexing, Resolution and Matching (UJF/CMD). We give a brief description of tools, the rationale behind decisions made, and provide elementary evaluation. Tools are integrated in the main project result, EuDML website, and they deliver the needed functionality for exploratory searching and browsing the collected documents. EuDML users and content providers thus benefit from millions of algorithmically generated similarity and citation links, developed using state of the art machine learning and matching methods.
  11. Wu, M.; Liu, Y.-H.; Brownlee, R.; Zhang, X.: Evaluating utility and automatic classification of subject metadata from Research Data Australia (2021) 0.02
    Abstract
    In this paper, we present a case study of how well subject metadata (comprising headings from an international classification scheme) has been deployed in a national data catalogue, and how often data seekers use subject metadata when searching for data. Through an analysis of user search behaviour as recorded in search logs, we find evidence that users utilise the subject metadata for data discovery. Since approximately half of the records ingested by the catalogue did not include subject metadata at the time of harvest, we experimented with automatic subject classification approaches in order to enrich these records and to provide additional support for user search and data discovery. Our results show that automatic methods work well for well represented categories of subject metadata, and these categories tend to have features that can distinguish themselves from the other categories. Our findings raise implications for data catalogue providers; they should invest more effort to enhance the quality of data records by providing an adequate description of these records for under-represented subject categories.
  12. Golub, K.: Automated subject classification of textual web documents (2006) 0.01
    Abstract
    Purpose - To provide an integrated perspective to similarities and differences between approaches to automated classification in different research communities (machine learning, information retrieval and library science), and point to problems with the approaches and automated classification as such. Design/methodology/approach - A range of works dealing with automated classification of full-text web documents are discussed. Explorations of individual approaches are given in the following sections: special features (description, differences, evaluation), application and characteristics of web pages. Findings - Provides major similarities and differences between the three approaches: document pre-processing and utilization of web-specific document characteristics is common to all the approaches; major differences are in applied algorithms, employment or not of the vector space model and of controlled vocabularies. Problems of automated classification are recognized. Research limitations/implications - The paper does not attempt to provide an exhaustive bibliography of related resources. Practical implications - As an integrated overview of approaches from different research communities with application examples, it is very useful for students in library and information science and computer science, as well as for practitioners. Researchers from one community have the information on how similar tasks are conducted in different communities. Originality/value - To the author's knowledge, no review paper on automated text classification attempted to discuss more than one community's approach from an integrated perspective.
  13. Ozmutlu, S.; Cosar, G.C.: Analyzing the results of automatic new topic identification (2008) 0.01
    Date
    1. 1.2009 10:26:42
    Source
    Library hi tech. 26(2008) no.3, S.466-487
  14. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.01
    Date
    26. 9.2006 18:11:36
  15. Khoo, C.S.G.; Ng, K.; Ou, S.: ¬An exploratory study of human clustering of Web pages (2003) 0.01
    Abstract
    This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories relate to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize Web pages retrieved into categories that are natural to users.
    1. Introduction. The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
    Date
    12. 9.2004 9:56:22
  16. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.01
    Date
    26. 9.2006 18:02:28
    26. 9.2006 18:20:10
  17. Oberhauser, O.: Automatisches Klassifizieren und Bibliothekskataloge (2005) 0.01
    Date
    15.10.2005 19:26:37
  18. Sebastiani, F.: Classification of text, automatic (2006) 0.01
    Date
    17. 5.2006 20:45:26
  19. Wille, J.: Automatisches Klassifizieren bibliographischer Beschreibungsdaten : Vorgehensweise und Ergebnisse (2006) 0.01
    Date
    11. 4.2008 15:26:32
  20. Koch, T.; Ardö, A.; Brümmer, A.: ¬The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.01
    Abstract
    After a short outline of problems, possibilities and difficulties of systematic information retrieval on the Internet and a description of efforts for development in this area, a specification of the terminology for this report is required. Although the process of retrieval is generally seen as an iterative process of browsing and information retrieval and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to be able to evaluate the differences, possibilities and restrictions of the different services it is necessary to begin with organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses will be briefly compared and described for the most important services in the categories robot-based WWW-catalogues of different types, list- or form-based catalogues and simultaneous or collected search services respectively. It will however for different reasons not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The problems of the quality of the input, the technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts made in the area of further development of retrieval services will be mentioned in relation to descriptions of the contents of documents and standardization efforts. Internet harvesting and indexing technology and retrieval software is thoroughly reviewed. Details about all services and software are listed in analytical forms in Annex 1-3.

Languages

  • e 36
  • d 9

Types

  • a 33
  • el 10
  • m 2
  • r 2
  • s 1
  • x 1