Search (116 results, page 1 of 6)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    0.23953128 = product of:
      0.31937504 = sum of:
        0.07504265 = product of:
          0.22512795 = sum of:
            0.22512795 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.22512795 = score(doc=562,freq=2.0), product of:
                0.4005707 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047248192 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.22512795 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.22512795 = score(doc=562,freq=2.0), product of:
            0.4005707 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047248192 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.019204432 = product of:
          0.038408864 = sum of:
            0.038408864 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.038408864 = score(doc=562,freq=2.0), product of:
                0.16545512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047248192 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
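    The per-entry relevance figures are Lucene ClassicSimilarity values; the breakdown shown for the first result can be reproduced from its tf, idf, and fieldNorm components. A minimal sketch in Python, using ClassicSimilarity's formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))) and the constants from the explain output above:

    ```python
    import math

    def classic_tf(freq: float) -> float:
        # ClassicSimilarity: tf = sqrt(term frequency in the field)
        return math.sqrt(freq)

    def classic_idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def field_weight(freq: float, doc_freq: int, max_docs: int,
                     field_norm: float) -> float:
        # fieldWeight = tf * idf * fieldNorm, as in the explain tree
        return classic_tf(freq) * classic_idf(doc_freq, max_docs) * field_norm

    # Figures from the first result (term "3a", doc 562):
    idf = classic_idf(doc_freq=24, max_docs=44218)           # ≈ 8.478011
    fw = field_weight(2.0, 24, 44218, field_norm=0.046875)   # ≈ 0.56201804
    ```

    Multiplying the fieldWeight by the queryWeight (idf × queryNorm = 8.478011 × 0.047248192 ≈ 0.4005707) reproduces the per-term score 0.22512795 shown above.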
  2. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.05
    
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate contextual backgrounds of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies.
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
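  The COD gating that the abstract describes can be illustrated with a toy sketch. Note the hierarchy, term sets, and the overlap test below are hypothetical stand-ins for illustration only, not the paper's actual representation:

  ```python
  from typing import Dict, Optional, Set

  # Toy hierarchy: child -> parent (None for roots), plus per-category terms.
  PARENT: Dict[str, Optional[str]] = {
      "science": None, "physics": "science", "optics": "physics",
  }
  TERMS: Dict[str, Set[str]] = {
      "science": {"research", "study"},
      "physics": {"energy", "matter"},
      "optics": {"light", "lens"},
  }

  def cod(category: str) -> Set[str]:
      # COD of a category: here, the terms of its ancestor categories.
      terms: Set[str] = set()
      parent = PARENT[category]
      while parent is not None:
          terms |= TERMS[parent]
          parent = PARENT[parent]
      return terms

  def admissible(doc_terms: Set[str], category: str) -> bool:
      # A document may enter a category only if its content overlaps the
      # category's COD (root categories have an empty COD, so they pass).
      context = cod(category)
      return not context or bool(doc_terms & context)

  doc = {"light", "energy", "research"}
  candidates = [c for c in PARENT if admissible(doc, c)]
  ```

  In this sketch a document lacking any ancestor-context terms is blocked from a deep category even if its own terms match that category.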
  3. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.03
    
    Abstract
    Given the huge amount of information in the internet and in practically every domain of knowledge that we are facing today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
  4. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.03
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  5. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.02
    
    Abstract
    Topic discovery is an important means for marketing, e-Business and social science studies. As well, it can be applied to various purposes, such as identifying a group with certain properties and observing the emergence and diminishment of a certain cyber community. Previous topic discovery work (J.M. Kleinberg, Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, p. 668) requires manual judgment of usefulness of outcomes and is thus incapable of handling the explosive growth of the Internet. In this paper, we propose the Automatic Topic Discovery (ATD) method, which combines a method of base set construction, a clustering algorithm and an iterative principal eigenvector computation method to discover the topics relevant to a given query without using manual examination. Given a query, ATD returns with topics associated with the query and top representative pages for each topic. Our experiments show that the ATD method performs better than the traditional eigenvector method in terms of computation time and topic discovery quality.
    Source
    Information processing and management. 40(2004) no.2, S.239-255
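  The "iterative principal eigenvector computation" the abstract mentions follows the Kleinberg (HITS) tradition of power iteration over a hyperlink adjacency matrix; a minimal stdlib-only sketch under that assumption (the example matrix is illustrative, not from the paper):

  ```python
  import math

  def mat_vec(M, v):
      return [sum(m * x for m, x in zip(row, v)) for row in M]

  def principal_eigenvector(M, iters=200, tol=1e-10):
      # Power iteration: repeatedly apply M and renormalize until the
      # vector stops changing; converges to the principal eigenvector.
      n = len(M)
      v = [1.0 / math.sqrt(n)] * n
      for _ in range(iters):
          w = mat_vec(M, v)
          norm = math.sqrt(sum(x * x for x in w))
          w = [x / norm for x in w]
          if max(abs(a - b) for a, b in zip(w, v)) < tol:
              break
          v = w
      return v

  # Hypothetical hyperlink adjacency matrix A (A[i][j] = 1 iff page i links
  # to page j); authority scores are the principal eigenvector of A^T A.
  A = [[0, 1, 1],
       [0, 0, 1],
       [1, 0, 0]]
  AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
  authority = principal_eigenvector(AtA)
  ```

  Here the page with the most in-links ends up with the largest authority score, which is the behaviour the eigenvector step is meant to capture.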
  6. Liu, R.-L.: Context-based term frequency assessment for text classification (2010) 0.02
    
    Abstract
    Automatic text classification (TC) is essential for the management of information. To properly classify a document d, it is essential to identify the semantics of each term t in d, while the semantics heavily depend on context (neighboring terms) of t in d. Therefore, we present a technique CTFA (Context-based Term Frequency Assessment) that improves text classifiers by considering term contexts in test documents. The results of the term context recognition are used to assess term frequencies of terms, and hence CTFA may easily work with various kinds of text classifiers that base their TC decisions on term frequencies, without needing to modify the classifiers. Moreover, CTFA is efficient, and neither huge memory nor domain-specific knowledge is required. Empirical results show that CTFA successfully enhances performance of several kinds of text classifiers on different experimental data.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.2, S.300-309
  7. Kwok, K.L.: ¬The use of titles and cited titles as document representations for automatic classification (1975) 0.02
    
    Source
    Information processing and management. 11(1975), S.201-206
  8. Wu, M.; Fuller, M.; Wilkinson, R.: Using clustering and classification approaches in interactive retrieval (2001) 0.02
    
    Source
    Information processing and management. 37(2001) no.3, S.459-484
  9. Dubin, D.: Dimensions and discriminability (1998) 0.02
    
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  10. Major, R.L.; Ragsdale, C.T.: ¬An aggregation approach to the classification problem using multiple prediction experts (2000) 0.02
    
    Source
    Information processing and management. 36(2000) no.4, S.683-696
  11. Krellenstein, M.: Document classification at Northern Light (1999) 0.02
    
    Footnote
    Talk given at: Search engines and beyond: developing efficient knowledge management systems; 1999 Search Engine Meeting, Boston, MA, April 19-20, 1999
  12. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.02
    
    Date
    22. 7.2006 16:24:52
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.431-442
  13. Savic, D.: Designing an expert system for classifying office documents (1994) 0.02
    
    Abstract
    Can records management benefit from artificial intelligence technology, in particular from expert systems? Gives an answer to this question by showing an example of a small scale prototype project in automatic classification of office documents. Project methodology and basic elements of an expert system's approach are elaborated to give guidelines to potential users of this promising technology
    Source
    Records management quarterly. 28(1994) no.3, S.20-29
  14. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.02
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 insights on achieving effective information access
    Session One: Updates and a twelve month perspective
    Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
    Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
    Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
    Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: the knowledge impact statement
    Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
    Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
    Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
    Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing
    Ev Brenner (New York, NY), chairman; James Callan (University of Massachusetts, MA); Marc Krellenstein (Northern Light Technology, Cambridge, MA); Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
    Steve Arnold (AIT, Harrods Creek, KY): Review: the leading edge in search and retrieval software
    Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
    Intelligent agents - John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
    Text summarization - Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
    Cross language searching - Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
    Video search and retrieval - Armon Amir (IBM, Almaden, CA): CueVideo: modular system for automatic indexing and browsing of video/audio
    Speech recognition - Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
    Visualization - James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: emerging science or passing fashion?
    Text mining - David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  15. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.02
    
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  16. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.02
    
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  17. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.01
    
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
  18. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.01
    
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
  19. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.01
    
    Source
    Information processing and management. 38(2002) no.1, S.79-89
  20. Savic, D.: Automatic classification of office documents : review of available methods and techniques (1995) 0.01
    
    Source
    Records management quarterly. 29(1995) no.4, S.3-18

Languages

  • e 108
  • d 6
  • chi 1

Types

  • a 107
  • el 7
  • m 2
  • s 1
  • x 1