Search (5 results, page 1 of 1)

  • theme_ss:"Automatisches Klassifizieren"
  • year_i:[1990 TO 2000}
  1. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20, 1999, Boston, Mass. (1999) 0.01
    0.009435602 = product of:
      0.03774241 = sum of:
        0.03774241 = product of:
          0.07548482 = sum of:
            0.07548482 = weight(_text_:intelligent in 2596) [ClassicSimilarity], result of:
              0.07548482 = score(doc=2596,freq=4.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.35206887 = fieldWeight in 2596, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2596)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective
      Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
      Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
      Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
      Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
      Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
      Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
      Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
      Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing
      Ev Brenner (New York, NY), Chairman; James Callan (University of Massachusetts, MA); Marc Krellenstein (Northern Light Technology, Cambridge, MA); Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
      Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
      Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
      Intelligent Agents: John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
      Text summarization: Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
      Cross language searching: Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
      Video search and retrieval: Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
      Speech recognition: Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
      Visualization: James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
      Text mining: David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
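    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: the matched term's weight is queryWeight (idf × queryNorm) multiplied by fieldWeight (tf × idf × fieldNorm), then scaled down by the coord factors for unmatched query clauses. A minimal sketch that re-computes result 1's score from the factors listed in the tree (the variable names are illustrative, not part of the search output):

      import math

      # Factors copied from the explain tree for weight(_text_:intelligent in 2596)
      tf = math.sqrt(4.0)        # 2.0 = tf(freq=4.0)
      idf = 5.633102             # idf(docFreq=429, maxDocs=44218)
      query_norm = 0.038061365   # queryNorm
      field_norm = 0.03125       # fieldNorm(doc=2596)

      query_weight = idf * query_norm           # ~ 0.21440355
      field_weight = tf * idf * field_norm      # ~ 0.35206887
      term_score = query_weight * field_weight  # ~ 0.07548482

      # coord(1/2) and coord(1/4): only 1 of 2, resp. 1 of 4, query clauses matched
      final_score = term_score * 0.5 * 0.25
      print(final_score)                        # ~ 0.009435602, the listed document score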
  2. Dubin, D.: Dimensions and discriminability (1998) 0.00
    0.004512191 = product of:
      0.018048763 = sum of:
        0.018048763 = product of:
          0.036097527 = sum of:
            0.036097527 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.036097527 = score(doc=2338,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.1997 19:16:05
  3. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.00
    0.004512191 = product of:
      0.018048763 = sum of:
        0.018048763 = product of:
          0.036097527 = sum of:
            0.036097527 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.036097527 = score(doc=1673,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:08:06
  4. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.00
    0.0044657104 = product of:
      0.017862841 = sum of:
        0.017862841 = product of:
          0.053588524 = sum of:
            0.053588524 = weight(_text_:k in 3065) [ClassicSimilarity], result of:
              0.053588524 = score(doc=3065,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.39440846 = fieldWeight in 3065, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3065)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
  5. Yang, Y.; Liu, X.: A re-examination of text categorization methods (1999) 0.00
    0.0031259973 = product of:
      0.012503989 = sum of:
        0.012503989 = product of:
          0.037511967 = sum of:
            0.037511967 = weight(_text_:k in 3386) [ClassicSimilarity], result of:
              0.037511967 = score(doc=3386,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.27608594 = fieldWeight in 3386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3386)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This paper reports a controlled study with statistical significance tests on five text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classifier, a neural network (NNet) approach, the Linear Least-squares Fit (LLSF) mapping and a Naive Bayes (NB) classifier. We focus on the robustness of these methods in dealing with a skewed category distribution, and on their performance as a function of the training-set category frequency. Our results show that SVM, kNN and LLSF significantly outperform NNet and NB when the number of positive training instances per category is small (fewer than ten), and that all the methods perform comparably when the categories are sufficiently common (over 300 instances).
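    The abstract above names five classifier families. Purely as an illustration of two of them (kNN and Naive Bayes), the toy sketch below trains both on an invented four-document corpus using scikit-learn; this is an assumption for demonstration only, not the authors' experimental setup or data.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      # Invented toy corpus and labels; the paper evaluated on large benchmark collections.
      docs = [
          "automatic classification of web resources",
          "dewey decimal classification for library catalogues",
          "search engines and intelligent agents on the web",
          "crawling and ranking pages on the world wide web",
      ]
      labels = ["classification", "classification", "search", "search"]

      for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=3)),
                        ("NB", MultinomialNB())]:
          model = make_pipeline(TfidfVectorizer(), clf)  # TF-IDF features + classifier
          model.fit(docs, labels)
          print(name, model.predict(["automatic classification of library resources"]))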