Search (77 results, page 1 of 4)

  • theme_ss:"Automatisches Indexieren"
  1. Stankovic, R. et al.: Indexing of textual databases based on lexical resources : a case study for Serbian (2016) 0.05
    0.045435637 = product of:
      0.13630691 = sum of:
        0.13630691 = sum of:
          0.06764257 = weight(_text_:search in 2759) [ClassicSimilarity], result of:
            0.06764257 = score(doc=2759,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.3840117 = fieldWeight in 2759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.078125 = fieldNorm(doc=2759)
          0.06866435 = weight(_text_:22 in 2759) [ClassicSimilarity], result of:
            0.06866435 = score(doc=2759,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.38690117 = fieldWeight in 2759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2759)
      0.33333334 = coord(1/3)
    
    Date
    1. 2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al
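    The value shown after each hit, and the indented tree beneath it, are Lucene ClassicSimilarity explain output (TF-IDF weights combined with query normalization and a coordination factor). A minimal sketch, assuming only the constants printed in the first breakdown above, reproduces the listed score; it illustrates the formula and is not the search site's own code:

    import math

    # Minimal sketch reproducing the explain output for the first hit (doc 2759).
    # The constants (idf, queryNorm, fieldNorm, coord) are copied from the listing;
    # the formulas are Lucene ClassicSimilarity's standard TF-IDF definitions.
    def clause_score(freq, idf, query_norm, field_norm):
        tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
        query_weight = idf * query_norm       # queryWeight
        field_weight = tf * idf * field_norm  # fieldWeight
        return query_weight * field_weight

    query_norm = 0.050679956
    s_search = clause_score(2.0, 3.475677,  query_norm, 0.078125)   # ~0.06764257
    s_22     = clause_score(2.0, 3.5018296, query_norm, 0.078125)   # ~0.06866435

    total = (s_search + s_22) * (1.0 / 3.0)   # coord(1/3): 1 of 3 query clauses matched
    print(round(total, 9))                    # ~0.045435637, the score listed above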
  2. Nicoletti, M.: Automatische Indexierung (2001) 0.04
    0.0436406 = product of:
      0.1309218 = sum of:
        0.1309218 = weight(_text_:book in 4326) [ClassicSimilarity], result of:
          0.1309218 = score(doc=4326,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 4326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=4326)
      0.33333334 = coord(1/3)
    
    Content
    Contents: 1. Task - 2. Identification of multiword groups - 2.1 Definition - 3. Marking of multiword groups - 4. Base forms - 5. Term and document frequency - term weighting - 6. The threshold as a control instrument - 7. Inverted index. See: http://www.grin.com/de/e-book/104966/automatische-indexierung.
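    Sections 5-7 of this outline describe a classic weighting pipeline: term and document frequency yield a term weight, a threshold acts as the control instrument deciding which terms are indexed, and the surviving terms go into an inverted index. A toy sketch of that idea (the documents and the threshold value are invented for the example; this is not Nicoletti's implementation):

    import math
    from collections import defaultdict

    # Toy illustration of term/document frequency, tf-idf weighting, a threshold
    # as control instrument, and an inverted index (sections 5-7 of the outline).
    docs = {
        "d1": ["automatische", "indexierung", "von", "texten"],
        "d2": ["indexierung", "mit", "mehrwortgruppen"],
        "d3": ["texten", "und", "mehrwortgruppen", "indexierung"],
    }
    threshold = 0.1   # terms weighted below this value are not indexed

    df = defaultdict(int)                  # document frequency per term
    for terms in docs.values():
        for t in set(terms):
            df[t] += 1

    inverted_index = defaultdict(list)     # term -> [(document, weight), ...]
    for doc_id, terms in docs.items():
        for t in set(terms):
            tf = terms.count(t) / len(terms)
            idf = math.log(len(docs) / df[t]) + 1
            weight = tf * idf
            if weight >= threshold:        # the threshold as control instrument
                inverted_index[t].append((doc_id, round(weight, 3)))

    print(dict(inverted_index))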
  3. Koryconski, C.; Newell, A.F.: Natural-language processing and automatic indexing (1990) 0.04
    0.04114475 = product of:
      0.123434246 = sum of:
        0.123434246 = weight(_text_:book in 2313) [ClassicSimilarity], result of:
          0.123434246 = score(doc=2313,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.55176574 = fieldWeight in 2313, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=2313)
      0.33333334 = coord(1/3)
    
    Abstract
    The task of producing satisfactory indexes by automatic means has been tackled on two fronts: by statistical analysis of text and by attempting content analysis of the text in much the same way as a human indexer does. Though statistical techniques have a lot to offer for free-text database systems, neither method has had much success with back-of-the-book indexing. This review examines some problems associated with the application of natural-language processing techniques to book texts. - See also the reply by K.P. Jones.
  4. Asula, M.; Makke, J.; Freienthal, L.; Kuulmets, H.-A.; Sirel, R.: Kratt: developing an automatic subject indexing tool for the National Library of Estonia : how to transfer metadata information among work cluster members (2021) 0.04
    0.03779387 = product of:
      0.11338161 = sum of:
        0.11338161 = weight(_text_:book in 723) [ClassicSimilarity], result of:
          0.11338161 = score(doc=723,freq=6.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.50682926 = fieldWeight in 723, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=723)
      0.33333334 = coord(1/3)
    
    Abstract
    Manual subject indexing in libraries is a time-consuming and costly process, and the quality of the assigned subjects is affected by the cataloger's knowledge of the specific topics covered in the book. To address these issues, we exploited the opportunities arising from artificial intelligence to develop Kratt: a prototype of an automatic subject indexing tool. Kratt is able to subject index a book, regardless of its extent and genre, with a set of keywords present in the Estonian Subject Thesaurus. It takes Kratt approximately one minute to subject index a book, outperforming human indexers by a factor of 10-15. Although the resulting keywords were not considered satisfactory by the catalogers, the ratings of a small sample of regular library users showed more promise. We also argue that the results can be enhanced by using a bigger corpus for training the model and applying more careful preprocessing techniques.
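    The abstract does not describe Kratt's model in detail; purely as a hypothetical illustration of the general task it describes (assigning a book keywords drawn from a controlled vocabulary), one could match the book's tokens against thesaurus labels and rank them by frequency:

    from collections import Counter

    # Hypothetical sketch of the general task described above: suggest subject
    # keywords for a book from a controlled vocabulary (a tiny stand-in for the
    # Estonian Subject Thesaurus). This is not Kratt's actual model.
    thesaurus = {"indexing", "libraries", "metadata", "machine learning"}

    def suggest_subjects(text, k=3):
        tokens = [w.strip(".,;:").lower() for w in text.split()]
        counts = Counter(t for t in tokens if t in thesaurus)
        return [term for term, _ in counts.most_common(k)]

    book_text = "Indexing practices in libraries: metadata workflows and indexing tools"
    print(suggest_subjects(book_text))   # ['indexing', 'libraries', 'metadata']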
  5. Moulaison-Sandy, H.; Adkins, D.; Bossaller, J.; Cho, H.: ¬An automated approach to describing fiction : a methodology to use book reviews to identify affect (2021) 0.04
    0.036001656 = product of:
      0.108004965 = sum of:
        0.108004965 = weight(_text_:book in 710) [ClassicSimilarity], result of:
          0.108004965 = score(doc=710,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.48279503 = fieldWeight in 710, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=710)
      0.33333334 = coord(1/3)
    
    Abstract
    Subject headings and genre terms are notoriously difficult to apply, yet are important for fiction. The current project functions as a proof of concept, using a text-mining methodology to identify affective information (emotion and tone) about fiction titles from professional book reviews as a potential first step in automating the subject analysis process. Findings are presented and discussed, comparing results to the range of aboutness and isness information in library cataloging records. The methodology is likewise presented, and ways in which future work might expand on the current project to enhance catalog records through text mining are explored.
  6. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.03
    0.03180495 = product of:
      0.09541484 = sum of:
        0.09541484 = sum of:
          0.0473498 = weight(_text_:search in 5001) [ClassicSimilarity], result of:
            0.0473498 = score(doc=5001,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.2688082 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.04806504 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.04806504 = score(doc=5001,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.33333334 = coord(1/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
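    As a toy illustration of the recall measure behind the "fewer than 50%" figure reported in the study above (the profile keywords and titles are invented): a relevant item is retrieved only if its title happens to contain one of the profile's keywords.

    # Toy illustration of keyword-in-title recall (invented profile and titles).
    profile_keywords = {"indexing", "retrieval"}
    relevant_titles = [
        "Automatic indexing of scientific text",
        "Subject access in online catalogs",       # relevant, but no profile keyword in title
        "Evaluation of retrieval effectiveness",
        "Thesauri and controlled vocabularies",    # relevant, but no profile keyword in title
    ]

    retrieved = [t for t in relevant_titles
                 if profile_keywords & {w.lower() for w in t.split()}]
    recall = len(retrieved) / len(relevant_titles)
    print(f"recall = {recall:.0%}")   # 50% in this toy example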
  7. Salton, G.: SMART System: 1961-1976 (2009) 0.03
    0.029093731 = product of:
      0.08728119 = sum of:
        0.08728119 = weight(_text_:book in 3879) [ClassicSimilarity], result of:
          0.08728119 = score(doc=3879,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.39015728 = fieldWeight in 3879, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=3879)
      0.33333334 = coord(1/3)
    
    Footnote
    Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  8. Gödert, W.; Liebig, M.: Maschinelle Indexierung auf dem Prüfstand : Ergebnisse eines Retrievaltests zum MILOS II Projekt (1997) 0.03
    0.025457015 = product of:
      0.076371044 = sum of:
        0.076371044 = weight(_text_:book in 1174) [ClassicSimilarity], result of:
          0.076371044 = score(doc=1174,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 1174, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1174)
      0.33333334 = coord(1/3)
    
    Abstract
    The test ran from Nov 95 to Aug 96 at the Cologne Fachhochschule für Bibliothekswesen (College of Librarianship). The test basis was a database of 190,000 book titles published between 1990 and 1995. MILOS II mechanized indexing methods proved helpful in avoiding or reducing the number of unsatisfied or zero-result retrieval searches. Retrieval based on mechanized indexing is three times more successful than retrieval based on title keyword data. MILOS II also used a standardized semantic vocabulary. Mechanized indexing demands high-quality software and output data.
  9. Wolfe, E.W.: A case study in automated metadata enhancement : Natural Language Processing in the humanities (2019) 0.03
    0.025457015 = product of:
      0.076371044 = sum of:
        0.076371044 = weight(_text_:book in 5236) [ClassicSimilarity], result of:
          0.076371044 = score(doc=5236,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 5236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5236)
      0.33333334 = coord(1/3)
    
    Abstract
    The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser known writers and a goal of expanding research in this field. Using a custom metadata schema with an emphasis on race-related elements, each novel is analyzed for a variety of elements such as literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
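    One of the processes listed above, harvesting data from Wikidata, can be done against Wikidata's public SPARQL endpoint. A minimal sketch follows; the author, the property (P569, date of birth) and the User-Agent string are illustrative choices, not the project's actual schema:

    import requests

    # Minimal sketch of harvesting author data from Wikidata's public SPARQL endpoint.
    ENDPOINT = "https://query.wikidata.org/sparql"
    query = """
    SELECT ?author ?authorLabel ?birth WHERE {
      ?author rdfs:label "Zora Neale Hurston"@en ;
              wdt:P569 ?birth .                      # P569 = date of birth
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """

    resp = requests.get(ENDPOINT, params={"query": query, "format": "json"},
                        headers={"User-Agent": "metadata-enrichment-sketch/0.1"})
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["authorLabel"]["value"], row["birth"]["value"])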
  10. Milstead, J.L.: Thesauri in a full-text world (1998) 0.02
    0.022717819 = product of:
      0.068153456 = sum of:
        0.068153456 = sum of:
          0.033821285 = weight(_text_:search in 2337) [ClassicSimilarity], result of:
            0.033821285 = score(doc=2337,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.19200584 = fieldWeight in 2337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2337)
          0.034332175 = weight(_text_:22 in 2337) [ClassicSimilarity], result of:
            0.034332175 = score(doc=2337,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.19345059 = fieldWeight in 2337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2337)
      0.33333334 = coord(1/3)
    
    Abstract
    Despite early claims to the contrary, thesauri continue to find use as access tools for information in the full-text environment. Their mode of use is changing, but this change actually represents an expansion rather than a contradiction of their utility. Thesauri and similar vocabulary tools can complement full-text access by aiding users in focusing their searches, by supplementing the linguistic analysis of the text search engine, and even by serving as one of the tools used by the linguistic engine for its analysis. While human indexing continues to be used for many databases, the trend is to increase the use of machine aids for this purpose. All machine-aided indexing (MAI) systems rely on thesauri as the basis for term selection. In the 21st century, the balance of effort between human and machine will change at both input and output, but thesauri will continue to play an important role for the foreseeable future.
    Date
    22. 9.1997 19:16:05
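    One of the uses named in the abstract above, letting a thesaurus help users focus their searches, amounts to expanding query terms with preferred terms, synonyms and related terms before they reach the search engine. A minimal sketch with invented thesaurus entries:

    # Minimal sketch of thesaurus-based query expansion (invented thesaurus entries).
    # USE = preferred term, UF = non-preferred synonyms, RT = related terms.
    thesaurus = {
        "car":      {"USE": "automobiles", "RT": ["trucks"]},
        "indexing": {"UF": ["subject analysis"], "RT": ["classification"]},
    }

    def expand(query_terms):
        expanded = set()
        for term in query_terms:
            entry = thesaurus.get(term, {})
            expanded.add(entry.get("USE", term))   # map to the preferred term
            expanded.update(entry.get("UF", []))   # add non-preferred synonyms
            expanded.update(entry.get("RT", []))   # optionally add related terms
        return expanded

    print(expand(["car", "indexing"]))
    # {'automobiles', 'trucks', 'indexing', 'subject analysis', 'classification'}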
  11. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.02
    0.018310493 = product of:
      0.054931477 = sum of:
        0.054931477 = product of:
          0.10986295 = sum of:
            0.10986295 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.10986295 = score(doc=402,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  12. Moreno, J.M.T.: Automatic text summarization (2014) 0.02
    0.018183582 = product of:
      0.05455074 = sum of:
        0.05455074 = weight(_text_:book in 1518) [ClassicSimilarity], result of:
          0.05455074 = score(doc=1518,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.2438483 = fieldWeight in 1518, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1518)
      0.33333334 = coord(1/3)
    
    Abstract
    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS) and provides a recent survey of the state of the art. The book shows the main problems of ADS, the difficulties involved, and the solutions provided by the community. It presents recent advances in ADS as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included to clarify the theoretical concepts. The books currently available in the area of automatic document summarization are not recent. Powerful algorithms have been developed in recent years that include several applications of ADS. The development of recent technology has influenced the development of algorithms and their applications. The massive use of social networks and new forms of technology requires the adaptation of the classical methods of text summarization. This is a new textbook on automatic text summarization, based on teaching materials used in one- or two-semester courses. It presents an extensive state of the art and describes the new systems on the subject. Previous automatic summarization books have been either collections of specialized papers, or authored books with only a chapter or two devoted to the field as a whole. On the other hand, the classic books on the subject are not recent.
  13. Salton, G.; Wong, A.: Generation and search of clustered files (1978) 0.02
    0.01803802 = product of:
      0.054114055 = sum of:
        0.054114055 = product of:
          0.10822811 = sum of:
            0.10822811 = weight(_text_:search in 2411) [ClassicSimilarity], result of:
              0.10822811 = score(doc=2411,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.6144187 = fieldWeight in 2411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.125 = fieldNorm(doc=2411)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  14. Sparck Jones, K.; Tait, J.I.: Automatic search term variant generation (1984) 0.02
    0.01803802 = product of:
      0.054114055 = sum of:
        0.054114055 = product of:
          0.10822811 = sum of:
            0.10822811 = weight(_text_:search in 2918) [ClassicSimilarity], result of:
              0.10822811 = score(doc=2918,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.6144187 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.125 = fieldNorm(doc=2918)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  15. Molto, M.: Improving full text search performance through textual analysis (1993) 0.02
    0.01803802 = product of:
      0.054114055 = sum of:
        0.054114055 = product of:
          0.10822811 = sum of:
            0.10822811 = weight(_text_:search in 5099) [ClassicSimilarity], result of:
              0.10822811 = score(doc=5099,freq=8.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.6144187 = fieldWeight in 5099, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5099)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Explores the potential of text analysis as a tool in full-text search and design improvement. Reports on a trial analysis performed in the domain of family history. The findings offered insights into possible gains and losses in using one search or design strategy versus another, and provided strong evidence of the potential of text analysis. Makes search and design recommendations.
  16. Stegentritt, E.: Evaluationsresultate des mehrsprachigen Suchsystems CANAL/LS (1998) 0.02
    0.01803802 = product of:
      0.054114055 = sum of:
        0.054114055 = product of:
          0.10822811 = sum of:
            0.10822811 = weight(_text_:search in 7216) [ClassicSimilarity], result of:
              0.10822811 = score(doc=7216,freq=8.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.6144187 = fieldWeight in 7216, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7216)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The search system CANAL/LS simplifies the searching of library catalogues by analyzing search questions linguistically and translating them if required. The linguistic analysis reduces the search question words to their basic forms so that they can be compared with the basic forms of title words. Consequently, all variants of words and parts of compounds in German can be found. Presents the results of an analysis of search questions in a catalogue of 45,000 titles in the field of psychology.
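    A minimal illustration of the matching step described above: both query words and title words are reduced to base forms and then compared. The tiny lookup table stands in for CANAL/LS's actual linguistic analysis, which is not reproduced here:

    # Minimal illustration of base-form matching between query words and title words.
    # The lookup table is a stand-in for the real linguistic analysis.
    base_forms = {
        "katalogen": "katalog", "kataloge": "katalog",
        "bibliotheken": "bibliothek",
        "suchfragen": "suchfrage",
    }

    def to_base(word):
        return base_forms.get(word.lower(), word.lower())

    def matches(query, title):
        query_bases = {to_base(w) for w in query.split()}
        title_bases = {to_base(w) for w in title.split()}
        return bool(query_bases & title_bases)

    print(matches("Kataloge und Bibliotheken", "Der Katalog als Instrument"))   # True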
  17. Zhitomirsky-Geffet, M.; Prebor, G.; Bloch, O.: Improving proverb search and retrieval with a generic multidimensional ontology (2017) 0.02
    0.017896544 = product of:
      0.05368963 = sum of:
        0.05368963 = product of:
          0.10737926 = sum of:
            0.10737926 = weight(_text_:search in 3320) [ClassicSimilarity], result of:
              0.10737926 = score(doc=3320,freq=14.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.6095997 = fieldWeight in 3320, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3320)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The goal of this research is to develop a generic ontological model for proverbs that unifies potential classification criteria and various characteristics of proverbs to enable their effective retrieval and large-scale analysis. Because proverbs can be described and indexed by multiple characteristics and criteria, we built a multidimensional ontology suitable for proverb classification. To evaluate the effectiveness of the constructed ontology for improving search and retrieval of proverbs, a large-scale user experiment was arranged with 70 users who were asked to search a proverb repository using ontology-based and free-text search interfaces. The comparative analysis of the results shows that the use of this ontology helped to substantially improve the search recall, precision, user satisfaction, and efficiency and to minimize user effort during the search process. A practical contribution of this work is an automated web-based proverb search and retrieval system which incorporates the proposed ontological scheme and an initial corpus of ontology-based annotated proverbs.
  18. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.02
    0.016259251 = product of:
      0.04877775 = sum of:
        0.04877775 = product of:
          0.0975555 = sum of:
            0.0975555 = weight(_text_:search in 2596) [ClassicSimilarity], result of:
              0.0975555 = score(doc=2596,freq=26.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.55382955 = fieldWeight in 2596, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2596)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This series of meetings originated in Albuquerque, New Mexico, in 1995. The inaugural meeting (part of an ASIDIC series) was subsequently transplanted to Bath, England (1996 and 1997) and then to Boston, Massachusetts (1998 and 1999). The Search Engines Meetings bring together commercial search engine developers, academics and corporate professionals to learn from each other. Infonortics, sponsor of the meetings after 1995 together with Ev Brenner, plans to continue the same success in Boston in 2000.
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective - Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends; Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond - Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques; Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement; Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring; Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines; Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER); Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing - Ev Brenner (New York, NY), Chairman; James Callan (University of Massachusetts, MA); Marc Krellenstein (Northern Light Technology, Cambridge, MA); Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective - Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software; Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond - Intelligent agents: John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents; Text summarization: Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems; Cross language searching: Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval; Video search and retrieval: Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio; Speech recognition: Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents; Visualization: James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?; Text mining: David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  19. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.01602168 = product of:
      0.04806504 = sum of:
        0.04806504 = product of:
          0.09613008 = sum of:
            0.09613008 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09613008 = score(doc=262,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  20. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.02
    0.01602168 = product of:
      0.04806504 = sum of:
        0.04806504 = product of:
          0.09613008 = sum of:
            0.09613008 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.09613008 = score(doc=6265,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23

Languages

  • e 55
  • d 19
  • f 1
  • m 1
  • ru 1

Types

  • a 68
  • el 5
  • x 3
  • m 2
  • s 1
