Search (35 results, page 1 of 2)

  • × theme_ss:"Automatisches Indexieren"
  • × year_i:[1990 TO 2000}
  1. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.13
    0.1318309 = product of:
      0.19774634 = sum of:
        0.067437425 = weight(_text_:wide in 2673) [ClassicSimilarity], result of:
          0.067437425 = score(doc=2673,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.063368805 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
          0.063368805 = score(doc=2673,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43716836 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.04587784 = weight(_text_:computer in 2673) [ClassicSimilarity], result of:
          0.04587784 = score(doc=2673,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.04212451 = score(doc=2673,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
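     The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) output: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = sqrt(termFreq) × idf × fieldNorm, idf = 1 + ln(maxDocs / (docFreq + 1)), and the partial sums are scaled by the coord factors. A minimal Python sketch (the function names are ours; only the numbers come from the explanation) reproduces the 0.1318309 shown for doc 2673:

     import math

     def idf(doc_freq: int, max_docs: int) -> float:
         # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
         return 1.0 + math.log(max_docs / (doc_freq + 1))

     def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
         # one term's contribution: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
         term_idf = idf(doc_freq, max_docs)
         query_weight = term_idf * query_norm
         field_weight = math.sqrt(freq) * term_idf * field_norm
         return query_weight * field_weight

     query_norm, field_norm = 0.044416238, 0.0546875
     w_wide     = term_weight(2.0, 1430, 44218, query_norm, field_norm)        # ~0.0674374
     w_web      = term_weight(6.0, 4597, 44218, query_norm, field_norm)        # ~0.0633688
     w_computer = term_weight(2.0, 3109, 44218, query_norm, field_norm)        # ~0.0458778
     w_22       = term_weight(2.0, 3622, 44218, query_norm, field_norm) * 0.5  # inner coord(1/2)

     print((w_wide + w_web + w_computer + w_22) * 4 / 6)   # outer coord(4/6) -> ~0.1318309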
    
    Abstract
     Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
    Source
    Computer networks and ISDN systems. 29(1997) no.8, S.1147-1156
  2. Koch, T.: Experiments with automatic classification of WAIS databases and indexing of WWW : some results from the Nordic WAIS/WWW project (1994) 0.04
    0.043985642 = product of:
      0.13195692 = sum of:
        0.09537092 = weight(_text_:wide in 7209) [ClassicSimilarity], result of:
          0.09537092 = score(doc=7209,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.4846142 = fieldWeight in 7209, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.036585998 = weight(_text_:web in 7209) [ClassicSimilarity], result of:
          0.036585998 = score(doc=7209,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 7209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
      0.33333334 = coord(2/6)
    
    Abstract
    The Nordic WAIS/WWW project sponsored by NORDINFO is a joint project between Lund University Library and the National Technological Library of Denmark. It aims to improve the existing networked information discovery and retrieval tools Wide Area Information System (WAIS) and World Wide Web (WWW), and to move towards unifying WWW and WAIS. Details current results focusing on the WAIS side of the project. Describes research into automatic indexing and classification of WAIS sources, development of an orientation tool for WAIS, and development of a WAIS index of WWW resources
  3. Krüger, C.: Evaluation des WWW-Suchdienstes GERHARD unter besonderer Beachtung automatischer Indexierung (1999) 0.04
    0.03502651 = product of:
      0.105079524 = sum of:
        0.06812209 = weight(_text_:wide in 1777) [ClassicSimilarity], result of:
          0.06812209 = score(doc=1777,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.34615302 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.036957435 = weight(_text_:web in 1777) [ClassicSimilarity], result of:
          0.036957435 = score(doc=1777,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
      0.33333334 = coord(2/6)
    
    Abstract
     This thesis contains a description and evaluation of the WWW search service GERHARD (German Harvest Automated Retrieval and Directory). GERHARD is a search and navigation system for the German World Wide Web which collects exclusively scientifically relevant documents and classifies them automatically, on the basis of computational-linguistic and statistical methods, with the help of a library classification scheme. With the DFG project GERHARD, an attempt has been made to develop an alternative to conventional methods of Internet resource description by means of a World Wide Web service based on an automatic classification procedure. GERHARD is the only directory of Internet resources in the German-speaking area whose creation and updating is carried out fully automatically (i.e. by machine). GERHARD restricts itself to documents on scientific WWW servers. The basic idea was to replace cost-intensive intellectual indexing and classification of Internet pages with computational-linguistic and statistical methods, so as to map the indexed Internet resources automatically onto the vocabulary of a library classification scheme. GERHARD stands for German Harvest Automated Retrieval and Directory. The WWW address (URL) of GERHARD is: http://www.gerhard.de. Within the scope of this diploma thesis, the service is described with particular emphasis on the underlying indexing and classification system, and the effectiveness of GERHARD is then tested by means of a small retrieval test.
  4. Hodge, G.M.: Computer-assisted database indexing : the state-of-the-art (1994) 0.02
    0.02442523 = product of:
      0.14655139 = sum of:
        0.14655139 = weight(_text_:computer in 7936) [ClassicSimilarity], result of:
          0.14655139 = score(doc=7936,freq=10.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.90285724 = fieldWeight in 7936, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=7936)
      0.16666667 = coord(1/6)
    
    Abstract
     Discusses the state of the art of computer indexing, defines indexing and computer assistance, and describes the reasons for renewed interest. Identifies the types of computer support in use, using selected operational systems; describes the integration of various computer supports in one database production system; and speculates on the future
  5. RIAO 91 : Computer aided information retrieval. Conference, Barcelona, 2.-4.5.1991 (1991) 0.02
    0.021846592 = product of:
      0.13107955 = sum of:
        0.13107955 = weight(_text_:computer in 4651) [ClassicSimilarity], result of:
          0.13107955 = score(doc=4651,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.8075401 = fieldWeight in 4651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.15625 = fieldNorm(doc=4651)
      0.16666667 = coord(1/6)
    
  6. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.02
    0.019813985 = product of:
      0.059441954 = sum of:
        0.03853567 = weight(_text_:wide in 2596) [ClassicSimilarity], result of:
          0.03853567 = score(doc=2596,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.020906283 = weight(_text_:web in 2596) [ClassicSimilarity], result of:
          0.020906283 = score(doc=2596,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.14422815 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
      0.33333334 = coord(2/6)
    
    Content
     Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
     Session One: Updates and a twelve month perspective
       Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
       Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
     Session Two: Today's search engines and beyond
       Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
       Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
       Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
       Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
       Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
       Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
     Session Three: Panel discussion: Human v automated categorization and editing
       Ev Brenner (New York, NY), Chairman; James Callan (University of Massachusetts, MA); Marc Krellenstein (Northern Light Technology, Cambridge, MA); Dan Miller (Ask Jeeves, Berkeley, CA)
     Session Four: Updates and a twelve month perspective
       Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
       Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
     Session Five: Search engines now and beyond
       Intelligent agents - John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
       Text summarization - Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
       Cross language searching - Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
       Video search and retrieval - Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
       Speech recognition - Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
       Visualization - James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
       Text mining - David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  7. Silvester, J.P.: Computer supported indexing : a history and evaluation of NASA's MAI system (1998) 0.02
    0.015292614 = product of:
      0.09175568 = sum of:
        0.09175568 = weight(_text_:computer in 1302) [ClassicSimilarity], result of:
          0.09175568 = score(doc=1302,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.56527805 = fieldWeight in 1302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.109375 = fieldNorm(doc=1302)
      0.16666667 = coord(1/6)
    
  8. Alexander, M.: Retrieving digital data with fuzzy matching (1996) 0.01
    0.012845224 = product of:
      0.07707134 = sum of:
        0.07707134 = weight(_text_:wide in 6961) [ClassicSimilarity], result of:
          0.07707134 = score(doc=6961,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 6961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=6961)
      0.16666667 = coord(1/6)
    
    Abstract
    Briefly describes the Excalibur EFS system which makes use of adaptive pattern recognition technology as an aid to automatic indexing and how it is being tested at the British Library for the indexing and retrieval of scanned images from the library's holdings. Notes how Excalibur EFS can support a wide degree of fuzzy searching, compensate for the errors produced by OCR conversion of scanned images, reduce the costs of indexing, and require far less storage space than more traditional indexes
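     Excalibur EFS itself relies on adaptive pattern recognition; as a generic illustration of the kind of fuzzy matching that tolerates OCR errors, a simple edit-distance lookup can be sketched in Python (the function names and the two-edit threshold are assumptions for illustration, not part of Excalibur EFS):

     def edit_distance(a: str, b: str) -> int:
         # Levenshtein distance: minimum number of single-character edits
         prev = list(range(len(b) + 1))
         for i, ca in enumerate(a, 1):
             cur = [i]
             for j, cb in enumerate(b, 1):
                 cur.append(min(prev[j] + 1,                  # deletion
                                cur[j - 1] + 1,               # insertion
                                prev[j - 1] + (ca != cb)))    # substitution
             prev = cur
         return prev[-1]

     def fuzzy_lookup(query, index_terms, max_edits=2):
         # return indexed terms within max_edits of the (possibly OCR-garbled) query
         return [t for t in index_terms if edit_distance(query.lower(), t.lower()) <= max_edits]

     print(fuzzy_lookup("indexirng", ["indexing", "index", "retrieval"]))   # ['indexing']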
  9. Jones, R.L.: Automatic document content analysis : the AIDA project (1992) 0.01
    0.010923296 = product of:
      0.06553978 = sum of:
        0.06553978 = weight(_text_:computer in 2607) [ClassicSimilarity], result of:
          0.06553978 = score(doc=2607,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 2607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=2607)
      0.16666667 = coord(1/6)
    
    Abstract
    The AIDA project is a research program being carried out by Computer Power in Canberra, Australia, in collaboration with the Australian Parliament. Its primary objective is to develop practical methods for carrying out document content analysis with minimal human intervention. The different techniques employed by AIDA to achieve its results are described
  10. Pritchard, J.: Information retrieval : smarter indexing (1991) 0.01
    0.010923296 = product of:
      0.06553978 = sum of:
        0.06553978 = weight(_text_:computer in 4890) [ClassicSimilarity], result of:
          0.06553978 = score(doc=4890,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 4890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=4890)
      0.16666667 = coord(1/6)
    
    Abstract
     Describes full text retrieval (FTR), which indexes every occurrence of every word except defined 'stop' words. This permits much more sophisticated searching than with keyword indexing. Also discusses document image processing (DIP). Lists suppliers and users of the software and describes the experiences of ESOO's Planning Division with Computer Intertrade Ltd. (CIL) ImagePro DIP and their operational practices
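     As a minimal sketch of the full text indexing described above (every occurrence of every non-stop word is searchable), an inverted index with an illustrative stop list might look as follows; the stop list and document texts are assumptions for illustration, not part of the CIL software:

     import re
     from collections import defaultdict

     STOP_WORDS = {"the", "a", "an", "of", "and", "to", "with"}   # illustrative stop list

     def build_inverted_index(docs):
         # map every non-stop word to (doc_id, position) pairs, one entry per occurrence
         index = defaultdict(list)
         for doc_id, text in docs.items():
             for pos, word in enumerate(re.findall(r"[a-z0-9']+", text.lower())):
                 if word not in STOP_WORDS:
                     index[word].append((doc_id, pos))
         return index

     docs = {
         1: "Full text retrieval indexes every occurrence of every word",
         2: "Keyword indexing assigns a handful of descriptors to the document",
     }
     print(build_inverted_index(docs)["indexes"])   # [(1, 3)] - every occurrence is searchable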
  11. Smart, G.: Using language analysis to manage information (1993) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 4423) [ClassicSimilarity], result of:
          0.05243182 = score(doc=4423,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 4423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=4423)
      0.16666667 = coord(1/6)
    
    Abstract
     The ESPRIT project SIMPR developed software to analyse documents and generate indexes for them. Of immediate application as a document indexing and classification system, this also offers a technology for information modelling that has broader implications, supporting many new uses for information management software. The project was based on the assumption that information can only be managed successfully by computer systems that can view the information contained in a document through the language in which the document is written, and that systems need to be sufficiently flexible to respond to the changing requirements of document use
  12. Haas, S.; He, S.: Toward the automatic identification of sublanguage vocabulary (1993) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 4891) [ClassicSimilarity], result of:
          0.05243182 = score(doc=4891,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 4891, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=4891)
      0.16666667 = coord(1/6)
    
    Abstract
     Describes a method developed for automatic identification of sublanguage vocabulary words as they occur in abstracts. Describes the sublanguage vocabulary identification procedures, using abstracts from computer science and library and information science as sublanguage sources. Evaluates the results using three criteria. Discusses the practical and theoretical significance of this research and plans for further experiments
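     The procedure itself is not spelled out in the abstract; as a loose, purely illustrative stand-in (not Haas and He's published method), a frequency-ratio heuristic can flag words that are markedly more frequent in a domain corpus than in a general one:

     import re
     from collections import Counter

     def sublanguage_terms(domain_text, general_text, ratio=3.0):
         # flag words whose relative frequency in the domain corpus exceeds
         # `ratio` times their relative frequency in the general corpus
         def rel_freq(text):
             words = re.findall(r"[a-z]+", text.lower())
             total = len(words) or 1
             return {w: c / total for w, c in Counter(words).items()}
         dom, gen = rel_freq(domain_text), rel_freq(general_text)
         return sorted(w for w, f in dom.items() if f > ratio * gen.get(w, 1e-6))

     print(sublanguage_terms(
         "indexing thesaurus descriptors indexing retrieval thesaurus",
         "the weather was fine and the retrieval of the ball was quick"))
     # ['descriptors', 'indexing', 'thesaurus'] for these toy inputs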
  13. Polity, Y.: Vers une ergonomie linguistique (1994) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 36) [ClassicSimilarity], result of:
          0.05243182 = score(doc=36,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 36, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=36)
      0.16666667 = coord(1/6)
    
    Abstract
     Analyzes a special type of man-machine interaction, that of searching an information system with natural language. A model for full text processing for information retrieval was proposed that considered the system's users and how they employ information. Describes how INIST (the National Institute for Scientific and Technical Information) is developing computer-assisted indexing as an aid to improving relevance when retrieving information from bibliographic data banks
  14. Alexander, M.: Automatic indexing of document images using Excalibur EFS (1995) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 1911) [ClassicSimilarity], result of:
          0.05243182 = score(doc=1911,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 1911, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=1911)
      0.16666667 = coord(1/6)
    
    Abstract
     Discusses research into the application of adaptive pattern recognition technology to enable effective retrieval from scanned document images. Describes the application at the British Library of the Excalibur EFS software, which uses adaptive pattern recognition technology to provide access to digital information in its native forms, fuzzy searching and retrieval, and automatic indexing capabilities. It was used to make specialist printed catalogues and indexes accessible on computer via content-based indexes
  15. Mars, N.J.I.: ¬The management of scientific information, or, how to cope with the flood (1996) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 7414) [ClassicSimilarity], result of:
          0.05243182 = score(doc=7414,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 7414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=7414)
      0.16666667 = coord(1/6)
    
    Abstract
    Research in the Knowledge-Based Systems Group of the University of Twente in the Netherlands is aimed at reducing information overload. One approach is to support indexing by the traditional method of assigning content descriptions to find documents. A second way is to use a computer program to determine what the document says without descriptors. Discusses automated indexing and direct access to information
  16. Alexander, M.: Retrieving digital data with fuzzy matching (1997) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 151) [ClassicSimilarity], result of:
          0.05243182 = score(doc=151,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=151)
      0.16666667 = coord(1/6)
    
    Abstract
     In 1993 the British Library established a programme of activities entitled Initiatives for Access (IFA) to identify and develop computer applications based on the new technologies emerging in the areas of digital and network services. Discusses the problem of the effective retrieval of digital data after its capture, focusing on the product Excalibur EFS, which looks at the way information is stored at its fundamental level and identifies patterns in numbers. Looks at the benefits of Excalibur and outlines other experiments in progress as part of the IFA programme
  17. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996) 0.01
    0.008710952 = product of:
      0.052265707 = sum of:
        0.052265707 = weight(_text_:web in 2533) [ClassicSimilarity], result of:
          0.052265707 = score(doc=2533,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 2533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2533)
      0.16666667 = coord(1/6)
    
  18. Shafer, K.: Scorpion Project explores using Dewey to organize the Web (1996) 0.01
    0.008623403 = product of:
      0.05174041 = sum of:
        0.05174041 = weight(_text_:web in 6750) [ClassicSimilarity], result of:
          0.05174041 = score(doc=6750,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 6750, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6750)
      0.16666667 = coord(1/6)
    
    Abstract
     As the amount of accessible information on the WWW increases, so will the cost of accessing it, even if search services remain free, due to the increasing amount of time users will have to spend to find needed items. Considers what the seemingly unorganized Web and the organized world of libraries can offer each other. The OCLC Scorpion Project is attempting to combine indexing and cataloguing, specifically focusing on building tools for automatic subject recognition using the techniques of library science and information retrieval. If subject headings or concept domains can be automatically assigned to electronic items, improved filtering tools for searching can be produced
  19. Wacholder, N.; Byrd, R.J.: Retrieving information from full text using linguistic knowledge (1994) 0.01
    0.008245593 = product of:
      0.049473554 = sum of:
        0.049473554 = product of:
          0.09894711 = sum of:
            0.09894711 = weight(_text_:programs in 8524) [ClassicSimilarity], result of:
              0.09894711 = score(doc=8524,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.38428974 = fieldWeight in 8524, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.046875 = fieldNorm(doc=8524)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
     Examines how techniques in the field of natural language processing can be applied to the analysis of text in information retrieval. State-of-the-art text searching programs cannot distinguish, for example, between occurrences of AIDS the sickness and aids as tools, or between library school and school, nor equate such terms as online and on-line, which are variants of the same form. To make these distinctions, systems must incorporate knowledge about the meaning of words in context. Research in natural language processing has concentrated on the automatic 'understanding' of language: how to analyze the grammatical structure and meaning of text. Although many aspects of this research remain experimental, describes how these techniques are used to recognize spelling variants, names, acronyms, and abbreviations
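     The distinctions drawn here (AIDS vs. aids, online vs. on-line) amount to term normalization that is sensitive to word form; a toy normalizer in that spirit (the acronym list and function are illustrative assumptions, not the authors' system) might conflate hyphenation variants while keeping known acronyms apart from their lower-case homographs:

     KNOWN_ACRONYMS = {"AIDS", "NASA", "OCR"}    # assumed list for illustration

     def normalize(token: str) -> str:
         # conflate hyphenation/spelling variants, but keep acronyms distinct
         if token in KNOWN_ACRONYMS:
             return token                          # 'AIDS' stays distinct from 'aids'
         return token.replace("-", "").lower()     # 'on-line' and 'online' collapse

     print(normalize("on-line") == normalize("online"))   # True
     print(normalize("AIDS") == normalize("aids"))        # False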
  20. Renouf, A.: Sticking to the text : a corpus linguist's view of language (1993) 0.01
    0.007646307 = product of:
      0.04587784 = sum of:
        0.04587784 = weight(_text_:computer in 2314) [ClassicSimilarity], result of:
          0.04587784 = score(doc=2314,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 2314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2314)
      0.16666667 = coord(1/6)
    
    Abstract
     Corpus linguistics is the study of large, computer-held bodies of text. Some corpus linguists are concerned with language description for its own sake. On the corpus-linguistic continuum, the study of raw ASCII text is situated at one end, and the study of heavily pre-coded text at the other. Discusses the use of word frequency to identify changes in the lexicon, of word repetition and word positioning in automatic abstracting, and of word clusters in automatic text retrieval. Compares the machine extract with manual abstracts. Abstractors and indexers may find themselves taking the original wording of the text more into account as the focus moves towards the electronic medium and away from the hard copy
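     Word frequency and word repetition as cues for automatic abstracting can be illustrated with a minimal frequency-based sentence extractor (a generic sketch, not Renouf's procedure; the stop list is an assumption):

     import re
     from collections import Counter

     STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "its"}

     def extract_summary(text, n_sentences=2):
         # score each sentence by the average corpus frequency of its content words
         sentences = re.split(r"(?<=[.!?])\s+", text.strip())
         words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
         freq = Counter(words)
         def score(s):
             toks = [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOP]
             return sum(freq[w] for w in toks) / (len(toks) or 1)
         top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
         return [s for s in sentences if s in top]   # keep original sentence order

     print(extract_summary("Corpus linguistics studies large bodies of text. "
                           "Word frequency reveals changes in the lexicon. "
                           "Frequent words repeated across a text mark its core topics."))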