Search (18 results, page 1 of 1)

  • theme_ss:"Automatisches Indexieren"
  • theme_ss:"Computerlinguistik"
  1. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.04
    0.041494764 = product of:
      0.062242147 = sum of:
        0.04789109 = weight(_text_:semantic in 6671) [ClassicSimilarity], result of:
          0.04789109 = score(doc=6671,freq=4.0), product of:
            0.21061863 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.050655533 = queryNorm
            0.22738299 = fieldWeight in 6671, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6671)
        0.014351056 = product of:
          0.028702112 = sum of:
            0.028702112 = weight(_text_:indexing in 6671) [ClassicSimilarity], result of:
              0.028702112 = score(doc=6671,freq=2.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.14802328 = fieldWeight in 6671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=6671)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
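    The explanation tree above is Lucene ClassicSimilarity (TF-IDF) debug output: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = √freq, and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal Python sketch that reproduces the arithmetic of result 1 from the values shown (a cross-check on the tree, not Lucene's code):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1)).
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm.
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

QUERY_NORM = 0.050655533  # taken from the tree above
MAX_DOCS = 44218

semantic = term_score(4.0, 1879, MAX_DOCS, QUERY_NORM, 0.02734375)
# coord(1/2): only one of two clauses in the sub-query matched.
indexing = term_score(2.0, 2614, MAX_DOCS, QUERY_NORM, 0.02734375) * 0.5

# coord(2/3): two of three top-level query clauses matched.
total = (semantic + indexing) * (2.0 / 3.0)
print(f"{semantic:.8f}  {indexing:.9f}  {total:.9f}")
# 0.04789109  0.014351056  0.041494764
```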
    
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL and N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF and D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. and B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. and R.A. FLYNN: Versioning a full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA and M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG and Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL and R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. and P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistic and logical principles; COOPER, W.S., F.C. GEY and D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH and H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. and J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; AGOSTI, M. and M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. and P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN and L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. and J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO and P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER and J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. and P. CHITSON: Bead: explorations in information visualization; WILLIAMSON, C. and B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system
  2. Chowdhury, G.G.: Natural language processing and information retrieval : pt.1: basic issues; pt.2: major applications (1991) 0.03
    0.032251537 = product of:
      0.09675461 = sum of:
        0.09675461 = weight(_text_:semantic in 3313) [ClassicSimilarity], result of:
          0.09675461 = score(doc=3313,freq=2.0), product of:
            0.21061863 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.050655533 = queryNorm
            0.45938298 = fieldWeight in 3313, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.078125 = fieldNorm(doc=3313)
      0.33333334 = coord(1/3)
    
    Abstract
    Reviews the basic issues and procedures involved in natural language processing of textual material for final use in information retrieval. Covers: natural language processing; natural language understanding; syntactic and semantic analysis; parsing; knowledge bases and knowledge representation
  3. Pirkola, A.: Morphological typology of languages for IR (2001) 0.02
    0.019350924 = product of:
      0.058052767 = sum of:
        0.058052767 = weight(_text_:semantic in 4476) [ClassicSimilarity], result of:
          0.058052767 = score(doc=4476,freq=2.0), product of:
            0.21061863 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.050655533 = queryNorm
            0.2756298 = fieldWeight in 4476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=4476)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a morphological classification of languages from the IR perspective. Linguistic typology research has shown that the morphological complexity of every language in the world can be described by two variables, index of synthesis and index of fusion. These variables provide a theoretical basis for IR research handling morphological issues. A common theoretical framework is needed in particular because of the increasing significance of cross-language retrieval research and CLIR systems processing different languages. The paper elaborates the linguistic morphological typology for the purposes of IR research. It studies how the indexes of synthesis and fusion could be used as practical tools in mono- and cross-lingual IR research. The need for semantic and syntactic typologies is discussed. The paper also reviews studies made in different languages on the effects of morphology and stemming in IR.
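    The index of synthesis has a simple operationalization in Greenberg-style typology: the average number of morphemes per word in a text sample (the index of fusion, which concerns how segmentable morpheme boundaries are, resists such a direct count). A minimal sketch under that assumption, with hand-made segmentations for illustration:

```python
# Hand-segmented toy samples: each word is a list of its morphemes.
english = [["the"], ["dog"], ["bark", "ed"]]   # analytic tendency
finnish = [["talo", "i", "ssa", "mme"]]        # taloissamme, 'in our houses'

def index_of_synthesis(words):
    # Greenberg's index of synthesis: morphemes per word, averaged over the sample.
    return sum(len(w) for w in words) / len(words)

print(index_of_synthesis(english))  # ~1.33: low synthesis
print(index_of_synthesis(finnish))  # 4.0: high synthesis, agglutinative morphology
```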
  4. Lustig, G.: Das Projekt WAI : Wörterbuchentwicklung für automatisches Indexing (1982) 0.02
    0.019134741 = product of:
      0.057404224 = sum of:
        0.057404224 = product of:
          0.11480845 = sum of:
            0.11480845 = weight(_text_:indexing in 33) [ClassicSimilarity], result of:
              0.11480845 = score(doc=33,freq=2.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.5920931 = fieldWeight in 33, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.109375 = fieldNorm(doc=33)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  5. Chou, C.; Chu, T.: An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.02
    0.019134741 = product of:
      0.057404224 = sum of:
        0.057404224 = product of:
          0.11480845 = sum of:
            0.11480845 = weight(_text_:indexing in 1139) [ClassicSimilarity], result of:
              0.11480845 = score(doc=1139,freq=8.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.5920931 = fieldWeight in 1139, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural Language Processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used for machine-assisted indexing of the Project Gutenberg collection, by suggesting Library of Congress Subject Headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
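    The abstract describes the approach only at a general level; as a rough sketch of the underlying idea (not the authors' pipeline), the following ranks candidate LCSH headings against a text excerpt by cosine similarity of BERT-family sentence embeddings. The model name, the candidate headings, and the use of the sentence-transformers library are all assumptions for illustration:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for a BERT model

# Hypothetical LCSH candidates, pre-filtered by an LCC subclass label.
candidates = ["Whaling -- Fiction", "Sea stories", "Ship captains -- Fiction"]
excerpt = "Call me Ishmael. Some years ago, I thought I would sail about a little."

emb_text = model.encode(excerpt, convert_to_tensor=True)
emb_cand = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(emb_text, emb_cand)[0]

# Suggest headings in descending order of similarity to the excerpt.
for heading, score in sorted(zip(candidates, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {heading}")
```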
  6. Garfield, E.: The relationship between mechanical indexing, structural linguistics and information retrieval (1992) 0.02
    0.018938482 = product of:
      0.05681544 = sum of:
        0.05681544 = product of:
          0.11363088 = sum of:
            0.11363088 = weight(_text_:indexing in 3632) [ClassicSimilarity], result of:
              0.11363088 = score(doc=3632,freq=6.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.5860202 = fieldWeight in 3632, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3632)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    It is possible to locate over 60% of indexing terms used in the Current List of Medical Literature by analysing the titles of the articles. Citation indexes contain 'noise' and lack many pertinent citations. Mechanical indexing or analysis of text must begin with some linguistic technique. Discusses Harris' methods of structural linguistics, discourse analysis and transformational analysis. Provides 3 examples with references, abstracts and index entries
  7. Fox, C.: Lexical analysis and stoplists (1992) 0.02
    0.015463205 = product of:
      0.046389613 = sum of:
        0.046389613 = product of:
          0.09277923 = sum of:
            0.09277923 = weight(_text_:indexing in 3502) [ClassicSimilarity], result of:
              0.09277923 = score(doc=3502,freq=4.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.47848347 = fieldWeight in 3502, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3502)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Lexical analysis is a fundamental operation in both query processing and automatic indexing, and filtering stoplist words is an important step in the automatic indexing process. Presents basic algorithms and data structures for lexical analysis, and shows how stoplist word removal can be efficiently incorporated into lexical analysis
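    A minimal sketch of the two operations the chapter treats, assuming a regex tokenizer with the stoplist check folded into the same scan (the stoplist here is a tiny illustrative one):

```python
import re

STOPLIST = {"a", "an", "and", "in", "is", "of", "the", "to"}  # illustrative only

def tokens(text):
    # Lexical analysis: lowercase, emit maximal letter runs, and drop
    # stoplist words in the same pass, avoiding a second traversal.
    for match in re.finditer(r"[a-z]+", text.lower()):
        word = match.group()
        if word not in STOPLIST:
            yield word

print(list(tokens("Lexical analysis is a fundamental operation in indexing.")))
# ['lexical', 'analysis', 'fundamental', 'operation', 'indexing']
```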
  8. Ahlgren, P.; Kekäläinen, J.: Indexing strategies for Swedish full text retrieval under different user scenarios (2007) 0.02
    0.015280923 = product of:
      0.045842767 = sum of:
        0.045842767 = product of:
          0.09168553 = sum of:
            0.09168553 = weight(_text_:indexing in 896) [ClassicSimilarity], result of:
              0.09168553 = score(doc=896,freq=10.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.47284302 = fieldWeight in 896, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=896)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper deals with Swedish full text retrieval and the problem of morphological variation of query terms in the document database. The effects of combination of indexing strategies with query terms on retrieval effectiveness were studied. Three of five tested combinations involved indexing strategies that used conflation, in the form of normalization. Further, two of these three combinations used indexing strategies that employed compound splitting. Normalization and compound splitting were performed by SWETWOL, a morphological analyzer for the Swedish language. A fourth combination attempted to group related terms by right hand truncation of query terms. The four combinations were compared to each other and to a baseline combination, where no attempt was made to counteract the problem of morphological variation of query terms in the document database. The five combinations were evaluated under six different user scenarios, where each scenario simulated a certain user type. The four alternative combinations outperformed the baseline, for each user scenario. The truncation combination had the best performance under each user scenario. The main conclusion of the paper is that normalization and right hand truncation (performed by a search expert) enhanced retrieval effectiveness in comparison to the baseline. The performance of the three combinations of indexing strategies with query terms based on normalization was not far below the performance of the truncation combination.
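    Right-hand truncation amounts to prefix matching against the index vocabulary; a minimal sketch with hypothetical Swedish terms, which also shows why an expert's judgment matters (the prefix 'bil' catches the unrelated 'bild'/'bilder', picture(s), alongside the inflections of 'bil', car):

```python
index_terms = {"bil", "bilen", "bilar", "bilarna", "bild", "bilder"}  # toy index

def expand_truncated(prefix, terms):
    # A truncated query term matches every index term sharing its prefix,
    # grouping inflectional variants without any morphological analysis.
    return sorted(t for t in terms if t.startswith(prefix))

print(expand_truncated("bil", index_terms))
# ['bil', 'bilar', 'bilarna', 'bild', 'bilder', 'bilen']
```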
  9. Fagan, J.L.: The effectiveness of a nonsyntactic approach to automatic phrase indexing for document retrieval (1989) 0.02
    0.015280923 = product of:
      0.045842767 = sum of:
        0.045842767 = product of:
          0.09168553 = sum of:
            0.09168553 = weight(_text_:indexing in 1845) [ClassicSimilarity], result of:
              0.09168553 = score(doc=1845,freq=10.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.47284302 = fieldWeight in 1845, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1845)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    It may be possible to improve the quality of automatic indexing systems by using complex descriptors, for example, phrases, in addition to the simple descriptors (words or word stems) that are normally used in automatically constructed representations of document content. This study is directed toward the goal of developing effective methods of identifying phrases in natural language text from which good quality phrase descriptors can be constructed. The effectiveness of one method, a simple nonsyntactic phrase indexing procedure, has been tested on five experimental document collections. The results have been analyzed in order to identify the inadequacies of the procedure, and to determine what kinds of information about text structure are needed in order to construct phrase descriptors that are good indicators of document content. Two primary conclusions have been reached: (1) In the retrieval experiments, the nonsyntactic phrase construction procedure did not consistently yield substantial improvements in effectiveness. It is therefore not likely that phrase indexing of this kind will prove to be an important method of enhancing the performance of automatic document indexing and retrieval systems in operational environments. (2) Many of the shortcomings of the nonsyntactic approach can be overcome by incorporating syntactic information into the phrase construction process. However, a general syntactic analysis facility may be required, since many useful sources of phrases cannot be exploited if only a limited inventory of syntactic patterns can be recognized. Further research should be conducted into methods of incorporating automatic syntactic analysis into content analysis for document retrieval.
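    A minimal sketch of nonsyntactic phrase construction, assuming phrase candidates are adjacent non-stopword pairs kept when they recur; the stoplist, window, and threshold are illustrative, not Fagan's parameters:

```python
import re
from collections import Counter

STOP = {"the", "of", "a", "and", "in", "to", "is", "for", "without"}

def candidate_phrases(text):
    # Adjacent non-stopword pairs: no parsing or part-of-speech
    # information is involved, which is what "nonsyntactic" means here.
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    return Counter(zip(words, words[1:]))

text = ("automatic phrase indexing for document retrieval; "
        "phrase indexing without syntax for document retrieval")
counts = candidate_phrases(text)
# Keep pairs occurring at least twice; note the scan also pairs words
# across the sentence boundary, one weakness of the simple approach.
print([p for p, n in counts.items() if n >= 2])
# [('phrase', 'indexing'), ('document', 'retrieval')]
```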
  10. Driscoll, J.R.; Rajala, D.A.; Shaffer, W.H.: The operation and performance of an artificially intelligent keywording system (1991) 0.01
    0.013530305 = product of:
      0.040590912 = sum of:
        0.040590912 = product of:
          0.081181824 = sum of:
            0.081181824 = weight(_text_:indexing in 6681) [ClassicSimilarity], result of:
              0.081181824 = score(doc=6681,freq=4.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.41867304 = fieldWeight in 6681, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6681)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a new approach to text analysis for automating the key phrase indexing process, using artificial intelligence techniques. This mimics the behaviour of human experts by using a rule base consisting of insertion and deletion rules generated by subject-matter experts. The insertion rules are based on the idea that some phrases found in a text imply or trigger other phrases. The deletion rules apply to semantically ambiguous phrases where text presence alone does not determine appropriateness as a key phrase. The insertion and deletion rules are used to transform a list of found phrases to a list of key phrases for indexing a document. Statistical data are provided to demonstrate the performance of this expert rule based system
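    A minimal sketch of the phrase-list transformation described; the rule contents are invented for illustration:

```python
# Hypothetical rules in the spirit of the described system.
INSERTION_RULES = {  # a found phrase implies (triggers) further key phrases
    "night vision goggles": ["low-light operations"],
    "refueling point": ["logistics"],
}
DELETION_RULES = {"exercise"}  # ambiguous alone: military drill vs. physical activity

def key_phrases(found):
    # Apply insertion rules, then drop semantically ambiguous phrases.
    result = list(found)
    for phrase in found:
        result.extend(INSERTION_RULES.get(phrase, []))
    return [p for p in result if p not in DELETION_RULES]

print(key_phrases(["night vision goggles", "exercise"]))
# ['night vision goggles', 'low-light operations']
```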
  11. Kuhlen, R.: Experimentelle Morphologie in der Informationswissenschaft (1977) 0.01
    0.013530305 = product of:
      0.040590912 = sum of:
        0.040590912 = product of:
          0.081181824 = sum of:
            0.081181824 = weight(_text_:indexing in 4253) [ClassicSimilarity], result of:
              0.081181824 = score(doc=4253,freq=4.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.41867304 = fieldWeight in 4253, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4253)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    LCSH
    Automatic indexing
    Subject
    Automatic indexing
  12. Witschel, H.F.: Terminology extraction and automatic indexing : comparison and qualitative evaluation of methods (2005) 0.01
    0.011836551 = product of:
      0.035509653 = sum of:
        0.035509653 = product of:
          0.07101931 = sum of:
            0.07101931 = weight(_text_:indexing in 1842) [ClassicSimilarity], result of:
              0.07101931 = score(doc=1842,freq=6.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.3662626 = fieldWeight in 1842, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1842)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Many terminology engineering processes involve the task of automatic terminology extraction: before the terminology of a given domain can be modelled, organised or standardised, important concepts (or terms) of this domain have to be identified and fed into terminological databases. These serve in further steps as a starting point for compiling dictionaries, thesauri or maybe even terminological ontologies for the domain. For the extraction of the initial concepts, extraction methods are needed that operate on specialised language texts. On the other hand, many machine learning or information retrieval applications require automatic indexing techniques. In Machine Learning applications concerned with the automatic clustering or classification of texts, often feature vectors are needed that describe the contents of a given text briefly but meaningfully. These feature vectors typically consist of a fairly small set of index terms together with weights indicating their importance. Short but meaningful descriptions of document contents as provided by good index terms are also useful to humans: some knowledge management applications (e.g. topic maps) use them as a set of basic concepts (topics). The author believes that the tasks of terminology extraction and automatic indexing have much in common and can thus benefit from the same set of basic algorithms. It is the goal of this paper to outline some methods that may be used in both contexts, but also to find the discriminating factors between the two tasks that call for the variation of parameters or application of different techniques. The discussion of these methods will be based on statistical, syntactical and especially morphological properties of (index) terms. The paper is concluded by the presentation of some qualitative and quantitative results comparing statistical and morphological methods.
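    A minimal sketch of the statistical core shared by the two tasks: weight a text's terms, then keep a short feature vector of the highest-weighted ones, usable either as index terms or as terminology candidates. Plain tf-idf stands in here for the statistical measures the paper compares:

```python
import math
from collections import Counter

def feature_vector(doc_tokens, doc_freq, n_docs, size=5):
    # tf-idf weighting; the feature vector keeps only the top `size` terms.
    tf = Counter(doc_tokens)
    weights = {t: f * math.log(n_docs / doc_freq.get(t, 1)) for t, f in tf.items()}
    return sorted(weights.items(), key=lambda kv: -kv[1])[:size]

doc = "terminology extraction and automatic indexing share statistical methods".split()
df = {"and": 900, "terminology": 12, "extraction": 40, "automatic": 150,
      "indexing": 80, "share": 300, "statistical": 60, "methods": 400}  # toy values
print(feature_vector(doc, df, n_docs=1000))
# [('terminology', 4.42...), ('extraction', 3.22...), ('statistical', 2.81...), ...]
```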
  13. Malone, L.C.; Driscoll, J.R.; Pepe, J.W.: Modeling the performance of an automated keywording system (1991) 0.01
    0.010934137 = product of:
      0.03280241 = sum of:
        0.03280241 = product of:
          0.06560482 = sum of:
            0.06560482 = weight(_text_:indexing in 6682) [ClassicSimilarity], result of:
              0.06560482 = score(doc=6682,freq=2.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.3383389 = fieldWeight in 6682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6682)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a model for predicting the performance of a computerised keyword assigning and indexing system. Statistical procedures were investigated to protect against incorrect keywording by the system, which behaves as an expert system designed to mimic the behaviour of human keyword indexers and to represent lessons learned from military exercises and operations
  14. Polity, Y.: Vers une ergonomie linguistique (1994) 0.01
    0.010934137 = product of:
      0.03280241 = sum of:
        0.03280241 = product of:
          0.06560482 = sum of:
            0.06560482 = weight(_text_:indexing in 36) [ClassicSimilarity], result of:
              0.06560482 = score(doc=36,freq=2.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.3383389 = fieldWeight in 36, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=36)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Analyzed a special type of man-machine interaction, that of searching an information system with natural language. A model of full-text processing for information retrieval was proposed that considered the system's users and how they employ information. Describes how INIST (the National Institute for Scientific and Technical Information) is developing computer-assisted indexing as an aid to improving relevance when retrieving information from bibliographic data banks
  15. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.01
    0.009150834 = product of:
      0.0274525 = sum of:
        0.0274525 = product of:
          0.054905 = sum of:
            0.054905 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.054905 = score(doc=6752,freq=2.0), product of:
                0.17738704 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050655533 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    6. 3.1997 16:22:15
  16. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.01
    0.0068631256 = product of:
      0.020589376 = sum of:
        0.020589376 = product of:
          0.04117875 = sum of:
            0.04117875 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.04117875 = score(doc=1746,freq=2.0), product of:
                0.17738704 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050655533 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2015 9:17:30
  17. Salton, G.: Automatic processing of foreign language documents (1985) 0.01
    0.0054670684 = product of:
      0.016401205 = sum of:
        0.016401205 = product of:
          0.03280241 = sum of:
            0.03280241 = weight(_text_:indexing in 3650) [ClassicSimilarity], result of:
              0.03280241 = score(doc=3650,freq=2.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.16916946 = fieldWeight in 3650, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3650)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The attempt to computerize a process, such as indexing, abstracting, classifying, or retrieving information, begins with an analysis of the process into its intellectual and nonintellectual components. That part of the process which is amenable to computerization is mechanical or algorithmic. What is not is intellectual or creative and requires human intervention. Gerard Salton has been an innovator, experimenter, and promoter in the area of mechanized information systems since the early 1960s. He has been particularly ingenious at analyzing the process of information retrieval into its algorithmic components. He received a doctorate in applied mathematics from Harvard University before moving to the computer science department at Cornell, where he developed a prototype automatic retrieval system called SMART. Working with this system, he and his students contributed for over a decade to our theoretical understanding of the retrieval process. On a more practical level, they have contributed design criteria for operating retrieval systems. The following selection presents one of the early descriptions of the SMART system; it is valuable as it shows the direction automatic retrieval methods were to take beyond simple word-matching techniques. These include various word normalization techniques to improve recall, for instance, the separation of words into stems and affixes; the correlation and clustering, using statistical association measures, of related terms; and the identification, using a concept thesaurus, of synonymous, broader, narrower, and sibling terms. They include, as well, techniques, both linguistic and statistical, to deal with the thorny problem of how to automatically extract from texts index terms that consist of more than one word. They include weighting techniques and various document-request matching algorithms. Significant among the latter are those which produce a retrieval output of citations ranked in relevance order. During the 1970s, Salton and his students went on to further refine these various techniques, particularly the weighting and statistical association measures. Many of their early innovations seem commonplace today. Some of their later techniques are still ahead of their time and await technological developments for implementation. The particular focus of the selection that follows is on the evaluation of a particular component of the SMART system, a multilingual thesaurus. By mapping English language expressions and their German equivalents to a common concept number, the thesaurus permitted the automatic processing of German language documents against English language queries and vice versa. The results of the evaluation, as it turned out, were somewhat inconclusive. However, this SMART experiment suggested in a bold and optimistic way how one might proceed to answer such complex questions as: What is meant by retrieval language compatibility? How is it to be achieved, and how is it to be evaluated?
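    The multilingual thesaurus reduces retrieval to matching in concept space: expressions in either language map to shared concept numbers, so a German document can be scored against an English query. A minimal sketch with invented thesaurus entries and an invented overlap measure:

```python
# Hypothetical fragment of a bilingual concept thesaurus (not SMART's data):
# expressions in either language map to the same concept number.
CONCEPTS = {
    "document": 101, "dokument": 101,
    "retrieval": 102, "wiedergewinnung": 102,
    "evaluation": 103, "bewertung": 103,
}

def concept_vector(words):
    return {CONCEPTS[w] for w in words if w in CONCEPTS}

def match(query, document):
    # Overlap of query concepts covered by the document: language-independent.
    q, d = concept_vector(query), concept_vector(document)
    return len(q & d) / len(q) if q else 0.0

# English query against a German document: both concepts 101 and 102 match.
print(match(["document", "retrieval"], ["dokument", "wiedergewinnung"]))  # 1.0
```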
  18. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.00
    0.0047836853 = product of:
      0.014351056 = sum of:
        0.014351056 = product of:
          0.028702112 = sum of:
            0.028702112 = weight(_text_:indexing in 3645) [ClassicSimilarity], result of:
              0.028702112 = score(doc=3645,freq=2.0), product of:
                0.19390269 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.050655533 = queryNorm
                0.14802328 = fieldWeight in 3645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3645)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The selection that follows was chosen as it represents "a very early paper on the possibilities allowed by computers in documentation." In the early 1960s computers were being used to provide simple automatic indexing systems wherein keywords were extracted from documents. The problem with such systems was that they lacked vocabulary control, thus documents related in subject matter were not always collocated in retrieval. To improve retrieval by improving recall is the raison d'être of vocabulary control tools such as classifications and thesauri. The question arose whether it was possible by automatic means to construct classes of terms which, when substituted one for another, could be used to improve retrieval performance. One of the first theoretical approaches to this question was initiated by R. M. Needham and Karen Sparck Jones at the Cambridge Language Research Institute in England. The question was later pursued using experimental methodologies by Sparck Jones, who, as a Senior Research Associate in the Computer Laboratory at the University of Cambridge, has devoted her life's work to research in information retrieval and automatic natural language processing. Based on the principles of numerical taxonomy, automatic classification techniques start from the premise that two objects are similar to the degree that they share attributes in common. When these two objects are keywords, their similarity is measured in terms of the number of documents they index in common. Step 1 in automatic classification is to compute mathematically the degree to which two terms are similar. Step 2 is to group together those terms that are "most similar" to each other, forming equivalence classes of intersubstitutable terms. The technique for forming such classes varies and is the factor that characteristically distinguishes different approaches to automatic classification. The technique used by Needham and Sparck Jones, that of clumping, is described in the selection that follows. Questions that must be asked are whether the use of automatically generated classes really does improve retrieval performance and whether there is a true economic advantage in substituting mechanical for manual labor. Several years after her work with clumping, Sparck Jones was to observe that while it was not wholly satisfactory in itself, it was valuable in that it stimulated research into automatic classification. To this it might be added that it was valuable in that it introduced to library/information science the methods of numerical taxonomy, thus stimulating us to think again about the fundamental nature and purpose of classification. In this connection it might be useful to review how automatically derived classes differ from those of manually constructed classifications: 1) the manner of their derivation is purely a posteriori, the ultimate operationalization of the principle of literary warrant; 2) the relationship between members forming such classes is essentially statistical; the members of a given class are similar to each other not because they possess the class-defining characteristic but by virtue of sharing a family resemblance; and finally, 3) automatically derived classes are not related meaningfully one to another, that is, they are not ordered in traditional hierarchical and precedence relationships.
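    A minimal sketch of steps 1 and 2 as described, with a fixed similarity threshold standing in for the clump-formation criterion that, as noted above, distinguishes the different approaches; data and threshold are toy values:

```python
from itertools import combinations

# Term -> set of document ids the term indexes (toy postings).
postings = {
    "retrieval": {1, 2, 3, 4},
    "search":    {2, 3, 4, 5},
    "indexing":  {1, 2, 3},
    "poetry":    {9},
}

def similarity(a, b):
    # Step 1: similarity = number of documents the two terms index in common.
    return len(postings[a] & postings[b])

# Step 2: group the "most similar" terms into candidate equivalence classes.
THRESHOLD = 3
pairs = [(a, b) for a, b in combinations(postings, 2) if similarity(a, b) >= THRESHOLD]
print(pairs)  # [('retrieval', 'search'), ('retrieval', 'indexing')]
```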