Search (10 results, page 1 of 1)

  • theme_ss:"Indexierungsstudien"
  • year_i:[1990 TO 2000}
  1. Kautto, V.: Classing and indexing : a comparative time study (1992) 0.03
    0.028236724 = product of:
      0.05647345 = sum of:
        0.031038022 = weight(_text_:data in 2670) [ClassicSimilarity], result of:
          0.031038022 = score(doc=2670,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 2670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2670)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2670) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2670,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2670, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2670)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
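    Score check
    The tree above is Lucene's ClassicSimilarity explain output. A minimal Python sketch, assuming only the relationships the tree itself shows (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm; the helper names are illustrative), reproduces the displayed score:

      from math import sqrt

      query_norm = 0.046827413
      field_norm = 0.046875                              # length norm for doc 2670

      def clause(freq, idf):
          query_weight = idf * query_norm                # e.g. 3.1620505 * queryNorm = 0.14807065
          field_weight = sqrt(freq) * idf * field_norm   # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      w_data       = clause(2.0, 3.1620505)              # ~0.031038022
      w_processing = clause(2.0, 4.048147) * 0.5         # inner coord(1/2) = 0.5
      score = (w_data + w_processing) * 0.5              # outer coord(2/4) = 0.5
      print(round(score, 9))                             # ~0.028236724, displayed as 0.03

    The same arithmetic reproduces the other entries, e.g. entry 2 with tf = sqrt(4.0) = 2.0 and a single matching clause scaled by coord(1/4) = 0.25.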
    
    Abstract
    A total of 16 classifiers made a subject analysis of a set of books such that some of the books were first classified by the UDC and then indexed with terms from the General Finnish Subject Headings, while another set was processed in the opposite order. Finally, books on the same subject were either classified or indexed. The total number of books processed was 581. A comparison was made of the time required for processing in different situations and of the number of classes or subject headings used. The time figures were compared with corresponding data from the British Library (1972) and the Library of Congress (1990 and 1991). The author finds that content analysis requires one third of the time, classification one third, and indexing one third, if the document is both classified and indexed. There was a plausible correlation (0.51) between the length of experience in classification and the decrease in the time required for classing. The average number of UDC numbers was 4.3 and the average number of terms from the list of subject headings was 4.0.
  2. Tseng, Y.-H.: Keyword extraction techniques and relevance feedback (1997) 0.01
    0.012802532 = product of:
      0.051210128 = sum of:
        0.051210128 = weight(_text_:data in 1830) [ClassicSimilarity], result of:
          0.051210128 = score(doc=1830,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 1830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1830)
      0.25 = coord(1/4)
    
    Abstract
    Automatic keyword extraction is an important and fundamental technology in advanced information retrieval systems. Briefly compares several major keyword extraction methods, lists their advantages and disadvantages, and reports recent research progress in Taiwan. Also describes the application of a keyword extraction algorithm in an information retrieval system for relevance feedback. Preliminary analysis shows that the error rate of extracting relevant keywords is 18%, and that the precision rate is over 50%. The main disadvantage of this approach is that the extraction results depend on the retrieval results, which in turn depend on the data held by the database. Apart from collecting more data, this problem can be alleviated by applying a thesaurus constructed by the same keyword extraction algorithm.
  3. Braam, R.R.; Bruil, J.: Quality of indexing information : authors' views on indexing of their articles in chemical abstracts online CA-file (1992) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 2638) [ClassicSimilarity], result of:
          0.043894395 = score(doc=2638,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 2638, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2638)
      0.25 = coord(1/4)
    
    Abstract
    Studies the quality of subject indexing by the Chemical Abstracts Indexing Service by confronting authors with the particular indexing terms attributed to their articles, for 270 articles published in 54 journals, 5 articles out of each journal. Responses (80%) indicate the superior quality of keywords, both as content descriptors and as retrieval tools. Author judgements on these 2 different aspects do not always converge, however. CAS's indexing policy to cover only 'new' aspects is reflected in authors' judgements that index lists are somewhat incomplete, in particular in the case of thesaurus terms (index headings). The large effort expended by CAS in maintaining and using a subject thesaurus, in order to select valid index headings, as compared to quick and cheap keyword postings, does not lead to clearly superior quality of thesaurus terms, either for document description or in retrieval. Some 20% of papers were not placed in the 'proper' CA main section, according to authors. As concerns the use of indexing data by third parties, in bibliometrics, users should be aware of the indexing policies behind the data, in order to prevent invalid interpretations.
  4. David, C.; Giroux, L.; Bertrand-Gastaldy, S.; Lanteigne, D.: Indexing as problem solving : a cognitive approach to consistency (1995) 0.01
    0.0103460075 = product of:
      0.04138403 = sum of:
        0.04138403 = weight(_text_:data in 3833) [ClassicSimilarity], result of:
          0.04138403 = score(doc=3833,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 3833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=3833)
      0.25 = coord(1/4)
    
    Abstract
    Presents results of an experiment in which 8 indexers (4 beginners and 4 experts) were asked to index the same 4 documents with 2 different thesauri. The 3 kinds of verbal reports provide complementary data on strategic behaviour. It is of prime importance to consider the indexing task as an ill-defined problem, where the solution is partly defined by the indexer.
  5. David, C.; Giroux, L.; Bertrand-Gastaldy, S.; Lanteigne, D.: Indexing as problem solving : a cognitive approach to consistency (1995) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 3609) [ClassicSimilarity], result of:
          0.031038022 = score(doc=3609,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 3609, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3609)
      0.25 = coord(1/4)
    
    Abstract
    Indexers differ in their judgement as to which terms adequately reflect the content of a document. Studies of interindexer consistency identified several factors associated with low consistency, but failed to provide a comprehensive model of this phenomenon. Our research applies theories and methods from cognitive psychology to the study of indexing behavior. From a theoretical standpoint, indexing is considered a problem-solving situation. To access the cognitive processes of indexers, 3 kinds of verbal reports are used. We will present results of an experiment in which 4 experienced indexers indexed the same documents. It will be shown that the 3 kinds of verbal reports provide complementary data on strategic behavior, and that it is of prime importance to consider the indexing task as an ill-defined problem, where the solution is partly defined by the indexer him- or herself.
  6. Ballard, R.M.: Indexing and its relevance to technical processing (1993) 0.01
    0.0074939844 = product of:
      0.029975938 = sum of:
        0.029975938 = product of:
          0.059951875 = sum of:
            0.059951875 = weight(_text_:processing in 554) [ClassicSimilarity], result of:
              0.059951875 = score(doc=554,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3162615 = fieldWeight in 554, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=554)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The development of regional on-line catalogs and in-house information systems for retrieval of references provides examples of the impact of indexing theory and applications on technical processing. More emphasis must be given to understanding the techniques for evaluating the effectiveness of a file, irrespective of whether that file was created as a library catalog or an index to information sources. The most significant advances in classification theory in recent decades have been the result of efforts to improve the effectiveness of indexing systems. Library classification systems are indexing languages or systems. Courses offered for the preparation of indexers in the United States and the United Kingdom are reviewed. A point of congruence for both the indexer and the library classifier would appear to be the need for a thorough preparation in the techniques of subject analysis. Any subject heading list will suffer from omissions as well as the inclusion of terms which the patron will never use. Indexing theory has provided the technical services department with methods for evaluation of effectiveness. The writer does not believe that these techniques are used, nor do current courses, workshops, and continuing education programs stress them. When theory is totally subjugated to practice, critical thinking and maximum effectiveness will suffer.
  7. Burgin, R.: ¬The effect of indexing exhaustivity on retrieval performance (1991) 0.01
    0.0063588563 = product of:
      0.025435425 = sum of:
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 5262) [ClassicSimilarity], result of:
              0.05087085 = score(doc=5262,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 5262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5262)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 27(1991) no.6, S.623-628
  8. Veenema, F.: To index or not to index (1996) 0.01
    0.006344468 = product of:
      0.025377871 = sum of:
        0.025377871 = product of:
          0.050755743 = sum of:
            0.050755743 = weight(_text_:22 in 7247) [ClassicSimilarity], result of:
              0.050755743 = score(doc=7247,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.30952093 = fieldWeight in 7247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7247)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Canadian journal of information and library science. 21(1996) no.2, S.1-22
  9. Booth, A.: How consistent is MEDLINE indexing? (1990) 0.01
    0.0055514094 = product of:
      0.022205638 = sum of:
        0.022205638 = product of:
          0.044411276 = sum of:
            0.044411276 = weight(_text_:22 in 3510) [ClassicSimilarity], result of:
              0.044411276 = score(doc=3510,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.2708308 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Health libraries review. 7(1990) no.1, S.22-26
  10. Huffman, G.D.; Vital, D.A.; Bivins, R.G.: Generating indices with lexical association methods : term uniqueness (1990) 0.01
    0.005299047 = product of:
      0.021196188 = sum of:
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 4152) [ClassicSimilarity], result of:
              0.042392377 = score(doc=4152,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4152)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 26(1990) no.4, S.549-558