Search (75 results, page 1 of 4)

  • Active filter: theme_ss:"Indexierungsstudien"
  1. Gregor, D.; Mandel, C.: Cataloging must change! (1991) 0.05
    Footnote
    See also the reply by T. Mann in: Cataloging and classification quarterly 23(1997) nos.3/4, S.3-45
  2. Pimenov, E.N.: O faktorah, vliyayushchikh na indeksirovanie : indeksirovanie i predmetnaya oblast' (2000) 0.03
    Source
    Nauchno-Tekhnicheskaya Informatsiya. Series 1. 2000, no.2, S.15-23
  3. Haanen, E.: Specificiteit en consistentie : een kwantitatief onderzoek naar trefwoordtoekenning door UBA en UBN (1991) 0.03
    Abstract
    Online public access catalogues enable users to undertake subject searching by classification schedules, natural language, or controlled language terminology. In practice the first method is little used. Controlled language systems require indexers to index specifically and consistently. A comparative survey was made of indexing practices at Amsterdam and Nijmegen university libraries. On average, Amsterdam assigned each document 3.5 index terms against 1.8 at Nijmegen. This discrepancy in indexing policy is the result of long-standing practices in each institution. Nijmegen has failed to utilise the advantages offered by online catalogues.
    Source
    Open. 23(1991) no.2, S.45-49
  4. Wilson, P.: ¬The end of specificity (1979) 0.03
    Abstract
    Recently announced subject cataloging practices at the Library of Congress, calling for systematic duplication of entries at specific and generic levels, are in direct violation of the rule of exclusively specific entry, hitherto accepted by LC. It is argued that if the new practices are justified, consistency calls for their general application, which results in abandonment of the rule. But the new practices do not accomplish their ostensible goals, do not reveal more of the content of LC's collections, do introduce new inconveniences, do constitute a pointless enlargement of catalogs, and hence should be abandoned
    Source
    Library resources and technical services. 23(1979), S.116-122
  5. Tseng, Y.-H.: Keyword extraction techniques and relevance feedback (1997) 0.03
    Abstract
    Automatic keyword extraction is an important and fundamental technology in advanced information retrieval systems. Briefly compares several major keyword extraction methods, lists their advantages and disadvantages, and reports recent research progress in Taiwan. Also describes the application of a keyword extraction algorithm in an information retrieval system for relevance feedback. Preliminary analysis shows that the error rate of extracting relevant keywords is 18%, and that the precision rate is over 50%. The main disadvantage of this approach is that the extraction results depend on the retrieval results, which in turn depend on the data held by the database. Apart from collecting more data, this problem can be alleviated by the application of a thesaurus constructed by the same keyword extraction algorithm.
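    The abstract leaves the extraction algorithm itself unspecified; the following minimal sketch only illustrates the feedback loop it describes - extract keywords from the retrieved documents and feed them back into the query - using a hypothetical frequency-based extractor, not Tseng's actual method:
      from collections import Counter

      def extract_keywords(documents, top_n=5):
          # Hypothetical frequency-based extractor standing in for the
          # paper's (unspecified) keyword extraction algorithm.
          counts = Counter()
          for doc in documents:
              counts.update(doc.lower().split())
          return [term for term, _ in counts.most_common(top_n)]

      def relevance_feedback(query_terms, retrieved_docs):
          # Expand the query with keywords extracted from the documents
          # already retrieved, as the abstract describes.
          extracted = extract_keywords(retrieved_docs)
          return list(query_terms) + [t for t in extracted if t not in query_terms]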
    Date
    13. 5.1996 21:43:23
    Footnote
    [In Chinese]
  6. Connell, T.H.: Use of the LCSH system : realities (1996) 0.03
    Abstract
    Explores the question of whether academic libraries keep up with changes in the LCSH system. Analysis of the handling of 15 subject headings in 50 academic library catalogues available via the Internet found that libraries are not consistently maintaining subject authority control, or making syndetic references and scope notes in their catalogues. Discusses the results from the perspective of the libraries' performance, performance on the headings overall, performance on references, performance on the type of change made to the headings, and performance within 3 widely used online catalogue systems (DRA, INNOPAC and NOTIS). Discusses the implications of the findings in relation to expressions of dissatisfaction with the effectiveness of subject cataloguing expressed by discussion groups on the Internet.
    Source
    Cataloging and classification quarterly. 23(1996) no.1, S.73-98
  7. Mann, T.: 'Cataloging must change!' and indexer consistency studies : misreading the evidence at our peril (1997) 0.02
    Abstract
    An earlier article ('Cataloging must change' by D. Gregor and C. Mandel in: Library journal 116(1991) no.6, S.42-47) has popularized the belief that there is low consistency (only 10-20% agreement) among subject cataloguers in assigning LCSH. Because of this alleged lack of consistency, the article suggests, cataloguers 'can be more accepting of variations in subject choices' in copy cataloguing. Argues that this inference is based on a serious misreading of previous studies of indexer consistency. The 10-20% figure actually derives from studies of people trying to guess the same natural language key words, precisely in the absence of vocabulary control mechanisms such as thesauri or LCSH. Concludes that the sources cited fail to support their conclusion and that some directly contradict it. Raises the concern that a naive acceptance by the library profession of the 10-20% claim can only have negative consequences for the quality of subject cataloguing created and accepted throughout the country.
    Source
    Cataloging and classification quarterly. 23(1997) nos.3/4, S.3-45
  8. Veenema, F.: To index or not to index (1996) 0.02
    Abstract
    Describes an experiment comparing the performance of automatic full-text indexing software for personal computers with human intellectual assignment of indexing terms to each document in a collection. Considers the time required to index the documents, the time to retrieve documents satisfying 5 typical foreseen information needs, and the recall and precision ratios of searching. The software used is the QuickFinder facility in WordPerfect 6.1 for Windows.
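    For reference, the recall and precision ratios reported are the standard retrieval measures (general definitions, not specific to this experiment):
      \[
        \text{precision} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{retrieved}|},
        \qquad
        \text{recall} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{relevant}|}
      \]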
    Source
    Canadian journal of information and library science. 21(1996) no.2, S.1-22
  9. Gretz, M.; Thomas, M.: Indexierungen in biomedizinischen Literaturdatenbanken : eine vergleichende Analyse (1991) 0.01
    Abstract
    Based on four original documents, i.e. documentary units of reference (DBEs), the indexing in four biomedical online databases (MEDLINE, EMBASE, BIOSIS PREVIEWS, SCISEARCH) is analysed. Using examples, the subject analysis, indexing depth, indexing breadth, indexing consistency, precision (through syntactic indexing, weighting, proximity operators) and retrievability (recall) of the documentation units stored in the databases are examined. The more time-consuming intellectual indexing in MEDLINE and EMBASE proves to be considerably more precise than the more quickly available automatic assignment of descriptors in BIOSIS PREVIEWS and SCISEARCH. Part 1 of the study compares the indexing in MEDLINE and EMBASE, Part 2 the descriptor assignments in BIOSIS PREVIEWS and SCISEARCH.
  10. Ladewig, C.; Rieger, M.: Ähnlichkeitsmessung mit und ohne aspektische Indexierung (1998) 0.01
    Abstract
    A document-word matrix is constructed for a fictitious document set, and the retrieval results are determined by means of two search queries, likewise represented as matrices. In a second step, aspects are assigned to the words of the document set and the analysis is repeated. A comparison confirms the advantages, already found in earlier work, of aspect-based indexing over other methods of retrieval improvement such as truncation and controlled terms.
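    A minimal sketch of the setup described, assuming invented matrix values and an invented word-to-aspect mapping (the paper's actual data is not reproduced here):
      import numpy as np

      # Invented document-word matrix: rows = documents, columns = words.
      D = np.array([[1, 0, 1, 0],
                    [0, 1, 1, 1],
                    [1, 1, 0, 0]])

      q = np.array([1, 0, 1, 0])        # a query over the same word columns
      print(D @ q)                      # word-level match counts per document

      # Aspect indexing: collapse the word columns onto broader aspects
      # (hypothetical grouping), then query on the aspect level instead.
      aspect_of = [0, 1, 0, 1]          # word i belongs to aspect aspect_of[i]
      A = np.zeros((D.shape[0], 2), dtype=int)
      for w, a in enumerate(aspect_of):
          A[:, a] += D[:, w]
      q_aspect = np.array([1, 0])
      print(A @ q_aspect)               # aspect-level match counts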
    Source
    nfd Information - Wissenschaft und Praxis. 49(1998) H.8, S.459-462
  11. Harter, S.P.; Cheng, Y.-R.: Colinked descriptors : improving vocabulary selection for end-user searching (1996) 0.01
    Abstract
    This article introduces a new concept and technique for information retrieval called 'colinked descriptors'. Borrowed from an analogous idea in bibliometrics - cocited references - colinked descriptors provide a theory and method for identifying search terms that, by hypothesis, will be superior to those entered initially by a searcher. The theory suggests a means of moving automatically from 2 or more initial search terms, to other terms that should be superior in retrieval performance to the 2 original terms. A research project designed to test this colinked descriptor hypothesis is reported. The results suggest that the approach is effective, although methodological problems in testing the idea are reported. Algorithms to generate colinked descriptors can be incorporated easily into system interfaces, front-end or pre-search systems, or help software, in any database that employs a thesaurus. The potential use of colinked descriptors is a strong argument for building richer and more complex thesauri that reflect as many legitimate links among descriptors as possible
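    The underlying mechanism can be sketched roughly: starting from two seed descriptors, collect the descriptors co-assigned with each seed across records, and keep those linked to every seed. The records and the exact linking rule below are illustrative assumptions, not the authors' published algorithm:
      # Each record is the set of thesaurus descriptors assigned to it (invented).
      records = [
          {"indexing", "consistency", "thesauri"},
          {"indexing", "thesauri", "retrieval"},
          {"consistency", "thesauri", "vocabulary"},
      ]

      def colinked(records, seeds):
          # Descriptors co-assigned with every seed descriptor in some record.
          link_sets = []
          for seed in seeds:
              linked = set().union(*(r for r in records if seed in r)) - set(seeds)
              link_sets.append(linked)
          return set.intersection(*link_sets)

      print(colinked(records, ["indexing", "consistency"]))  # {'thesauri'}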
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  12. Huffman, G.D.; Vital, D.A.; Bivins, R.G.: Generating indices with lexical association methods : term uniqueness (1990) 0.01
    Abstract
    A software system has been developed which orders citations retrieved from an online database in terms of relevancy. The system resulted from an effort, generated by NASA's Technology Utilization Program, to create new advanced software tools that largely automate the process of determining the relevancy of database citations retrieved to support large technology transfer studies. The ranking is based on the generation of an enriched vocabulary using lexical association methods, a user assessment of the vocabulary, and a combination of the user assessment and the lexical metric. One of the key elements in relevancy ranking is the enriched vocabulary - the terms must be both unique and descriptive. This paper examines term uniqueness. Six lexical association methods were employed to generate characteristic word indices. A limited subset of the terms - the highest 20, 40, 60 and 75% of the uniqueness words - was compared and uniqueness factors developed. Computational times were also measured. It was found that methods based on occurrences and signal produced virtually the same terms. The limited subsets of terms produced by the exact and centroid discrimination values were also nearly identical. Unique term sets were produced by the occurrence, variance and discrimination value (centroid) methods. An end-user evaluation showed that the generated terms were largely distinct and had values of word precision consistent with the values of search precision.
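    The 'uniqueness factor' is not defined in the abstract; one plausible reading - the fraction of one method's top-ranked terms not shared with another's - can be sketched as follows (term lists invented):
      def uniqueness_factor(terms_a, terms_b):
          # Share of terms_a that terms_b does not produce (assumed definition).
          a, b = set(terms_a), set(terms_b)
          return len(a - b) / len(a)

      occurrence_top = ["laser", "optics", "gain", "cavity"]
      signal_top     = ["laser", "optics", "gain", "mode"]
      print(uniqueness_factor(occurrence_top, signal_top))  # 0.25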
  13. Cleverdon, C.W.: ASLIB Cranfield Research Project : Report on the first stage of an investigation into the comparative efficiency of indexing systems (1960) 0.00
    Footnote
    Reviewed in: College and research libraries 22(1961) no.3, S.228 (G. Jahoda)
  14. Tinker, F.F.: Imprecision in meaning measured by inconsistency of indexing (1966-68) 0.00
    Content
    Results: (1) when subject headings are freely assigned, retrieval becomes harder the more headings there are; (2) 'older' subject headings are used more often and less precisely than 'newer' ones; (3) many words have an imprecise meaning.
  15. Chan, L.M.: Inter-indexer consistency in subject cataloging (1989) 0.00
    Abstract
    The purpose of the current study has been twofold: (1) to develop a valid methodology for studying indexing consistency in MARC records, and (2) to study such consistency in subject cataloging practice between non-LC libraries and the Library of Congress.
    Content
    The study contains consistency figures relating to LCSH. These figures are broken down by category and can partly be transferred to the RSWK.
  16. Neshat, N.; Horri, A.: ¬A study of subject indexing consistency between the National Library of Iran and Humanities Libraries in the area of Iranian studies (2006) 0.00
    Abstract
    This study represents an attempt to compare indexing consistency between the catalogers of the National Library of Iran (NLI) on one side and 12 major academic and special libraries located in Tehran on the other. The research findings indicate that in 75% of the libraries the subject inconsistency values are 60% to 85%. In terms of subject classes, the consistency values are 10% to 35.2%, the mean of which is 22.5%. Moreover, the findings show that whenever the number of assigned terms increases, the probability of consistency decreases. This confirms Markey's findings in 1984.
    Date
    4. 1.2007 10:22:26
  17. Bellamy, L.M.; Bickham, L.: Thesaurus development for subject cataloging (1989) 0.00
    Abstract
    The biomedical book collection in the Genentech Library and Information Services was first inventoried and cataloged in 1983, when it totaled about 2000 titles. Cataloging records were retrieved from the OCLC system and used as a basis for cataloging. A year of cataloging produced a list of 1900 subject terms. More than one term describing the same concept often appeared on the list, and no hierarchical structure related the terms to one another. As the collection grew, the subject catalog became increasingly inconsistent. To bring consistency to subject cataloging, a thesaurus of biomedical terms was constructed using the list of subject headings as a basis. This thesaurus follows the broad categories of the National Library of Medicine's Medical Subject Headings and, with some exceptions, the Guidelines for the Establishment and Development of Monolingual Thesauri. It has enabled the cataloger to provide greater in-depth subject analysis of materials added to the collection and to assign subject headings consistently to cataloging records.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  18. Kedar, R.; Shoham, S.: ¬The subject cataloging of monographs with the use of a thesaurus (2003) 0.00
    Abstract
    This paper presents the findings of a study of indexing procedure with the use of a thesaurus for post-coordination. In the first phase of the study, the indexing records of 50 books, prepared by a central cataloging service (the Israeli Center for Libraries), were compared with the indexing records for these books prepared by three independent indexers. In the second phase, indexing records for three books prepared by 51 librarians were studied. In both phases, indexing records were analyzed for mistakes and possible reasons for these mistakes are offered.
    Series
    Advances in knowledge organization; vol.8
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  19. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.00
    Abstract
    In this article recording evidence for data values in addition to the values themselves in bibliographic records and descriptive metadata is proposed, with the aim of improving the expressiveness and reliability of those records and metadata. Recorded evidence indicates why and how data values are recorded for elements. Recording the history of changes in data values is also proposed, with the aim of reinforcing recorded evidence. First, evidence that can be recorded is categorized into classes: identifiers of rules or tasks, action descriptions of them, and input and output data of them. Dates of recording values and evidence are an additional class. Then, the relative usefulness of evidence classes and also levels (i.e., the record, data element, or data value level) to which an individual evidence class is applied, is examined. Second, examples that can be viewed as recorded evidence in existing bibliographic records and current cataloging rules are shown. Third, some examples of bibliographic records and descriptive metadata with notes of evidence are demonstrated. Fourth, ways of using recorded evidence are addressed.
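    As a rough picture of the proposal, a single data element might carry its value together with its evidence (rule or task identifier, action description, input and output data) and recording dates; the field names here are illustrative, not taken from the article:
      title_element = {
          "value": "Cataloging must change!",
          "evidence": {
              "rule_id": "AACR2 1.1B1",            # identifier of the rule applied
              "action": "transcribed from title page",
              "input": "title page image",
          },
          "recorded": "2005-06-18",
          "history": [],  # earlier values, each with its own evidence entry
      }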
    Date
    18. 6.2005 13:16:22
  20. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.00
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rollin (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rollin's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rollin's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
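    The two calculation methods are usually given as follows, where A is the number of terms both indexers assign and M and N are the numbers of terms assigned by only the first or only the second indexer (formulas quoted from the general consistency literature, not verified against this article):
      \[
        C_{\text{Hooper}} = \frac{A}{A + M + N},
        \qquad
        C_{\text{Rollin}} = \frac{2A}{2A + M + N}
      \]
    For example, with 4 shared terms and 2 unique terms per indexer, Hooper's measure gives 4/8 = 50% while Rollin's gives 8/12 ≈ 67%, which mirrors why the Rollin values reported above run consistently higher than the Hooper values.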
    Date
    9. 2.1997 18:44:22

Types

  • a 73
  • m 1
  • r 1