Search (19 results, page 1 of 1)

  • year_i:[1990 TO 2000}
  • theme_ss:"Indexierungsstudien"
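    These facet filters correspond to Solr filter queries on the year_i and theme_ss fields. A minimal sketch of how such a filtered search might be issued (the host, core name, and field list are assumptions, not taken from this page; debugQuery is what produces score explanations like those shown below):

```python
import requests

# Minimal sketch of the kind of Solr request behind this result page.
# Host, core name ("literature"), and field list are assumptions for illustration.
SOLR_SELECT = "http://localhost:8983/solr/literature/select"

params = {
    "q": "indexing",                       # free-text query; the explanations below match _text_:indexing
    "fq": [
        "year_i:[1990 TO 2000}",           # inclusive lower bound, exclusive upper bound
        'theme_ss:"Indexierungsstudien"',  # facet filter on the theme field
    ],
    "fl": "id,title,score",
    "rows": 20,
    "debugQuery": "true",                  # asks Solr for per-document score explanations
    "wt": "json",
}

response = requests.get(SOLR_SELECT, params=params, timeout=10)
data = response.json()
for doc in data["response"]["docs"]:
    print(doc.get("title"), doc["score"])
```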
  1. Veenema, F.: To index or not to index (1996) 0.05
    0.0482846 = product of:
      0.1448538 = sum of:
        0.1448538 = sum of:
          0.09100107 = weight(_text_:indexing in 7247) [ClassicSimilarity], result of:
            0.09100107 = score(doc=7247,freq=4.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.47848347 = fieldWeight in 7247, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=7247)
          0.053852726 = weight(_text_:22 in 7247) [ClassicSimilarity], result of:
            0.053852726 = score(doc=7247,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.30952093 = fieldWeight in 7247, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=7247)
      0.33333334 = coord(1/3)
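    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation. As a sketch of how the listed factors combine, the following reproduces the reported weights for result 1 with plain arithmetic (no Lucene dependency; all values are copied from the explanation above):

```python
import math

# Factors copied from the explanation of result 1 (doc 7247), clause weight(_text_:indexing).
freq       = 4.0          # termFreq
idf        = 3.8278677    # idf(docFreq=2614, maxDocs=44218)
query_norm = 0.049684696  # queryNorm
field_norm = 0.0625       # fieldNorm(doc=7247)

tf           = math.sqrt(freq)              # ClassicSimilarity tf = sqrt(freq) -> 2.0
query_weight = idf * query_norm             # -> 0.19018644
field_weight = tf * idf * field_norm        # -> 0.47848347
term_score   = query_weight * field_weight  # -> 0.09100107

# The document score adds the second matching clause (weight of _text_:22)
# and applies coord(1/3), since 1 of 3 query clauses matched.
other_clause = 0.053852726
doc_score = (term_score + other_clause) * (1.0 / 3.0)
print(round(doc_score, 7))                  # ~0.0482846, the score shown for result 1
```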
    
    Abstract
    Describes an experiment comparing the performance of automatic full-text indexing software for personal computers with the human intellectual assignment of indexing terms to each document in a collection. Considers the times required to index the documents and to retrieve documents satisfying 5 typical foreseen information needs, and the recall and precision ratios of searching. The software used is the QuickFinder facility in WordPerfect 6.1 for Windows.
    Source
    Canadian journal of information and library science. 21(1996) no.2, S.1-22
  2. Booth, A.: How consistent is MEDLINE indexing? (1990) 0.04
    0.042249024 = product of:
      0.12674707 = sum of:
        0.12674707 = sum of:
          0.079625934 = weight(_text_:indexing in 3510) [ClassicSimilarity], result of:
            0.079625934 = score(doc=3510,freq=4.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.41867304 = fieldWeight in 3510, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3510)
          0.047121134 = weight(_text_:22 in 3510) [ClassicSimilarity], result of:
            0.047121134 = score(doc=3510,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 3510, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3510)
      0.33333334 = coord(1/3)
    
    Abstract
    A known-item search for abstracts to previously retrieved references revealed that 2 documents from the same annual volume had been indexed twice. Working from the premise that the whole volume may have been double-indexed, a search strategy was devised that limited the journal code to the year in question. 57 references were retrieved, comprising 28 pairs of duplicates plus a citation for the whole volume. Author, title, source and descriptors were requested off-line and the citations were paired with their duplicates. The 4 categories of descriptors (major descriptors, minor descriptors, subheadings and check-tags) were compared for depth and consistency of indexing, and lessons that might be learnt from the study are discussed.
    Source
    Health libraries review. 7(1990) no.1, S.22-26
  3. Soergel, D.: Indexing and retrieval performance : the logical evidence (1994) 0.03
    0.028152019 = product of:
      0.08445606 = sum of:
        0.08445606 = product of:
          0.16891211 = sum of:
            0.16891211 = weight(_text_:indexing in 579) [ClassicSimilarity], result of:
              0.16891211 = score(doc=579,freq=18.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.8881396 = fieldWeight in 579, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=579)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article presents a logical analysis of the characteristics of indexing and their effects on retrieval performance. It establishes the ability to ask the questions one needs to ask as the foundation of performance evaluation, and recall and discrimination as the basic quantitative performance measures for binary noninteractive retrieval systems. It then defines the characteristics of indexing that affect retrieval, namely indexing devices, viewpoint-based and importance-based indexing exhaustivity, indexing specificity, indexing correctness, and indexing consistency, and examines their effects on retrieval in detail. It concludes that retrieval performance depends chiefly on the match between indexing and the requirements of the individual query and on the adaptation of the query formulation to the characteristics of the retrieval system, and that the ensuing complexity must be considered in the design and testing of retrieval systems.
  4. Reich, P.; Biever, E.J.: Indexing consistency : The input/output function of thesauri (1991) 0.03
    0.026269745 = product of:
      0.07880923 = sum of:
        0.07880923 = product of:
          0.15761846 = sum of:
            0.15761846 = weight(_text_:indexing in 2258) [ClassicSimilarity], result of:
              0.15761846 = score(doc=2258,freq=12.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.82875764 = fieldWeight in 2258, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2258)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This study measures inter-indexer consistency as determined by the number of identical terms assigned to the same document by two different indexing organizations using the same thesaurus as a source for the entry vocabulary. The authors derive consistency figures of 24 percent and 45 percent for two samples. Factors in the consistency failures include variations in indexing depth, differences in choice of concepts for indexing, different indexing policies, and a highly specific indexing vocabulary. Results indicate that broad search strategies are often necessary for adequate search yields.
  5. Braam, R.R.; Bruil, J.: Quality of indexing information : authors' views on indexing of their articles in chemical abstracts online CA-file (1992) 0.02
    0.02275027 = product of:
      0.068250805 = sum of:
        0.068250805 = product of:
          0.13650161 = sum of:
            0.13650161 = weight(_text_:indexing in 2638) [ClassicSimilarity], result of:
              0.13650161 = score(doc=2638,freq=16.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.7177252 = fieldWeight in 2638, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2638)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Studies the quality of subject indexing by the Chemical Abstracts Indexing Service by confronting authors with the particular indexing terms attributed to their articles, for 270 articles published in 54 journals, 5 articles out of each journal. Responses (80%) indicate the superior quality of keywords, both as content descriptors and as retrieval tools. Author judgements on these 2 different aspects do not always converge, however. CAS's indexing policy of covering only 'new' aspects is reflected in authors' judgements that index lists are somewhat incomplete, in particular in the case of thesaurus terms (index headings). The large effort expended by CAS in maintaining and using a subject thesaurus, in order to select valid index headings, as compared to quick and cheap keyword postings, does not lead to clearly superior quality of thesaurus terms, either for document description or for retrieval. Some 20% of papers were not placed in the 'proper' CA main section, according to the authors. As concerns the use of indexing data by third parties, e.g. in bibliometrics, users should be aware of the indexing policies behind the data in order to prevent invalid interpretations.
  6. Tonta, Y.: A study of indexing consistency between Library of Congress and British Library catalogers (1991) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 2277) [ClassicSimilarity], result of:
              0.12869495 = score(doc=2277,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 2277, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2277)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Indexing consistency between Library of Congress and British Library catalogers using the LCSH is compared. 82 titles published in 1987 in the field of library and information science were identified for comparison, and for each title the LC subject headings assigned by LC and BL catalogers were compared. By applying Hooper's 'consistency of a pair' equation, the average indexing consistency value was calculated for the 82 titles. The average indexing consistency value between LC and BL catalogers is 16% for exact matches and 36% for partial matches.
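    Hooper's 'consistency of a pair' measure, used in the study above, is commonly given as the number of index terms assigned in common divided by the total number of distinct terms assigned by either cataloger. A minimal sketch, with invented heading sets for illustration:

```python
def hooper_consistency(terms_a: set, terms_b: set) -> float:
    """Hooper's 'consistency of a pair': terms assigned in common divided by
    the number of distinct terms assigned by either indexer."""
    if not terms_a and not terms_b:
        return 1.0  # neither indexer assigned anything; treat as fully consistent
    common = len(terms_a & terms_b)
    return common / (len(terms_a) + len(terms_b) - common)

# Hypothetical LCSH assignments for one title by LC and BL catalogers (invented).
lc_terms = {"Indexing", "Subject cataloging", "Libraries--Automation"}
bl_terms = {"Indexing", "Cataloguing"}
print(f"{hooper_consistency(lc_terms, bl_terms):.0%}")  # 25% for this invented pair
```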
  7. Iivonen, M.; Kivimäki, K.: Common entities and missing properties : similarities and differences in the indexing of concepts (1998) 0.02
    0.019702308 = product of:
      0.059106924 = sum of:
        0.059106924 = product of:
          0.11821385 = sum of:
            0.11821385 = weight(_text_:indexing in 3074) [ClassicSimilarity], result of:
              0.11821385 = score(doc=3074,freq=12.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6215682 = fieldWeight in 3074, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3074)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The selection and representation of concepts in the indexing of the same documents in 2 databases of library and information studies are considered. The authors compare the indexing of 49 documents in KINF and LISA. They focus on the types of concepts presented in indexing, the degree of concept consistency in indexing, and similarities and differences in the indexing of concepts. The largest group of indexed concepts in both databases was the category of entities, while concepts belonging to the category of properties were almost missing in both databases. The second largest group of indexed concepts in KINF was the category of activities, and in LISA the category of dimensions. Although the concept consistency between KINF and LISA remained rather low, at only 34%, there were approximately 2.2 concepts per document that were indexed from the same documents in both databases. These common concepts belonged mostly to the category of entities.
  8. Iivonen, M.: Interindexer consistency and the indexing environment (1990) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 3593) [ClassicSimilarity], result of:
              0.11260808 = score(doc=3593,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 3593, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3593)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Considers the interindexer consistency between indexers working in various organisations and reports the results of an empirical study. The interindexer consistency was low, but there were clear differences depending on whether the consistency was calculated on the basis of terms, concepts or aspects. The fact that the consistency figures remained low can be explained. The low indexing consistency caused by indexing errors also seems to be difficult to control. Indexing consistency and its control have a clear impact on how feasible and useful centralised services and union catalogues are and can be from the point of view of subject description.
  9. Iivonen, M.: The impact of the indexing environment on interindexer consistency (1990) 0.02
    0.018575516 = product of:
      0.055726547 = sum of:
        0.055726547 = product of:
          0.11145309 = sum of:
            0.11145309 = weight(_text_:indexing in 4779) [ClassicSimilarity], result of:
              0.11145309 = score(doc=4779,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5860202 = fieldWeight in 4779, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4779)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The interindexer consistency between indexers working in 10 libraries was considered. The indexing environment is described with the help of organisational theory. Interindexer consistency was low, but there were clear differences depending on whether consistency was calculated on the basis of terms, concepts or aspects. Discusses the indexing environment's connections to interindexer consistency.
  10. Hersh, W.R.; Hickam, D.H.: A comparison of two methods for indexing and retrieval from a full-text medical database (1992) 0.02
    0.016253578 = product of:
      0.04876073 = sum of:
        0.04876073 = product of:
          0.09752146 = sum of:
            0.09752146 = weight(_text_:indexing in 4526) [ClassicSimilarity], result of:
              0.09752146 = score(doc=4526,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5127677 = fieldWeight in 4526, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4526)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports results of a study of 2 information retrieval systems on a 2,000-document full-text medical database. The first system, SAPHIRE, features concept-based automatic indexing and statistical retrieval techniques, while the second system, SWORD, features traditional word-based Boolean techniques. 16 medical students at Oregon Health Sciences Univ. each performed 10 searches, and their results, recorded in terms of recall and precision, showed nearly equal performance for both systems. SAPHIRE was also compared with a version of SWORD modified to use automatic indexing and ranked retrieval. Using batch input of queries, the latter method performed slightly better.
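    Recall and precision, the measures used to compare SAPHIRE and SWORD, can be computed per search from the set of retrieved documents and the set judged relevant. A minimal sketch with invented document identifiers:

```python
def recall_precision(retrieved: set, relevant: set):
    """Return (recall, precision) for one search, given relevance judgements."""
    hits = len(retrieved & relevant)
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical outcome of one of the ten searches (document ids are invented).
retrieved = {"d01", "d07", "d09", "d13"}
relevant  = {"d01", "d07", "d21", "d22", "d30"}
r, p = recall_precision(retrieved, relevant)
print(f"recall={r:.2f} precision={p:.2f}")  # recall=0.40 precision=0.50
```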
  11. David, C.; Giroux, L.; Bertrand-Gastaldy, S.; Lanteigne, D.: Indexing as problem solving : a cognitive approach to consistency (1995) 0.02
    0.016086869 = product of:
      0.048260607 = sum of:
        0.048260607 = product of:
          0.09652121 = sum of:
            0.09652121 = weight(_text_:indexing in 3609) [ClassicSimilarity], result of:
              0.09652121 = score(doc=3609,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5075084 = fieldWeight in 3609, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3609)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Indexers differ in their judgement as to which terms adequately reflect the content of a document. Studies of interindexer consistency have identified several factors associated with low consistency, but failed to provide a comprehensive model of this phenomenon. Our research applies theories and methods from cognitive psychology to the study of indexing behavior. From a theoretical standpoint, indexing is considered a problem-solving situation. To access the cognitive processes of indexers, 3 kinds of verbal reports are used. We will present results of an experiment in which 4 experienced indexers indexed the same documents. It will be shown that the 3 kinds of verbal reports provide complementary data on strategic behavior, and that it is of prime importance to consider the indexing task as an ill-defined problem, where the solution is partly defined by the indexer him(her)self.
  12. Burgin, R.: The effect of indexing exhaustivity on retrieval performance (1991) 0.02
    0.016086869 = product of:
      0.048260607 = sum of:
        0.048260607 = product of:
          0.09652121 = sum of:
            0.09652121 = weight(_text_:indexing in 5262) [ClassicSimilarity], result of:
              0.09652121 = score(doc=5262,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5075084 = fieldWeight in 5262, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The study was based on the collection examined by W.H. Shaw (Inf. proc. man. 26(1990) no.6, S.693-703, 705-718), a test collection of 1239 articles indexed with the term cystic fibrosis, and 100 queries with 3 sets of relevance evaluations from subject experts. The effect of variations in indexing exhaustivity on retrieval performance in a vector space retrieval system was investigated by using a term weight threshold to construct different document representations for the test collection. Retrieval results showed that retrieval performance, as measured by the mean optimal measure for all queries at a term weight threshold, was highest at the most exhaustive representation and decreased slightly as terms were eliminated and the indexing representation became less exhaustive. The findings suggest that the vector space model is more robust against variations in indexing exhaustivity than is the single-link clustering model.
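    The manipulation described, reducing indexing exhaustivity by dropping index terms whose weight falls below a threshold and then retrieving with a vector space model, can be sketched as follows (toy term weights and cosine matching; not Burgin's data or weighting scheme):

```python
import math

def apply_threshold(weights: dict, threshold: float) -> dict:
    """Keep only index terms whose weight reaches the threshold (a less exhaustive representation)."""
    return {term: w for term, w in weights.items() if w >= threshold}

def cosine(query: dict, doc: dict) -> float:
    """Cosine similarity between sparse term-weight vectors."""
    dot = sum(w * doc.get(term, 0.0) for term, w in query.items())
    norm_q = math.sqrt(sum(w * w for w in query.values()))
    norm_d = math.sqrt(sum(w * w for w in doc.values()))
    return dot / (norm_q * norm_d) if norm_q and norm_d else 0.0

# Toy document representation and query; the term weights are invented.
doc   = {"cystic": 0.9, "fibrosis": 0.8, "therapy": 0.3, "children": 0.1}
query = {"cystic": 1.0, "fibrosis": 1.0, "therapy": 1.0}

for threshold in (0.0, 0.25, 0.5):
    rep = apply_threshold(doc, threshold)
    print(f"threshold={threshold:.2f}  terms={sorted(rep)}  cosine={cosine(query, rep):.3f}")
```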
  13. Haanen, E.: Specificiteit en consistentie : een kwantitatief onderzoek naar trefwoordtoekenning door UBA en UBN (1991) 0.02
    0.015166845 = product of:
      0.045500536 = sum of:
        0.045500536 = product of:
          0.09100107 = sum of:
            0.09100107 = weight(_text_:indexing in 4778) [ClassicSimilarity], result of:
              0.09100107 = score(doc=4778,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.47848347 = fieldWeight in 4778, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4778)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Online public access catalogues enable users to undertake subject searching by classification schedules, natural language, or controlled language terminology. In practice the 1st method is little used. Controlled language systems require indexers to index specifically and consistently. A comparative survey was made of indexing practices at Amsterdam and Nijmegen university libraries. On average Amsterdam assigned each document 3.5 index terms against 1.8 at Nijmegen. This discrepancy in indexing policy is the result of long-standing practices in each institution. Nijmegen has failed to utilise the advantages offered by online catalogues.
  14. David, C.; Giroux, L.; Bertrand-Gastaldy, S.; Lanteigne, D.: Indexing as problem solving : a cognitive approach to consistency (1995) 0.02
    0.015166845 = product of:
      0.045500536 = sum of:
        0.045500536 = product of:
          0.09100107 = sum of:
            0.09100107 = weight(_text_:indexing in 3833) [ClassicSimilarity], result of:
              0.09100107 = score(doc=3833,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.47848347 = fieldWeight in 3833, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3833)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents results of an experiment in which 8 indexers (4 beginners and 4 experts) were asked to index the same 4 documents with 2 different thesauri. The 3 kinds of verbal reports provide complementary data on strategic behaviour. It is of prime importance to consider the indexing task as an ill-defined problem, where the solution is partly defined by the indexer.
  15. Edwards, S.: Indexing practices at the National Agricultural Library (1993) 0.02
    0.015166845 = product of:
      0.045500536 = sum of:
        0.045500536 = product of:
          0.09100107 = sum of:
            0.09100107 = weight(_text_:indexing in 555) [ClassicSimilarity], result of:
              0.09100107 = score(doc=555,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.47848347 = fieldWeight in 555, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=555)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article discusses indexing practices at the National Agricultural Library. Indexers at NAL scan over 2,200 incoming journals for input into its bibliographic database, AGRICOLA. The National Agricultural Library's coverage extends worldwide, covering a broad range of agricultural subjects. Access to AGRICOLA occurs in several ways: onsite search, commercial vendors, Dialog Information Services, Inc. and BRS Information Technologies. The National Agricultural Library uses CAB THESAURUS to describe the subject content of articles in AGRICOLA.
  16. Rowley, J.: The controlled versus natural indexing languages debate revisited : a perspective on information retrieval practice and research (1994) 0.01
    0.014988055 = product of:
      0.044964164 = sum of:
        0.044964164 = product of:
          0.08992833 = sum of:
            0.08992833 = weight(_text_:indexing in 7151) [ClassicSimilarity], result of:
              0.08992833 = score(doc=7151,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.47284302 = fieldWeight in 7151, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7151)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article revisits the debate concerning controlled and natural indexing languages, as used in searching the databases of the online hosts, in-house information retrieval systems, online public access catalogues and databases stored on CD-ROM. The debate was first formulated in the early days of information retrieval more than a century ago but, despite significant advances in technology, remains unresolved. The article divides the history of the debate into four eras. Era one was characterised by the introduction of controlled vocabulary. Era two focused on comparisons between different indexing languages in order to assess which was best. Era three saw a number of case studies of limited generalisability and a general recognition that the best search performance can be achieved by the parallel use of the two types of indexing languages. The emphasis in Era four has been on the development of end-user-based systems, including online public access catalogues and databases on CD-ROM. Recent developments in the use of expert systems techniques to support the representation of meaning may lead to systems which offer significant support to the user in end-user searching. In the meantime, however, information retrieval in practice involves a mixture of natural and controlled indexing languages used to search a wide variety of different kinds of databases.
  17. Ballard, R.M.: Indexing and its relevance to technical processing (1993) 0.01
    0.014988055 = product of:
      0.044964164 = sum of:
        0.044964164 = product of:
          0.08992833 = sum of:
            0.08992833 = weight(_text_:indexing in 554) [ClassicSimilarity], result of:
              0.08992833 = score(doc=554,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.47284302 = fieldWeight in 554, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=554)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The development of regional on-line catalogs and in-house information systems for retrieval of references provides examples of the impact of indexing theory and applications on technical processing. More emphasis must be given to understanding the techniques for evaluating the effectiveness of a file, irrespective of whether that file was created as a library catalog or an index to information sources. The most significant advances in classification theory in recent decades have been the result of efforts to improve the effectiveness of indexing systems. Library classification systems are indexing languages or systems. Courses offered for the preparation of indexers in the United States and the United Kingdom are reviewed. A point of congruence for both the indexer and the library classifier would appear to be the need for a thorough preparation in the techniques of subject analysis. Any subject heading list will suffer from omissions as well as the inclusion of terms which the patron will never use. Indexing theory has provided the technical services department with methods for evaluation of effectiveness. The writer does not believe that these techniques are used, nor do current courses, workshops, and continuing education programs stress them. When theory is totally subjugated to practice, critical thinking and maximum effectiveness will suffer.
  18. Kautto, V.: Classing and indexing : a comparative time study (1992) 0.01
    0.011375135 = product of:
      0.034125403 = sum of:
        0.034125403 = product of:
          0.068250805 = sum of:
            0.068250805 = weight(_text_:indexing in 2670) [ClassicSimilarity], result of:
              0.068250805 = score(doc=2670,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3588626 = fieldWeight in 2670, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2670)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A total of 16 classifiers made a subject analysis of a set of books such that some of the books were first classified by the UDC and then indexed with terms from the General Finnish Subject Headings, while another set were processed in the opposite order. Finally, books on the same subject were either classified or indexed. The total number of books processed was 581. A comparison was made of the time required for processing in different situations and of the number of classes or subject headings used. The time figures were compared with corresponding data from the British Library (1972) and the Library of Congress (1990 and 1991). The author finds that the content analysis requires one third, classification one third and indexing one third of the time, if the document is both classified and indexed. There was a plausible correlation (0.51) between the length of experience in classification and the decrease in the time required for classing. The average number of UDC numbers was 4.3 and the average number of terms from the list of subject headings was 4.0.
  19. Soergel, D.: Indexing and retrieval performance : the logical evidence (1997) 0.01
    0.01072458 = product of:
      0.032173738 = sum of:
        0.032173738 = product of:
          0.064347476 = sum of:
            0.064347476 = weight(_text_:indexing in 578) [ClassicSimilarity], result of:
              0.064347476 = score(doc=578,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3383389 = fieldWeight in 578, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=578)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)