Search (46 results, page 1 of 3)

  • theme_ss:"Indexierungsstudien"
  1. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.08
    0.08369591 = product of:
      0.2678269 = sum of:
        0.115593545 = weight(_text_:descriptive in 3565) [ClassicSimilarity], result of:
          0.115593545 = score(doc=3565,freq=6.0), product of:
            0.17974061 = queryWeight, product of:
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.032090448 = queryNorm
            0.64311314 = fieldWeight in 3565, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.046875 = fieldNorm(doc=3565)
        0.0381542 = product of:
          0.0763084 = sum of:
            0.0763084 = weight(_text_:rules in 3565) [ClassicSimilarity], result of:
              0.0763084 = score(doc=3565,freq=4.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.47215426 = fieldWeight in 3565, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3565)
          0.5 = coord(1/2)
        0.024727343 = weight(_text_:american in 3565) [ClassicSimilarity], result of:
          0.024727343 = score(doc=3565,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.22601068 = fieldWeight in 3565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.046875 = fieldNorm(doc=3565)
        0.0763084 = weight(_text_:rules in 3565) [ClassicSimilarity], result of:
          0.0763084 = score(doc=3565,freq=4.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.47215426 = fieldWeight in 3565, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.046875 = fieldNorm(doc=3565)
        0.013043438 = product of:
          0.026086876 = sum of:
            0.026086876 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
              0.026086876 = score(doc=3565,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.23214069 = fieldWeight in 3565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3565)
          0.5 = coord(1/2)
      0.3125 = coord(5/16)
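    The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown of the document score. As a minimal sketch, using only the constants shown in the tree and the standard ClassicSimilarity relations it names, the "_text_:descriptive" branch and the final score can be reproduced as follows:
      # Reproduces the "_text_:descriptive" branch and the final score of the
      # explanation above; all constants are taken directly from the tree.
      import math

      tf         = math.sqrt(6.0)      # 2.4494898 = tf(freq=6.0)
      idf        = 5.601063            # idf(docFreq=443, maxDocs=44218)
      query_norm = 0.032090448
      field_norm = 0.046875

      query_weight = idf * query_norm             # 0.17974061
      field_weight = tf * idf * field_norm        # 0.64311314
      branch_score = query_weight * field_weight  # 0.115593545

      # The document score is the sum of all matching branch scores, scaled by
      # the coordination factor coord(5/16): 5 of 16 query clauses matched.
      doc_score = 0.2678269 * (5.0 / 16.0)        # 0.08369591
      print(round(branch_score, 9), round(doc_score, 8))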
    
    Abstract
    This article proposes recording, in bibliographic records and descriptive metadata, evidence for data values alongside the values themselves, with the aim of improving the expressiveness and reliability of those records and metadata. Recorded evidence indicates why and how data values are recorded for elements. Recording the history of changes in data values is also proposed, with the aim of reinforcing recorded evidence. First, the evidence that can be recorded is categorized into classes: identifiers of rules or tasks, descriptions of the actions performed, and their input and output data; the dates on which values and evidence are recorded form an additional class. The relative usefulness of these evidence classes, and of the levels (record, data element, or data value) at which an individual class is applied, is then examined. Second, examples that can be viewed as recorded evidence in existing bibliographic records and current cataloging rules are shown. Third, examples of bibliographic records and descriptive metadata with notes of evidence are demonstrated. Fourth, ways of using recorded evidence are addressed.
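    As a purely illustrative sketch of the proposal (the field names and the cited rule are assumptions of this summary, not taken from the paper), a data element annotated with the evidence classes and a change history might look like this:
      # Hypothetical example only: one data element carrying the evidence classes
      # named in the abstract (rule/task identifier, action description, input and
      # output data, dates) plus a change history. All field names are illustrative.
      title_element = {
          "value": "Recording evidence in bibliographic records and descriptive metadata",
          "evidence": {
              "rule_id": "AACR2 1.1B1",   # identifier of the rule applied (assumed example)
              "action": "transcribed title proper from the title page",
              "input": "title page of the item in hand",
              "date_recorded": "2005-06-18",
          },
          "history": [
              {"date": "2005-06-18", "change": "initial recording of value and evidence"},
          ],
      }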
    Date
    18. 6.2005 13:16:22
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.8, S.872-882
  2. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.03
    0.02959627 = product of:
      0.07892338 = sum of:
        0.016506769 = weight(_text_:author in 1858) [ClassicSimilarity], result of:
          0.016506769 = score(doc=1858,freq=2.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.106613114 = fieldWeight in 1858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.008843721 = weight(_text_:26 in 1858) [ClassicSimilarity], result of:
          0.008843721 = score(doc=1858,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.07803638 = fieldWeight in 1858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.022245986 = weight(_text_:descriptive in 1858) [ClassicSimilarity], result of:
          0.022245986 = score(doc=1858,freq=2.0), product of:
            0.17974061 = queryWeight, product of:
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.032090448 = queryNorm
            0.12376717 = fieldWeight in 1858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.0089930305 = product of:
          0.017986061 = sum of:
            0.017986061 = weight(_text_:rules in 1858) [ClassicSimilarity], result of:
              0.017986061 = score(doc=1858,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.111287825 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
        0.017986061 = weight(_text_:rules in 1858) [ClassicSimilarity], result of:
          0.017986061 = score(doc=1858,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.111287825 = fieldWeight in 1858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.004347813 = product of:
          0.008695626 = sum of:
            0.008695626 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.008695626 = score(doc=1858,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.375 = coord(6/16)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records is inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise. This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate them to his own observations in a meaningful way.
Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the five OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would have strengthened the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument. Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
  3. Boll, J.J.: DDC classification rules : an outline history and comparison of two sets of rules (1988) 0.03
    0.028912194 = product of:
      0.23129755 = sum of:
        0.07709918 = product of:
          0.15419836 = sum of:
            0.15419836 = weight(_text_:rules in 404) [ClassicSimilarity], result of:
              0.15419836 = score(doc=404,freq=12.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.95409435 = fieldWeight in 404, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=404)
          0.5 = coord(1/2)
        0.15419836 = weight(_text_:rules in 404) [ClassicSimilarity], result of:
          0.15419836 = score(doc=404,freq=12.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.95409435 = fieldWeight in 404, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.0546875 = fieldNorm(doc=404)
      0.125 = coord(2/16)
    
    Abstract
    Melvil Dewey provided generally applicable classification guidelines or rules with his classification schedules, beginning with the second edition of his scheme. Many cataloging textbooks have adopted these guidelines. Recent editions of the DDC, however, provide considerably changed, quite intricate, and edition-specific rules. The resulting two different sets of classification rules are similar in theory but very different in application. Classifiers must be aware of both sets. They are summarized in two decision charts that are intended to illustrate the differences and similarities between the two sets of rules and to encourage consistent classification decisions. The need is expressed for a parallel, end-user-oriented searching code
  4. Booth, A.: How consistent is MEDLINE indexing? (1990) 0.02
    0.01948951 = product of:
      0.103944056 = sum of:
        0.05777369 = weight(_text_:author in 3510) [ClassicSimilarity], result of:
          0.05777369 = score(doc=3510,freq=2.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.3731459 = fieldWeight in 3510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3510)
        0.030953024 = weight(_text_:26 in 3510) [ClassicSimilarity], result of:
          0.030953024 = score(doc=3510,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.27312735 = fieldWeight in 3510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3510)
        0.015217344 = product of:
          0.030434689 = sum of:
            0.030434689 = weight(_text_:22 in 3510) [ClassicSimilarity], result of:
              0.030434689 = score(doc=3510,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.2708308 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
      0.1875 = coord(3/16)
    
    Abstract
    A known-item search for abstracts to previously retrieved references revealed that 2 documents from the same annual volume had been indexed twice. Working from the premise that the whole volume may have been double-indexed, a search strategy was devised that limited the journal code to the year in question. 57 references were retrieved, comprising 28 pairs of duplicates plus a citation for the whole volume. Author, title, source and descriptors were requested off-line and the citations were paired with their duplicates. The 4 categories of descriptors-major descriptors, minor descriptors, subheadings and check-tags-were compared for depth and consistency of indexing and lessons that might be learnt from the study are discussed.
    Source
    Health libraries review. 7(1990) no.1, S.22-26
  5. Lee, D.H.; Schleyer, T.: Social tagging is no substitute for controlled indexing : a comparison of Medical Subject Headings and CiteULike tags assigned to 231,388 papers (2012) 0.02
    0.018436946 = product of:
      0.09833038 = sum of:
        0.022109302 = weight(_text_:26 in 383) [ClassicSimilarity], result of:
          0.022109302 = score(doc=383,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.19509095 = fieldWeight in 383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=383)
        0.055614963 = weight(_text_:descriptive in 383) [ClassicSimilarity], result of:
          0.055614963 = score(doc=383,freq=2.0), product of:
            0.17974061 = queryWeight, product of:
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.032090448 = queryNorm
            0.3094179 = fieldWeight in 383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.0390625 = fieldNorm(doc=383)
        0.02060612 = weight(_text_:american in 383) [ClassicSimilarity], result of:
          0.02060612 = score(doc=383,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.18834224 = fieldWeight in 383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.0390625 = fieldNorm(doc=383)
      0.1875 = coord(3/16)
    
    Abstract
    Social tagging and controlled indexing both facilitate access to information resources. Given the increasing popularity of social tagging and the limitations of controlled indexing (primarily cost and scalability), it is reasonable to investigate to what degree social tagging could substitute for controlled indexing. In this study, we compared CiteULike tags to Medical Subject Headings (MeSH) terms for 231,388 citations indexed in MEDLINE. In addition to descriptive analyses of the data sets, we present a paper-by-paper analysis of tags and MeSH terms: the number of common annotations, Jaccard similarity, and coverage ratio. In the analysis, we apply three increasingly progressive levels of text processing, ranging from normalization to stemming, to reduce the impact of lexical differences. Annotations of our corpus consisted of over 76,968 distinct tags and 21,129 distinct MeSH terms. The top 20 tags/MeSH terms showed little direct overlap. On a paper-by-paper basis, the number of common annotations ranged from 0.29 to 0.5 and the Jaccard similarity from 2.12% to 3.3% using increased levels of text processing. At most, 77,834 citations (33.6%) shared at least one annotation. Our results show that CiteULike tags and MeSH terms are quite distinct lexically, reflecting different viewpoints/processes between social tagging and controlled indexing.
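    The paper-by-paper comparison described here can be illustrated with a small sketch (not the authors' code; the normalization step and the coverage-ratio definition are simplified assumptions):
      # Illustrative sketch of the per-paper comparison: number of common
      # annotations, Jaccard similarity, and a coverage ratio between a paper's
      # CiteULike tags and its MeSH terms, after a simple normalization step.
      def normalize(term):
          return term.lower().replace("-", " ").strip()

      def compare(tags, mesh_terms):
          t = {normalize(x) for x in tags}
          m = {normalize(x) for x in mesh_terms}
          common = t & m
          jaccard = len(common) / len(t | m) if (t | m) else 0.0
          coverage = len(common) / len(m) if m else 0.0  # share of MeSH terms also present as tags
          return len(common), jaccard, coverage

      print(compare({"social tagging", "indexing", "medical subject headings"},
                    {"Medical Subject Headings", "Abstracting and Indexing"}))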
    Date
    26. 8.2012 14:29:37
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.9, S.1747-1757
  6. Mann, T.: 'Cataloging must change!' and indexer consistency studies : misreading the evidence at our peril (1997) 0.01
    0.014868774 = product of:
      0.118950196 = sum of:
        0.059475098 = weight(_text_:cataloguing in 492) [ClassicSimilarity], result of:
          0.059475098 = score(doc=492,freq=4.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.4168361 = fieldWeight in 492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.046875 = fieldNorm(doc=492)
        0.059475098 = weight(_text_:cataloguing in 492) [ClassicSimilarity], result of:
          0.059475098 = score(doc=492,freq=4.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.4168361 = fieldWeight in 492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.046875 = fieldNorm(doc=492)
      0.125 = coord(2/16)
    
    Abstract
    An earlier article ('Cataloging must change' by D. Gregor and C. Mandel in: Library journal 116(1991) no.6, S.42-47) has popularized the belief that there is low consistency (only 10-20% agreement) among subject cataloguers in assigning LCSH. Because of this alleged lack of consistency, the article suggests, cataloguers 'can be more accepting in variations in subject choices' in copy cataloguing. Argues that this inference is based on a serious misreading of previous studies of indexer consistency. The 10-20% figure actually derives from studies of people trying to guess the same natural language key words, precisely in the absence of vocabulary control mechanisms such as thesauri or LCSH. Concludes that the sources cited fail to support their conclusion and that some directly contradict it. Raises the concern that a naive acceptance by the library profession of the 10-20% claim can only have negative consequences for the quality of subject cataloguing created and accepted throughout the country.
  7. Hersh, W.R.; Hickam, D.H.: ¬A comparison of two methods for indexing and retrieval from a full-text medical database (1992) 0.01
    0.014154989 = product of:
      0.075493276 = sum of:
        0.030953024 = weight(_text_:26 in 4526) [ClassicSimilarity], result of:
          0.030953024 = score(doc=4526,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.27312735 = fieldWeight in 4526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4526)
        0.028848568 = weight(_text_:american in 4526) [ClassicSimilarity], result of:
          0.028848568 = score(doc=4526,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.26367915 = fieldWeight in 4526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4526)
        0.015691686 = product of:
          0.031383373 = sum of:
            0.031383373 = weight(_text_:ed in 4526) [ClassicSimilarity], result of:
              0.031383373 = score(doc=4526,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.27501947 = fieldWeight in 4526, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4526)
          0.5 = coord(1/2)
      0.1875 = coord(3/16)
    
    Source
    Proceedings of the 55th Annual Meeting of the American Society for Information Science, Pittsburgh, 26.-29.10.92. Ed.: D. Shaw
  8. Connell, T.H.: Use of the LCSH system : realities (1996) 0.01
    0.012266113 = product of:
      0.09812891 = sum of:
        0.049064454 = weight(_text_:cataloguing in 6941) [ClassicSimilarity], result of:
          0.049064454 = score(doc=6941,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.34387225 = fieldWeight in 6941, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6941)
        0.049064454 = weight(_text_:cataloguing in 6941) [ClassicSimilarity], result of:
          0.049064454 = score(doc=6941,freq=2.0), product of:
            0.14268221 = queryWeight, product of:
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.032090448 = queryNorm
            0.34387225 = fieldWeight in 6941, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.446252 = idf(docFreq=1408, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6941)
      0.125 = coord(2/16)
    
    Abstract
    Explores the question of whether academic libraries keep up with changes in the LCSH system. Analysis of the handling of 15 subject headings in 50 academic library catalogues available via the Internet found that libraries are not consistently maintaining subject authority control, or making syndetic references and scope notes in their catalogues. Discusses the results from the perspective of the libraries' performance, performance on the headings overall, performance on references, performance on the type of change made to the headings, and performance within 3 widely used online catalogue systems (DRA, INNOPAC and NOTIS). Discusses the implications of the findings in relation to dissatisfaction with the effectiveness of subject cataloguing expressed by discussion groups on the Internet.
  9. Losee, R.: ¬A performance model of the length and number of subject headings and index phrases (2004) 0.01
    0.01011716 = product of:
      0.08093728 = sum of:
        0.026979093 = product of:
          0.053958185 = sum of:
            0.053958185 = weight(_text_:rules in 3725) [ClassicSimilarity], result of:
              0.053958185 = score(doc=3725,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.33386347 = fieldWeight in 3725, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3725)
          0.5 = coord(1/2)
        0.053958185 = weight(_text_:rules in 3725) [ClassicSimilarity], result of:
          0.053958185 = score(doc=3725,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.33386347 = fieldWeight in 3725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.046875 = fieldNorm(doc=3725)
      0.125 = coord(2/16)
    
    Abstract
    When assigning subject headings or index terms to a document, how many terms or phrases should be used to represent the document? The contribution of an indexing phrase to locating and ordering documents can be compared to the contribution of a full-text query to finding documents. The length and number of phrases needed to equal the contribution of a full-text query is the subject of this paper. The appropriate number of phrases is determined in part by the length of the phrases. We suggest several rules that may be used to determine how many subject headings should be assigned, given index phrase lengths, and provide a general model for this process. A difference between characteristics of indexing "hard" science and "social" science literature is suggested.
  10. Huffman, G.D.; Vital, D.A.; Bivins, R.G.: Generating indices with lexical association methods : term uniqueness (1990) 0.01
    0.009715533 = product of:
      0.07772426 = sum of:
        0.022109302 = weight(_text_:26 in 4152) [ClassicSimilarity], result of:
          0.022109302 = score(doc=4152,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.19509095 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4152)
        0.055614963 = weight(_text_:descriptive in 4152) [ClassicSimilarity], result of:
          0.055614963 = score(doc=4152,freq=2.0), product of:
            0.17974061 = queryWeight, product of:
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.032090448 = queryNorm
            0.3094179 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4152)
      0.125 = coord(2/16)
    
    Abstract
    A software system has been developed which orders citations retrieved from an online database in terms of relevancy. The system resulted from an effort generated by NASA's Technology Utilization Program to create new advanced software tools to largely automate the process of determining the relevancy of database citations retrieved to support large technology transfer studies. The ranking is based on the generation of an enriched vocabulary using lexical association methods, a user assessment of the vocabulary, and a combination of the user assessment and the lexical metric. One of the key elements in relevancy ranking is the enriched vocabulary - the terms must be both unique and descriptive. This paper examines term uniqueness. Six lexical association methods were employed to generate characteristic word indices. A limited subset of the terms - the highest 20, 40, 60 and 75% of the uniqueness words - was compared and uniqueness factors were developed. Computational times were also measured. It was found that methods based on occurrences and signal produced virtually the same terms. The limited subsets of terms produced by the exact and centroid discrimination values were also nearly identical. Unique term sets were produced by the occurrence, variance and discrimination value (centroid) methods. An end-user evaluation showed that the generated terms were largely distinct and had values of word precision consistent with the values of search precision.
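    A rough sketch of the kind of comparison described (assumed logic, not the original NASA system): take each method's top fraction of terms by score and measure how much the resulting term sets overlap, a simple stand-in for the uniqueness factors discussed above.
      # Assumed illustration: compare the top-p fraction of terms selected by two
      # scoring methods and report the overlap between the resulting term sets.
      def top_terms(scores, fraction):
          ranked = sorted(scores, key=scores.get, reverse=True)
          cutoff = max(1, int(len(ranked) * fraction))
          return set(ranked[:cutoff])

      def overlap(scores_a, scores_b, fraction):
          a, b = top_terms(scores_a, fraction), top_terms(scores_b, fraction)
          return len(a & b) / len(a | b)

      occurrence = {"orbit": 9.0, "thruster": 7.5, "payload": 6.1, "antenna": 2.0}
      signal     = {"orbit": 0.8, "thruster": 0.7, "antenna": 0.5, "payload": 0.1}
      print(overlap(occurrence, signal, 0.5))   # overlap of the top 50% term sets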
    Source
    Information processing and management. 26(1990) no.4, S.549-558
  11. Zunde, P.; Dexter, M.E.: Factors affecting indexing performance (1969) 0.01
    0.00954434 = product of:
      0.07635472 = sum of:
        0.049454685 = weight(_text_:american in 7496) [ClassicSimilarity], result of:
          0.049454685 = score(doc=7496,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.45202136 = fieldWeight in 7496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.09375 = fieldNorm(doc=7496)
        0.026900033 = product of:
          0.053800065 = sum of:
            0.053800065 = weight(_text_:ed in 7496) [ClassicSimilarity], result of:
              0.053800065 = score(doc=7496,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.47146195 = fieldWeight in 7496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7496)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Source
    Cooperating information societies: Proceedings of the 32nd Annual Meeting of the American Society for Information Science, San Francisco, CA, 1.-4.10.1969. Ed.: J.B. North
  12. Veenema, F.: To index or not to index (1996) 0.01
    0.006595767 = product of:
      0.052766137 = sum of:
        0.035374884 = weight(_text_:26 in 7247) [ClassicSimilarity], result of:
          0.035374884 = score(doc=7247,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.31214553 = fieldWeight in 7247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0625 = fieldNorm(doc=7247)
        0.017391251 = product of:
          0.034782503 = sum of:
            0.034782503 = weight(_text_:22 in 7247) [ClassicSimilarity], result of:
              0.034782503 = score(doc=7247,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.30952093 = fieldWeight in 7247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7247)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Date
    26. 2.1997 10:45:53
    Source
    Canadian journal of information and library science. 21(1996) no.2, S.1-22
  13. David, C.; Giroux, L.; Bertrand-Gastaldy, S.; Lanteigne, D.: Indexing as problem solving : a cognitive approach to consistency (1995) 0.01
    0.006362893 = product of:
      0.050903145 = sum of:
        0.03296979 = weight(_text_:american in 3833) [ClassicSimilarity], result of:
          0.03296979 = score(doc=3833,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.30134758 = fieldWeight in 3833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.0625 = fieldNorm(doc=3833)
        0.017933354 = product of:
          0.035866708 = sum of:
            0.035866708 = weight(_text_:ed in 3833) [ClassicSimilarity], result of:
              0.035866708 = score(doc=3833,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.31430796 = fieldWeight in 3833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3833)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Source
    Forging new partnerships in information: converging technologies. Proceedings of the 58th Annual Meeting of the American Society for Information Science, ASIS'95, Chicago, IL, 9-12 October 1995. Ed.: T. Kinney
  14. Neshat, N.; Horri, A.: ¬A study of subject indexing consistency between the National Library of Iran and Humanities Libraries in the area of Iranian studies (2006) 0.01
    0.005771296 = product of:
      0.04617037 = sum of:
        0.030953024 = weight(_text_:26 in 230) [ClassicSimilarity], result of:
          0.030953024 = score(doc=230,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.27312735 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=230)
        0.015217344 = product of:
          0.030434689 = sum of:
            0.030434689 = weight(_text_:22 in 230) [ClassicSimilarity], result of:
              0.030434689 = score(doc=230,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.2708308 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Date
    4. 1.2007 10:22:26
  15. Hurwitz, F.I.: ¬A study of indexer consistency (1969) 0.01
    0.00515153 = product of:
      0.08242448 = sum of:
        0.08242448 = weight(_text_:american in 2270) [ClassicSimilarity], result of:
          0.08242448 = score(doc=2270,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.753369 = fieldWeight in 2270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.15625 = fieldNorm(doc=2270)
      0.0625 = coord(1/16)
    
    Source
    American documentation. 20(1969), S.92-94
  16. Cooper, W.S.: Is interindexer consistency a hobgoblin? (1969) 0.01
    0.00515153 = product of:
      0.08242448 = sum of:
        0.08242448 = weight(_text_:american in 2273) [ClassicSimilarity], result of:
          0.08242448 = score(doc=2273,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.753369 = fieldWeight in 2273, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.15625 = fieldNorm(doc=2273)
      0.0625 = coord(1/16)
    
    Source
    American documentation. 20(1969), S.268-278
  17. Cleverdon, C.W.: Evaluation tests of information retrieval systems (1970) 0.00
    0.0044218604 = product of:
      0.07074977 = sum of:
        0.07074977 = weight(_text_:26 in 2272) [ClassicSimilarity], result of:
          0.07074977 = score(doc=2272,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.62429106 = fieldWeight in 2272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.125 = fieldNorm(doc=2272)
      0.0625 = coord(1/16)
    
    Source
    Journal of documentation. 26(1970), S.55-67
  18. Lancaster, F.W.; Mills, J.: Testing indexes and index language devices : the ASLIB Cranfield project (1964) 0.00
    0.004121224 = product of:
      0.06593958 = sum of:
        0.06593958 = weight(_text_:american in 2261) [ClassicSimilarity], result of:
          0.06593958 = score(doc=2261,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.60269517 = fieldWeight in 2261, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.125 = fieldNorm(doc=2261)
      0.0625 = coord(1/16)
    
    Source
    American documentation. 15(1964), S.4-13
  19. Zunde, P.; Dexter, M.E.: Indexing consistency and quality (1969) 0.00
    0.004121224 = product of:
      0.06593958 = sum of:
        0.06593958 = weight(_text_:american in 2264) [ClassicSimilarity], result of:
          0.06593958 = score(doc=2264,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.60269517 = fieldWeight in 2264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.125 = fieldNorm(doc=2264)
      0.0625 = coord(1/16)
    
    Source
    American documentation. 20(1969), S.259-267
  20. Richmond, P.A.: Review of the Cranfield project (1963) 0.00
    0.004121224 = product of:
      0.06593958 = sum of:
        0.06593958 = weight(_text_:american in 2269) [ClassicSimilarity], result of:
          0.06593958 = score(doc=2269,freq=2.0), product of:
            0.10940785 = queryWeight, product of:
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.032090448 = queryNorm
            0.60269517 = fieldWeight in 2269, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4093587 = idf(docFreq=3973, maxDocs=44218)
              0.125 = fieldNorm(doc=2269)
      0.0625 = coord(1/16)
    
    Source
    American documentation. 14(1963), S.307-311

Languages

  • e 45
  • f 1

Types

  • a 44
  • m 1
  • r 1