Search (17 results, page 1 of 1)

  • × theme_ss:"Indexierungsstudien"
  1. Ballard, R.M.: Indexing and its relevance to technical processing (1993) 0.05
    0.048028074 = product of:
      0.16809826 = sum of:
        0.09998428 = weight(_text_:united in 554) [ClassicSimilarity], result of:
          0.09998428 = score(doc=554,freq=4.0), product of:
            0.22812355 = queryWeight, product of:
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.04066292 = queryNorm
            0.43829006 = fieldWeight in 554, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.0390625 = fieldNorm(doc=554)
        0.068113975 = weight(_text_:states in 554) [ClassicSimilarity], result of:
          0.068113975 = score(doc=554,freq=2.0), product of:
            0.22391328 = queryWeight, product of:
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.04066292 = queryNorm
            0.304198 = fieldWeight in 554, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.0390625 = fieldNorm(doc=554)
      0.2857143 = coord(2/7)
    
    Abstract
    The development of regional on-line catalogs and in-house information systems for retrieval of references provides examples of the impact of indexing theory and applications on technical processing. More emphasis must be given to understanding the techniques for evaluating the effectiveness of a file, irrespective of whether that file was created as a library catalog or as an index to information sources. The most significant advances in classification theory in recent decades have been the result of efforts to improve the effectiveness of indexing systems. Library classification systems are indexing languages or systems. Courses offered for the preparation of indexers in the United States and the United Kingdom are reviewed. A point of congruence for both the indexer and the library classifier would appear to be the need for thorough preparation in the techniques of subject analysis. Any subject heading list will suffer from omissions as well as from the inclusion of terms which the patron will never use. Indexing theory has provided the technical services department with methods for the evaluation of effectiveness. The writer does not believe that these techniques are used, nor do current courses, workshops, and continuing education programs stress them. When theory is totally subjugated to practice, critical thinking and maximum effectiveness will suffer.
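For readers unfamiliar with the score-explanation trees shown with each result: they follow Lucene's ClassicSimilarity (TF-IDF) formula. A minimal Python sketch that reproduces the numbers printed for result 1 (function and variable names are illustrative, not part of the Lucene API):

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution, as printed in a Lucene ClassicSimilarity
    explanation tree (a sketch of the documented TF-IDF formula)."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf = 1 + ln(maxDocs / (docFreq + 1))
    query_weight = idf * query_norm                  # "queryWeight" line
    field_weight = tf * idf * field_norm             # "fieldWeight" line
    return query_weight * field_weight

# Values taken from the explanation tree of result 1 (doc 554):
united = classic_similarity_term_score(4.0, 439, 44218, 0.04066292, 0.0390625)
states = classic_similarity_term_score(2.0, 487, 44218, 0.04066292, 0.0390625)

# Document score = coord(matching terms / total query terms) * sum of term scores
doc_score = (2 / 7) * (united + states)  # ~ 0.048028074, displayed above as 0.05
```

Running this reproduces the tree: `united` comes out ~ 0.09998428 and `states` ~ 0.068113975, and the intermediate lines match as well (e.g. idf(docFreq=439, maxDocs=44218) = 1 + ln(44218/440) ~ 5.6101127).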
  2. Bade, D.: The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.03
    0.03099436 = product of:
      0.07232017 = sum of:
        0.028279826 = weight(_text_:united in 1858) [ClassicSimilarity], result of:
          0.028279826 = score(doc=1858,freq=2.0), product of:
            0.22812355 = queryWeight, product of:
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.04066292 = queryNorm
            0.12396715 = fieldWeight in 1858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.03853108 = weight(_text_:states in 1858) [ClassicSimilarity], result of:
          0.03853108 = score(doc=1858,freq=4.0), product of:
            0.22391328 = queryWeight, product of:
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.04066292 = queryNorm
            0.17208037 = fieldWeight in 1858, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.0055092643 = product of:
          0.011018529 = sum of:
            0.011018529 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.011018529 = score(doc=1858,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations, without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever-increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records are inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, in order to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise.
This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way. Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
  3. White, H.; Willis, C.; Greenberg, J.: HIVEing : the effect of a semantic web technology on inter-indexer consistency (2014) 0.01
    0.010350773 = product of:
      0.072455406 = sum of:
        0.072455406 = sum of:
          0.044909086 = weight(_text_:design in 1781) [ClassicSimilarity], result of:
            0.044909086 = score(doc=1781,freq=4.0), product of:
              0.15288728 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.04066292 = queryNorm
              0.29373983 = fieldWeight in 1781, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1781)
          0.027546322 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
            0.027546322 = score(doc=1781,freq=2.0), product of:
              0.14239462 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04066292 = queryNorm
              0.19345059 = fieldWeight in 1781, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1781)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - The purpose of this paper is to examine the effect of the Helping Interdisciplinary Vocabulary Engineering (HIVE) system on the inter-indexer consistency of information professionals when assigning keywords to a scientific abstract. This study examined, first, the inter-indexer consistency of potential HIVE users; second, the impact HIVE had on consistency; and third, challenges associated with using HIVE. Design/methodology/approach - A within-subjects quasi-experimental research design was used for this study. Data were collected using a task-scenario-based questionnaire. Analysis was performed on consistency results using Hooper's and Rolling's inter-indexer consistency measures. A series of t-tests was used to judge the significance of differences between consistency measure results. Findings - Results suggest that HIVE improves inter-indexing consistency. Working with HIVE increased consistency rates by 22 percent (Rolling's) and 25 percent (Hooper's) when selecting relevant terms from all vocabularies. A statistically significant difference exists between the assignment of free-text keywords and machine-aided keywords. Issues with homographs, disambiguation, vocabulary choice, and document structure were all identified as potential challenges. Research limitations/implications - The research limitations of this study lie in the small number of vocabularies used. Future research will include implementing HIVE into the Dryad Repository and studying its application in a repository system. Originality/value - This paper showcases several features of the HIVE system. By using traditional consistency measures to evaluate a semantic web technology, it emphasizes the link between traditional indexing and next-generation machine-aided indexing (MAI) tools.
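Hooper's and Rolling's inter-indexer consistency measures used in the study above are simple set comparisons between two indexers' term assignments. A minimal sketch, with hypothetical term sets for illustration:

```python
def hoopers_consistency(terms_a, terms_b):
    """Hooper's measure: terms in common / (common + unique to A + unique to B)."""
    a, b = set(terms_a), set(terms_b)
    common = len(a & b)
    return common / (len(a) + len(b) - common)

def rollings_consistency(terms_a, terms_b):
    """Rolling's measure: 2 * terms in common / (terms assigned by A + by B)."""
    a, b = set(terms_a), set(terms_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical keyword assignments by two indexers to the same abstract:
a = ["indexing", "controlled vocabulary", "semantic web"]
b = ["indexing", "semantic web", "thesauri", "HIVE"]
print(hoopers_consistency(a, b))   # 2 shared terms out of 5 distinct -> 0.4
print(rollings_consistency(a, b))  # 2*2 / (3+4) -> about 0.571
```

Rolling's measure always equals or exceeds Hooper's for the same pair of term sets, which is consistent with the study reporting a smaller improvement in Rolling's (22 percent) than in Hooper's (25 percent) terms.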
  4. Cleverdon, C.W.; Mills, J.; Keen, M.: Factors determining the performance of indexing systems : ASLIB Cranfield research project (1966) 0.01
    0.0063511035 = product of:
      0.044457722 = sum of:
        0.044457722 = product of:
          0.088915445 = sum of:
            0.088915445 = weight(_text_:design in 5363) [ClassicSimilarity], result of:
              0.088915445 = score(doc=5363,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.58157516 = fieldWeight in 5363, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5363)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Issue
    Vol.1: Design; Vol.2: Test results
  5. Cleverdon, C.W.: ASLIB Cranfield Research Project : Report on the first stage of an investigation into the comparative efficiency of indexing systems (1960) 0.00
    0.0047222264 = product of:
      0.033055585 = sum of:
        0.033055585 = product of:
          0.06611117 = sum of:
            0.06611117 = weight(_text_:22 in 6158) [ClassicSimilarity], result of:
              0.06611117 = score(doc=6158,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.46428138 = fieldWeight in 6158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6158)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Footnote
    Rez. in: College and research libraries 22(1961) no.3, S.228 (G. Jahoda)
  6. Keen, E.M.: Designing and testing an interactive ranked retrieval system for professional searchers (1994) 0.00
    0.0039287265 = product of:
      0.027501086 = sum of:
        0.027501086 = product of:
          0.05500217 = sum of:
            0.05500217 = weight(_text_:design in 1066) [ClassicSimilarity], result of:
              0.05500217 = score(doc=1066,freq=6.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.35975635 = fieldWeight in 1066, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1066)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Reports 3 explorations of ranked system design. 2 tests used a 'cystic fibrosis' test collection with 100 queries. Experiment 1 compared a Boolean with a ranked interactive system using a subject-qualified trained searcher, reporting recall and precision results. Experiment 2 compared 15 different ranked match algorithms in a batch mode using 2 test collections, and included some new proximity-pair and term-weighting approaches. Experiment 3 is a design plan for an interactive ranked prototype offering mid-search algorithm choices plus other manual search devices (such as obligatory and unwanted terms), as influenced by think-aloud comments from experiment 1. Concludes that, in Boolean versus ranked retrieval using inverse collection frequency, the searcher inspected more records on the ranked system than on the Boolean one and so achieved higher recall but lower precision; however, the presentation order of the relevant records was, on average, very similar in both systems. Concludes also that: query reformulation was quite strongly practised in ranked searching but does not appear to have been effective; the term-pair proximity weighting methods in experiment 2 enhanced precision on both test collections when used with inverse collection frequency weighting (ICF); and the design plan for an interactive prototype adds to a selection of match algorithms other devices, such as obligatory and unwanted term marking, evidence for this being found in think-aloud comments
  7. Soergel, D.: Indexing and retrieval performance : the logical evidence (1994) 0.00
    0.0031755518 = product of:
      0.022228861 = sum of:
        0.022228861 = product of:
          0.044457722 = sum of:
            0.044457722 = weight(_text_:design in 579) [ClassicSimilarity], result of:
              0.044457722 = score(doc=579,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.29078758 = fieldWeight in 579, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=579)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    This article presents a logical analysis of the characteristics of indexing and their effects on retrieval performance. It establishes the ability to ask the questions one needs to ask as the foundation of performance evaluation, and recall and discrimination as the basic quantitative performance measures for binary noninteractive retrieval systems. It then defines the characteristics of indexing that affect retrieval - namely, indexing devices, viewpoint-based and importance-based indexing exhaustivity, indexing specificity, indexing correctness, and indexing consistency - and examines in detail their effects on retrieval. It concludes that retrieval performance depends chiefly on the match between indexing and the requirements of the individual query and on the adaptation of the query formulation to the characteristics of the retrieval system, and that the ensuing complexity must be considered in the design and testing of retrieval systems
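The two measures named above, recall and discrimination, can be sketched as set ratios for a binary noninteractive system. A minimal sketch, assuming the usual set-based definitions (recall: fraction of relevant documents retrieved; discrimination: fraction of nonrelevant documents correctly rejected); the toy document sets are hypothetical:

```python
def recall(retrieved, relevant):
    """Fraction of the relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant)

def discrimination(retrieved, relevant, collection):
    """Fraction of the nonrelevant documents that were correctly rejected."""
    nonrelevant = collection - relevant
    return len(nonrelevant - retrieved) / len(nonrelevant)

# Toy collection of 10 documents identified by number:
collection = set(range(10))
relevant = {0, 1, 2, 3}
retrieved = {0, 1, 4}
print(recall(retrieved, relevant))                      # 2 of 4 relevant -> 0.5
print(discrimination(retrieved, relevant, collection))  # 5 of 6 nonrelevant rejected
```

Unlike precision, discrimination is independent of how many relevant documents the collection happens to contain, which is why it pairs naturally with recall in this kind of logical analysis.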
  8. Veenema, F.: To index or not to index (1996) 0.00
    0.003148151 = product of:
      0.022037057 = sum of:
        0.022037057 = product of:
          0.044074114 = sum of:
            0.044074114 = weight(_text_:22 in 7247) [ClassicSimilarity], result of:
              0.044074114 = score(doc=7247,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.30952093 = fieldWeight in 7247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7247)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Canadian journal of information and library science. 21(1996) no.2, S.1-22
  9. Booth, A.: How consistent is MEDLINE indexing? (1990) 0.00
    0.0027546322 = product of:
      0.019282425 = sum of:
        0.019282425 = product of:
          0.03856485 = sum of:
            0.03856485 = weight(_text_:22 in 3510) [ClassicSimilarity], result of:
              0.03856485 = score(doc=3510,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.2708308 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Health libraries review. 7(1990) no.1, S.22-26
  10. Neshat, N.; Horri, A.: A study of subject indexing consistency between the National Library of Iran and Humanities Libraries in the area of Iranian studies (2006) 0.00
    0.0027546322 = product of:
      0.019282425 = sum of:
        0.019282425 = product of:
          0.03856485 = sum of:
            0.03856485 = weight(_text_:22 in 230) [ClassicSimilarity], result of:
              0.03856485 = score(doc=230,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.2708308 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    4. 1.2007 10:22:26
  11. Bodoff, D.; Richter-Levin, Y.: Viewpoints in indexing term assignment (2020) 0.00
    0.0027219015 = product of:
      0.01905331 = sum of:
        0.01905331 = product of:
          0.03810662 = sum of:
            0.03810662 = weight(_text_:design in 5765) [ClassicSimilarity], result of:
              0.03810662 = score(doc=5765,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.24924651 = fieldWeight in 5765, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5765)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    The literature on assigned indexing considers three possible viewpoints-the author's viewpoint as evidenced in the title, the users' viewpoint, and the indexer's viewpoint-and asks whether and which of those views should be reflected in an indexer's choice of terms to assign to an item. We study this question empirically, as opposed to normatively. Based on the literature that discusses whose viewpoints should be reflected, we construct a research model that includes those same three viewpoints as factors that might be influencing term assignment in actual practice. In the unique study design that we employ, the records of term assignments made by identified indexers in academic libraries are cross-referenced with the results of a survey that those same indexers completed on political views. Our results indicate that in our setting, variance in term assignment was best explained by indexers' personal political views.
  12. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.00
    0.0023611132 = product of:
      0.016527792 = sum of:
        0.016527792 = product of:
          0.033055585 = sum of:
            0.033055585 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
              0.033055585 = score(doc=3565,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.23214069 = fieldWeight in 3565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3565)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    18. 6.2005 13:16:22
  13. Leininger, K.: Interindexer consistency in PsychINFO (2000) 0.00
    0.0023611132 = product of:
      0.016527792 = sum of:
        0.016527792 = product of:
          0.033055585 = sum of:
            0.033055585 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
              0.033055585 = score(doc=2552,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.23214069 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2552)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    9. 2.1997 18:44:22
  14. Olson, H.A.; Wolfram, D.: Syntagmatic relationships and indexing consistency on a larger scale (2008) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 2214) [ClassicSimilarity], result of:
              0.03175552 = score(doc=2214,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 2214, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2214)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - The purpose of this article is to examine interindexer consistency on a larger scale than other studies have done, to determine whether group consensus is reached by larger numbers of indexers and what relationships, if any, emerge between assigned terms. Design/methodology/approach - In total, 64 MLIS students were recruited to assign up to five terms to a document. The authors applied basic data modeling and the exploratory statistical techniques of multi-dimensional scaling (MDS) and hierarchical cluster analysis to determine whether relationships exist in indexing consistency and the co-occurrence of assigned terms. Findings - Consistency in the assignment of indexing terms to a document follows an inverse shape, although it is not strictly power-law-based, unlike many other social phenomena. The exploratory techniques revealed that groups of terms clustered together. The resulting term co-occurrence relationships were largely syntagmatic. Research limitations/implications - The results are based on the indexing of one article by non-expert indexers and are, thus, not generalizable. Based on the study findings, along with the growing popularity of folksonomies and the apparent authority of communally developed information resources, communally developed indexes based on group consensus may have merit. Originality/value - Consistency in the assignment of indexing terms has been studied primarily on a small scale. Few studies have examined indexing on a larger scale with more than a handful of indexers. Recognition of the differences in indexing assignment has implications for the development of public information systems, especially those that do not use a controlled vocabulary and those tagged by end-users. In such cases, multiple access points that accommodate the different ways that users interpret content are needed so that searchers may be guided to relevant content despite using different terminology.
  15. Hughes, A.V.; Rafferty, P.: Inter-indexer consistency in graphic materials indexing at the National Library of Wales (2011) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 4488) [ClassicSimilarity], result of:
              0.03175552 = score(doc=4488,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 4488, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4488)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - This paper seeks to report a project to investigate the degree of inter-indexer consistency in the assignment of controlled vocabulary topical subject index terms to identical graphical images by different indexers at the National Library of Wales (NLW). Design/methodology/approach - An experimental quantitative methodology was devised to investigate inter-indexer consistency. Additionally, the project investigated the relationship, if any, between indexing exhaustivity and consistency, and the relationship, if any, between indexing consistency/exhaustivity and broad category of graphic format. Findings - Inter-indexer consistency in the assignment of topical subject index terms to graphic materials at the NLW was found to be generally low and highly variable. Inter-indexer consistency fell within the range 10.8 per cent to 48.0 per cent. Indexing exhaustivity varied substantially from indexer to indexer, with a mean assignment of 3.8 terms by each indexer to each image, falling within the range 2.5 to 4.7 terms. The broad category of graphic format, whether photographic or non-photographic, was found to have little influence on either inter-indexer consistency or indexing exhaustivity. Indexing exhaustivity and inter-indexer consistency exhibited a tendency toward a direct, positive relationship. The findings are necessarily limited as this is a small-scale study within a single institution. Originality/value - Previous consistency studies have almost exclusively investigated the indexing of print materials, with very little research published for non-print media. With the literature also rich in discussion of the added complexities of subjectively representing the intellectual content of visual media, this study attempts to enrich existing knowledge on indexing consistency for graphic materials and to address a noticeable gap in information theory.
  16. Subrahmanyam, B.: Library of Congress Classification numbers : issues of consistency and their implications for union catalogs (2006) 0.00
    0.0019675945 = product of:
      0.013773161 = sum of:
        0.013773161 = product of:
          0.027546322 = sum of:
            0.027546322 = weight(_text_:22 in 5784) [ClassicSimilarity], result of:
              0.027546322 = score(doc=5784,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.19345059 = fieldWeight in 5784, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5784)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    10. 9.2000 17:38:22
  17. Lin, Y.-l.; Trattner, C.; Brusilovsky, P.; He, D.: The impact of image descriptions on user tagging behavior : a study of the nature and functionality of crowdsourced tags (2015) 0.00
    0.0018146011 = product of:
      0.012702207 = sum of:
        0.012702207 = product of:
          0.025404414 = sum of:
            0.025404414 = weight(_text_:design in 2159) [ClassicSimilarity], result of:
              0.025404414 = score(doc=2159,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.16616434 = fieldWeight in 2159, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2159)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers performing a series of tasks online. However, little research has been devoted to exploring the impact of factors such as the content of a resource or the design of a crowdsourcing interface on user tagging behavior. Although images' titles and descriptions are frequently available in digital image libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper focuses on offering insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence users in their tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon Mechanical Turk crowdworkers: with and without image descriptions. Several properties of the generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, and so on. In addition, the study examined the impact of image descriptions on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach. Tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter has a higher tag reuse rate. A user study also revealed that different tag sets provided different support for search. Tags produced "with description" shortened the path to the target results, whereas tags produced without descriptions increased user success in the search task.