Search (21 results, page 1 of 2)

  • theme_ss:"Indexierungsstudien"
  1. Broxis, P.F.: ASSIA social science information service (1989) 0.08
    0.08368771 = product of:
      0.16737542 = sum of:
        0.11509455 = weight(_text_:social in 1511) [ClassicSimilarity], result of:
          0.11509455 = score(doc=1511,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.6230592 = fieldWeight in 1511, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.078125 = fieldNorm(doc=1511)
        0.052280862 = product of:
          0.104561724 = sum of:
            0.104561724 = weight(_text_:aspects in 1511) [ClassicSimilarity], result of:
              0.104561724 = score(doc=1511,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.4993796 = fieldWeight in 1511, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1511)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
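    The figures above are Lucene ClassicSimilarity "explain" output. The sketch below reproduces this entry's 0.0837 score from the reported values; the tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) formulas are assumed from ClassicSimilarity, while queryNorm, fieldNorm and the coord factors are taken directly from the tree.

```python
import math

# Values reported in the explain tree for entry 1 (doc 1511)
QUERY_NORM = 0.046325076   # queryNorm (depends on the full query, taken as given)
FIELD_NORM = 0.078125      # fieldNorm for doc 1511

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs=44218):
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * QUERY_NORM                    # idf * queryNorm
    field_weight = math.sqrt(freq) * term_idf * FIELD_NORM  # tf * idf * fieldNorm
    return query_weight * field_weight

social = term_score(freq=4.0, doc_freq=2228)          # ~0.11509 ("social")
aspects = term_score(freq=2.0, doc_freq=1308) * 0.5   # coord(1/2) -> ~0.05228 ("aspects")
print(round((social + aspects) * 0.5, 4))             # coord(2/4) -> 0.0837
```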
    
    Abstract
    ASSIA (Applied Social Sciences Index and Abstracts) started in 1987 as a bimonthly indexing and abstracting service in the social sciences field, aimed at practitioners as well as sociologists. Considers the following aspects of the service: arrangement of ASSIA; journal coverage; indexing approach; services for subscribers; and the users of the service.
  2. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.03
    0.031595502 = product of:
      0.12638201 = sum of:
        0.12638201 = sum of:
          0.08872356 = weight(_text_:aspects in 2552) [ClassicSimilarity], result of:
            0.08872356 = score(doc=2552,freq=4.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.42373765 = fieldWeight in 2552, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
          0.03765845 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
            0.03765845 = score(doc=2552,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.23214069 = fieldWeight in 2552, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rolling (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rolling's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rolling's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
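    Hooper's and Rolling's measures are not defined in the abstract; the sketch below uses their commonly cited forms (an assumption), applied to hypothetical term sets assigned to the same record by two indexers.

```python
def hooper(a: set, b: set) -> float:
    # Hooper (1965): common terms / (terms assigned by A + terms assigned by B - common)
    common = len(a & b)
    return common / (len(a) + len(b) - common)

def rolling(a: set, b: set) -> float:
    # Rolling (1981): 2 * common terms / (terms assigned by A + terms assigned by B)
    common = len(a & b)
    return 2 * common / (len(a) + len(b))

# Hypothetical terms assigned to the same record by two indexers
indexer_1 = {"indexing", "consistency", "databases", "PsycINFO"}
indexer_2 = {"indexing", "consistency", "subject headings"}
print(hooper(indexer_1, indexer_2), rolling(indexer_1, indexer_2))  # 0.4 and ~0.571
```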
    Date
    9. 2.1997 18:44:22
  3. Lee, D.H.; Schleyer, T.: Social tagging is no substitute for controlled indexing : a comparison of Medical Subject Headings and CiteULike tags assigned to 231,388 papers (2012) 0.02
    0.024918701 = product of:
      0.099674806 = sum of:
        0.099674806 = weight(_text_:social in 383) [ClassicSimilarity], result of:
          0.099674806 = score(doc=383,freq=12.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.5395851 = fieldWeight in 383, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=383)
      0.25 = coord(1/4)
    
    Abstract
    Social tagging and controlled indexing both facilitate access to information resources. Given the increasing popularity of social tagging and the limitations of controlled indexing (primarily cost and scalability), it is reasonable to investigate to what degree social tagging could substitute for controlled indexing. In this study, we compared CiteULike tags to Medical Subject Headings (MeSH) terms for 231,388 citations indexed in MEDLINE. In addition to descriptive analyses of the data sets, we present a paper-by-paper analysis of tags and MeSH terms: the number of common annotations, Jaccard similarity, and coverage ratio. In the analysis, we apply three increasingly progressive levels of text processing, ranging from normalization to stemming, to reduce the impact of lexical differences. Annotations of our corpus consisted of over 76,968 distinct tags and 21,129 distinct MeSH terms. The top 20 tags/MeSH terms showed little direct overlap. On a paper-by-paper basis, the number of common annotations ranged from 0.29 to 0.5 and the Jaccard similarity from 2.12% to 3.3% using increased levels of text processing. At most, 77,834 citations (33.6%) shared at least one annotation. Our results show that CiteULike tags and MeSH terms are quite distinct lexically, reflecting different viewpoints/processes between social tagging and controlled indexing.
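    As an illustration of the paper-by-paper comparison described above, a minimal sketch; the normalization step, the example annotations, and the reading of "coverage ratio" as the share of a paper's MeSH terms also found among its tags are assumptions, not taken from the paper.

```python
import re

def normalize(term: str) -> str:
    # crude "level 1" text processing: lowercase, drop punctuation
    return re.sub(r"[^a-z0-9 ]", "", term.lower()).strip()

def jaccard(tags: set, mesh: set) -> float:
    # paper-by-paper Jaccard similarity between the two annotation sets
    union = tags | mesh
    return len(tags & mesh) / len(union) if union else 0.0

def coverage(tags: set, mesh: set) -> float:
    # assumed reading: share of the paper's MeSH terms also present among its tags
    return len(tags & mesh) / len(mesh) if mesh else 0.0

# hypothetical annotations of a single citation
tags = {normalize(t) for t in ["machine-learning", "MEDLINE", "indexing"]}
mesh = {normalize(t) for t in ["Medical Subject Headings", "Indexing", "MEDLINE"]}
print(jaccard(tags, mesh), coverage(tags, mesh))  # 0.5 and ~0.67
```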
    Theme
    Social tagging
  4. Swanson, D.R.: Some unexplained aspects of the Cranfield tests of indexing performance factors (1971) 0.02
    0.020912344 = product of:
      0.083649375 = sum of:
        0.083649375 = product of:
          0.16729875 = sum of:
            0.16729875 = weight(_text_:aspects in 2337) [ClassicSimilarity], result of:
              0.16729875 = score(doc=2337,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.79900736 = fieldWeight in 2337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.125 = fieldNorm(doc=2337)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  5. Biagetti, M.T.: Indexing and scientific research needs (2006) 0.01
    0.0142422225 = product of:
      0.05696889 = sum of:
        0.05696889 = weight(_text_:social in 235) [ClassicSimilarity], result of:
          0.05696889 = score(doc=235,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.30839854 = fieldWeight in 235, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=235)
      0.25 = coord(1/4)
    
    Abstract
    The paper examines the main problems of semantic indexing in connection with the needs of scientific research, in particular in the field of the Social Sciences. A multi-modal indexing approach, which allows researchers to find documents according to different dimensions of research, is described. Request-oriented indexing and the pragmatic approach are also discussed and, finally, the possibility of adopting C.S. Peirce's theory of abduction as a fundamental principle of indexing is outlined.
  6. Losee, R.: ¬A performance model of the length and number of subject headings and index phrases (2004) 0.01
    0.01220762 = product of:
      0.04883048 = sum of:
        0.04883048 = weight(_text_:social in 3725) [ClassicSimilarity], result of:
          0.04883048 = score(doc=3725,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.26434162 = fieldWeight in 3725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046875 = fieldNorm(doc=3725)
      0.25 = coord(1/4)
    
    Abstract
    When assigning subject headings or index terms to a document, how many terms or phrases should be used to represent the document? The contribution of an indexing phrase to locating and ordering documents can be compared to the contribution of a full-text query to finding documents. The length and number of phrases needed to equal the contribution of a full-text query is the subject of this paper. The appropriate number of phrases is determined in part by the length of the phrases. We suggest several rules that may be used to determine how many subject headings should be assigned, given index phrase lengths, and provide a general model for this process. A difference between characteristics of indexing "hard" science and "social" science literature is suggested.
  7. Braam, R.R.; Bruil, J.: Quality of indexing information : authors' views on indexing of their articles in Chemical Abstracts online CA-file (1992) 0.01
    0.011090445 = product of:
      0.04436178 = sum of:
        0.04436178 = product of:
          0.08872356 = sum of:
            0.08872356 = weight(_text_:aspects in 2638) [ClassicSimilarity], result of:
              0.08872356 = score(doc=2638,freq=4.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.42373765 = fieldWeight in 2638, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2638)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Studies the quality of subject indexing by the Chemical Abstracts Indexing Service by confronting authors with the particular indexing terms attributed to their articles, for 270 articles published in 54 journals, 5 articles out of each journal. Responses (80%) indicate the superior quality of keywords, both as content descriptors and as retrieval tools. Author judgements on these 2 different aspects do not always converge, however. CAS's indexing policy to cover only 'new' aspects is reflected in authors' judgements that index lists are somewhat incomplete, in particular in the case of thesaurus terms (index headings). The large effort expended by CAS in maintaining and using a subject thesaurus, in order to select valid index headings, as compared to quick and cheap keyword postings, does not lead to clearly superior quality of thesaurus terms, either for document description or for retrieval. Some 20% of papers were not placed in the 'proper' CA main section, according to their authors. As concerns the use of indexing data by third parties, in bibliometrics, users should be aware of the indexing policies behind the data, in order to prevent invalid interpretations.
  8. Iivonen, M.: ¬The impact of the indexing environment on interindexer consistency (1990) 0.01
    0.010456172 = product of:
      0.041824687 = sum of:
        0.041824687 = product of:
          0.083649375 = sum of:
            0.083649375 = weight(_text_:aspects in 4779) [ClassicSimilarity], result of:
              0.083649375 = score(doc=4779,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.39950368 = fieldWeight in 4779, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4779)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The interindexer consistency between indexers working in 10 libraries was considered. The indexing environment is described with the help of organisational theory. Interindexer consistency was low, but there were clear differences depending on whether consistency was calculated on the basis of terms, concepts, or aspects. Discusses the indexing environment's connections to interindexer consistency.
  9. Olson, H.A.; Wolfram, D.: Syntagmatic relationships and indexing consistency on a larger scale (2008) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 2214) [ClassicSimilarity], result of:
          0.040692065 = score(doc=2214,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 2214, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2214)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this article is to examine interindexer consistency on a larger scale than other studies have done to determine if group consensus is reached by larger numbers of indexers and what, if any, relationships emerge between assigned terms. Design/methodology/approach - In total, 64 MLIS students were recruited to assign up to five terms to a document. The authors applied basic data modeling and the exploratory statistical techniques of multi-dimensional scaling (MDS) and hierarchical cluster analysis to determine whether relationships exist in indexing consistency and the co-occurrence of assigned terms. Findings - Consistency in the assignment of indexing terms to a document follows an inverse shape although, unlike many other social phenomena, it is not strictly power-law based. The exploratory techniques revealed that groups of terms clustered together. The resulting term co-occurrence relationships were largely syntagmatic. Research limitations/implications - The results are based on the indexing of one article by non-expert indexers and are, thus, not generalizable. Based on the study findings, along with the growing popularity of folksonomies and the apparent authority of communally developed information resources, communally developed indexes based on group consensus may have merit. Originality/value - Consistency in the assignment of indexing terms has been studied primarily on a small scale. Few studies have examined indexing on a larger scale with more than a handful of indexers. Recognition of the differences in indexing assignment has implications for the development of public information systems, especially those that do not use a controlled vocabulary and those tagged by end-users. In such cases, multiple access points that accommodate the different ways that users interpret content are needed so that searchers may be guided to relevant content despite using different terminology.
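    A minimal sketch of the exploratory clustering described above, assuming a hypothetical indexer-by-term assignment matrix; the study's actual data, distance measure and linkage method are not given in the abstract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical indexer-by-term matrix: 1 if the indexer assigned that term
terms = ["indexing", "consistency", "tagging", "folksonomy", "retrieval"]
assignments = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 1, 1, 0],
], dtype=bool)

# Cluster terms by the similarity of their assignment patterns (co-occurrence)
distances = pdist(assignments.T, metric="jaccard")  # pairwise distance between term columns
tree = linkage(distances, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")  # cut the dendrogram into two groups
print(dict(zip(terms, labels)))
```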
  10. Moreiro-González, J.-A.; Bolaños-Mejías, C.: Folksonomy indexing from the assignment of free tags to setup subject : a search analysis into the domain of legal history (2018) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 4640) [ClassicSimilarity], result of:
          0.040692065 = score(doc=4640,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 4640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4640)
      0.25 = coord(1/4)
    
    Abstract
    The behaviour and lexical quality of folksonomies are examined by comparing two online social networks: LibraryThing (for books) and Flickr (for photos). We present a case study that combines quantitative and qualitative elements, singularized by its lexical and functional framework. Our query used "Legal History" and the synonyms "Law History" and "History of Law." We then examined the relevance, consistency and precision of the tags attached to the retrieved documents, in addition to their lexical composition. We identified the difficulties caused by free tagging and some of the folksonomy solutions that have been found to solve them. The results are presented in comparative tables, giving special attention to related tags within each retrieved document. Although the number of ambiguous or inconsistent tags is not very large, these do nevertheless represent the most obvious problem for search and retrieval in folksonomies. Relevance is high when the terms are assigned by especially competent taggers. Even with less expert taggers, ambiguity is often successfully corrected by contextualizing the concepts within related tags. A propinquity to associative and taxonomic lexical semantic knowledge is reached via contextual relationships.
  11. Cleverdon, C.W.: ASLIB Cranfield Research Project : Report on the first stage of an investigation into the comparative efficiency of indexing systems (1960) 0.01
    0.009414612 = product of:
      0.03765845 = sum of:
        0.03765845 = product of:
          0.0753169 = sum of:
            0.0753169 = weight(_text_:22 in 6158) [ClassicSimilarity], result of:
              0.0753169 = score(doc=6158,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.46428138 = fieldWeight in 6158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6158)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Review in: College and research libraries 22(1961) no.3, S.228 (G. Jahoda)
  12. Iivonen, M.: Interindexer consistency and the indexing environment (1990) 0.01
    0.009149151 = product of:
      0.036596604 = sum of:
        0.036596604 = product of:
          0.07319321 = sum of:
            0.07319321 = weight(_text_:aspects in 3593) [ClassicSimilarity], result of:
              0.07319321 = score(doc=3593,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.3495657 = fieldWeight in 3593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3593)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Considers the interindexer consistency between indexers working in various organisations and reports on the results of an empirical study. The interindexer consistency was low, but there were clear differences depending on whether the consistency was calculated on the basis of terms, concepts, or aspects. The fact that the consistency figures remained low can be explained. The low indexing consistency caused by indexing errors also seems to be difficult to control. Indexing consistency and its control have a clear impact on how feasible and useful centralised services and union catalogues are and can be from the point of view of subject description.
  13. Lin, Y.-l.; Trattner, C.; Brusilovsky, P.; He, D.: ¬The impact of image descriptions on user tagging behavior : a study of the nature and functionality of crowdsourced tags (2015) 0.01
    0.008138414 = product of:
      0.032553654 = sum of:
        0.032553654 = weight(_text_:social in 2159) [ClassicSimilarity], result of:
          0.032553654 = score(doc=2159,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.17622775 = fieldWeight in 2159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.03125 = fieldNorm(doc=2159)
      0.25 = coord(1/4)
    
    Abstract
    Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers to perform a series of tasks online. However, little research has been devoted to exploring the impact of various factors such as the content of a resource or crowdsourcing interface design on user tagging behavior. Although images' titles and descriptions are frequently available in image digital libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper focuses on offering insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence the user in his/her tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon's Mechanical Turk crowdworkers: with and without image descriptions. Several properties of generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, and so on. In addition, the study was carried out to examine the impact of image description on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach. Tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter has a higher tag reuse rate. A user study also revealed that different tag sets provided different support for search. Tags produced "with description" shortened the path to the target results, whereas tags produced without description increased user success in the search task.
  14. Peset, F.; Garzón-Farinós, F.; González, L.M.; García-Massó, X.; Ferrer-Sapena, A.; Toca-Herrera, J.L.; Sánchez-Pérez, E.A.: Survival analysis of author keywords : an application to the library and information sciences area (2020) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 5774) [ClassicSimilarity], result of:
              0.052280862 = score(doc=5774,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 5774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5774)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Our purpose is to adapt a statistical method for the analysis of discrete numerical series to the keywords appearing in scientific articles of a given area. As an example, we apply our methodological approach to the study of the keywords in the Library and Information Sciences (LIS) area. Our objective is to detect the new author keywords that appear in a fixed knowledge area in the period of 1 year in order to quantify the probabilities of survival for 10 years as a function of the impact of the journals where they appeared. Many of the new keywords appearing in the LIS field are ephemeral. Actually, more than half are never used again. In general, the terms most commonly used in the LIS area come from other areas. The average survival time of these keywords is approximately 3 years, being slightly higher in the case of words that were published in journals classified in the second quartile of the area. We believe that measuring the appearance and disappearance of terms will allow understanding some relevant aspects of the evolution of a discipline, providing in this way a new bibliometric approach.
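    The abstract does not give the authors' estimator; purely as an illustration, the sketch below computes an empirical survival curve for a hypothetical cohort of new keywords, each recorded with the last year (out of 10) in which it reappeared.

```python
# Hypothetical cohort: for each keyword that first appeared in year 0, the last
# year (1-10) in which it was seen again; 0 means it was never reused.
last_seen = [0, 0, 0, 0, 0, 0, 0, 3, 1, 7, 2, 10]

n = len(last_seen)
# Empirical survival: fraction of keywords still in use at or after year t
survival = {t: sum(1 for y in last_seen if y >= t) / n for t in range(1, 11)}
print(survival)  # survival[1] < 0.5, i.e. more than half never reappear
```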
  15. Veenema, F.: To index or not to index (1996) 0.01
    0.006276408 = product of:
      0.025105633 = sum of:
        0.025105633 = product of:
          0.050211266 = sum of:
            0.050211266 = weight(_text_:22 in 7247) [ClassicSimilarity], result of:
              0.050211266 = score(doc=7247,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.30952093 = fieldWeight in 7247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7247)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Canadian journal of information and library science. 21(1996) no.2, S.1-22
  16. Booth, A.: How consistent is MEDLINE indexing? (1990) 0.01
    0.005491857 = product of:
      0.021967428 = sum of:
        0.021967428 = product of:
          0.043934856 = sum of:
            0.043934856 = weight(_text_:22 in 3510) [ClassicSimilarity], result of:
              0.043934856 = score(doc=3510,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2708308 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Health libraries review. 7(1990) no.1, S.22-26
  17. Neshat, N.; Horri, A.: ¬A study of subject indexing consistency between the National Library of Iran and Humanities Libraries in the area of Iranian studies (2006) 0.01
    0.005491857 = product of:
      0.021967428 = sum of:
        0.021967428 = product of:
          0.043934856 = sum of:
            0.043934856 = weight(_text_:22 in 230) [ClassicSimilarity], result of:
              0.043934856 = score(doc=230,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2708308 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    4. 1.2007 10:22:26
  18. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.00
    0.004707306 = product of:
      0.018829225 = sum of:
        0.018829225 = product of:
          0.03765845 = sum of:
            0.03765845 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
              0.03765845 = score(doc=3565,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.23214069 = fieldWeight in 3565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3565)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    18. 6.2005 13:16:22
  19. Subrahmanyam, B.: Library of Congress Classification numbers : issues of consistency and their implications for union catalogs (2006) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 5784) [ClassicSimilarity], result of:
              0.031382043 = score(doc=5784,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 5784, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5784)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10. 9.2000 17:38:22
  20. White, H.; Willis, C.; Greenberg, J.: HIVEing : the effect of a semantic web technology on inter-indexer consistency (2014) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.031382043 = score(doc=1781,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine the effect of the Helping Interdisciplinary Vocabulary Engineering (HIVE) system on the inter-indexer consistency of information professionals when assigning keywords to a scientific abstract. This study examined, first, the inter-indexer consistency of potential HIVE users; second, the impact HIVE had on consistency; and, third, the challenges associated with using HIVE. Design/methodology/approach - A within-subjects quasi-experimental research design was used for this study. Data were collected using a task-scenario based questionnaire. Analysis was performed on consistency results using Hooper's and Rolling's inter-indexer consistency measures. A series of t-tests was used to judge the significance of differences between consistency results. Findings - Results suggest that HIVE improves inter-indexer consistency. Working with HIVE increased consistency rates by 22 percent (Rolling's) and 25 percent (Hooper's) when selecting relevant terms from all vocabularies. A statistically significant difference exists between the assignment of free-text keywords and machine-aided keywords. Issues with homographs, disambiguation, vocabulary choice, and document structure were all identified as potential challenges. Research limitations/implications - The main limitation of this study is the small number of vocabularies used. Future research will include implementing HIVE into the Dryad Repository and studying its application in a repository system. Originality/value - This paper showcases several features of the HIVE system. By using traditional consistency measures to evaluate a semantic web technology, this paper emphasizes the link between traditional indexing and next generation machine-aided indexing (MAI) tools.
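    The abstract reports a series of t-tests in a within-subjects design; the sketch below assumes a paired t-test over hypothetical per-indexer consistency scores, which is one plausible reading rather than the paper's exact procedure.

```python
from scipy.stats import ttest_rel

# Hypothetical per-indexer Hooper consistency, without and with HIVE (same indexers)
baseline  = [0.31, 0.28, 0.35, 0.40, 0.33, 0.29]
with_hive = [0.52, 0.49, 0.55, 0.61, 0.50, 0.47]

t_stat, p_value = ttest_rel(with_hive, baseline)  # paired (within-subjects) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```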