Search (22 results, page 1 of 2)

  • theme_ss:"Indexierungsstudien"
  1. Ballard, R.M.: Indexing and its relevance to technical processing (1993) 0.04
    0.03681399 = product of:
      0.12884896 = sum of:
        0.05258407 = weight(_text_:processing in 554) [ClassicSimilarity], result of:
          0.05258407 = score(doc=554,freq=4.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.3162615 = fieldWeight in 554, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=554)
        0.07626488 = weight(_text_:techniques in 554) [ClassicSimilarity], result of:
          0.07626488 = score(doc=554,freq=6.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.42150658 = fieldWeight in 554, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0390625 = fieldNorm(doc=554)
      0.2857143 = coord(2/7)
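    The indented breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight × fieldWeight, where tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm; the sum over matching terms is then scaled by the coordination factor coord(matching terms / query terms). A minimal Python sketch reproducing that arithmetic (the helper name term_score is ours, not Lucene's API):

    ```python
    import math

    # Reproduce the arithmetic of the ClassicSimilarity explain trace above.
    # All numeric inputs come from the trace; only the variable names are ours.

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                               # 2.0 for freq=4.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 4.048147 for "processing"
        query_weight = idf * query_norm                    # 0.1662677
        field_weight = tf * idf * field_norm               # 0.3162615
        return query_weight * field_weight                 # 0.05258407

    processing = term_score(4.0, 2097, 44218, 0.04107254, 0.0390625)
    techniques = term_score(6.0, 1467, 44218, 0.04107254, 0.0390625)

    # coord(2/7): only two of the seven query terms match document 554.
    score = (processing + techniques) * (2.0 / 7.0)
    print(round(score, 6))  # ~0.036814, the score shown for result 1
    ```

    Every other score tree on this page follows the same pattern; only freq, idf, fieldNorm, and the coord fraction change.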
    
    Abstract
    The development of regional on-line catalogs and in-house information systems for retrieval of references provides examples of the impact of indexing theory and applications on technical processing. More emphasis must be given to understanding the techniques for evaluating the effectiveness of a file, irrespective of whether that file was created as a library catalog or an index to information sources. The most significant advances in classification theory in recent decades have been the result of efforts to improve the effectiveness of indexing systems. Library classification systems are indexing languages or systems. Courses offered for the preparation of indexers in the United States and the United Kingdom are reviewed. A point of congruence for both the indexer and the library classifier would appear to be the need for a thorough preparation in the techniques of subject analysis. Any subject heading list will suffer from omissions as well as the inclusion of terms which the patron will never use. Indexing theory has provided the technical services department with methods for evaluating effectiveness. The writer does not believe that these techniques are used, nor do current courses, workshops, and continuing education programs stress them. When theory is totally subjugated to practice, critical thinking and maximum effectiveness will suffer.
  2. Hersh, W.R.; Hickam, D.H.: ¬A comparison of two methods for indexing and retrieval from a full-text medical database (1992) 0.01
    0.012454004 = product of:
      0.08717802 = sum of:
        0.08717802 = weight(_text_:techniques in 4526) [ClassicSimilarity], result of:
          0.08717802 = score(doc=4526,freq=4.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.48182213 = fieldWeight in 4526, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4526)
      0.14285715 = coord(1/7)
    
    Abstract
    Reports results of a study of 2 information retrieval systems on a 2,000-document full-text medical database. The first system, SAPHIRE, features concept-based automatic indexing and statistical retrieval techniques, while the second system, SWORD, features traditional word-based Boolean techniques. Sixteen medical students at Oregon Health Sciences Univ. each performed 10 searches, and their results, recorded in terms of recall and precision, showed nearly equal performance for both systems. SAPHIRE was also compared with a version of SWORD modified to use automatic indexing and ranked retrieval. Using batch input of queries, the latter method performed slightly better
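    The evaluation measures used in this study are easy to state concretely; the sketch below (document identifiers invented) computes recall and precision for a single search.

    ```python
    def recall_precision(retrieved: set, relevant: set) -> tuple[float, float]:
        """Recall = relevant items found / all relevant; precision = relevant found / all retrieved."""
        hits = len(retrieved & relevant)
        recall = hits / len(relevant) if relevant else 0.0
        precision = hits / len(retrieved) if retrieved else 0.0
        return recall, precision

    # Hypothetical single search: 4 documents retrieved, 5 judged relevant, 3 in common.
    print(recall_precision({"d1", "d2", "d3", "d7"}, {"d1", "d2", "d3", "d4", "d5"}))  # (0.6, 0.75)
    ```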
  3. Evedove, P.R. Dal; Evedove Tartarotti, R.C. Dal; Lopes Fujita, M.S.: Verbal protocols in Brazilian information science : a perspective from indexing studies (2018) 0.01
    0.011411925 = product of:
      0.07988347 = sum of:
        0.07988347 = weight(_text_:digital in 4783) [ClassicSimilarity], result of:
          0.07988347 = score(doc=4783,freq=4.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.493069 = fieldWeight in 4783, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0625 = fieldNorm(doc=4783)
      0.14285715 = coord(1/7)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro and M.E. Cerveira
  4. Olson, H.A.; Wolfram, D.: Syntagmatic relationships and indexing consistency on a larger scale (2008) 0.01
    0.008895717 = product of:
      0.062270015 = sum of:
        0.062270015 = weight(_text_:techniques in 2214) [ClassicSimilarity], result of:
          0.062270015 = score(doc=2214,freq=4.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.34415868 = fieldWeight in 2214, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2214)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - The purpose of this article is to examine interindexer consistency on a larger scale than other studies have done to determine if group consensus is reached by larger numbers of indexers and what, if any, relationships emerge between assigned terms. Design/methodology/approach - In total, 64 MLIS students were recruited to assign up to five terms to a document. The authors applied basic data modeling and the exploratory statistical techniques of multi-dimensional scaling (MDS) and hierarchical cluster analysis to determine whether relationships exist in indexing consistency and the co-occurrence of assigned terms. Findings - Consistency in the assignment of indexing terms to a document follows an inverse curve, although, unlike many other social phenomena, it is not strictly power-law based. The exploratory techniques revealed that groups of terms clustered together. The resulting term co-occurrence relationships were largely syntagmatic. Research limitations/implications - The results are based on the indexing of one article by non-expert indexers and are, thus, not generalizable. Based on the study findings, along with the growing popularity of folksonomies and the apparent authority of communally developed information resources, communally developed indexes based on group consensus may have merit. Originality/value - Consistency in the assignment of indexing terms has been studied primarily on a small scale. Few studies have examined indexing on a larger scale with more than a handful of indexers. Recognition of the differences in indexing assignment has implications for the development of public information systems, especially those that do not use a controlled vocabulary and those tagged by end-users. In such cases, multiple access points that accommodate the different ways that users interpret content are needed so that searchers may be guided to relevant content despite using different terminology.
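    As a hedged illustration of the exploratory step described in the abstract (not the authors' code or data), the sketch below clusters index terms by how often different indexers assigned them to the same document, using average-linkage hierarchical clustering; the assignments and all parameter choices are invented.

    ```python
    from itertools import combinations

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    # Hypothetical data: the terms each indexer assigned to the same document.
    assignments = [
        {"indexing", "consistency", "catalogs"},
        {"indexing", "consistency", "subject analysis"},
        {"indexing", "catalogs", "subject analysis"},
        {"consistency", "subject analysis"},
    ]

    terms = sorted(set().union(*assignments))
    index = {t: i for i, t in enumerate(terms)}

    # Co-assignment counts: how often two terms were chosen by the same indexer.
    cooc = np.zeros((len(terms), len(terms)))
    for chosen in assignments:
        for a, b in combinations(sorted(chosen), 2):
            cooc[index[a], index[b]] += 1
            cooc[index[b], index[a]] += 1

    # Turn counts into distances (frequent co-assignment = small distance) and cluster.
    dist = 1.0 - cooc / cooc.max()
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
    for term, label in zip(terms, labels):
        print(label, term)
    ```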
  5. Tseng, Y.-H.: Keyword extraction techniques and relevance feedback (1997) 0.01
    0.008806311 = product of:
      0.06164417 = sum of:
        0.06164417 = weight(_text_:techniques in 1830) [ClassicSimilarity], result of:
          0.06164417 = score(doc=1830,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.3406997 = fieldWeight in 1830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1830)
      0.14285715 = coord(1/7)
    
  6. Larson, R.R.: Experiments in automatic Library of Congress Classification (1992) 0.01
    0.0075482656 = product of:
      0.052837856 = sum of:
        0.052837856 = weight(_text_:techniques in 1054) [ClassicSimilarity], result of:
          0.052837856 = score(doc=1054,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.2920283 = fieldWeight in 1054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.046875 = fieldNorm(doc=1054)
      0.14285715 = coord(1/7)
    
    Abstract
    This article presents the results of research into the automatic selection of Library of Congress Classification numbers based on the titles and subject headings in MARC records. The method used in this study was based on partial match retrieval techniques using various elements of new records (i.e., those to be classified) as "queries", and a test database of classification clusters generated from previously classified MARC records. Sixty individual methods for automatic classification were tested on a set of 283 new records, using all combinations of four different partial match methods, five query types, and three representations of search terms. The results indicate that if the best method for a particular case can be determined, then up to 86% of the new records may be correctly classified. The single method with the best accuracy was able to select the correct classification for about 46% of the new records.
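    The partial-match idea can be illustrated with a small sketch (not Larson's actual method, data, or class numbers): each classification cluster is represented by terms from its previously classified records, a new record's title terms are scored against every cluster, and the best-scoring cluster supplies the candidate class. Everything below is invented for the example.

    ```python
    from collections import Counter

    # Hypothetical classification clusters: class number -> terms drawn from
    # previously classified MARC records (titles and subject headings).
    clusters = {
        "Z695": Counter("subject cataloging indexing headings".split()),
        "QA76": Counter("computer programming software algorithms".split()),
        "Z699": Counter("information retrieval systems automatic indexing".split()),
    }

    def classify(title: str) -> str:
        """Partial-match ranking: score each cluster by weighted term overlap with the title."""
        query = Counter(title.lower().split())
        def score(cluster_terms: Counter) -> float:
            return sum(query[t] * cluster_terms[t] for t in query)
        return max(clusters, key=lambda c: score(clusters[c]))

    print(classify("Automatic indexing and retrieval of information"))  # -> "Z699"
    ```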
  7. Lee, D.H.; Schleyer, T.: Social tagging is no substitute for controlled indexing : a comparison of Medical Subject Headings and CiteULike tags assigned to 231,388 papers (2012) 0.01
    0.0075120106 = product of:
      0.05258407 = sum of:
        0.05258407 = weight(_text_:processing in 383) [ClassicSimilarity], result of:
          0.05258407 = score(doc=383,freq=4.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.3162615 = fieldWeight in 383, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=383)
      0.14285715 = coord(1/7)
    
    Abstract
    Social tagging and controlled indexing both facilitate access to information resources. Given the increasing popularity of social tagging and the limitations of controlled indexing (primarily cost and scalability), it is reasonable to investigate to what degree social tagging could substitute for controlled indexing. In this study, we compared CiteULike tags to Medical Subject Headings (MeSH) terms for 231,388 citations indexed in MEDLINE. In addition to descriptive analyses of the data sets, we present a paper-by-paper analysis of tags and MeSH terms: the number of common annotations, Jaccard similarity, and coverage ratio. In the analysis, we apply three increasingly progressive levels of text processing, ranging from normalization to stemming, to reduce the impact of lexical differences. Annotations of our corpus consisted of over 76,968 distinct tags and 21,129 distinct MeSH terms. The top 20 tags/MeSH terms showed little direct overlap. On a paper-by-paper basis, the number of common annotations ranged from 0.29 to 0.5 and the Jaccard similarity from 2.12% to 3.3% using increased levels of text processing. At most, 77,834 citations (33.6%) shared at least one annotation. Our results show that CiteULike tags and MeSH terms are quite distinct lexically, reflecting different viewpoints/processes between social tagging and controlled indexing.
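    A hedged sketch of the per-paper comparison described above (helper names ours, not the study's code): after a light normalization pass, the overlap between one citation's tag set and its MeSH set is summarized as the number of common annotations and the Jaccard similarity.

    ```python
    import re

    def normalize(terms):
        """Lowercase and strip punctuation -- the lightest of the text-processing levels."""
        return {re.sub(r"[^a-z0-9 ]", "", t.lower()).strip() for t in terms}

    def compare(tags, mesh_terms):
        a, b = normalize(tags), normalize(mesh_terms)
        common = a & b
        jaccard = len(common) / len(a | b) if (a | b) else 0.0
        return len(common), jaccard

    # Hypothetical annotations for a single citation.
    tags = ["Dental Informatics", "social-tagging", "MeSH"]
    mesh = ["Medical Subject Headings", "Dental Informatics", "Abstracting and Indexing"]
    print(compare(tags, mesh))  # (1, 0.2)
    ```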
  8. Taghva, K.; Borsack, J.; Nartker, T.; Condit, A.: ¬The role of manually-assigned keywords in query expansion (2004) 0.01
    0.0074365106 = product of:
      0.05205557 = sum of:
        0.05205557 = weight(_text_:processing in 2567) [ClassicSimilarity], result of:
          0.05205557 = score(doc=2567,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.3130829 = fieldWeight in 2567, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2567)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 40(2004) no.3, S.441-458
  9. Kautto, V.: Classing and indexing : a comparative time study (1992) 0.01
    0.006374152 = product of:
      0.04461906 = sum of:
        0.04461906 = weight(_text_:processing in 2670) [ClassicSimilarity], result of:
          0.04461906 = score(doc=2670,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.26835677 = fieldWeight in 2670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=2670)
      0.14285715 = coord(1/7)
    
    Abstract
    A total of 16 classifiers made a subject analysis of a set of books such that some of the books were first classified by the UDC and then indexed with terms from the General Finnish Subject Headings, while another set was processed in the opposite order. Finally, books on the same subject were either classified or indexed. The total number of books processed was 581. A comparison was made of the time required for processing in different situations and of the number of classes or subject headings used. The time figures were compared with corresponding data from the British Library (1972) and the Library of Congress (1990 and 1991). The author finds that, when a document is both classified and indexed, content analysis requires one third of the time, classification one third, and indexing one third. There was a plausible correlation (0.51) between the length of experience in classification and the decrease in the time required for classing. The average number of UDC numbers was 4.3 and the average number of terms from the list of subject headings was 4.0
  10. Burgin, R.: ¬The effect of indexing exhaustivity on retrieval performance (1991) 0.01
    0.006374152 = product of:
      0.04461906 = sum of:
        0.04461906 = weight(_text_:processing in 5262) [ClassicSimilarity], result of:
          0.04461906 = score(doc=5262,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.26835677 = fieldWeight in 5262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=5262)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 27(1991) no.6, S.623-628
  11. Rowley, J.: ¬The controlled versus natural indexing languages debate revisited : a perspective on information retrieval practice and research (1994) 0.01
    0.006290222 = product of:
      0.044031553 = sum of:
        0.044031553 = weight(_text_:techniques in 7151) [ClassicSimilarity], result of:
          0.044031553 = score(doc=7151,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.24335694 = fieldWeight in 7151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7151)
      0.14285715 = coord(1/7)
    
    Abstract
    This article revisits the debate concerning controlled and natural indexing languages, as used in searching the databases of the online hosts, in-house information retrieval systems, online public access catalogues and databases stored on CD-ROM. The debate was first formulated in the early days of information retrieval more than a century ago but, despite significant advances in technology, remains unresolved. The article divides the history of the debate into four eras. Era one was characterised by the introduction of controlled vocabulary. Era two focused on comparisons between different indexing languages in order to assess which was best. Era three saw a number of case studies of limited generalisability and a general recognition that the best search performance can be achieved by the parallel use of the two types of indexing languages. The emphasis in Era four has been on the development of end-user-based systems, including online public access catalogues and databases on CD-ROM. Recent developments in the use of expert systems techniques to support the representation of meaning may lead to systems which offer significant support to the user in end-user searching. In the meantime, however, information retrieval in practice involves a mixture of natural and controlled indexing languages used to search a wide variety of different kinds of databases.
  12. Lin, Y.-l.; Trattner, C.; Brusilovsky, P.; He, D.: ¬The impact of image descriptions on user tagging behavior : a study of the nature and functionality of crowdsourced tags (2015) 0.01
    0.0057059624 = product of:
      0.039941736 = sum of:
        0.039941736 = weight(_text_:digital in 2159) [ClassicSimilarity], result of:
          0.039941736 = score(doc=2159,freq=4.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.2465345 = fieldWeight in 2159, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=2159)
      0.14285715 = coord(1/7)
    
    Abstract
    Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers to perform a series of tasks online. However, little research has been devoted to exploring the impact of various factors such as the content of a resource or crowdsourcing interface design on user tagging behavior. Although images' titles and descriptions are frequently available in image digital libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper focuses on offering insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence the user in his/her tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon Mechanical Turk crowdworkers: with and without image descriptions. Several properties of the generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, and so on. In addition, a user study was carried out to examine the impact of image descriptions on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach. Tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter has a higher tag reuse rate. The user study also revealed that different tag sets provided different support for search. Tags produced "with description" shortened the path to the target results, whereas tags produced without description increased user success in the search task.
  13. Huffman, G.D.; Vital, D.A.; Bivins, R.G.: Generating indices with lexical association methods : term uniqueness (1990) 0.01
    0.005311793 = product of:
      0.03718255 = sum of:
        0.03718255 = weight(_text_:processing in 4152) [ClassicSimilarity], result of:
          0.03718255 = score(doc=4152,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.22363065 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4152)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 26(1990) no.4, S.549-558
  14. Cleverdon, C.W.: ASLIB Cranfield Research Project : Report on the first stage of an investigation into the comparative efficiency of indexing systems (1960) 0.00
    0.004769796 = product of:
      0.03338857 = sum of:
        0.03338857 = product of:
          0.06677714 = sum of:
            0.06677714 = weight(_text_:22 in 6158) [ClassicSimilarity], result of:
              0.06677714 = score(doc=6158,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.46428138 = fieldWeight in 6158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6158)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Footnote
    Rez. in: College and research libraries 22(1961) no.3, S.228 (G. Jahoda)
  15. Veenema, F.: To index or not to index (1996) 0.00
    0.003179864 = product of:
      0.022259047 = sum of:
        0.022259047 = product of:
          0.044518095 = sum of:
            0.044518095 = weight(_text_:22 in 7247) [ClassicSimilarity], result of:
              0.044518095 = score(doc=7247,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.30952093 = fieldWeight in 7247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7247)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Canadian journal of information and library science. 21(1996) no.2, S.1-22
  16. Booth, A.: How consistent is MEDLINE indexing? (1990) 0.00
    0.0027823811 = product of:
      0.019476667 = sum of:
        0.019476667 = product of:
          0.038953334 = sum of:
            0.038953334 = weight(_text_:22 in 3510) [ClassicSimilarity], result of:
              0.038953334 = score(doc=3510,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.2708308 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Health libraries review. 7(1990) no.1, S.22-26
  17. Neshat, N.; Horri, A.: ¬A study of subject indexing consistency between the National Library of Iran and Humanities Libraries in the area of Iranian studies (2006) 0.00
    0.0027823811 = product of:
      0.019476667 = sum of:
        0.019476667 = product of:
          0.038953334 = sum of:
            0.038953334 = weight(_text_:22 in 230) [ClassicSimilarity], result of:
              0.038953334 = score(doc=230,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.2708308 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    4. 1.2007 10:22:26
  18. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.00
    0.002384898 = product of:
      0.016694285 = sum of:
        0.016694285 = product of:
          0.03338857 = sum of:
            0.03338857 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
              0.03338857 = score(doc=3565,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.23214069 = fieldWeight in 3565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3565)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    18. 6.2005 13:16:22
  19. Leininger, K.: Interindexer consistency in PsychINFO (2000) 0.00
    0.002384898 = product of:
      0.016694285 = sum of:
        0.016694285 = product of:
          0.03338857 = sum of:
            0.03338857 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
              0.03338857 = score(doc=2552,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.23214069 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2552)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    9. 2.1997 18:44:22
  20. Subrahmanyam, B.: Library of Congress Classification numbers : issues of consistency and their implications for union catalogs (2006) 0.00
    0.0019874151 = product of:
      0.013911906 = sum of:
        0.013911906 = product of:
          0.027823811 = sum of:
            0.027823811 = weight(_text_:22 in 5784) [ClassicSimilarity], result of:
              0.027823811 = score(doc=5784,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.19345059 = fieldWeight in 5784, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5784)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    10. 9.2000 17:38:22