Search (63 results, page 1 of 4)

  • theme_ss:"Automatisches Indexieren"
  1. Polity, Y.: Vers une ergonomie linguistique (1994) 0.07
    0.069699265 = product of:
      0.104548894 = sum of:
        0.067894526 = weight(_text_:bibliographic in 36) [ClassicSimilarity], result of:
          0.067894526 = score(doc=36,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.34409973 = fieldWeight in 36, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=36)
        0.036654368 = product of:
          0.073308736 = sum of:
            0.073308736 = weight(_text_:searching in 36) [ClassicSimilarity], result of:
              0.073308736 = score(doc=36,freq=2.0), product of:
                0.20502694 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.05068286 = queryNorm
                0.3575566 = fieldWeight in 36, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0625 = fieldNorm(doc=36)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Analyzes a special type of man-machine interaction: searching an information system with natural language. Proposes a model for full text processing for information retrieval that considers the system's users and how they employ information. Describes how INIST (the National Institute for Scientific and Technical Information) is developing computer-assisted indexing as an aid to improving relevance when retrieving information from bibliographic data banks
  2. Hirawa, M.: Role of keywords in the network searching era (1998) 0.07
    0.069699265 = product of:
      0.104548894 = sum of:
        0.067894526 = weight(_text_:bibliographic in 3446) [ClassicSimilarity], result of:
          0.067894526 = score(doc=3446,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.34409973 = fieldWeight in 3446, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=3446)
        0.036654368 = product of:
          0.073308736 = sum of:
            0.073308736 = weight(_text_:searching in 3446) [ClassicSimilarity], result of:
              0.073308736 = score(doc=3446,freq=2.0), product of:
                0.20502694 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.05068286 = queryNorm
                0.3575566 = fieldWeight in 3446, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3446)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A survey of Japanese OPACs available on the Internet was conducted regarding the use of keywords for subject access. The findings suggest that present OPACs are not capable of storing subject-oriented information. Currently available keyword access derives from a merely title-based retrieval system. Contents data should be added to bibliographic records as an efficient way of providing subject access, and costings for this process should be estimated. Word standardisation issues must also be addressed
  3. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.05
    0.053056818 = product of:
      0.15917045 = sum of:
        0.15917045 = sum of:
          0.11110266 = weight(_text_:searching in 5001) [ClassicSimilarity], result of:
            0.11110266 = score(doc=5001,freq=6.0), product of:
              0.20502694 = queryWeight, product of:
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.05068286 = queryNorm
              0.541893 = fieldWeight in 5001, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.048067793 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.048067793 = score(doc=5001,freq=2.0), product of:
              0.17748274 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05068286 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.33333334 = coord(1/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate as closely as possible actual searching conditions. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  4. Humphrey, S.M.: Automatic indexing of documents from journal descriptors : a preliminary investigation (1999) 0.05
    0.05227445 = product of:
      0.078411676 = sum of:
        0.050920896 = weight(_text_:bibliographic in 3769) [ClassicSimilarity], result of:
          0.050920896 = score(doc=3769,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.2580748 = fieldWeight in 3769, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3769)
        0.027490778 = product of:
          0.054981556 = sum of:
            0.054981556 = weight(_text_:searching in 3769) [ClassicSimilarity], result of:
              0.054981556 = score(doc=3769,freq=2.0), product of:
                0.20502694 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.05068286 = queryNorm
                0.26816747 = fieldWeight in 3769, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3769)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A new, fully automated approach for indexing documents is presented, based on associating textwords in a training set of bibliographic citations with the indexing of journals. This journal-level indexing is in the form of a consistent, timely set of journal descriptors (JDs) indexing the individual journals themselves. This indexing is maintained in journal records in a serials authority database. The advantage of this novel approach is that the training set does not depend on previous manual indexing of thousands of documents (i.e., any such indexing already in the training set is not used), but rather on the relatively small intellectual effort of indexing at the journal level, usually a matter of a few thousand unique journals for which retrospective indexing to maintain consistency and currency may be feasible. If successful, JD indexing would provide topical categorization of documents outside the training set, i.e., journal articles, monographs, Web documents, reports from the grey literature, etc., and could therefore be applied in searching. Because JDs are quite general, corresponding to subject domains, their most probable use would be for improving or refining search results
  5. Milstead, J.L.: Methodologies for subject analysis in bibliographic databases (1992) 0.03
    0.028005064 = product of:
      0.08401519 = sum of:
        0.08401519 = weight(_text_:bibliographic in 2311) [ClassicSimilarity], result of:
          0.08401519 = score(doc=2311,freq=4.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.4258017 = fieldWeight in 2311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2311)
      0.33333334 = coord(1/3)
    
    Abstract
    The goal of the study was to determine the state of the art of subject analysis as applied to large bibliographic data bases. The intent was to gather and evaluate information, casting it in a form that could be applied by management. There was no attempt to determine actual costs or trade-offs among costs and possible benefits. Commercial automatic indexing packages were also reviewed. The overall conclusion was that data base producers should begin working seriously on upgrading their thesauri and codifying their indexing policies as a means of moving toward development of machine aids to indexing, but that fully automatic indexing is not yet ready for wholesale implementation
  6. Luhn, H.P.: ¬A statistical approach to the mechanical encoding and searching of literary information (1957) 0.02
    0.024436247 = product of:
      0.073308736 = sum of:
        0.073308736 = product of:
          0.14661747 = sum of:
            0.14661747 = weight(_text_:searching in 5453) [ClassicSimilarity], result of:
              0.14661747 = score(doc=5453,freq=2.0), product of:
                0.20502694 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.05068286 = queryNorm
                0.7151132 = fieldWeight in 5453, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.125 = fieldNorm(doc=5453)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  7. Garfield, E.: KeyWords Plus : ISI's breakthrough retrieval method (1990) 0.02
    0.021381717 = product of:
      0.06414515 = sum of:
        0.06414515 = product of:
          0.1282903 = sum of:
            0.1282903 = weight(_text_:searching in 4345) [ClassicSimilarity], result of:
              0.1282903 = score(doc=4345,freq=2.0), product of:
                0.20502694 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.05068286 = queryNorm
                0.6257241 = fieldWeight in 4345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4345)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Issue
    Pt.1: Expanding your searching power on Current Contents on Diskette.
  8. Gomez, I.: Coping with the problem of subject classification diversity (1996) 0.02
    0.01980257 = product of:
      0.05940771 = sum of:
        0.05940771 = weight(_text_:bibliographic in 5074) [ClassicSimilarity], result of:
          0.05940771 = score(doc=5074,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.30108726 = fieldWeight in 5074, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5074)
      0.33333334 = coord(1/3)
    
    Abstract
    The delimitation of a research field in bibliometric studies presents the problem of the diversity of subject classifications used in the sources of input and output data. Classification of documents according to thematic codes or keywords is the most accurate method, mainly used in specialized bibliographic or patent databases. Classification of journals into disciplines presents lower specificity, and some shortcomings such as the change over time of both journals and disciplines and the increasing interdisciplinarity of research. Standardization of subject classifications emerges as an important point in bibliometric studies in order to allow international comparisons, although flexibility is needed to meet the needs of local studies
  9. Pulgarin, A.; Gil-Leiva, I.: Bibliometric analysis of the automatic indexing literature : 1956-2000 (2004) 0.02
    0.01980257 = product of:
      0.05940771 = sum of:
        0.05940771 = weight(_text_:bibliographic in 2566) [ClassicSimilarity], result of:
          0.05940771 = score(doc=2566,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.30108726 = fieldWeight in 2566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2566)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a bibliometric study of a corpus of 839 bibliographic references about automatic indexing, covering the period 1956-2000. We analyse the distribution of authors and works, the obsolescence and its dispersion, and the distribution of the literature by topic, year, and source type. We conclude that: (i) there has been a constant interest on the part of researchers; (ii) the most studied topics were the techniques and methods employed and the general aspects of automatic indexing; (iii) the productivity of the authors does fit a Lotka distribution (Dmax=0.02 and critical value=0.054); (iv) the annual aging factor is 95%; and (v) the dispersion of the literature is low.
  10. Golub, K.: Automated subject indexing : an overview (2021) 0.02
    0.01980257 = product of:
      0.05940771 = sum of:
        0.05940771 = weight(_text_:bibliographic in 718) [ClassicSimilarity], result of:
          0.05940771 = score(doc=718,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.30108726 = fieldWeight in 718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=718)
      0.33333334 = coord(1/3)
    
    Abstract
    In the face of the ever-increasing document volume, libraries around the globe are more and more exploring (semi-) automated approaches to subject indexing. This helps sustain bibliographic objectives, enrich metadata, and establish more connections across documents from various collections, effectively leading to improved information retrieval and access. However, generally accepted automated approaches that are functional in operative systems are lacking. This article aims to provide an overview of basic principles used for automated subject indexing, major approaches in relation to their possible application in actual library systems, existing working examples, as well as related challenges calling for further research.
  11. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.02
    0.018311542 = product of:
      0.05493462 = sum of:
        0.05493462 = product of:
          0.10986924 = sum of:
            0.10986924 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.10986924 = score(doc=402,freq=2.0), product of:
                0.17748274 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05068286 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  12. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.016022598 = product of:
      0.048067793 = sum of:
        0.048067793 = product of:
          0.09613559 = sum of:
            0.09613559 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09613559 = score(doc=262,freq=2.0), product of:
                0.17748274 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05068286 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  13. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.02
    0.016022598 = product of:
      0.048067793 = sum of:
        0.048067793 = product of:
          0.09613559 = sum of:
            0.09613559 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.09613559 = score(doc=6265,freq=2.0), product of:
                0.17748274 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05068286 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  14. Pritchard, J.: Information retrieval : smarter indexing (1991) 0.02
    0.015272654 = product of:
      0.04581796 = sum of:
        0.04581796 = product of:
          0.09163592 = sum of:
            0.09163592 = weight(_text_:searching in 4890) [ClassicSimilarity], result of:
              0.09163592 = score(doc=4890,freq=2.0), product of:
                0.20502694 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.05068286 = queryNorm
                0.44694576 = fieldWeight in 4890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4890)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes full text retrieval (FTR), which indexes every occurrence of every word except defined 'stop' words. This permits much more sophisticated searching than keyword indexing. Also discusses document image processing (DIP). Lists suppliers and users of the software and describes the experiences of ESOO's Planning Division with Computer Intertrade Ltd. (CIL) ImagePro DIP and their operational practices
  15. Ferber, R.: Automated indexing with thesaurus descriptors : a co-occurence based approach to multilingual retrieval (1997) 0.01
    0.014144694 = product of:
      0.04243408 = sum of:
        0.04243408 = weight(_text_:bibliographic in 4144) [ClassicSimilarity], result of:
          0.04243408 = score(doc=4144,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.21506234 = fieldWeight in 4144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4144)
      0.33333334 = coord(1/3)
    
    Abstract
    Indexing documents with descriptors from a multilingual thesaurus is an approach to multilingual information retrieval. However, manual indexing is expensive. Automated indexing methods in general use terms found in the document. Thesaurus descriptors are complex terms that are often not used in documents or have specific meanings within the thesaurus; therefore most weighting schemes of automated indexing methods are not suited to selecting thesaurus descriptors. In this paper a linear associative system is described that uses similarity values extracted from a large corpus of manually indexed documents to construct a rank ordering of the descriptors for a given document title. The system is adaptive and has to be tuned with a training sample of records for the specific task. The system was tested on a corpus of some 80,000 bibliographic records. The results show a high variability with changing parameter values. This indicates that it is very important to empirically adapt the model to the specific situation it is used in. The overall median of the manually assigned descriptors in the automatically generated ranked list of all 3,631 descriptors is 14 for the set used to adapt the system and 11 for a test set not used in the optimization process. This result shows that the optimization is not a fit to a specific training set but a real adaptation of the model to the setting
  16. Wang, S.; Koopman, R.: Embed first, then predict (2019) 0.01
    0.014144694 = product of:
      0.04243408 = sum of:
        0.04243408 = weight(_text_:bibliographic in 5400) [ClassicSimilarity], result of:
          0.04243408 = score(doc=5400,freq=2.0), product of:
            0.19731061 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.05068286 = queryNorm
            0.21506234 = fieldWeight in 5400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5400)
      0.33333334 = coord(1/3)
    
    Abstract
    Automatic subject prediction is a desirable feature for modern digital library systems, as manual indexing can no longer cope with the rapid growth of digital collections. It is also desirable to be able to identify a small set of entities (e.g., authors, citations, bibliographic records) which are most relevant to a query. This gets more difficult when the amount of data increases dramatically. Data sparsity and model scalability are the major challenges to solving this type of extreme multilabel classification problem automatically. In this paper, we propose to address this problem in two steps: we first embed different types of entities into the same semantic space, where similarity could be computed easily; second, we propose a novel non-parametric method to identify the most relevant entities in addition to direct semantic similarities. We show how effectively this approach predicts even very specialised subjects, which are associated with few documents in the training set and are more problematic for a classifier.
  17. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.013733655 = product of:
      0.041200966 = sum of:
        0.041200966 = product of:
          0.08240193 = sum of:
            0.08240193 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.08240193 = score(doc=58,freq=2.0), product of:
                0.17748274 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05068286 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 6.2015 22:12:44
  18. Hauer, M.: Automatische Indexierung (2000) 0.01
    0.013733655 = product of:
      0.041200966 = sum of:
        0.041200966 = product of:
          0.08240193 = sum of:
            0.08240193 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.08240193 = score(doc=5887,freq=2.0), product of:
                0.17748274 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05068286 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt
  19. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.013733655 = product of:
      0.041200966 = sum of:
        0.041200966 = product of:
          0.08240193 = sum of:
            0.08240193 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.08240193 = score(doc=2051,freq=2.0), product of:
                0.17748274 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05068286 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 6.2015 22:12:56
  20. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.01
    0.013733655 = product of:
      0.041200966 = sum of:
        0.041200966 = product of:
          0.08240193 = sum of:
            0.08240193 = weight(_text_:22 in 5629) [ClassicSimilarity], result of:
              0.08240193 = score(doc=5629,freq=2.0), product of:
                0.17748274 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05068286 = queryNorm
                0.46428138 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    B.I.T.online. 22(2019) H.2, S.163-166
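
The indented breakdown under each result is Lucene's ClassicSimilarity "explain" output. As a minimal sketch, the leaf values from result 1's tree can be recombined with the classic TF-IDF formulas (tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, clause sum scaled by coord); all numbers below are copied from the tree, nothing else is assumed:

```python
import math

# Values copied from result 1's explain tree (term "bibliographic", doc 36)
freq = 2.0                 # termFreq in the field
idf = 3.893044             # idf(docFreq=2449, maxDocs=44218)
query_norm = 0.05068286    # queryNorm
field_norm = 0.0625        # fieldNorm(doc=36)

tf = math.sqrt(freq)                          # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm               # 0.19731061 = queryWeight
field_weight = tf * idf * field_norm          # 0.34409973 = fieldWeight
bibliographic = query_weight * field_weight   # 0.067894526 = term score

# The "searching" clause contributes 0.036654368 (already scaled by its
# coord(1/2)); matching 2 of 3 query clauses applies coord(2/3) on top:
searching = 0.036654368
total = (bibliographic + searching) * (2.0 / 3.0)
print(total)   # ~0.069699265, the score shown for result 1
```

The same arithmetic reproduces every tree above; only freq, idf, and fieldNorm vary per term and document.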

Languages

  • e 44
  • d 16
  • f 1
  • ja 1
  • ru 1

Types

  • a 57
  • el 2
  • s 2
  • x 2
  • m 1