Search (161 results, page 1 of 9)

  • Active filter: theme_ss:"Automatisches Indexieren"
  1. Stankovic, R. et al.: Indexing of textual databases based on lexical resources : a case study for Serbian (2016) 0.05
    0.047685735 = product of:
      0.0715286 = sum of:
        0.03772102 = weight(_text_:on in 2759) [ClassicSimilarity], result of:
          0.03772102 = score(doc=2759,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3436586 = fieldWeight in 2759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=2759)
        0.03380758 = product of:
          0.06761516 = sum of:
            0.06761516 = weight(_text_:22 in 2759) [ClassicSimilarity], result of:
              0.06761516 = score(doc=2759,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.38690117 = fieldWeight in 2759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2759)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    1. 2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al
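    The relevance value attached to each record is a Lucene ClassicSimilarity "explain" tree: per query term, score = queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm, and the clause scores are then combined with the coord factors. As a minimal sketch (constants copied from the tree for result no. 1; the helper function is illustrative, not Lucene's API), the arithmetic can be checked like this:

        import math

        def term_score(freq, idf, query_norm, field_norm):
            # weight(_text_:term) = queryWeight * fieldWeight, as in the explain tree
            query_weight = idf * query_norm
            field_weight = math.sqrt(freq) * idf * field_norm
            return query_weight * field_weight

        query_norm = 0.04990557
        w_on = term_score(4.0, 2.199415, query_norm, 0.078125)   # -> ~0.03772102
        w_22 = term_score(2.0, 3.5018296, query_norm, 0.078125)  # -> ~0.06761516
        total = (w_on + 0.5 * w_22) * (2.0 / 3.0)                # coord(1/2), then coord(2/3)
        print(round(total, 9))                                    # -> ~0.047685735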
  2. Biebricher, N.; Fuhr, N.; Lustig, G.; Schwantner, M.; Knorz, G.: The automatic indexing system AIR/PHYS : from research to application (1988) 0.04
    0.040320244 = product of:
      0.060480364 = sum of:
        0.026672786 = weight(_text_:on in 1952) [ClassicSimilarity], result of:
          0.026672786 = score(doc=1952,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 1952, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=1952)
        0.03380758 = product of:
          0.06761516 = sum of:
            0.06761516 = weight(_text_:22 in 1952) [ClassicSimilarity], result of:
              0.06761516 = score(doc=1952,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.38690117 = fieldWeight in 1952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1952)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    16. 8.1998 12:51:22
    Source
    Proceedings of the 11th annual conference on research and development in information retrieval. Ed.: Y. Chiaramella
  3. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.03
    0.033380013 = product of:
      0.050070018 = sum of:
        0.026404712 = weight(_text_:on in 5001) [ClassicSimilarity], result of:
          0.026404712 = score(doc=5001,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 5001, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.04733061 = score(doc=5001,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, users' and information specialists' ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  4. Bordoni, L.; Pazienza, M.T.: Documents automatic indexing in an environmental domain (1997) 0.03
    0.033380013 = product of:
      0.050070018 = sum of:
        0.026404712 = weight(_text_:on in 530) [ClassicSimilarity], result of:
          0.026404712 = score(doc=530,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=530)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 530) [ClassicSimilarity], result of:
              0.04733061 = score(doc=530,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=530)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes an application of Natural Language Processing (NLP) techniques in HIRMA (Hypertextual Information Retrieval Managed by ARIOSTO) to the problem of document indexing, referring to a system that uses NLP to determine the subject of document texts and to associate them with relevant semantic indexes. Briefly describes the overall system, details of its implementation on a corpus of scientific abstracts on environmental topics, and experimental evidence of the system's behaviour. Analyzes in detail an experiment designed to evaluate the system's retrieval ability in terms of recall and precision.
    Source
    International forum on information and documentation. 22(1997) no.1, S.17-28
  5. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.03
    0.033380013 = product of:
      0.050070018 = sum of:
        0.026404712 = weight(_text_:on in 2673) [ClassicSimilarity], result of:
          0.026404712 = score(doc=2673,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.04733061 = score(doc=2673,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques that classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW.
    Date
    1. 8.1996 22:08:06
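    The abstract above does not spell out the clustering method, so the following is only one simple way to obtain word clusters from raw document features, not the authors' actual technique: group words by their occurrence pattern across the pre-categorized documents and cluster those vectors (document texts, category labels, and cluster count are illustrative).

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.cluster import AgglomerativeClustering

        docs = [
            "salient features personal web pages",    # examples from one category
            "web pages categories features",
            "information retrieval indexing terms",   # examples from another category
            "automatic indexing retrieval",
        ]
        vec = CountVectorizer(binary=True)
        M = vec.fit_transform(docs).toarray().T       # rows = words, columns = documents
        terms = sorted(vec.vocabulary_, key=vec.vocabulary_.get)
        clusters = AgglomerativeClustering(n_clusters=3).fit_predict(M)
        for c in sorted(set(clusters)):
            print(c, [t for t, k in zip(terms, clusters) if k == c])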
  6. Plaunt, C.; Norgard, B.A.: An association-based method for automatic indexing with a controlled vocabulary (1998) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 1794) [ClassicSimilarity], result of:
          0.01886051 = score(doc=1794,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 1794, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1794)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 1794) [ClassicSimilarity], result of:
              0.03380758 = score(doc=1794,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 1794, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1794)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article, we describe and test a two-stage algorithm based on a lexical collocation technique which maps from the lexical clues contained in a document representation into a controlled vocabulary list of subject headings. Using a collection of 4,626 INSPEC documents, we create a 'dictionary' of associations between the lexical items contained in the titles, authors, and abstracts, and the controlled vocabulary subject headings assigned to those records by human indexers, using a likelihood ratio statistic as the measure of association. In the deployment stage, we use the dictionary to predict which of the controlled vocabulary subject headings best describe new documents when they are presented to the system. Our evaluation of this algorithm, in which we compare the automatically assigned subject headings to the subject headings assigned to the test documents by human catalogers, shows that we can obtain results comparable to, and consistent with, human cataloging. In effect we have cast this as a classic partial match information retrieval problem. We consider the problem to be one of 'retrieving' (or assigning) the most probably 'relevant' (or correct) controlled vocabulary subject headings to a document based on the clues contained in that document.
    Date
    11. 9.2000 19:53:22
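    A minimal sketch of the two-stage approach described in the abstract above, assuming Dunning's log-likelihood ratio as the association statistic; the threshold, the positive-association check, and all names are illustrative, not taken from the paper:

        import math
        from collections import Counter, defaultdict

        def llr(k11, k12, k21, k22):
            # Dunning-style G^2 for the 2x2 term/heading co-occurrence table.
            def entropy(*xs):
                n = sum(xs)
                return sum(x * math.log(x / n) for x in xs if x > 0)
            return 2.0 * (entropy(k11, k12, k21, k22)
                          - entropy(k11 + k12, k21 + k22)
                          - entropy(k11 + k21, k12 + k22))

        def build_association_dictionary(training, min_llr=10.83):
            # training: list of (tokens, subject_headings) pairs from cataloged records.
            # Returns {token: {heading: score}} for positively associated pairs.
            n = len(training)
            df_tok, df_head, df_pair = Counter(), Counter(), Counter()
            for tokens, headings in training:
                toks, heads = set(tokens), set(headings)
                df_tok.update(toks)
                df_head.update(heads)
                df_pair.update((t, h) for t in toks for h in heads)
            assoc = defaultdict(dict)
            for (t, h), k11 in df_pair.items():
                k12 = df_tok[t] - k11            # token present, heading absent
                k21 = df_head[h] - k11           # heading present, token absent
                k22 = n - k11 - k12 - k21
                score = llr(k11, k12, k21, k22)
                if score >= min_llr and k11 / df_tok[t] > df_head[h] / n:
                    assoc[t][h] = score
            return assoc

        def assign_headings(tokens, assoc, top_k=5):
            # Deployment stage: rank candidate headings by summed association scores.
            votes = Counter()
            for t in set(tokens):
                for h, score in assoc.get(t, {}).items():
                    votes[h] += score
            return [h for h, _ in votes.most_common(top_k)]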
  7. Milstead, J.L.: Thesauri in a full-text world (1998) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 2337) [ClassicSimilarity], result of:
          0.01886051 = score(doc=2337,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 2337, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 2337) [ClassicSimilarity], result of:
              0.03380758 = score(doc=2337,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 2337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2337)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Despite early claims to the contrary, thesauri continue to find use as access tools for information in the full-text environment. Their mode of use is changing, but this change actually represents an expansion rather than a contradiction of their utility. Thesauri and similar vocabulary tools can complement full-text access by aiding users in focusing their searches, by supplementing the linguistic analysis of the text search engine, and even by serving as one of the tools used by the linguistic engine for its analysis. While human indexing continues to be used for many databases, the trend is to increase the use of machine aids for this purpose. All machine-aided indexing (MAI) systems rely on thesauri as the basis for term selection. In the 21st century, the balance of effort between human and machine will change at both input and output, but thesauri will continue to play an important role for the foreseeable future.
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  8. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.02
    0.01803071 = product of:
      0.054092128 = sum of:
        0.054092128 = product of:
          0.108184256 = sum of:
            0.108184256 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.108184256 = score(doc=402,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  9. Garfield, E.: KeyWords Plus : ISI's breakthrough retrieval method (1990) 0.02
    0.017603142 = product of:
      0.052809425 = sum of:
        0.052809425 = weight(_text_:on in 4345) [ClassicSimilarity], result of:
          0.052809425 = score(doc=4345,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.48112202 = fieldWeight in 4345, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.109375 = fieldNorm(doc=4345)
      0.33333334 = coord(1/3)
    
    Issue
    Pt.1: Expanding your searching power on Current Contents on Diskette.
  10. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.02
    0.016128099 = product of:
      0.024192147 = sum of:
        0.010669114 = weight(_text_:on in 1441) [ClassicSimilarity], result of:
          0.010669114 = score(doc=1441,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 1441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1441)
        0.013523032 = product of:
          0.027046064 = sum of:
            0.027046064 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
              0.027046064 = score(doc=1441,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.15476047 = fieldWeight in 1441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1441)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper presents research on syntactic structures known as noun phrases (NPs) applied to increase the effectiveness and efficiency of document classification mechanisms. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of words the classification system has to handle without losing semantic coverage and thereby increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing, b) system training, and c) classification experiments. In the first step, a corpus of digitized texts was submitted to a natural language processing platform on which part-of-speech tagging was done, and then Perl scripts from the PALAVRAS package were used to extract the noun phrases. The preprocessing also involved a) removing low-meaning NP pre-modifiers, such as quantifiers; b) identifying synonyms and substituting the corresponding common hypernyms; and c) stemming the relevant words contained in the NPs, for similarity checking against other NPs. The first tests with the resulting documents demonstrated the approach's effectiveness: we compared the structural similarity of the documents before and after the preprocessing steps of phase one, and the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents; the classification rules are to be established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
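    A minimal sketch of phases b) and c) as described above, assuming the NPs of phase a) have already been extracted (the actual pipeline uses PALAVRAS for the POS tagging and NP extraction); documents, category labels, and the underscore-joined NP tokens are illustrative stand-ins:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        # Hypothetical output of phase a): each document reduced to its noun phrases,
        # with every NP kept as a single token so the classifier sees NPs, not words.
        docs_as_nps = [
            "information_retrieval automatic_classification noun_phrase",
            "environmental_domain semantic_index document_indexing",
        ]
        labels = ["information_science", "environment"]

        model = make_pipeline(
            TfidfVectorizer(token_pattern=r"\S+"),  # treat each pre-extracted NP as one feature
            LinearSVC(),
        )
        model.fit(docs_as_nps, labels)
        print(model.predict(["noun_phrase automatic_classification"]))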
  11. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.01577687 = product of:
      0.04733061 = sum of:
        0.04733061 = product of:
          0.09466122 = sum of:
            0.09466122 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09466122 = score(doc=262,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  12. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.02
    0.01577687 = product of:
      0.04733061 = sum of:
        0.04733061 = product of:
          0.09466122 = sum of:
            0.09466122 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.09466122 = score(doc=6265,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  13. Salton, G.; Araya, J.: On the use of clustered file organizations in information search and retrieval (1990) 0.02
    0.015088407 = product of:
      0.04526522 = sum of:
        0.04526522 = weight(_text_:on in 2409) [ClassicSimilarity], result of:
          0.04526522 = score(doc=2409,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.4123903 = fieldWeight in 2409, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.09375 = fieldNorm(doc=2409)
      0.33333334 = coord(1/3)
    
    Source
    Library classification and its functions. Int. Conf. on ..., 20.-21.6.1989, Edmonton, Alberta. Ed.: A. Nitecki u. T. Fell
  14. Salton, G.; Wong, A.: Generation and search of clustered files (1978) 0.01
    0.014225486 = product of:
      0.042676456 = sum of:
        0.042676456 = weight(_text_:on in 2411) [ClassicSimilarity], result of:
          0.042676456 = score(doc=2411,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3888053 = fieldWeight in 2411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.125 = fieldNorm(doc=2411)
      0.33333334 = coord(1/3)
    
    Source
    ACM transactions on database systems. 3(1978) no.4, S.321-346
  15. Salton, G.; Yang, C.S.: On the specification of term values in automatic indexing (1973) 0.01
    0.014225486 = product of:
      0.042676456 = sum of:
        0.042676456 = weight(_text_:on in 5476) [ClassicSimilarity], result of:
          0.042676456 = score(doc=5476,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3888053 = fieldWeight in 5476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.125 = fieldNorm(doc=5476)
      0.33333334 = coord(1/3)
    
  16. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.013523031 = product of:
      0.040569093 = sum of:
        0.040569093 = product of:
          0.081138186 = sum of:
            0.081138186 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.081138186 = score(doc=58,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 6.2015 22:12:44
  17. Hauer, M.: Automatische Indexierung (2000) 0.01
    0.013523031 = product of:
      0.040569093 = sum of:
        0.040569093 = product of:
          0.081138186 = sum of:
            0.081138186 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.081138186 = score(doc=5887,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt
  18. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.013523031 = product of:
      0.040569093 = sum of:
        0.040569093 = product of:
          0.081138186 = sum of:
            0.081138186 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.081138186 = score(doc=2051,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 6.2015 22:12:56
  19. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.01
    0.013523031 = product of:
      0.040569093 = sum of:
        0.040569093 = product of:
          0.081138186 = sum of:
            0.081138186 = weight(_text_:22 in 5629) [ClassicSimilarity], result of:
              0.081138186 = score(doc=5629,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.46428138 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    B.I.T.online. 22(2019) H.2, S.163-166
  20. Salton, G.: Automatic text processing : the transformation, analysis, and retrieval of information by computer (1989) 0.01
    0.012573673 = product of:
      0.03772102 = sum of:
        0.03772102 = weight(_text_:on in 1307) [ClassicSimilarity], result of:
          0.03772102 = score(doc=1307,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3436586 = fieldWeight in 1307, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=1307)
      0.33333334 = coord(1/3)
    
    COMPASS
    Information retrieval / Use of / On-line computers
    Subject
    Information retrieval / Use of / On-line computers
