Search (74 results, page 1 of 4)

  • theme_ss:"Automatisches Indexieren"
  1. Galvez, C.; Moya-Anegón, F. de: An evaluation of conflation accuracy using finite-state transducers (2006) 0.08
    0.077783585 = product of:
      0.15556717 = sum of:
        0.13135578 = weight(_text_:graphic in 5599) [ClassicSimilarity], result of:
          0.13135578 = score(doc=5599,freq=2.0), product of:
            0.29924196 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.045191016 = queryNorm
            0.43896174 = fieldWeight in 5599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=5599)
        0.024211394 = product of:
          0.048422787 = sum of:
            0.048422787 = weight(_text_:methods in 5599) [ClassicSimilarity], result of:
              0.048422787 = score(doc=5599,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.26651827 = fieldWeight in 5599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5599)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
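    The indented breakdown above (repeated under each result) appears to be Lucene ClassicSimilarity "explain" output: each matching term contributes queryWeight × fieldWeight, scaled by coordination factors. A minimal sketch that reproduces the displayed 0.08 score, assuming the classic formulas idf = 1 + ln(maxDocs/(docFreq+1)) and tf = sqrt(termFreq):

      import math

      # Reproduce the ClassicSimilarity (TF-IDF) score shown for result 1.
      def term_score(term_freq, doc_freq, max_docs, query_norm, field_norm):
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          query_weight = idf * query_norm                         # queryWeight in the tree
          field_weight = math.sqrt(term_freq) * idf * field_norm  # fieldWeight in the tree
          return query_weight * field_weight

      QUERY_NORM, MAX_DOCS = 0.045191016, 44218
      graphic = term_score(2.0, 159, MAX_DOCS, QUERY_NORM, 0.046875)   # ~0.13135578
      methods = term_score(2.0, 2156, MAX_DOCS, QUERY_NORM, 0.046875)  # ~0.048422787
      # "methods" sits under coord(1/2); the whole sum is scaled by coord(2/4).
      print((graphic + 0.5 * methods) * 2 / 4)  # ~0.0778, matching the score above
                                                # (small rounding differences aside)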
    Abstract
    Purpose - To evaluate the accuracy of conflation methods based on finite-state transducers (FSTs).
    Design/methodology/approach - Incorrectly lemmatized and stemmed forms may lead to the retrieval of inappropriate documents. Experimental studies to date have focused on retrieval performance, but very few on conflation performance. The normalization process used a linguistic toolbox that allowed electronic dictionaries, represented internally as FSTs, to be built through graphic interfaces. The lexical resources developed were applied to a Spanish test corpus to merge term variants into canonical lemmatized forms. Conflation performance was evaluated with an adaptation of the recall and precision measures, based on accuracy and coverage rather than on actual retrieval, and the results were compared with those obtained using a Spanish version of the Porter algorithm.
    Findings - The main strength of lemmatization is its accuracy; its main limitation is the underanalysis of variant forms.
    Originality/value - The report outlines the potential of transducers applied to normalization processes.
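    The accuracy/coverage evaluation described here can be made concrete with a toy sketch (not the authors' code; the measure definitions are read off the abstract as accuracy = correctly conflated forms / forms the system conflated, coverage = forms conflated / all gold forms, and the word data is invented):

      # Toy conflation scoring against a gold word -> lemma mapping.
      gold = {"canciones": "canción", "cantaba": "cantar", "libros": "libro"}
      output = {"canciones": "canción", "cantaba": "cantaba"}  # invented system output

      attempted = [w for w in gold if w in output]
      correct = [w for w in attempted if output[w] == gold[w]]
      accuracy = len(correct) / len(attempted)  # 1/2 here
      coverage = len(attempted) / len(gold)     # 2/3 here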
  2. Kutschekmanesch, S.; Lutes, B.; Moelle, K.; Thiel, U.; Tzeras, K.: Automated multilingual indexing : a synthesis of rule-based and thesaurus-based methods (1998) 0.04
    0.03548306 = product of:
      0.14193223 = sum of:
        0.14193223 = sum of:
          0.080704644 = weight(_text_:methods in 4157) [ClassicSimilarity], result of:
            0.080704644 = score(doc=4157,freq=2.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.4441971 = fieldWeight in 4157, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.078125 = fieldNorm(doc=4157)
          0.06122759 = weight(_text_:22 in 4157) [ClassicSimilarity], result of:
            0.06122759 = score(doc=4157,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.38690117 = fieldWeight in 4157, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4157)
      0.25 = coord(1/4)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
  3. Newman, D.J.; Block, S.: Probabilistic topic decomposition of an eighteenth-century American newspaper (2006) 0.02
    0.024838142 = product of:
      0.09935257 = sum of:
        0.09935257 = sum of:
          0.056493253 = weight(_text_:methods in 5291) [ClassicSimilarity], result of:
            0.056493253 = score(doc=5291,freq=2.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.31093797 = fieldWeight in 5291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5291)
          0.042859312 = weight(_text_:22 in 5291) [ClassicSimilarity], result of:
            0.042859312 = score(doc=5291,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.2708308 = fieldWeight in 5291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5291)
      0.25 = coord(1/4)
    
    Abstract
    We use a probabilistic mixture decomposition method to determine topics in the Pennsylvania Gazette, a major colonial U.S. newspaper from 1728-1800. We assess the value of several topic decomposition techniques for historical research and compare the accuracy and efficacy of various methods. After determining the topics covered by the 80,000 articles and advertisements in the entire 18th century run of the Gazette, we calculate how the prevalence of those topics changed over time, and give historically relevant examples of our findings. This approach reveals important information about the content of this colonial newspaper, and suggests the value of such approaches to a more complete understanding of early American print culture and society.
    Date
    22. 7.2006 17:32:00
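    The decomposition step described in the abstract above can be sketched with an off-the-shelf probabilistic topic model (a stand-in, not the authors' implementation; the article snippets are invented):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      articles = [
          "ship arrived port cargo tobacco",    # invented stand-ins for
          "runaway servant reward subscriber",  # Gazette articles
          "assembly governor act province law",
      ]
      counts = CountVectorizer().fit_transform(articles)
      lda = LatentDirichletAllocation(n_components=2, random_state=0)
      doc_topics = lda.fit_transform(counts)  # per-article topic mixtures
      # Averaging doc_topics by publication year would give the
      # topic-prevalence-over-time curves the authors report.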
  4. Griffiths, A.; Robinson, L.A.; Willett, P.: Hierarchic agglomerative clustering methods for automatic document classification (1984) 0.02
    0.016140928 = product of:
      0.064563714 = sum of:
        0.064563714 = product of:
          0.12912743 = sum of:
            0.12912743 = weight(_text_:methods in 2414) [ClassicSimilarity], result of:
              0.12912743 = score(doc=2414,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.71071535 = fieldWeight in 2414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.125 = fieldNorm(doc=2414)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  5. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.01
    0.013345277 = product of:
      0.053381108 = sum of:
        0.053381108 = product of:
          0.106762215 = sum of:
            0.106762215 = weight(_text_:methods in 658) [ClassicSimilarity], result of:
              0.106762215 = score(doc=658,freq=14.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.5876176 = fieldWeight in 658, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=658)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments were performed using the open source Annif toolkit for automated subject indexing and classification, but the findings should also generalize to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods in most cases outperform baseline methods in text classification, particularly for Finnish and Swedish text, but not for English, where the baseline methods are most effective. The differences between lemmatization methods are quite small. This systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
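    As a minimal illustration of the normalization choice being compared (not Annif's code; NLTK's Snowball stemmer stands in for one analyzer):

      from nltk.stem.snowball import SnowballStemmer  # pip install nltk

      stem = SnowballStemmer("english").stem
      print([stem(w) for w in ["indexing", "indexed", "classification"]])
      # A lemmatizer (e.g. simplemma or spaCy) returns dictionary forms
      # instead of truncated stems; differences like these feed into the
      # classification quality the paper measures.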
  6. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.012245518 = product of:
      0.048982073 = sum of:
        0.048982073 = product of:
          0.097964145 = sum of:
            0.097964145 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.097964145 = score(doc=402,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  7. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.01
    0.012105697 = product of:
      0.048422787 = sum of:
        0.048422787 = product of:
          0.096845575 = sum of:
            0.096845575 = weight(_text_:methods in 5480) [ClassicSimilarity], result of:
              0.096845575 = score(doc=5480,freq=8.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.53303653 = fieldWeight in 5480, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5480)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    (Automatic) document classification is generally defined as the content-based assignment of one or more predefined categories to documents. Usually, machine learning, statistical pattern recognition, or neural network approaches are used to construct classifiers automatically. In this paper we thoroughly evaluate a wide variety of these methods on a document classification task for German text. We evaluate different feature construction and selection methods and various classifiers. Our main results are: (1) feature selection is necessary not only to reduce learning and classification time, but also to avoid overfitting (even for Support Vector Machines); (2) surprisingly, our morphological analysis does not improve classification quality compared to a letter 5-gram approach; (3) Support Vector Machines are significantly better than all other classification methods.
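    The shape of that pipeline (feature construction, feature selection, SVM) can be sketched as follows; the texts and labels are invented, and the character 5-grams mirror finding (2):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.feature_selection import SelectKBest, chi2
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      texts = ["erster deutscher beispieltext hier",
               "zweiter deutscher beispieltext dort"]  # invented toy corpus
      labels = [0, 1]
      clf = make_pipeline(
          TfidfVectorizer(analyzer="char", ngram_range=(5, 5)),  # letter 5-grams
          SelectKBest(chi2, k=10),  # feature selection against overfitting
          LinearSVC(),
      ).fit(texts, labels)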
  8. Salton, G.: Fast document classification in automatic information retrieval (1978) 0.01
    0.011413361 = product of:
      0.045653444 = sum of:
        0.045653444 = product of:
          0.09130689 = sum of:
            0.09130689 = weight(_text_:methods in 2331) [ClassicSimilarity], result of:
              0.09130689 = score(doc=2331,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.5025517 = fieldWeight in 2331, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2331)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    A classified or clustered file is one where related or similar records are grouped into classes or clusters of items in such a way that all items within a cluster are jointly retrievable. Clustered files are easily adapted to broad and narrow search strategies, and simple file updating methods are available. An inexpensive file clustering method applicable to large files is given, together with appropriate file search methods.
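    A toy sketch of searching such a clustered file (in the terms of the abstract, not Salton's algorithm): match the query against cluster representatives first, then rank only within the selected cluster.

      import numpy as np

      def search_clustered(query, centroids, clusters, doc_vecs, top_k=3):
          # Pick the cluster whose centroid best matches the query ...
          best = max(range(len(centroids)), key=lambda c: query @ centroids[c])
          # ... then score only that cluster's jointly retrievable items.
          members = clusters[best]
          return sorted(members, key=lambda d: query @ doc_vecs[d],
                        reverse=True)[:top_k]

      # Invented 2-d toy data: two clusters, three documents.
      doc_vecs = {0: np.array([1.0, 0.0]), 1: np.array([0.9, 0.1]),
                  2: np.array([0.0, 1.0])}
      clusters = {0: [0, 1], 1: [2]}
      centroids = [np.array([0.95, 0.05]), np.array([0.0, 1.0])]
      print(search_clustered(np.array([1.0, 0.0]), centroids, clusters, doc_vecs))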
  9. Witschel, H.F.: Terminology extraction and automatic indexing : comparison and qualitative evaluation of methods (2005) 0.01
    0.011278818 = product of:
      0.045115273 = sum of:
        0.045115273 = product of:
          0.09023055 = sum of:
            0.09023055 = weight(_text_:methods in 1842) [ClassicSimilarity], result of:
              0.09023055 = score(doc=1842,freq=10.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4966275 = fieldWeight in 1842, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1842)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Many terminology engineering processes involve the task of automatic terminology extraction: before the terminology of a given domain can be modelled, organised or standardised, important concepts (or terms) of the domain have to be identified and fed into terminological databases. These serve in further steps as a starting point for compiling dictionaries, thesauri or maybe even terminological ontologies for the domain. For the extraction of the initial concepts, extraction methods are needed that operate on specialised language texts. On the other hand, many machine learning and information retrieval applications require automatic indexing techniques. In machine learning applications concerned with the automatic clustering or classification of texts, feature vectors are often needed that describe the contents of a given text briefly but meaningfully. These feature vectors typically consist of a fairly small set of index terms together with weights indicating their importance. Short but meaningful descriptions of document contents as provided by good index terms are also useful to humans: some knowledge management applications (e.g. topic maps) use them as a set of basic concepts (topics). The author believes that the tasks of terminology extraction and automatic indexing have much in common and can thus benefit from the same set of basic algorithms. The goal of this paper is to outline some methods that may be used in both contexts, and also to identify the discriminating factors between the two tasks that call for varying parameters or applying different techniques. The discussion of these methods is based on statistical, syntactical and especially morphological properties of (index) terms. The paper concludes with the presentation of some qualitative and quantitative results comparing statistical and morphological methods.
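    The shared core of the two tasks, as the author frames it, is statistical term weighting. A hedged sketch (tf-idf stands in for the statistical properties discussed; the documents are invented):

      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = ["terminology extraction identifies domain terms",
              "automatic indexing assigns weighted index terms to documents"]
      vec = TfidfVectorizer()
      weights = vec.fit_transform(docs)[1].toarray().ravel()
      terms = vec.get_feature_names_out()
      # Top-weighted terms serve as index terms or term candidates.
      index_terms = sorted(zip(terms, weights), key=lambda t: -t[1])[:5]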
  10. Lu, K.; Mao, J.; Li, G.: Toward effective automated weighted subject indexing : a comparison of different approaches in different environments (2018) 0.01
    0.011278818 = product of:
      0.045115273 = sum of:
        0.045115273 = product of:
          0.09023055 = sum of:
            0.09023055 = weight(_text_:methods in 4292) [ClassicSimilarity], result of:
              0.09023055 = score(doc=4292,freq=10.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4966275 = fieldWeight in 4292, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4292)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Subject indexing plays an important role in supporting subject access to information resources. Current subject indexing systems do not make adequate distinctions on the importance of assigned subject descriptors. Assigning numeric weights to subject descriptors to distinguish their importance to the documents can strengthen the role of subject metadata. Automated methods are more cost-effective. This study compares different automated weighting methods in different environments. Two evaluation methods were used to assess the performance. Experiments on three datasets in the biomedical domain suggest the performance of different weighting methods depends on whether it is an abstract or full text environment. Mutual information with bag-of-words representation shows the best average performance in the full text environment, while cosine with bag-of-words representation is the best in an abstract environment. The cosine measure has relatively consistent and robust performance. A direct weighting method, IDF (Inverse Document Frequency), can produce quick and reasonable estimates of the weights. Bag-of-words representation generally outperforms the concept-based representation. Further improvement in performance can be obtained by using the learning-to-rank method to integrate different weighting methods. This study follows up Lu and Mao (Journal of the Association for Information Science and Technology, 66, 1776-1784, 2015), in which an automated weighted subject indexing method was proposed and validated. The findings from this study contribute to more effective weighted subject indexing.
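    One of the weighting schemes compared above, cosine with bag-of-words representation, reduces to a small function (an illustrative reading of the abstract, not the authors' code):

      import math
      from collections import Counter

      def cosine_weight(descriptor, document):
          a, b = Counter(descriptor.split()), Counter(document.split())
          dot = sum(a[t] * b[t] for t in a)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      # Higher weight = descriptor terms better represented in the text.
      cosine_weight("subject indexing", "automated subject indexing of text")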
  11. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.010714828 = product of:
      0.042859312 = sum of:
        0.042859312 = product of:
          0.085718624 = sum of:
            0.085718624 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.085718624 = score(doc=262,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20.10.2000 12:22:23
  12. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.01
    0.010714828 = product of:
      0.042859312 = sum of:
        0.042859312 = product of:
          0.085718624 = sum of:
            0.085718624 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.085718624 = score(doc=6265,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  13. Jones, R.L.: Automatic document content analysis : the AIDA project (1992) 0.01
    0.010088081 = product of:
      0.040352322 = sum of:
        0.040352322 = product of:
          0.080704644 = sum of:
            0.080704644 = weight(_text_:methods in 2607) [ClassicSimilarity], result of:
              0.080704644 = score(doc=2607,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4441971 = fieldWeight in 2607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2607)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The AIDA project is a research program being carried out by Computer Power in Canberra, Australia, in collaboration with the Australian Parliament. Its primary objective is to develop practical methods for carrying out document content analysis with minimal human intervention. The different techniques employed by AIDA to achieve its results are described
  14. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996) 0.01
    0.010088081 = product of:
      0.040352322 = sum of:
        0.040352322 = product of:
          0.080704644 = sum of:
            0.080704644 = weight(_text_:methods in 2533) [ClassicSimilarity], result of:
              0.080704644 = score(doc=2533,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4441971 = fieldWeight in 2533, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2533)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Profiles several representative current efforts that apply established as well as more innovative methods of automated classification, organisation, or other forms of categorisation to WWW resources
  15. Tsai, C.-F.; McGarry, K.; Tait, J.: Qualitative evaluation of automatic assignment of keywords to images (2006) 0.01
    0.010088081 = product of:
      0.040352322 = sum of:
        0.040352322 = product of:
          0.080704644 = sum of:
            0.080704644 = weight(_text_:methods in 963) [ClassicSimilarity], result of:
              0.080704644 = score(doc=963,freq=8.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4441971 = fieldWeight in 963, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=963)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In image retrieval, most systems lack user-centred evaluation because they are assessed against some chosen ground-truth dataset, and the precision and recall figures reported against that ground truth are taken as an acceptable surrogate for the judgement of real users. Much current research focuses on automatically assigning keywords to images to enhance retrieval effectiveness. However, evaluation methods are usually based on system-level assessment, e.g. classification accuracy against the chosen ground-truth dataset. In this paper, we present a qualitative evaluation methodology for automatic image indexing systems. The automatic indexing task is formulated as one of image annotation, or automatic metadata generation for images. The evaluation is composed of two individual methods. First, the automatic annotation results are assessed by human subjects. Second, the subjects annotate a chosen set of images to serve as the test set, and their annotations are used as ground truth; the system is then run on this test set and its results judged against that ground truth. Most systems for which user-centred evaluation is conducted report only one of these methods; we believe both need to be considered for a full evaluation. We also provide an example evaluation of our system based on this methodology. According to this study, our proposed evaluation methodology is able to provide a deeper understanding of the system's performance.
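    The second evaluation step reduces to standard set comparison; a toy sketch (invented keywords, not the authors' code):

      def precision_recall(system_kw, human_kw):
          sys_set, gold = set(system_kw), set(human_kw)
          hits = sys_set & gold
          return (len(hits) / len(sys_set) if sys_set else 0.0,
                  len(hits) / len(gold) if gold else 0.0)

      precision_recall(["beach", "sky", "car"], ["beach", "sky", "sea"])
      # -> (0.666..., 0.666...)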
  16. Salton, G.; Buckley, C.: Approaches to global text analysis (1990) 0.01
    0.009986691 = product of:
      0.039946765 = sum of:
        0.039946765 = product of:
          0.07989353 = sum of:
            0.07989353 = weight(_text_:methods in 4901) [ClassicSimilarity], result of:
              0.07989353 = score(doc=4901,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.43973273 = fieldWeight in 4901, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4901)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Current approaches to the analysis of natural language text are not viable for documents of unrestricted scope. A global text analysis system is proposed, designed to identify homogeneous text environments in which the meaning of text words and phrases remains unambiguous and useful term relationships may be automatically determined. The proposed methods include document clustering, as well as comparisons of local document excerpts in specified global contexts, leading to structured text representations in which similar texts, or text excerpts, are appropriately linked.
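    The document-clustering component mentioned above can be sketched with hierarchic agglomerative clustering over tf-idf vectors (toy texts; not Salton and Buckley's system):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import AgglomerativeClustering

      texts = ["global text analysis", "local excerpt comparison",
               "document clustering methods", "text excerpt linking"]
      X = TfidfVectorizer().fit_transform(texts).toarray()
      labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
      # Documents sharing a label form one homogeneous text environment.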
  17. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.009184138 = product of:
      0.03673655 = sum of:
        0.03673655 = product of:
          0.0734731 = sum of:
            0.0734731 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.0734731 = score(doc=58,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:44
  18. Hauer, M.: Automatische Indexierung (2000) 0.01
    0.009184138 = product of:
      0.03673655 = sum of:
        0.03673655 = product of:
          0.0734731 = sum of:
            0.0734731 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.0734731 = score(doc=5887,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt
  19. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.009184138 = product of:
      0.03673655 = sum of:
        0.03673655 = product of:
          0.0734731 = sum of:
            0.0734731 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.0734731 = score(doc=2051,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:56
  20. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.01
    0.009184138 = product of:
      0.03673655 = sum of:
        0.03673655 = product of:
          0.0734731 = sum of:
            0.0734731 = weight(_text_:22 in 5629) [ClassicSimilarity], result of:
              0.0734731 = score(doc=5629,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.46428138 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    B.I.T.online. 22(2019) H.2, S.163-166

Types

  • a 68
  • el 5
  • m 2
  • s 2
  • x 2