Search (26 results, page 1 of 2)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.07252696 = sum of:
      0.054075442 = product of:
        0.21630177 = sum of:
          0.21630177 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21630177 = score(doc=562,freq=2.0), product of:
              0.38486624 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045395818 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.01845152 = product of:
        0.03690304 = sum of:
          0.03690304 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.03690304 = score(doc=562,freq=2.0), product of:
              0.15896842 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045395818 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
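The explain trees in this list are standard Lucene ClassicSimilarity output: each term clause scores queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the clause is then scaled by a coord factor. A minimal sketch that recomputes the first summand from the values shown above (the function name is ours):

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coord=1.0):
    """Recompute one Lucene ClassicSimilarity clause:
    score = coord * (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # e.g. 8.478011 * 0.045395818
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 8.478011 * 0.046875
    return coord * query_weight * field_weight

# Values from the first clause above (term "3a" in doc 562, coord(1/4)):
score = classic_similarity(freq=2.0, idf=8.478011,
                           query_norm=0.045395818, field_norm=0.046875,
                           coord=0.25)
print(round(score, 6))  # ≈ 0.054075, the first summand of 0.07252696
```

The same arithmetic reproduces every clause in this result list; only freq, idf, fieldNorm, and coord vary from entry to entry.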
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.05
    0.048132036 = product of:
      0.09626407 = sum of:
        0.09626407 = sum of:
          0.053210527 = weight(_text_:bibliographic in 2560) [ClassicSimilarity], result of:
            0.053210527 = score(doc=2560,freq=2.0), product of:
              0.17672792 = queryWeight, product of:
                3.893044 = idf(docFreq=2449, maxDocs=44218)
                0.045395818 = queryNorm
              0.30108726 = fieldWeight in 2560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.893044 = idf(docFreq=2449, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2560)
          0.043053545 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
            0.043053545 = score(doc=2560,freq=2.0), product of:
              0.15896842 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045395818 = queryNorm
              0.2708308 = fieldWeight in 2560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2560)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
    Source
    International cataloguing and bibliographic control. 36(2007) no.4, S.78-82
  3. Schiminovich, S.: Automatic classification and retrieval of documents by means of a bibliographic pattern discovery algorithm (1971) 0.03
    0.026605263 = product of:
      0.053210527 = sum of:
        0.053210527 = product of:
          0.10642105 = sum of:
            0.10642105 = weight(_text_:bibliographic in 4846) [ClassicSimilarity], result of:
              0.10642105 = score(doc=4846,freq=2.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.6021745 = fieldWeight in 4846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Wang, J.: An extensive study on automated Dewey Decimal Classification (2009) 0.02
    0.01900376 = product of:
      0.03800752 = sum of:
        0.03800752 = product of:
          0.07601504 = sum of:
            0.07601504 = weight(_text_:bibliographic in 3172) [ClassicSimilarity], result of:
              0.07601504 = score(doc=3172,freq=8.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.43012467 = fieldWeight in 3172, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3172)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we present a theoretical analysis and extensive experiments on the automated assignment of Dewey Decimal Classification (DDC) classes to bibliographic data with a supervised machine-learning approach. Library classification systems, such as the DDC, impose great obstacles on state-of-art text categorization (TC) technologies, including deep hierarchy, data sparseness, and skewed distribution. We first analyze statistically the document and category distributions over the DDC, and discuss the obstacles imposed by bibliographic corpora and library classification schemes on TC technology. To overcome these obstacles, we propose an innovative algorithm to reshape the DDC structure into a balanced virtual tree by balancing the category distribution and flattening the hierarchy. To improve the classification effectiveness to a level acceptable to real-world applications, we propose an interactive classification model that is able to predict a class of any depth within a limited number of user interactions. The experiments are conducted on a large bibliographic collection created by the Library of Congress within the science and technology domains over 10 years. With no more than three interactions, a classification accuracy of nearly 90% is achieved, thus providing a practical solution to the automatic bibliographic classification problem.
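The reshaping step described in the abstract can be pictured with a toy sketch (our own illustration, not the paper's algorithm; the dict-based tree layout is an assumption): subtrees with too little training data are folded into their parents, which both flattens the hierarchy and counteracts data sparseness.

```python
def flatten_sparse(tree, counts, root, min_docs=50):
    """Toy illustration of a 'balanced virtual tree': any subtree whose
    training-document total falls below min_docs is folded into its
    parent. Illustrative only; not the algorithm of the paper.

    tree   -- {category: list of child categories}
    counts -- {category: training documents attached to that node}
    """
    def total(node):
        return counts.get(node, 0) + sum(total(c) for c in tree.get(node, ()))

    def prune(node):
        for child in tree.pop(node, ()):
            prune(child)
        counts.pop(node, None)

    def rebuild(node):
        kept = []
        for child in list(tree.get(node, ())):
            if total(child) >= min_docs:
                kept.append(child)
                rebuild(child)
            else:
                counts[node] = counts.get(node, 0) + total(child)  # fold upward
                prune(child)
        if node in tree:
            tree[node] = kept

    rebuild(root)

# Hypothetical categories: subtree "A" has only 15 docs, so it is folded
tree = {"root": ["A", "B"], "A": ["A1", "A2"], "B": []}
counts = {"root": 0, "A": 10, "A1": 3, "A2": 2, "B": 60}
flatten_sparse(tree, counts, "root", min_docs=50)
print(tree)  # → {'root': ['B'], 'B': []}
```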
  5. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.01845152 = product of:
      0.03690304 = sum of:
        0.03690304 = product of:
          0.07380608 = sum of:
            0.07380608 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.07380608 = score(doc=1046,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  6. Ahmed, M.; Mukhopadhyay, M.; Mukhopadhyay, P.: Automated knowledge organization : AI ML based subject indexing system for libraries (2023) 0.02
    0.016457738 = product of:
      0.032915477 = sum of:
        0.032915477 = product of:
          0.06583095 = sum of:
            0.06583095 = weight(_text_:bibliographic in 977) [ClassicSimilarity], result of:
              0.06583095 = score(doc=977,freq=6.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.3724989 = fieldWeight in 977, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=977)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The research study as reported here is an attempt to explore the possibilities of an AI/ML-based semi-automated indexing system in a library setup to handle large volumes of documents. It uses the Python virtual environment to install and configure an open source AI environment (named Annif) to feed the LOD (Linked Open Data) dataset of Library of Congress Subject Headings (LCSH) as a standard KOS (Knowledge Organisation System). The framework deployed the Turtle format of LCSH after cleaning the file with Skosify, applied an array of backend algorithms (namely TF-IDF, Omikuji, and NN-Ensemble) to measure relative performance, and selected Snowball as an analyser. The training of Annif was conducted with a large set of bibliographic records populated with subject descriptors (MARC tag 650$a) and indexed by trained LIS professionals. The training dataset is first treated with MarcEdit to export it in a format suitable for OpenRefine, and then in OpenRefine it undergoes many steps to produce a bibliographic record set suitable to train Annif. The framework, after training, has been tested with a bibliographic dataset to measure indexing efficiencies, and finally, the automated indexing framework is integrated with data wrangling software (OpenRefine) to produce suggested headings on a mass scale. The entire framework is based on open-source software, open datasets, and open standards.
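At its core, a TF-IDF backend of the kind the abstract mentions ranks subject headings by vector similarity between a new document and per-subject training text. A minimal pure-Python sketch of that idea (not Annif's actual API; the data and names are illustrative):

```python
import math
from collections import Counter

def tfidf_vectors(docs_by_subject):
    """Build one TF-IDF vector per subject heading from its training text."""
    tokenized = {s: t.lower().split() for s, t in docs_by_subject.items()}
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))                      # document frequency per term
    n = len(tokenized)
    vecs = {}
    for subject, toks in tokenized.items():
        tf = Counter(toks)
        vecs[subject] = {w: (c / len(toks)) * math.log(n / df[w])
                         for w, c in tf.items()}
    return vecs

def suggest(text, vecs):
    """Rank subject headings by cosine similarity to the input text."""
    toks = text.lower().split()
    q = {w: c / len(toks) for w, c in Counter(toks).items()}
    def cosine(v):
        dot = sum(q.get(w, 0.0) * x for w, x in v.items())
        nv = math.sqrt(sum(x * x for x in v.values())) or 1.0
        nq = math.sqrt(sum(x * x for x in q.values())) or 1.0
        return dot / (nv * nq)
    return sorted(vecs, key=lambda s: cosine(vecs[s]), reverse=True)

# Illustrative training data: subject heading -> concatenated training titles
train = {
    "Automatic classification": "automatic classification of documents dewey",
    "Information retrieval":    "query search ranking retrieval evaluation",
}
vecs = tfidf_vectors(train)
print(suggest("automatic subject classification of library documents", vecs)[0])
# → Automatic classification
```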
  7. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.015376267 = product of:
      0.030752534 = sum of:
        0.030752534 = product of:
          0.061505068 = sum of:
            0.061505068 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.061505068 = score(doc=611,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  8. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.015376267 = product of:
      0.030752534 = sum of:
        0.030752534 = product of:
          0.061505068 = sum of:
            0.061505068 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.061505068 = score(doc=2748,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  9. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.02
    0.0152030075 = product of:
      0.030406015 = sum of:
        0.030406015 = product of:
          0.06081203 = sum of:
            0.06081203 = weight(_text_:bibliographic in 2564) [ClassicSimilarity], result of:
              0.06081203 = score(doc=2564,freq=2.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.34409973 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The classification of documents from a bibliographic database is a task that is linked to processes of information retrieval based on partial matching. A method is described of vectorizing reference documents from LISA which permits their topological organization using Kohonen's algorithm. As an example a map is generated of 202 documents from LISA, and an analysis is made of the possibilities of this type of neural network with respect to the development of information retrieval systems based on graphical browsing.
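The topological organisation the abstract describes can be sketched with a minimal 1-D Kohonen map (a toy version; the paper's vectorization of LISA records is omitted, and all data here is made up):

```python
import math
import random

def train_som(data, n_units=4, epochs=200, seed=0):
    """Minimal 1-D self-organising map (Kohonen): each map unit holds a
    weight vector; the best-matching unit and its map neighbours are
    pulled toward each presented document vector."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = 0.5 * (1.0 - frac)            # decaying learning rate
        radius = max(1.0 - frac, 0.01)     # shrinking neighbourhood
        for x in data:
            # best-matching unit: the closest unit in feature space
            bmu = min(range(n_units), key=lambda i: math.dist(units[i], x))
            for i, unit in enumerate(units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    unit[d] += lr * h * (x[d] - unit[d])
    return units

def quantization_error(data, units):
    """Mean distance from each document vector to its nearest map unit."""
    return sum(min(math.dist(u, x) for u in units) for x in data) / len(data)

# Two tiny "document clusters" in a 2-D feature space
docs = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]
som = train_som(docs)
```

After training, nearby map units represent similar documents, which is what makes the map usable for the graphical browsing the paper discusses.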
  10. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.01
    0.0134376865 = product of:
      0.026875373 = sum of:
        0.026875373 = product of:
          0.053750746 = sum of:
            0.053750746 = weight(_text_:bibliographic in 2300) [ClassicSimilarity], result of:
              0.053750746 = score(doc=2300,freq=4.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.30414405 = fieldWeight in 2300, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2300)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Subject terms play a crucial role in resource discovery but require substantial effort to produce. Automatic subject classification and indexing address problems of scale and sustainability and can be used to enrich existing bibliographic records, establish more connections across and between resources and enhance consistency of bibliographic data. The paper aims to put forward a complex methodological framework to evaluate automatic classification tools of Swedish textual documents based on the Dewey Decimal Classification (DDC) recently introduced to Swedish libraries. Three major complementary approaches are suggested: a quality-built gold standard, retrieval effects, domain analysis. The gold standard is built based on input from at least two catalogue librarians, end-users expert in the subject, end users inexperienced in the subject and automated tools. Retrieval effects are studied through a combination of assigned and free tasks, including factual and comprehensive types. The study also takes into consideration the different role and character of subject terms in various knowledge domains, such as scientific disciplines. As a theoretical framework, domain analysis is used and applied in relation to the implementation of DDC in Swedish libraries and chosen domains of knowledge within the DDC itself.
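A gold standard of the kind described above is typically used to score the automatic indexer with precision, recall, and F1 over assigned subject terms. A minimal sketch with hypothetical DDC-style classes (illustrative data, not from the study):

```python
def prf(assigned, gold):
    """Precision, recall, and F1 of automatically assigned subject
    terms against a gold-standard set."""
    assigned, gold = set(assigned), set(gold)
    tp = len(assigned & gold)                       # true positives
    p = tp / len(assigned) if assigned else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# The indexer suggested three classes; two are in the gold standard
p, r, f1 = prf({"025.4", "006.35", "020"}, {"025.4", "006.35"})
print(p, r, f1)
```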
  11. Wille, J.: Automatisches Klassifizieren bibliographischer Beschreibungsdaten : Vorgehensweise und Ergebnisse (2006) 0.01
    0.013302632 = product of:
      0.026605263 = sum of:
        0.026605263 = product of:
          0.053210527 = sum of:
            0.053210527 = weight(_text_:bibliographic in 6090) [ClassicSimilarity], result of:
              0.053210527 = score(doc=6090,freq=2.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.30108726 = fieldWeight in 6090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6090)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This thesis deals with the practical aspects of the automatic classification of bibliographic reference data. The focus is on the concrete procedure, based on the open-source program COBRA ("Classification Of Bibliographic Records, Automatic") developed specifically for this purpose. The framework conditions and parameters for deployment in a library environment are clarified. Finally, classification results are evaluated using the example of social-science data from the SOLIS database.
  12. Reiner, U.: DDC-based search in the data of the German National Bibliography (2008) 0.01
    0.011402255 = product of:
      0.02280451 = sum of:
        0.02280451 = product of:
          0.04560902 = sum of:
            0.04560902 = weight(_text_:bibliographic in 2166) [ClassicSimilarity], result of:
              0.04560902 = score(doc=2166,freq=2.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.2580748 = fieldWeight in 2166, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2166)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 2004, the German National Library began to classify title records of the German National Bibliography according to subject groups based on the divisions of the Dewey Decimal Classification (DDC). Since 2006, all titles of the main series of the German National Bibliography are classified in strict compliance with the DDC. On this basis, an enhanced DDC-based search can be realized - e.g., searching the data of the German National Bibliography for title records using number components of synthesized classification numbers or searching for DDC numbers using unclassified title records. This paper gives an account of the current research and development of the DDC-based search. The work is conducted in the VZG project Colibri that focuses on the automatic analysis of DDC-synthesized numbers and the automatic classification of bibliographic title records.
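At its simplest, the DDC-based search described above matches number components against decomposed synthesized numbers. A toy illustration (our own; the record layout and component sets are hypothetical):

```python
def find_by_component(records, component):
    """Toy sketch: return the title records whose decomposed DDC number
    contains a given number component."""
    return [rid for rid, parts in records.items() if component in parts]

# Hypothetical title records mapped to components of their synthesized numbers
records = {
    "title-1": {"330", "943"},
    "title-2": {"330", "973"},
    "title-3": {"943"},
}
print(find_by_component(records, "943"))  # → ['title-1', 'title-3']
```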
  13. Desale, S.K.; Kumbhar, R.: Research on automatic classification of documents in library environment : a literature review (2013) 0.01
    0.011402255 = product of:
      0.02280451 = sum of:
        0.02280451 = product of:
          0.04560902 = sum of:
            0.04560902 = weight(_text_:bibliographic in 1071) [ClassicSimilarity], result of:
              0.04560902 = score(doc=1071,freq=2.0), product of:
                0.17672792 = queryWeight, product of:
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.045395818 = queryNorm
                0.2580748 = fieldWeight in 1071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.893044 = idf(docFreq=2449, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1071)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper aims to provide an overview of automatic classification research, which focuses on issues related to the automatic classification of documents in a library environment. The review covers literature published in mainstream library and information science studies. The review was done on literature published in both academic and professional LIS journals and other documents. This review reveals that basically three types of research are being done on automatic classification: 1) hierarchical classification using different library classification schemes, 2) text categorization and document categorization using different type of classifiers with or without using training documents, and 3) automatic bibliographic classification. Predominantly this research is directed towards solving problems of organization of digital documents in an online environment. However, very little research is devoted towards solving the problems of arrangement of physical documents.
  14. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.010763386 = product of:
      0.021526773 = sum of:
        0.021526773 = product of:
          0.043053545 = sum of:
            0.043053545 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.043053545 = score(doc=141,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
  15. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.010763386 = product of:
      0.021526773 = sum of:
        0.021526773 = product of:
          0.043053545 = sum of:
            0.043053545 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.043053545 = score(doc=2338,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  16. Automatic classification research at OCLC (2002) 0.01
    0.010763386 = product of:
      0.021526773 = sum of:
        0.021526773 = product of:
          0.043053545 = sum of:
            0.043053545 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.043053545 = score(doc=1563,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
  17. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.010763386 = product of:
      0.021526773 = sum of:
        0.021526773 = product of:
          0.043053545 = sum of:
            0.043053545 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.043053545 = score(doc=1673,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  18. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.010763386 = product of:
      0.021526773 = sum of:
        0.021526773 = product of:
          0.043053545 = sum of:
            0.043053545 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.043053545 = score(doc=5273,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
  19. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    0.00922576 = product of:
      0.01845152 = sum of:
        0.01845152 = product of:
          0.03690304 = sum of:
            0.03690304 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.03690304 = score(doc=2760,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:11:54
  20. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.00922576 = product of:
      0.01845152 = sum of:
        0.01845152 = product of:
          0.03690304 = sum of:
            0.03690304 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.03690304 = score(doc=3051,freq=2.0), product of:
                0.15896842 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045395818 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 19:51:28