Search (30 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.08
    0.080164835 = product of:
      0.2404945 = sum of:
        0.2404945 = weight(_text_:citation in 3940) [ClassicSimilarity], result of:
          0.2404945 = score(doc=3940,freq=4.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            1.0257815 = fieldWeight in 3940, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.109375 = fieldNorm(doc=3940)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
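The explain tree above can be reproduced with a few lines of arithmetic. A minimal sketch, assuming Lucene's ClassicSimilarity definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))); queryNorm, fieldNorm, and coord are taken as printed rather than recomputed:

```python
import math

# ClassicSimilarity (Lucene TFIDFSimilarity) building blocks:
#   tf(freq)   = sqrt(freq)
#   idf(df, N) = 1 + ln(N / (df + 1))
def tf(freq: float) -> float:
    return math.sqrt(freq)

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Values printed above for doc 3940 and term "citation":
freq, doc_freq, max_docs = 4.0, 1104, 44218
query_norm = 0.04999695  # 1/sqrt(sum of squared query weights), taken as given
field_norm = 0.109375    # index-time length norm for the field, taken as given
coord      = 1.0 / 3.0   # 1 of 3 query clauses matched this document

query_weight = idf(doc_freq, max_docs) * query_norm              # ~0.23445003
field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm   # ~1.0257815
print(coord * query_weight * field_weight)                       # ~0.080164835
```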
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.06648674 = product of:
      0.099730104 = sum of:
        0.079408415 = product of:
          0.23822524 = sum of:
            0.23822524 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.23822524 = score(doc=562,freq=2.0), product of:
                0.4238747 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04999695 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.02032169 = product of:
          0.04064338 = sum of:
            0.04064338 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.04064338 = score(doc=562,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Kwok, K.L.: The use of titles and cited titles as document representations for automatic classification (1975) 0.06
    0.056685098 = product of:
      0.17005529 = sum of:
        0.17005529 = weight(_text_:citation in 4347) [ClassicSimilarity], result of:
          0.17005529 = score(doc=4347,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.725337 = fieldWeight in 4347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.109375 = fieldNorm(doc=4347)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
  4. Yang, P.; Gao, W.; Tan, Q.; Wong, K.-F.: A link-bridged topic model for cross-domain document classification (2013) 0.04
    0.035064813 = product of:
      0.105194435 = sum of:
        0.105194435 = weight(_text_:citation in 2706) [ClassicSimilarity], result of:
          0.105194435 = score(doc=2706,freq=6.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.44868594 = fieldWeight in 2706, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2706)
      0.33333334 = coord(1/3)
    
    Abstract
    Transfer learning utilizes labeled data available from some related domain (source domain) for achieving effective knowledge transformation to the target domain. However, most state-of-the-art cross-domain classification methods treat documents as plain text and ignore the hyperlink (or citation) relationship existing among the documents. In this paper, we propose a novel cross-domain document classification approach called Link-Bridged Topic model (LBT). LBT consists of two key steps. Firstly, LBT utilizes an auxiliary link network to discover the direct or indirect co-citation relationship among documents by embedding the background knowledge into a graph kernel. The mined co-citation relationship is leveraged to bridge the gap across different domains. Secondly, LBT simultaneously combines the content information and link structures into a unified latent topic model. The model is based on an assumption that the documents of source and target domains share some common topics from the point of view of both content information and link structure. By mapping both domains' data into the latent topic spaces, LBT encodes the knowledge about domain commonality and difference as the shared topics with associated differential probabilities. The learned latent topics must be consistent with the source and target data, as well as content and link statistics. Then the shared topics act as the bridge to facilitate knowledge transfer from the source to the target domains. Experiments on different types of datasets show that our algorithm significantly improves the generalization performance of cross-domain document classification.
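The LBT model itself is not reproduced here; as a minimal sketch of its first ingredient, mining co-citation links from an auxiliary link network, assuming a toy citations mapping (all document names are hypothetical):

```python
from collections import Counter
from itertools import combinations

# Hypothetical link network: each citing paper -> the references it cites.
citations = {
    "paper1": ["docA", "docB", "docC"],
    "paper2": ["docA", "docB"],
    "paper3": ["docB", "docC"],
}

# Two documents are co-cited once per paper whose reference list contains
# both; the counts become edge weights of the co-citation graph.
cocitation = Counter()
for refs in citations.values():
    for u, v in combinations(sorted(set(refs)), 2):
        cocitation[(u, v)] += 1

print(cocitation)
# Counter({('docA', 'docB'): 2, ('docB', 'docC'): 2, ('docA', 'docC'): 1})
```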
  5. Sojka, P.; Lee, M.; Rehurek, R.; Hatlapatka, R.; Kucbel, M.; Bouche, T.; Goutorbe, C.; Anghelache, R.; Wojciechowski, K.: Toolset for entity and semantic associations : Final Release (2013) 0.03
    0.034356356 = product of:
      0.10306907 = sum of:
        0.10306907 = weight(_text_:citation in 1057) [ClassicSimilarity], result of:
          0.10306907 = score(doc=1057,freq=4.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.4396206 = fieldWeight in 1057, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=1057)
      0.33333334 = coord(1/3)
    
    Abstract
    In this document we describe the final release of the toolset for entity and semantic associations, integrating two versions (language-dependent and language-independent) of Unsupervised Document Similarity implemented by MU (using the gensim tool) and Citation Indexing, Resolution and Matching (UJF/CMD). We give a brief description of the tools, the rationale behind the decisions made, and provide an elementary evaluation. The tools are integrated in the main project result, the EuDML website, and they deliver the functionality needed for exploratory searching and browsing of the collected documents. EuDML users and content providers thus benefit from millions of algorithmically generated similarity and citation links, developed using state-of-the-art machine learning and matching methods.
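The EuDML pipeline is not public in this document; a minimal sketch of unsupervised document similarity with the gensim tool the abstract names, on an invented toy corpus and query:

```python
from gensim import corpora, models, similarities

# Invented toy corpus; EuDML-scale preprocessing is omitted.
docs = [
    "citation matching for mathematics libraries",
    "unsupervised document similarity with topic models",
    "entity resolution and citation indexing",
]
texts = [d.lower().split() for d in docs]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]
tfidf = models.TfidfModel(bow)  # reweight raw counts by TF-IDF
index = similarities.MatrixSimilarity(tfidf[bow],
                                      num_features=len(dictionary))

query = dictionary.doc2bow("citation indexing".split())
print(list(index[tfidf[query]]))  # cosine similarity to each document
```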
  6. Ibekwe-SanJuan, F.; SanJuan, E.: From term variants to research topics (2002) 0.02
    0.020244677 = product of:
      0.06073403 = sum of:
        0.06073403 = weight(_text_:citation in 1853) [ClassicSimilarity], result of:
          0.06073403 = score(doc=1853,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.25904894 = fieldWeight in 1853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1853)
      0.33333334 = coord(1/3)
    
    Abstract
    In a scientific and technological watch (STW) task, an expert user needs to survey the evolution of research topics in his area of specialisation in order to detect interesting changes. The majority of methods proposing evaluation metrics (bibliometrics and scientometrics studies) for STW rely solely on statistical data analysis methods (co-citation analysis, co-word analysis). Such methods usually work on structured databases where the units of analysis (words, keywords) are already attributed to documents by human indexers. The advent of huge amounts of unstructured textual data has made it necessary to integrate natural language processing (NLP) techniques to first extract meaningful units from texts. We propose a method for STW which is NLP-oriented. The method not only analyses texts linguistically in order to extract terms from them, but also uses linguistic relations (syntactic variations) as the basis for clustering. Terms and variation relations are formalised as weighted di-graphs which the clustering algorithm, CPCL (Classification by Preferential Clustered Link), will seek to reduce in order to produce classes. These classes ideally represent the research topics present in the corpus. The results of the classification are subjected to validation by an expert in STW.
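CPCL's exact reduction procedure is not given here; as a stand-in, the sketch below merges term variants whose variation-link weight clears a threshold (single-link style), yielding classes from a weighted term graph. All terms and weights are invented:

```python
from collections import defaultdict

# Invented weighted graph of syntactic variation links between terms;
# CPCL's actual procedure is replaced by simple single-link merging.
edges = {
    ("text classification", "text categorization"): 0.9,
    ("text categorization", "document categorization"): 0.7,
    ("topic detection", "topic tracking"): 0.8,
    ("text classification", "topic detection"): 0.1,
}
THRESHOLD = 0.5

parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

for (u, v), w in edges.items():
    if w >= THRESHOLD:          # keep only strong variation links
        parent[find(u)] = find(v)

clusters = defaultdict(list)
for term in {t for pair in edges for t in pair}:
    clusters[find(term)].append(term)
print(list(clusters.values()))  # two classes: classification terms, topic terms
```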
  7. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.02
    0.020244677 = product of:
      0.06073403 = sum of:
        0.06073403 = weight(_text_:citation in 3627) [ClassicSimilarity], result of:
          0.06073403 = score(doc=3627,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.25904894 = fieldWeight in 3627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3627)
      0.33333334 = coord(1/3)
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain analytical case analyses in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. Case three applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front, or a common teleology within the KO domain. We also have found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
  8. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.013547793 = product of:
      0.04064338 = sum of:
        0.04064338 = product of:
          0.08128676 = sum of:
            0.08128676 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.08128676 = score(doc=1046,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    5. 5.2003 14:17:22
  9. Huang, Y.-L.: A theoretic and empirical research of cluster indexing for Mandarine Chinese full text document (1998) 0.01
    0.013073232 = product of:
      0.039219696 = sum of:
        0.039219696 = product of:
          0.07843939 = sum of:
            0.07843939 = weight(_text_:reports in 513) [ClassicSimilarity], result of:
              0.07843939 = score(doc=513,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.34833482 = fieldWeight in 513, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=513)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Since most popular commercialized systems for full text retrieval are designed with full text scanning and a Boolean logic query mode, these systems use an oversimplified relationship between the indexing form and the content of a document. Reports the use of Singular Value Decomposition (SVD) to develop a Cluster Indexing Model (CIM) based on a Vector Space Model (VSM) in order to explore the index theory of cluster indexing for Chinese full text documents. From a series of experiments, it was found that the indexing performance of CIM is better than that of the traditional VSM, with almost equivalent effectiveness to the authority control of index terms
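A minimal sketch of the SVD step behind this kind of latent-space indexing, assuming an invented toy term-document matrix; the CIM's specifics (cluster construction, comparison against authority control) are not reproduced:

```python
import numpy as np

# Invented term-document matrix (rows = terms, columns = documents).
A = np.array([
    [2., 0., 1., 0.],
    [1., 1., 0., 0.],
    [0., 2., 0., 1.],
    [0., 0., 1., 2.],
])

# Truncated SVD: project documents into k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

# Cosine similarities between documents in the reduced space; clusters
# can then be read off the most similar pairs.
unit = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
print(np.round(unit @ unit.T, 2))
```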
  10. Yang, Y.; Liu, X.: ¬A re-examination of text categorization methods (1999) 0.01
    0.013073232 = product of:
      0.039219696 = sum of:
        0.039219696 = product of:
          0.07843939 = sum of:
            0.07843939 = weight(_text_:reports in 3386) [ClassicSimilarity], result of:
              0.07843939 = score(doc=3386,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.34833482 = fieldWeight in 3386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3386)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports a controlled study with statistical significance tests on five text categorization methods: Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classifier, a neural network (NNet) approach, Linear Least-squares Fit (LLSF) mapping and a Naive Bayes (NB) classifier. We focus on the robustness of these methods in dealing with a skewed category distribution, and on their performance as a function of the training-set category frequency. Our results show that SVM, kNN and LLSF significantly outperform NNet and NB when the number of positive training instances per category is small (less than ten), and that all the methods perform comparably when the categories are sufficiently common (over 300 instances).
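Three of the five methods are available off the shelf; a minimal sketch comparing SVM, kNN, and NB with scikit-learn on an invented two-category corpus (NNet and LLSF, and the paper's corpora and significance tests, are not reproduced):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Invented stand-in corpus with two categories.
train_texts = ["cheap pills online", "meeting agenda monday",
               "win money now", "quarterly report attached",
               "free money pills", "project status report"]
train_y = ["spam", "ham", "spam", "ham", "spam", "ham"]
test_texts = ["free pills", "status meeting"]

vec = TfidfVectorizer()
X, X_test = vec.fit_transform(train_texts), vec.transform(test_texts)

for clf in (LinearSVC(), KNeighborsClassifier(n_neighbors=3), MultinomialNB()):
    print(type(clf).__name__, clf.fit(X, train_y).predict(X_test))
```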
  11. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.0112898275 = product of:
      0.033869483 = sum of:
        0.033869483 = product of:
          0.067738965 = sum of:
            0.067738965 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.067738965 = score(doc=611,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 8.2009 12:54:24
  12. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    0.0112898275 = product of:
      0.033869483 = sum of:
        0.033869483 = product of:
          0.067738965 = sum of:
            0.067738965 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.067738965 = score(doc=2748,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 2.2016 18:25:22
  13. Na, J.-C.; Sui, H.; Khoo, C.; Chan, S.; Zhou, Y.: Effectiveness of simple linguistic processing in automatic sentiment classification of product reviews (2004) 0.01
    0.009338023 = product of:
      0.02801407 = sum of:
        0.02801407 = product of:
          0.05602814 = sum of:
            0.05602814 = weight(_text_:reports in 2624) [ClassicSimilarity], result of:
              0.05602814 = score(doc=2624,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.24881059 = fieldWeight in 2624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2624)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports a study in automatic sentiment classification, i.e., automatically classifying documents as expressing positive or negative sentiments/opinions. The study investigates the effectiveness of using SVM (Support Vector Machine) on various text features to classify product reviews into recommended (positive sentiment) and not recommended (negative sentiment). Compared with traditional topical classification, it was hypothesized that syntactic and semantic processing of text would be more important for sentiment classification. In the first part of this study, several different approaches were investigated: unigrams (individual words), selected words (such as verbs, adjectives, and adverbs), and words labelled with part-of-speech tags. A sample of 1,800 product reviews was retrieved from Review Centre (www.reviewcentre.com) for the study; 1,200 reviews were used for training, and 600 for testing. Using SVM, the baseline unigram approach obtained an accuracy rate of around 76%. The use of selected words obtained a marginally better result of 77.33%. Error analysis suggests various approaches for improving classification accuracy: use of negation phrases, making inferences from superficial words, and solving the problem of comments on parts. The second part of the study, which is in progress, investigates the use of negation phrases through simple linguistic processing to improve classification accuracy. This approach increased the accuracy rate to 79.33%.
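A minimal sketch of the negation-phrase step, assuming the common convention of prefixing tokens after a negator with NOT_ until the next punctuation mark, so that "not good" and "good" become distinct unigram features; the SVM training itself is unchanged:

```python
import re

NEGATORS = {"not", "no", "never", "cannot"}
PUNCT = set(".,!?;")

def mark_negation(text):
    """Prefix tokens after a negator with NOT_ until the next punctuation."""
    out, negate = [], False
    for tok in re.findall(r"[\w']+|[.,!?;]", text.lower()):
        if tok in NEGATORS:
            negate = True
        elif tok in PUNCT:
            negate = False
        elif negate:
            tok = "NOT_" + tok
        out.append(tok)
    return out

print(mark_negation("The camera is not good, I would recommend it."))
# ['the', 'camera', 'is', 'not', 'NOT_good', ',',
#  'i', 'would', 'recommend', 'it', '.']
```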
  14. Han, K.; Rezapour, R.; Nakamura, K.; Devkota, D.; Miller, D.C.; Diesner, J.: An expert-in-the-loop method for domain-specific document categorization based on small training data (2023) 0.01
    0.009338023 = product of:
      0.02801407 = sum of:
        0.02801407 = product of:
          0.05602814 = sum of:
            0.05602814 = weight(_text_:reports in 967) [ClassicSimilarity], result of:
              0.05602814 = score(doc=967,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.24881059 = fieldWeight in 967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=967)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Automated text categorization methods are of broad relevance for domain experts since they free researchers and practitioners from manual labeling, save their resources (e.g., time, labor), and enrich the data with information helpful to study substantive questions. Despite a variety of newly developed categorization methods that require substantial amounts of annotated data, little is known about how to build models when (a) labeling texts with categories requires substantial domain expertise and/or in-depth reading, (b) only a few annotated documents are available for model training, and (c) no relevant computational resources, such as pretrained models, are available. In a collaboration with environmental scientists who study the socio-ecological impact of funded biodiversity conservation projects, we develop a method that integrates deep domain expertise with computational models to automatically categorize project reports based on a small sample of 93 annotated documents. Our results suggest that domain expertise can improve automated categorization and that the magnitude of these improvements is influenced by the experts' understanding of categories and their confidence in their annotation, as well as data sparsity and additional category characteristics such as the portion of exclusive keywords that can identify a category.
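The paper's method is not reproduced here; a minimal sketch of one idea the abstract highlights, letting exclusive expert keywords decide a category outright and otherwise falling back to a model trained on a small annotated sample (all keywords, documents, and labels below are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical expert input: keywords exclusive to one category each.
EXCLUSIVE = {
    "livelihoods": {"income", "household"},
    "governance": {"policy", "ministry"},
}

# A deliberately tiny annotated sample, standing in for the 93 documents.
train = ["household income survey results", "ministry policy reform brief",
         "income generation project report", "policy dialogue workshop notes"]
labels = ["livelihoods", "governance", "livelihoods", "governance"]

vec = TfidfVectorizer().fit(train)
model = LogisticRegression().fit(vec.transform(train), labels)

def categorize(doc):
    words = set(doc.lower().split())
    for cat, keywords in EXCLUSIVE.items():   # expert rule decides first
        if words & keywords:
            return cat
    return model.predict(vec.transform([doc]))[0]  # model as fallback

print(categorize("ministry announces reform"))   # 'governance' (rule)
print(categorize("village survey workshop"))     # model fallback
```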
  15. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.007902879 = product of:
      0.023708638 = sum of:
        0.023708638 = product of:
          0.047417276 = sum of:
            0.047417276 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.047417276 = score(doc=141,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    S.1-22
  16. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.007902879 = product of:
      0.023708638 = sum of:
        0.023708638 = product of:
          0.047417276 = sum of:
            0.047417276 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.047417276 = score(doc=2338,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.1997 19:16:05
  17. Automatic classification research at OCLC (2002) 0.01
    0.007902879 = product of:
      0.023708638 = sum of:
        0.023708638 = product of:
          0.047417276 = sum of:
            0.047417276 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.047417276 = score(doc=1563,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    5. 5.2003 9:22:09
  18. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.007902879 = product of:
      0.023708638 = sum of:
        0.023708638 = product of:
          0.047417276 = sum of:
            0.047417276 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.047417276 = score(doc=1673,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 8.1996 22:08:06
  19. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.007902879 = product of:
      0.023708638 = sum of:
        0.023708638 = product of:
          0.047417276 = sum of:
            0.047417276 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.047417276 = score(doc=5273,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 16:24:52
  20. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.007902879 = product of:
      0.023708638 = sum of:
        0.023708638 = product of:
          0.047417276 = sum of:
            0.047417276 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.047417276 = score(doc=2560,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2008 18:31:54