Search (24 results, page 1 of 2)

  • theme_ss:"Automatisches Abstracting"
  1. Endres-Niggemeyer, B.: An empirical process model of abstracting (1992) 0.02
    0.022878483 = product of:
      0.18302786 = sum of:
        0.18302786 = weight(_text_:maschine in 8834) [ClassicSimilarity], result of:
          0.18302786 = score(doc=8834,freq=2.0), product of:
            0.21420717 = queryWeight, product of:
              6.444614 = idf(docFreq=190, maxDocs=44218)
              0.03323817 = queryNorm
            0.8544432 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.444614 = idf(docFreq=190, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
      0.125 = coord(1/8)
    
    Source
    Mensch und Maschine: Informationelle Schnittstellen der Kommunikation. Proceedings of the 3rd International Symposium on Information Science (ISI'92), Saarbrücken, 5-7 November 1992. Ed.: H.H. Zimmermann, H.-D. Luckhardt and A. Schulz
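  The indented breakdown shown under each hit is Lucene's ClassicSimilarity explain output. The following is a minimal Python sketch of how its leaf values combine, with the constants copied from the explain tree of hit 1; variable names mirror the explain labels, and printed values may differ from Lucene's in the last decimal places because Lucene computes in single precision:

      import math

      # Leaf values copied from the explain output for hit 1 (doc 8834).
      freq       = 2.0         # termFreq of "maschine"
      idf        = 6.444614    # idf(docFreq=190, maxDocs=44218)
      query_norm = 0.03323817  # queryNorm
      field_norm = 0.09375     # fieldNorm(doc=8834)

      tf           = math.sqrt(freq)              # 1.4142135 = tf(freq=2.0)
      query_weight = idf * query_norm             # 0.21420717 = queryWeight
      field_weight = tf * idf * field_norm        # 0.8544432  = fieldWeight
      score        = query_weight * field_weight  # 0.18302786 = weight(_text_:maschine)

      print(score * (1 / 8))                      # coord(1/8) -> ~0.022878483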
  2. Wang, S.; Koopman, R.: Embed first, then predict (2019) 0.00
    0.004232504 = product of:
      0.016930016 = sum of:
        0.009356364 = product of:
          0.046781816 = sum of:
            0.046781816 = weight(_text_:problem in 5400) [ClassicSimilarity], result of:
              0.046781816 = score(doc=5400,freq=4.0), product of:
                0.1410789 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03323817 = queryNorm
                0.33160037 = fieldWeight in 5400, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5400)
          0.2 = coord(1/5)
        0.007573652 = product of:
          0.022720955 = sum of:
            0.022720955 = weight(_text_:29 in 5400) [ClassicSimilarity], result of:
              0.022720955 = score(doc=5400,freq=2.0), product of:
                0.116921484 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03323817 = queryNorm
                0.19432661 = fieldWeight in 5400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5400)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    Automatic subject prediction is a desirable feature for modern digital library systems, as manual indexing can no longer cope with the rapid growth of digital collections. It is also desirable to be able to identify a small set of entities (e.g., authors, citations, bibliographic records) which are most relevant to a query. This gets more difficult when the amount of data increases dramatically. Data sparsity and model scalability are the major challenges to solving this type of extreme multilabel classification problem automatically. In this paper, we propose to address this problem in two steps: we first embed different types of entities into the same semantic space, where similarity can be computed easily; second, we propose a novel non-parametric method to identify the most relevant entities in addition to direct semantic similarities. We show how effectively this approach predicts even very specialised subjects, which are associated with few documents in the training set and are more problematic for a classifier.
    Date
    29. 9.2019 12:18:42
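  Hit 2 matches two of the query's eight top-level clauses, so each clause score is first scaled by an inner coord (the fraction of its own sub-clauses that matched), the clause scores are summed, and the sum is scaled by the outer coord(2/8). A sketch with the values from the tree above:

      # Clause scores from hit 2's explain tree, each scaled by its inner coord.
      problem_clause = 0.046781816 * 0.2      # coord(1/5) -> 0.009356364
      term_29_clause = 0.022720955 * (1 / 3)  # coord(1/3) -> 0.007573652

      # Outer coord(2/8): 2 of the 8 top-level query clauses matched.
      print((problem_clause + term_29_clause) * (2 / 8))  # -> ~0.004232504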
  3. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.00
    0.0037697933 = product of:
      0.030158347 = sum of:
        0.030158347 = product of:
          0.04523752 = sum of:
            0.022720955 = weight(_text_:29 in 2640) [ClassicSimilarity], result of:
              0.022720955 = score(doc=2640,freq=2.0), product of:
                0.116921484 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03323817 = queryNorm
                0.19432661 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
            0.022516565 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.022516565 = score(doc=2640,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.6666667 = coord(2/3)
      0.125 = coord(1/8)
    
    Date
    22. 1.2016 12:29:41
  4. Pinto, M.: Engineering the production of meta-information : the abstracting concern (2003) 0.00
    0.0037487666 = product of:
      0.029990133 = sum of:
        0.029990133 = product of:
          0.089970395 = sum of:
            0.089970395 = weight(_text_:29 in 4667) [ClassicSimilarity], result of:
              0.089970395 = score(doc=4667,freq=4.0), product of:
                0.116921484 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03323817 = queryNorm
                0.7694941 = fieldWeight in 4667, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4667)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    27.11.2005 18:29:55
    Source
    Journal of information science. 29(2003) no.5, S.405-418
  5. Salton, G.; Allan, J.; Buckley, C.; Singhal, A.: Automatic analysis, theme generation, and summarization of machine readable texts (1994) 0.00
    0.001893413 = product of:
      0.015147304 = sum of:
        0.015147304 = product of:
          0.04544191 = sum of:
            0.04544191 = weight(_text_:29 in 1949) [ClassicSimilarity], result of:
              0.04544191 = score(doc=1949,freq=2.0), product of:
                0.116921484 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03323817 = queryNorm
                0.38865322 = fieldWeight in 1949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1949)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    16. 8.1998 12:30:29
  6. Ercan, G.; Cicekli, I.: Using lexical chains for keyword extraction (2007) 0.00
    0.0016373637 = product of:
      0.0130989095 = sum of:
        0.0130989095 = product of:
          0.065494545 = sum of:
            0.065494545 = weight(_text_:problem in 951) [ClassicSimilarity], result of:
              0.065494545 = score(doc=951,freq=4.0), product of:
                0.1410789 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03323817 = queryNorm
                0.46424055 = fieldWeight in 951, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=951)
          0.2 = coord(1/5)
      0.125 = coord(1/8)
    
    Abstract
    Keywords can be considered condensed versions of documents and short forms of their summaries. In this paper, the problem of automatically extracting keywords from documents is treated as a supervised learning task. A lexical chain holds a set of semantically related words of a text, and it can be said that a lexical chain represents the semantic content of a portion of the text. Although lexical chains have been extensively used in text summarization, their use for the keyword extraction problem has not been fully investigated. In this paper, a keyword extraction technique that uses lexical chains is described, and encouraging results are obtained.
  7. Craven, T.C.: A phrase flipper for the assistance of writers of abstracts and other text (1995) 0.00
    0.0015147304 = product of:
      0.012117843 = sum of:
        0.012117843 = product of:
          0.03635353 = sum of:
            0.03635353 = weight(_text_:29 in 4897) [ClassicSimilarity], result of:
              0.03635353 = score(doc=4897,freq=2.0), product of:
                0.116921484 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03323817 = queryNorm
                0.31092256 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4897)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    17. 8.1996 10:29:59
  8. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.00
    0.0015011043 = product of:
      0.012008835 = sum of:
        0.012008835 = product of:
          0.036026504 = sum of:
            0.036026504 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.036026504 = score(doc=6599,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    26. 2.1997 10:22:43
  9. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.00
    0.0015011043 = product of:
      0.012008835 = sum of:
        0.012008835 = product of:
          0.036026504 = sum of:
            0.036026504 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
              0.036026504 = score(doc=6751,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.30952093 = fieldWeight in 6751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6751)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    6. 3.1997 16:22:15
  10. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.00
    0.0015011043 = product of:
      0.012008835 = sum of:
        0.012008835 = product of:
          0.036026504 = sum of:
            0.036026504 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.036026504 = score(doc=6974,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  11. Hirao, T.; Okumura, M.; Yasuda, N.; Isozaki, H.: Supervised automatic evaluation for summarization with voted regression model (2007) 0.00
    0.0014034546 = product of:
      0.011227637 = sum of:
        0.011227637 = product of:
          0.056138184 = sum of:
            0.056138184 = weight(_text_:problem in 942) [ClassicSimilarity], result of:
              0.056138184 = score(doc=942,freq=4.0), product of:
                0.1410789 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03323817 = queryNorm
                0.39792046 = fieldWeight in 942, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=942)
          0.2 = coord(1/5)
      0.125 = coord(1/8)
    
    Abstract
    High-quality evaluation of generated summaries is needed if we are to improve automatic summarization systems. Although human evaluation provides better results than automatic evaluation methods, its cost is huge and its results are difficult to reproduce. Therefore, we need an automatic method that simulates human evaluation if we are to improve our summarization system efficiently. Although automatic evaluation methods have been proposed, they are unreliable when used for individual summaries. To solve this problem, we propose a supervised automatic evaluation method based on a new regression model called the voted regression model (VRM). VRM has two characteristics: (1) model selection based on 'corrected AIC' to avoid multicollinearity, and (2) voting by the selected models to alleviate the problem of overfitting. Evaluation results obtained for TSC3 and DUC2004 show that our method achieved error reductions of about 17-51% compared with conventional automatic evaluation methods. Moreover, our method obtained the highest correlation coefficients in several different experiments.
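  The "voting" step of VRM can be illustrated by averaging the predictions of several candidate regression models rather than trusting any single fit. The sketch below uses invented toy data and arbitrary feature subsets; the corrected-AIC model selection and the exact VRM formulation are the paper's and are not reproduced here:

      import numpy as np

      # Toy data: 50 summaries scored by 4 automatic metrics (invented).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 4))
      y = X @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(scale=0.1, size=50)

      # Candidate models: least-squares fits on different feature subsets
      # (standing in for the paper's AIC-selected models).
      feature_subsets = [[0, 1], [0, 2], [0, 1, 2, 3]]
      predictions = []
      for subset in feature_subsets:
          Xs = X[:, subset]
          coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
          predictions.append(Xs @ coef)

      # The "vote": average the selected models' predictions to curb overfitting.
      voted = np.mean(predictions, axis=0)
      print(voted[:3])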
  12. Uyttendaele, C.; Moens, M.-F.; Dumortier, J.: SALOMON: automatic abstracting of legal cases for effective access to court decisions (1998) 0.00
    0.0013253891 = product of:
      0.010603113 = sum of:
        0.010603113 = product of:
          0.031809337 = sum of:
            0.031809337 = weight(_text_:29 in 495) [ClassicSimilarity], result of:
              0.031809337 = score(doc=495,freq=2.0), product of:
                0.116921484 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03323817 = queryNorm
                0.27205724 = fieldWeight in 495, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=495)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    17. 7.1996 14:16:29
  13. Ruda, S.: Abstracting: eine Auswahlbibliographie (1992) 0.00
    0.0011577909 = product of:
      0.009262327 = sum of:
        0.009262327 = product of:
          0.046311636 = sum of:
            0.046311636 = weight(_text_:problem in 6603) [ClassicSimilarity], result of:
              0.046311636 = score(doc=6603,freq=2.0), product of:
                0.1410789 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03323817 = queryNorm
                0.3282676 = fieldWeight in 6603, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6603)
          0.2 = coord(1/5)
      0.125 = coord(1/8)
    
    Abstract
    This selective bibliography is divided into nine subject areas. The first section contains literature that discusses abstracts and abstracting methods in general and gives an overview of the state of research. The next section reviews papers that describe the historical development of abstracting. The third part lists abstracting guidelines issued by various institutions. Lexical, syntactic, and semantic text condensation methods are the topic of the works presented in section 4. Text structures of abstracts are considered in section 5, and the works in the next subject area deal with the problem of writing abstracts. The seventh section lists so-called 'machine' and machine-assisted abstracting methods. The following sections evaluate 'machine' and machine-assisted abstracting procedures, abstracts in comparison with their source texts, and abstracts in general. Bibliographies conclude the work.
  14. Zajic, D.; Dorr, B.J.; Lin, J.; Schwartz, R.: Multi-candidate reduction : sentence compression as a tool for document summarization tasks (2007) 0.00
    0.0011577909 = product of:
      0.009262327 = sum of:
        0.009262327 = product of:
          0.046311636 = sum of:
            0.046311636 = weight(_text_:problem in 944) [ClassicSimilarity], result of:
              0.046311636 = score(doc=944,freq=2.0), product of:
                0.1410789 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03323817 = queryNorm
                0.3282676 = fieldWeight in 944, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=944)
          0.2 = coord(1/5)
      0.125 = coord(1/8)
    
    Abstract
    This article examines the application of two single-document sentence compression techniques to the problem of multi-document summarization: a "parse-and-trim" approach and a statistical noisy-channel approach. We introduce the multi-candidate reduction (MCR) framework for multi-document summarization, in which many compressed candidates are generated for each source sentence. These candidates are then selected for inclusion in the final summary based on a combination of static and dynamic features. Evaluations demonstrate that sentence compression is a valuable component of a larger multi-document summarization framework.
  15. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.00
    0.0011258282 = product of:
      0.009006626 = sum of:
        0.009006626 = product of:
          0.027019877 = sum of:
            0.027019877 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.027019877 = score(doc=948,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
  16. Ye, S.; Chua, T.-S.; Kan, M.-Y.; Qiu, L.: Document concept lattice for text understanding and summarization (2007) 0.00
    9.923922E-4 = product of:
      0.0079391375 = sum of:
        0.0079391375 = product of:
          0.039695688 = sum of:
            0.039695688 = weight(_text_:problem in 941) [ClassicSimilarity], result of:
              0.039695688 = score(doc=941,freq=2.0), product of:
                0.1410789 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03323817 = queryNorm
                0.28137225 = fieldWeight in 941, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=941)
          0.2 = coord(1/5)
      0.125 = coord(1/8)
    
    Abstract
    We argue that the quality of a summary can be evaluated based on how many concepts in the original document(s) can be preserved after summarization. Here, a concept refers to an abstract or concrete entity or its action, often expressed by diverse terms in the text. Summary generation can thus be considered an optimization problem of selecting a set of sentences with minimal answer loss. In this paper, we propose a document concept lattice that indexes the hierarchy of local topics tied to a set of frequent concepts and the corresponding sentences containing these topics. The local topics specify the promising sub-spaces related to the selected concepts and sentences. Based on this lattice, the summary is an optimized selection of a set of distinct and salient local topics that leads to maximal coverage of concepts with the given number of sentences. Our summarizer based on the concept lattice has demonstrated competitive performance in the Document Understanding Conference 2005 and 2006 evaluations as well as in follow-on tests.
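  The coverage framing of this abstract (selecting sentences so that as many distinct concepts as possible are preserved) is a set-cover-style optimization that can be approximated greedily. The sketch below uses invented toy concept sets; the document concept lattice itself is the paper's contribution and is not reproduced here:

      # Greedy sentence selection for maximal concept coverage (toy data).
      sentences = {
          "s1": {"summarization", "concept", "coverage"},
          "s2": {"lattice", "topic"},
          "s3": {"concept", "lattice", "sentence"},
      }

      budget, covered, summary = 2, set(), []
      for _ in range(budget):
          # Pick the sentence that adds the most not-yet-covered concepts.
          best = max(sentences, key=lambda s: len(sentences[s] - covered))
          summary.append(best)
          covered |= sentences.pop(best)

      print(summary, covered)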
  17. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008) 0.00
    9.467065E-4 = product of:
      0.007573652 = sum of:
        0.007573652 = product of:
          0.022720955 = sum of:
            0.022720955 = weight(_text_:29 in 2054) [ClassicSimilarity], result of:
              0.022720955 = score(doc=2054,freq=2.0), product of:
                0.116921484 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03323817 = queryNorm
                0.19432661 = fieldWeight in 2054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2054)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    29. 7.2008 19:35:12
  18. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.00
    9.3819026E-4 = product of:
      0.007505522 = sum of:
        0.007505522 = product of:
          0.022516565 = sum of:
            0.022516565 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
              0.022516565 = score(doc=5290,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.19345059 = fieldWeight in 5290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5290)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 7.2006 17:25:48
  19. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.00
    9.3819026E-4 = product of:
      0.007505522 = sum of:
        0.007505522 = product of:
          0.022516565 = sum of:
            0.022516565 = weight(_text_:22 in 889) [ClassicSimilarity], result of:
              0.022516565 = score(doc=889,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.19345059 = fieldWeight in 889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=889)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 1.2023 18:57:12
  20. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.00
    9.3819026E-4 = product of:
      0.007505522 = sum of:
        0.007505522 = product of:
          0.022516565 = sum of:
            0.022516565 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.022516565 = score(doc=1012,freq=2.0), product of:
                0.1163944 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03323817 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 6.2023 14:55:20