Search (17 results, page 1 of 1)

  • Active filter: theme_ss:"Automatisches Abstracting"
  1. Endres-Niggemeyer, B.; Jauris-Heipke, S.; Pinsky, S.M.; Ulbricht, U.: Wissen gewinnen durch Wissen : Ontologiebasierte Informationsextraktion (2006) 0.01
    0.014749947 = product of:
      0.14749947 = sum of:
        0.14749947 = weight(_text_:ontologie in 6016) [ClassicSimilarity], result of:
          0.14749947 = score(doc=6016,freq=8.0), product of:
            0.19081406 = queryWeight, product of:
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.02727315 = queryNorm
            0.7730011 = fieldWeight in 6016, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6016)
      0.1 = coord(1/10)
    
    Abstract
    The ontology-based information extraction reported on here is part of an automatic summarization system modeled on the approach of competent human summarizers. The underlying assumption is that users can more readily accept a system's results if those results were produced by procedures the users themselves employ. The first application domain is bone marrow transplantation (BMT). At the core of the SummIt-BMT system (Summarize It in Bone Marrow Transplantation) is an ontology of the domain. It is implemented as a MySQL database and supplies both human users and system components with knowledge. SummIt-BMT supports query formulation with an empirically grounded scenario interface. Retrieval results are pre-selected by text passage retrieval and then handed to cognitively grounded agents, which draw on their knowledge base / ontology to check more precisely whether the propositions from the user's question are matched. The relevant text clips from the source document are entered into the scenario form and presented with a link to their occurrence in the original. This article focuses on the ontology and its use for knowledge-based information extraction. The ontology database holds different types of knowledge in a form that allows them to be combined easily: concepts, propositions and their syntactic-semantic schemata, unifiers, paraphrases, and definitions of question scenarios. The system agents, which execute summarization strategies adapted from humans, rely on this knowledge. Shortcomings in other processing steps cause losses, but the quality of the results ultimately stands or falls with the quality of the ontology. First tests of the extraction performance are strikingly positive.
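The score breakdowns attached to each result follow Lucene's ClassicSimilarity (TF-IDF) formula: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), scaled by queryNorm, fieldNorm, and a coordination factor. A minimal sketch that reproduces the top score from the values in the explain tree above:

```python
import math

# Reproduce the ClassicSimilarity breakdown for result 1
# (term "ontologie", doc 6016); values per the explain tree.
freq       = 8.0         # termFreq of "ontologie" in the field
doc_freq   = 109         # docFreq from the idf line
max_docs   = 44218       # maxDocs from the idf line
query_norm = 0.02727315  # queryNorm
field_norm = 0.0390625   # fieldNorm (length normalization) for doc 6016
coord      = 1 / 10      # 1 of 10 query clauses matched

tf  = math.sqrt(freq)                          # 2.828427
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 6.996407

query_weight = idf * query_norm                # 0.19081406
field_weight = tf * idf * field_norm           # 0.7730011

score = coord * query_weight * field_weight    # 0.014749947
print(score)
```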
  2. Endres-Niggemeyer, B.; Ziegert, C.: SummIt-BMT : (Summarize It in BMT) in Diagnose und Therapie, Abschlussbericht (2002) 0.01
    0.0127738295 = product of:
      0.1277383 = sum of:
        0.1277383 = weight(_text_:ontologie in 4497) [ClassicSimilarity], result of:
          0.1277383 = score(doc=4497,freq=6.0), product of:
            0.19081406 = queryWeight, product of:
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.02727315 = queryNorm
            0.6694386 = fieldWeight in 4497, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4497)
      0.1 = coord(1/10)
    
    Abstract
    SummIt-BMT (Summarize It in Bone Marrow Transplantation), the target system of the project, is intended to give physicians in bone marrow transplantation fast access to information from the WWW by means of cognitively grounded summarization (Endres-Niggemeyer, 1998). The BMBF-funded subproject reported on here focuses on clinical questions. The central component of SummIt-BMT is a BMT ontology. Fig. 1 illustrates the system flow: users enter their information need into a structured scenario, drawing on concepts from the ontology. Queries to search engines are derived from the scenario. The SummIt-BMT metasearch engine triggers Google and searches Medline, the central bibliographic database of medicine. The search result is post-processed: links to full texts are followed and the full texts are retrieved. The retrieved documents are scanned with keyword retrieval for passages in which search concepts from the question / ontology cluster. These passages are proposed for summarization. The statements in them are analyzed syntactically and examined by the system agents. If a statement can be tied to the question by a semantic relation, i.e. contributes to answering it, it is included in the summary, unless other agents raise objections such as redundancy. The summarization result is integrated into the question/answer scenario. Excerpts from the source documents are presented, each with a link giving immediate access to the source. SummIt-BMT is ready for the next round of information search and summarization whenever the user wishes.
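The passage pre-selection step lends itself to a compact illustration. The following is a hypothetical sketch, not project code: sentence windows are scored by how many question/ontology concepts they contain, and dense windows are proposed for summarization. The window size and threshold are invented.

```python
import re

def dense_passages(text, concepts, window=5, min_hits=3):
    """Return windows of consecutive sentences in which query/ontology
    concepts cluster -- a simple stand-in for SummIt-BMT's keyword-based
    passage pre-selection."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    passages = []
    for i in range(0, len(sentences), window):   # non-overlapping windows
        chunk = sentences[i:i + window]
        hits = sum(c.lower() in s.lower() for s in chunk for c in concepts)
        if hits >= min_hits:
            passages.append(" ".join(chunk))
    return passages

print(dense_passages(
    "Graft rejection was reduced. The ontology covers GvHD. "
    "GvHD prophylaxis used methotrexate. Weather was fine.",
    concepts=["GvHD", "graft", "methotrexate"], window=2, min_hits=2))
```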
  3. Endres-Niggemeyer, B.: Bessere Information durch Zusammenfassen aus dem WWW (1999) 0.01
    0.011799959 = product of:
      0.11799958 = sum of:
        0.11799958 = weight(_text_:ontologie in 4496) [ClassicSimilarity], result of:
          0.11799958 = score(doc=4496,freq=2.0), product of:
            0.19081406 = queryWeight, product of:
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.02727315 = queryNorm
            0.6184009 = fieldWeight in 4496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.996407 = idf(docFreq=109, maxDocs=44218)
              0.0625 = fieldNorm(doc=4496)
      0.1 = coord(1/10)
    
    Abstract
    Using the example of bone marrow transplantation, a medical specialty, the paper shows how a large part of the effort of knowledge acquisition can be taken off users' shoulders by summarizing search results from the net with respect to their question. This makes the uptake of new knowledge possible in time-critical situations, as they occur daily in diagnosis and therapy. An overview of the state of the art in text summarization and ontology development is followed by a system sketch in which information search in the WWW is complemented by a cognitively grounded summarization system. For this purpose, a domain ontology is proposed that organizes and represents the required knowledge.
  4. Shen, D.; Yang, Q.; Chen, Z.: Noise reduction through summarization for Web-page classification (2007) 0.01
    0.00577674 = product of:
      0.0577674 = sum of:
        0.0577674 = weight(_text_:web in 953) [ClassicSimilarity], result of:
          0.0577674 = score(doc=953,freq=18.0), product of:
            0.08900621 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02727315 = queryNorm
            0.64902663 = fieldWeight in 953, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=953)
      0.1 = coord(1/10)
    
    Abstract
    Due to the large variety of noisy information embedded in Web pages, Web-page classification is much more difficult than pure-text classification. In this paper, we propose to improve Web-page classification performance by removing the noise through summarization techniques. We first give empirical evidence that ideal Web-page summaries generated by human editors can indeed improve the performance of Web-page classification algorithms. We then put forward a new Web-page summarization algorithm based on Web-page layout and evaluate it along with several other state-of-the-art text summarization algorithms on the LookSmart Web directory. Experimental results show that the classification algorithms (NB or SVM) augmented by any summarization approach achieve an improvement of more than 5.0% over pure-text-based classification algorithms. We further introduce an ensemble method to combine the different summarization algorithms. The ensemble summarization method achieves more than a 12.0% improvement over pure-text-based methods.
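A minimal sketch of the summarize-then-classify idea, assuming scikit-learn and a crude lead-sentence summarizer in place of the paper's layout-based algorithm; the toy data and all names are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def lead_summary(page_text, n_sentences=2):
    """Crude stand-in for the paper's layout-based summarizer: keep the
    first few sentences, drop the rest as presumed noise."""
    sentences = page_text.split(". ")
    return ". ".join(sentences[:n_sentences])

# Toy data; the paper evaluated on the LookSmart Web directory.
pages  = ["Python tutorial. Learn loops. Ads: buy now. Footer links.",
          "Football results. Match report. Ads: buy now. Footer links."]
labels = ["programming", "sports"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit([lead_summary(p) for p in pages], labels)  # train on summaries, not full pages
print(clf.predict([lead_summary("Java tutorial. Learn classes. Ads: buy now.")]))
```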
  5. Yulianti, E.; Huspi, S.; Sanderson, M.: Tweet-biased summarization (2016) 0.00
    0.0032093 = product of:
      0.032093 = sum of:
        0.032093 = weight(_text_:web in 2926) [ClassicSimilarity], result of:
          0.032093 = score(doc=2926,freq=8.0), product of:
            0.08900621 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02727315 = queryNorm
            0.36057037 = fieldWeight in 2926, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2926)
      0.1 = coord(1/10)
    
    Abstract
    We examined whether the microblog comments given by people after reading a web document could be exploited to improve the accuracy of a web document summarization system. We examined the effect of social information (i.e., tweets) on the accuracy of the generated summaries by comparing user preference for TBS (tweet-biased summary) with GS (generic summary). The result of a crowdsourcing-based evaluation shows that user preference for TBS was significantly higher than for GS. We also took random samples of the documents to assess the summaries in a traditional evaluation using ROUGE, in which TBS was, in general, also shown to be better than GS. We further analyzed the influence of the number of tweets pointing to a web document on summarization accuracy, finding a moderate positive correlation between the number of tweets pointing to a web document and the performance of the generated TBS as measured by user preference. The results show that incorporating social information into the summary generation process can improve the accuracy of summaries. The reasons people choose one summary over another in a crowdsourcing-based evaluation are also presented in this article.
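A simplified reading of tweet-biased summarization, as a sketch rather than the authors' model: sentences are ranked by how often their terms occur in the tweets pointing to the document.

```python
from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def tweet_biased_summary(document, tweets, n_sentences=2):
    """Score each sentence by the frequency of its terms in the tweets
    pointing to the document; a simplified stand-in for TBS."""
    tweet_terms = Counter(t for tw in tweets for t in tokenize(tw))
    sentences = re.split(r"(?<=[.!?])\s+", document)
    scored = sorted(sentences,
                    key=lambda s: sum(tweet_terms[t] for t in tokenize(s)),
                    reverse=True)
    return " ".join(scored[:n_sentences])

doc = ("The council approved the new budget. Critics called the vote rushed. "
       "Local parks get more funding.")
tweets = ["budget finally approved!", "more funding for parks is great news"]
print(tweet_biased_summary(doc, tweets))
```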
  6. Liang, S.-F.; Devlin, S.; Tait, J.: Investigating sentence weighting components for automatic summarisation (2007) 0.00
    0.0019255801 = product of:
      0.0192558 = sum of:
        0.0192558 = weight(_text_:web in 899) [ClassicSimilarity], result of:
          0.0192558 = score(doc=899,freq=2.0), product of:
            0.08900621 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02727315 = queryNorm
            0.21634221 = fieldWeight in 899, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=899)
      0.1 = coord(1/10)
    
    Abstract
    The work described here initially formed part of a triangulation exercise to establish the effectiveness of the Query Term Order algorithm. It subsequently proved to be a reliable indicator for summarising English web documents. We utilised the human summaries from the Document Understanding Conference data, and generated queries automatically for testing the QTO algorithm. Six sentence weighting schemes that made use of Query Term Frequency and QTO were constructed to produce system summaries, and this paper explains the process of combining and balancing the weighting components. The summaries produced were evaluated by the ROUGE-1 metric, and the results showed that using QTO in a weighting combination resulted in the best performance. We also found that using a combination of more weighting components always produced improved performance compared to any single weighting component.
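The abstract does not define the weighting components in detail, so the following is a hedged reconstruction, not the paper's algorithm: QTF counts query-term occurrences in a sentence, QTO rewards query terms that appear early in the query, and the 1/(rank+1) decay and mixing weight alpha are invented.

```python
import re

def sentence_weight(sentence, query_terms, alpha=0.5):
    """Combine a QTF-like count with a QTO-like order bonus; all constants
    here are illustrative assumptions."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    qtf = sum(tokens.count(t) for t in query_terms)        # term frequency
    qto = sum(1.0 / (rank + 1)                             # order bonus
              for rank, t in enumerate(query_terms) if t in tokens)
    return alpha * qtf + (1 - alpha) * qto

query = ["summarisation", "weighting", "components"]
print(sentence_weight("weighting components drive the summarisation", query))
```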
  7. Xu, D.; Cheng, G.; Qu, Y.: Preferences in Wikipedia abstracts : empirical findings and implications for automatic entity summarization (2014) 0.00
    0.0019255801 = product of:
      0.0192558 = sum of:
        0.0192558 = weight(_text_:web in 2700) [ClassicSimilarity], result of:
          0.0192558 = score(doc=2700,freq=2.0), product of:
            0.08900621 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02727315 = queryNorm
            0.21634221 = fieldWeight in 2700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2700)
      0.1 = coord(1/10)
    
    Abstract
    The volume of entity-centric structured data grows rapidly on the Web. The description of an entity, composed of property-value pairs (a.k.a. features), has become very large in many applications. To avoid information overload, efforts have been made to automatically select a limited number of features to be shown to the user based on certain criteria, which is called automatic entity summarization. However, to the best of our knowledge, there is a lack of extensive studies on how humans rank and select features in practice, which can provide empirical support and inspire future research. In this article, we present a large-scale statistical analysis of the descriptions of entities provided by DBpedia and the abstracts of their corresponding Wikipedia articles, to empirically study, along several different dimensions, which kinds of features are preferable when humans summarize. Implications for automatic entity summarization are drawn from the findings.
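A sketch of the selection step that entity summarization automates, under one possible criterion (rank features by how common their property is across the dataset); the article's point is precisely that such criteria should be checked against observed human preferences. All data below is illustrative.

```python
from collections import Counter

def summarize_entity(features, property_counts, k=5):
    """Pick k property-value pairs to show, ranking by how frequent the
    property is across the dataset -- one stand-in criterion."""
    ranked = sorted(features, key=lambda pv: -property_counts[pv[0]])
    return ranked[:k]

features = [("birthPlace", "Ulm"), ("knownFor", "general relativity"),
            ("spouse", "Mileva Maric"), ("field", "physics")]
counts = Counter({"birthPlace": 900, "field": 700, "knownFor": 400, "spouse": 300})
print(summarize_entity(features, counts, k=2))
```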
  8. Ou, S.; Khoo, C.S.G.; Goh, D.H.: Multi-document summarization of news articles using an event-based framework (2006) 0.00
    0.00160465 = product of:
      0.0160465 = sum of:
        0.0160465 = weight(_text_:web in 657) [ClassicSimilarity], result of:
          0.0160465 = score(doc=657,freq=2.0), product of:
            0.08900621 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02727315 = queryNorm
            0.18028519 = fieldWeight in 657, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=657)
      0.1 = coord(1/10)
    
    Abstract
    Purpose - The purpose of this research is to develop a method for automatic construction of multi-document summaries of sets of news articles that might be retrieved by a web search engine in response to a user query.
    Design/methodology/approach - Based on cross-document discourse analysis, an event-based framework is proposed for integrating and organizing information extracted from different news articles. It has a hierarchical structure in which the summarized information is presented at the top level and more detailed information is given at the lower levels. A tree-view interface was implemented for displaying a multi-document summary based on the framework. A preliminary user evaluation was performed by comparing the framework-based summaries against sentence-based summaries.
    Findings - In a small evaluation, all the human subjects preferred the framework-based summaries to the sentence-based summaries. This indicates that the event-based framework is an effective way to summarize a set of news articles reporting an event or a series of relevant events.
    Research limitations/implications - Limited to event-based news articles; not applicable to news critiques and other kinds of news articles. A summarization system based on the event-based framework is being implemented.
    Practical implications - Multi-document summarization of news articles can adopt the proposed event-based framework.
    Originality/value - An event-based framework for summarizing sets of news articles was developed and evaluated, using a tree-view interface for displaying such summaries.
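The hierarchical presentation suggests a simple tree representation; the sketch below is inferred from the abstract, not taken from the paper's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventNode:
    """One node of an event-based summary tree: a summary statement at the
    top, finer-grained details as children, each tracking source articles."""
    text: str
    sources: List[str] = field(default_factory=list)
    details: List["EventNode"] = field(default_factory=list)

root = EventNode("Storm hits coastal city", sources=["articleA", "articleB"])
root.details.append(EventNode("Two districts evacuated", sources=["articleA"]))
root.details.append(EventNode("Power restored after 12 hours", sources=["articleB"]))
```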
  9. Ou, S.; Khoo, C.S.G.; Goh, D.H.: Automatic multidocument summarization of research abstracts : design and user evaluation (2007) 0.00
    0.00160465 = product of:
      0.0160465 = sum of:
        0.0160465 = weight(_text_:web in 522) [ClassicSimilarity], result of:
          0.0160465 = score(doc=522,freq=2.0), product of:
            0.08900621 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02727315 = queryNorm
            0.18028519 = fieldWeight in 522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=522)
      0.1 = coord(1/10)
    
    Abstract
    The purpose of this study was to develop a method for automatic construction of multidocument summaries of sets of research abstracts that may be retrieved by a digital library or search engine in response to a user query. Sociology dissertation abstracts were selected as the sample domain in this study. A variable-based framework was proposed for integrating and organizing research concepts and relationships as well as research methods and contextual relations extracted from different dissertation abstracts. Based on the framework, a new summarization method was developed, which parses the discourse structure of abstracts, extracts research concepts and relationships, integrates the information across different abstracts, and organizes and presents them in a Web-based interface. The focus of this article is on the user evaluation that was performed to assess the overall quality and usefulness of the summaries. Two types of variable-based summaries generated using the summarization method (with or without the use of a taxonomy) were compared against a sentence-based summary that lists only the research-objective sentences extracted from each abstract, and against another sentence-based summary generated using the MEAD system that extracts important sentences. The evaluation results indicate that the majority of sociological researchers (70%) and general users (64%) preferred the variable-based summaries generated with the use of the taxonomy.
  10. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.00
    9.853694E-4 = product of:
      0.009853695 = sum of:
        0.009853695 = product of:
          0.029561082 = sum of:
            0.029561082 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.029561082 = score(doc=6599,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    26. 2.1997 10:22:43
  11. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.00
    9.853694E-4 = product of:
      0.009853695 = sum of:
        0.009853695 = product of:
          0.029561082 = sum of:
            0.029561082 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
              0.029561082 = score(doc=6751,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.30952093 = fieldWeight in 6751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6751)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    6. 3.1997 16:22:15
  12. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.00
    9.853694E-4 = product of:
      0.009853695 = sum of:
        0.009853695 = product of:
          0.029561082 = sum of:
            0.029561082 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.029561082 = score(doc=6974,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  13. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.00
    7.39027E-4 = product of:
      0.00739027 = sum of:
        0.00739027 = product of:
          0.02217081 = sum of:
            0.02217081 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.02217081 = score(doc=948,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
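SumBasic, the generic extractive baseline that this paper extends, is compact enough to sketch. A minimal variant: sentences are scored by the mean probability of their words, and after each pick the probabilities of covered words are squared so later picks favor new content. (The original additionally requires the picked sentence to contain the currently most probable word.)

```python
from collections import Counter
import re

def sumbasic(sentences, n=3):
    """Minimal SumBasic variant: mean-word-probability scoring with
    p(w) -> p(w)^2 squashing of covered words to curb redundancy."""
    words = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    counts = Counter(w for ws in words for w in ws)
    total = sum(counts.values())
    p = {w: c / total for w, c in counts.items()}
    chosen, candidates = [], set(range(len(sentences)))
    for _ in range(min(n, len(sentences))):
        best = max(candidates,
                   key=lambda i: sum(p[w] for w in words[i]) / (len(words[i]) or 1))
        chosen.append(best)
        candidates.discard(best)
        for w in set(words[best]):      # down-weight already covered words
            p[w] **= 2
    return [sentences[i] for i in sorted(chosen)]

print(sumbasic(["The budget passed today.", "The budget vote was close.",
                "Parks receive new funding."], n=2))
```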
  14. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.00
    6.1585585E-4 = product of:
      0.0061585587 = sum of:
        0.0061585587 = product of:
          0.018475676 = sum of:
            0.018475676 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
              0.018475676 = score(doc=5290,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.19345059 = fieldWeight in 5290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5290)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    22. 7.2006 17:25:48
  15. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.00
    6.1585585E-4 = product of:
      0.0061585587 = sum of:
        0.0061585587 = product of:
          0.018475676 = sum of:
            0.018475676 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.018475676 = score(doc=2640,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    22. 1.2016 12:29:41
  16. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.00
    6.1585585E-4 = product of:
      0.0061585587 = sum of:
        0.0061585587 = product of:
          0.018475676 = sum of:
            0.018475676 = weight(_text_:22 in 889) [ClassicSimilarity], result of:
              0.018475676 = score(doc=889,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.19345059 = fieldWeight in 889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=889)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    22. 1.2023 18:57:12
  17. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.00
    6.1585585E-4 = product of:
      0.0061585587 = sum of:
        0.0061585587 = product of:
          0.018475676 = sum of:
            0.018475676 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.018475676 = score(doc=1012,freq=2.0), product of:
                0.09550592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02727315 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    22. 6.2023 14:55:20