Search (4 results, page 1 of 1)

  • × theme_ss:"Automatisches Abstracting"
  • × year_i:[2010 TO 2020}
  1. Wang, W.; Hwang, D.: Abstraction Assistant : an automatic text abstraction system (2010) 0.01
    0.012443538 = product of:
      0.024887076 = sum of:
        0.024887076 = product of:
          0.14932245 = sum of:
            0.14932245 = weight(_text_:author's in 3981) [ClassicSimilarity], result of:
              0.14932245 = score(doc=3981,freq=2.0), product of:
                0.33518893 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.049878165 = queryNorm
                0.44548744 = fieldWeight in 3981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3981)
          0.16666667 = coord(1/6)
      0.5 = coord(1/2)
    
    Abstract
    In the interest of standardization and quality assurance, it is desirable for authors and staff of access services to follow the American National Standards Institute (ANSI) guidelines when preparing abstracts. Using a statistical approach, an extraction system (the Abstraction Assistant) was developed to generate informative abstracts that meet the ANSI guidelines for structural content elements. System performance is evaluated by comparing the system-generated abstracts with the author's original abstracts and with manually enhanced system abstracts on three criteria: balance (satisfaction of the ANSI standards), fluency (text coherence), and understandability (clarity). The results suggest that the system output can be used directly without manual modification, but several issues need to be addressed in further studies to make the system a better tool.
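
    The score breakdown above is Lucene's Explanation tree for a ClassicSimilarity (TF-IDF) match on the term "author's" in document 3981. As a sanity check, the arithmetic can be reproduced in a few lines. A minimal sketch in Python, assuming the classic Lucene formulas tf = sqrt(freq) and idf = ln(maxDocs / (docFreq + 1)) + 1; the variable names are illustrative only:

      import math

      # Inputs copied from the explain tree for hit 1 (doc 3981).
      freq, doc_freq, max_docs = 2.0, 144, 44218
      query_norm, field_norm = 0.049878165, 0.046875

      tf = math.sqrt(freq)                           # 1.4142135
      idf = math.log(max_docs / (doc_freq + 1)) + 1  # 6.7201533
      query_weight = idf * query_norm                # 0.33518893
      field_weight = tf * idf * field_norm           # 0.44548744 (fieldWeight)
      weight = query_weight * field_weight           # 0.14932245
      score = weight * (1 / 6) * (1 / 2)             # coord(1/6), then coord(1/2)
      print(round(score, 9))                         # ~0.012443538, the listed score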
  2. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.01
    0.008447254 = product of:
      0.016894508 = sum of:
        0.016894508 = product of:
          0.033789016 = sum of:
            0.033789016 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.033789016 = score(doc=2640,freq=2.0), product of:
                0.17466484 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049878165 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.1.2016 12:29:41
  3. Cai, X.; Li, W.: Enhancing sentence-level clustering with integrated and interactive frameworks for theme-based summarization (2011) 0.01
    0.0059974636 = product of:
      0.011994927 = sum of:
        0.011994927 = product of:
          0.07196956 = sum of:
            0.07196956 = weight(_text_:after in 4770) [ClassicSimilarity], result of:
              0.07196956 = score(doc=4770,freq=2.0), product of:
                0.2549131 = queryWeight, product of:
                  5.1107154 = idf(docFreq=724, maxDocs=44218)
                  0.049878165 = queryNorm
                0.2823298 = fieldWeight in 4770, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.1107154 = idf(docFreq=724, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4770)
          0.16666667 = coord(1/6)
      0.5 = coord(1/2)
    
    Abstract
    Sentence clustering plays a pivotal role in theme-based summarization, which discovers topic themes, defined as clusters of highly related sentences, to avoid redundancy and cover more diverse information. Because sentences are short and the content they contain is limited, the bag-of-words cosine similarity traditionally used for document clustering is no longer suitable, and special treatment for measuring sentence similarity is necessary. In this article, we study the sentence-level clustering problem. After exploiting concept- and context-enriched sentence vector representations, we develop two co-clustering frameworks to enhance sentence-level clustering for theme-based summarization: integrated clustering and interactive clustering. Both allow words and documents to play an explicit role in sentence clustering as independent text objects, rather than treating words or concepts merely as features of sentences in a document set. In each framework, we experiment with two-level co-clustering (i.e., sentence-word or sentence-document co-clustering) and three-level co-clustering (i.e., document-sentence-word co-clustering). Compared against concept- and context-oriented sentence-representation reformation, co-clustering shows a clear advantage in both an intrinsic evaluation of clustering quality and an extrinsic summarization evaluation conducted on the Document Understanding Conferences (DUC) datasets.
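
    The co-clustering idea in this abstract can be illustrated without the paper's full frameworks: cluster the rows (sentences) and columns (words) of a sentence-term matrix jointly, so that words are grouped as independent text objects rather than serving only as sentence features. A toy sketch using scikit-learn's SpectralCoclustering (not the authors' algorithm; the sentences are invented placeholders):

      from sklearn.cluster import SpectralCoclustering
      from sklearn.feature_extraction.text import CountVectorizer

      # Toy sentences standing in for a document set.
      sentences = [
          "The summarizer selects salient sentences for each theme.",
          "Sentence clustering groups related sentences into themes.",
          "Stock prices fell sharply on Monday.",
          "Markets reacted badly to the interest rate decision.",
      ]

      # Sentence-term matrix: rows are sentences, columns are words.
      vectorizer = CountVectorizer(stop_words="english")
      X = vectorizer.fit_transform(sentences)

      # Each bicluster pairs a group of sentences with the group of
      # words that characterizes it (two-level co-clustering).
      model = SpectralCoclustering(n_clusters=2, random_state=0)
      model.fit(X)

      terms = vectorizer.get_feature_names_out()
      for k in range(2):
          sents = [i for i, c in enumerate(model.row_labels_) if c == k]
          words = [t for t, c in zip(terms, model.column_labels_) if c == k]
          print(f"theme {k}: sentences {sents}, words {sorted(words)[:5]}")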
  4. Yulianti, E.; Huspi, S.; Sanderson, M.: Tweet-biased summarization (2016) 0.01
    0.0059974636 = product of:
      0.011994927 = sum of:
        0.011994927 = product of:
          0.07196956 = sum of:
            0.07196956 = weight(_text_:after in 2926) [ClassicSimilarity], result of:
              0.07196956 = score(doc=2926,freq=2.0), product of:
                0.2549131 = queryWeight, product of:
                  5.1107154 = idf(docFreq=724, maxDocs=44218)
                  0.049878165 = queryNorm
                0.2823298 = fieldWeight in 2926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.1107154 = idf(docFreq=724, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2926)
          0.16666667 = coord(1/6)
      0.5 = coord(1/2)
    
    Abstract
    We examined whether the microblog comments people post after reading a web document can be exploited to improve the accuracy of a web document summarization system. We examined the effect of social information (i.e., tweets) on the accuracy of the generated summaries by comparing user preference for a tweet-biased summary (TBS) with that for a generic summary (GS). The result of a crowdsourcing-based evaluation shows that user preference for TBS was significantly higher than for GS. We also took random samples of the documents to assess the summaries in a traditional evaluation using ROUGE, in which TBS was, in general, also shown to be better than GS. We further analyzed the influence of the number of tweets pointing to a web document on summarization accuracy, finding a positive moderate correlation between that number and the performance of the generated TBS as measured by user preference. The results show that incorporating social information into the summary generation process can improve the accuracy of summaries. The reasons people chose one summary over another in the crowdsourcing-based evaluation are also presented in this article.
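
    The TBS pipeline itself is not described in this abstract, but the core bias can be sketched: rank a document's sentences by their TF-IDF cosine similarity to the pooled tweet text and keep the top-scoring ones. A toy sketch (the document and tweets are invented placeholders, not data from the study, and the authors' actual method may differ):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      def tweet_biased_summary(sentences, tweets, k=2):
          """Rank sentences by similarity to the pooled tweets; keep the top k."""
          vec = TfidfVectorizer(stop_words="english")
          X = vec.fit_transform(sentences + [" ".join(tweets)])
          sent_vecs, tweet_vec = X[:-1], X[-1]
          scores = cosine_similarity(sent_vecs, tweet_vec).ravel()
          top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
          return [sentences[i] for i in sorted(top)]  # restore document order

      doc = [
          "The city council approved the new transit plan.",
          "Construction is expected to begin next spring.",
          "Critics argue the budget estimates are too optimistic.",
      ]
      tweets = ["transit plan finally approved!", "budget looks way too optimistic"]
      print(tweet_biased_summary(doc, tweets))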