Search (54 results, page 3 of 3)

  • theme_ss:"Automatisches Abstracting"
  1. Moens, M.F.: Automatic indexing and abstracting of document texts (2000) 0.00
    3.415876E-4 = product of:
      0.007856515 = sum of:
        0.007856515 = product of:
          0.01571303 = sum of:
            0.01571303 = weight(_text_:1 in 6892) [ClassicSimilarity], result of:
              0.01571303 = score(doc=6892,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.27140775 = fieldWeight in 6892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6892)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    ISBN
    0-7923-7793-1
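
Each hit on this page is shown with the Lucene explain() tree behind its ClassicSimilarity (TF-IDF) score. The shared queryNorm (0.023567878) and the coord(1/23) factor show that every tree comes from the same 23-clause query, of which each document here matches a single clause. As a cross-check, the Python sketch below re-computes entry 1's score from the quantities in its tree; the function and variable names are ours, but the formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, then the coord multipliers) are the standard ClassicSimilarity definitions.

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
    tf = math.sqrt(freq)                    # 1.4142135 for freq=2.0
    idf = classic_idf(doc_freq, max_docs)   # ~2.4565027 for (10304, 44218)
    query_weight = idf * query_norm         # ~0.057894554
    field_weight = tf * idf * field_norm    # ~0.27140775
    score = query_weight * field_weight     # ~0.01571303
    for c in coords:                        # coord(1/2), then coord(1/23)
        score *= c
    return score

# Entry 1, term "_text_:1": freq=2.0, docFreq=10304, maxDocs=44218,
# queryNorm=0.023567878, fieldNorm=0.078125
print(classic_score(2.0, 10304, 44218, 0.023567878, 0.078125, [0.5, 1 / 23]))
# -> ~3.415876e-04, the score displayed above
```

Substituting the freq, docFreq, fieldNorm, and coord values from any other tree on this page reproduces that entry's score in the same way.
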
  2. Haag, M.: Automatic text summarization : Evaluation des Copernic Summarizer und mögliche Einsatzfelder in der Fachinformation der DaimlerChrysler AG (2002) 0.00
    3.3368162E-4 = product of:
      0.007674677 = sum of:
        0.007674677 = weight(_text_:und in 649) [ClassicSimilarity], result of:
          0.007674677 = score(doc=649,freq=2.0), product of:
            0.052235067 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.023567878 = queryNorm
            0.14692576 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
      0.04347826 = coord(1/23)
    
  3. Endres-Niggemeyer, B.: Summarizing information (1998) 0.00
    2.8984674E-4 = product of:
      0.0066664745 = sum of:
        0.0066664745 = product of:
          0.013332949 = sum of:
            0.013332949 = weight(_text_:1 in 688) [ClassicSimilarity], result of:
              0.013332949 = score(doc=688,freq=4.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.23029712 = fieldWeight in 688, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.046875 = fieldNorm(doc=688)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Pages
    VII, 375 p. + 1 CD-ROM
    Signature
    79 BCA 129-1 (75 BCA 129-2)
  4. Liang, S.-F.; Devlin, S.; Tait, J.: Investigating sentence weighting components for automatic summarisation (2007) 0.00
    2.8984674E-4 = product of:
      0.0066664745 = sum of:
        0.0066664745 = product of:
          0.013332949 = sum of:
            0.013332949 = weight(_text_:1 in 899) [ClassicSimilarity], result of:
              0.013332949 = score(doc=899,freq=4.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.23029712 = fieldWeight in 899, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.046875 = fieldNorm(doc=899)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Abstract
    The work described here initially formed part of a triangulation exercise to establish the effectiveness of the Query Term Order (QTO) algorithm, which subsequently proved to be a reliable indicator for summarising English web documents. We utilised the human summaries from the Document Understanding Conference data and generated queries automatically for testing the QTO algorithm. Six sentence weighting schemes that made use of Query Term Frequency and QTO were constructed to produce system summaries, and this paper explains the process of combining and balancing the weighting components. The summaries produced were evaluated with the ROUGE-1 metric, and the results showed that using QTO in a weighting combination gave the best performance. We also found that combining several weighting components always outperformed any single weighting component. (A sketch of such a weighting combination follows this entry.)
    Source
    Information processing and management. 43(2007) no.1, S.146-153
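
Entry 4 describes producing summaries by scoring sentences with a balanced linear combination of weighting components such as Query Term Frequency (QTF) and Query Term Order (QTO). The sketch below only illustrates that combination idea: the component definitions, the 0.5/0.5 default weights, and the toy data are our assumptions, not the authors' published formulation.

```python
def query_term_frequency(sentence, query):
    # Share of sentence tokens that are query terms (illustrative definition).
    hits = sum(1 for tok in sentence if tok in query)
    return hits / max(len(sentence), 1)

def query_term_order(sentence, query):
    # Weight query terms by their position in the query, assuming earlier
    # terms matter more (a simplified stand-in for the paper's QTO).
    n = len(query)
    return sum((n - i) / n for i, term in enumerate(query) if term in sentence)

def sentence_score(sentence, query, w_qtf=0.5, w_qto=0.5):
    # Balanced linear combination of the two weighting components.
    return (w_qtf * query_term_frequency(sentence, query)
            + w_qto * query_term_order(sentence, query))

def summarize(sentences, query, k=3):
    # Rank sentences by combined weight and keep the top k as the summary.
    return sorted(sentences, key=lambda s: sentence_score(s, query),
                  reverse=True)[:k]

query = ["summarisation", "web", "documents"]
doc = [["automatic", "summarisation", "of", "web", "documents"],
       ["the", "queries", "were", "generated", "automatically"]]
print(summarize(doc, query, k=1))
```
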
  5. Kannan, R.; Ghinea, G.; Swaminathan, S.: What do you wish to see? : A summarization system for movies based on user preferences (2015) 0.00
    2.8018325E-4 = product of:
      0.0064442144 = sum of:
        0.0064442144 = product of:
          0.012888429 = sum of:
            0.012888429 = weight(_text_:29 in 2683) [ClassicSimilarity], result of:
              0.012888429 = score(doc=2683,freq=2.0), product of:
                0.08290443 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.023567878 = queryNorm
                0.15546128 = fieldWeight in 2683, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2683)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Date
    25.1.2016 18:45:29
  6. Summarising software for publishing (1996) 0.00
    2.7327013E-4 = product of:
      0.0062852125 = sum of:
        0.0062852125 = product of:
          0.012570425 = sum of:
            0.012570425 = weight(_text_:1 in 5121) [ClassicSimilarity], result of:
              0.012570425 = score(doc=5121,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.2171262 = fieldWeight in 5121, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5121)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Source
    Digital publisher. 1(1996) no.4, S.15-20
  7. Ahmad, K.: Text summarisation : the role of lexical cohesion analysis (1995) 0.00
    2.7327013E-4 = product of:
      0.0062852125 = sum of:
        0.0062852125 = product of:
          0.012570425 = sum of:
            0.012570425 = weight(_text_:1 in 5795) [ClassicSimilarity], result of:
              0.012570425 = score(doc=5795,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.2171262 = fieldWeight in 5795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5795)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Source
    New review of document and text management. 1995, no.1, S.321-335
  8. Harabagiu, S.; Hickl, A.; Lacatusu, F.: Satisfying information needs with multi-document summaries (2007) 0.00
    2.7327013E-4 = product of:
      0.0062852125 = sum of:
        0.0062852125 = product of:
          0.012570425 = sum of:
            0.012570425 = weight(_text_:1 in 939) [ClassicSimilarity], result of:
              0.012570425 = score(doc=939,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.2171262 = fieldWeight in 939, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0625 = fieldNorm(doc=939)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Abstract
    Generating summaries that meet the information needs of a user relies on (1) several forms of question decomposition; (2) different summarization approaches; and (3) textual inference for combining the summarization strategies. This novel framework for summarization has the advantage of producing highly responsive summaries, as indicated by the evaluation results.
  9. Yeh, J.-Y.; Ke, H.-R.; Yang, W.-P.; Meng, I.-H.: Text summarization using a trainable summarizer and latent semantic analysis (2005) 0.00
    2.4153895E-4 = product of:
      0.0055553955 = sum of:
        0.0055553955 = product of:
          0.011110791 = sum of:
            0.011110791 = weight(_text_:1 in 1003) [ClassicSimilarity], result of:
              0.011110791 = score(doc=1003,freq=4.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.19191428 = fieldWeight in 1003, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1003)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Abstract
    This paper proposes two approaches to text summarization: a modified corpus-based approach (MCBA) and an LSA-based text relationship map approach (LSA + T.R.M.). The first is a trainable summarizer that takes several features into account, including position, positive keyword, negative keyword, centrality, and resemblance to the title, to generate summaries. Two new ideas are exploited: (1) sentence positions are ranked to emphasize the significance of different positions, and (2) the score function is trained by a genetic algorithm (GA) to obtain a suitable combination of feature weights. The second approach uses latent semantic analysis (LSA) to derive the semantic matrix of a document or corpus and uses semantic sentence representations to construct a semantic text relationship map. We evaluate LSA + T.R.M. both on single documents and at the corpus level to investigate the competence of LSA in text summarization. The two approaches were measured at several compression rates on a corpus of 100 political articles. At a compression rate of 30%, average f-measures of 49% for MCBA, 52% for MCBA + GA, and 44% and 40% for LSA + T.R.M. at the single-document and corpus levels, respectively, were achieved. (A sketch of the generic LSA step follows this entry.)
    Source
    Information processing and management. 41(2005) no.1, S.75-95
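
The second approach in entry 9 derives a semantic matrix with latent semantic analysis and ranks sentences through their latent representations. The sketch below shows only the generic LSA step, a truncated SVD of a term-by-sentence matrix with singular-value-weighted sentence scores; the paper's semantic text relationship map is not reproduced, and the toy matrix is invented.

```python
import numpy as np

def lsa_sentence_scores(term_sentence: np.ndarray, k: int = 2) -> np.ndarray:
    # LSA step: rank-k SVD of the term-by-sentence matrix.
    u, s, vt = np.linalg.svd(term_sentence, full_matrices=False)
    # Columns of vt[:k] are the sentences in latent space; score each
    # sentence by the length of its singular-value-weighted latent vector.
    latent = s[:k, None] * vt[:k, :]
    return np.linalg.norm(latent, axis=0)

# Toy 5-term x 4-sentence count matrix (invented for illustration).
m = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 1., 1.],
              [0., 0., 1., 2.],
              [1., 0., 0., 1.]])
scores = lsa_sentence_scores(m, k=2)
print(np.argsort(scores)[::-1])   # sentence indices, best first
```
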
  10. Galgani, F.; Compton, P.; Hoffmann, A.: Summarization based on bi-directional citation analysis (2015) 0.00
    2.4153895E-4 = product of:
      0.0055553955 = sum of:
        0.0055553955 = product of:
          0.011110791 = sum of:
            0.011110791 = weight(_text_:1 in 2685) [ClassicSimilarity], result of:
              0.011110791 = score(doc=2685,freq=4.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.19191428 = fieldWeight in 2685, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2685)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Source
    Information processing and management. 51(2015) no.1, S.1-24
  11. Lee, J.-H.; Park, S.; Ahn, C.-M.; Kim, D.: Automatic generic document summarization based on non-negative matrix factorization (2009) 0.00
    2.3911135E-4 = product of:
      0.005499561 = sum of:
        0.005499561 = product of:
          0.010999122 = sum of:
            0.010999122 = weight(_text_:1 in 2448) [ClassicSimilarity], result of:
              0.010999122 = score(doc=2448,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.18998542 = fieldWeight in 2448, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2448)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Source
    Information processing and management. 45(2009) no.1, S.20-34
  12. Hirao, T.; Okumura, M.; Yasuda, N.; Isozaki, H.: Supervised automatic evaluation for summarization with voted regression model (2007) 0.00
    2.0495258E-4 = product of:
      0.0047139092 = sum of:
        0.0047139092 = product of:
          0.0094278185 = sum of:
            0.0094278185 = weight(_text_:1 in 942) [ClassicSimilarity], result of:
              0.0094278185 = score(doc=942,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.16284466 = fieldWeight in 942, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.046875 = fieldNorm(doc=942)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Abstract
    High-quality evaluation of generated summaries is needed if we are to improve automatic summarization systems. Although human evaluation provides better results than automatic evaluation methods, it is costly and its results are difficult to reproduce. We therefore need an automatic method that simulates human evaluation if we are to improve our summarization systems efficiently. Although automatic evaluation methods have been proposed, they are unreliable when applied to individual summaries. To solve this problem, we propose a supervised automatic evaluation method based on a new regression model called the voted regression model (VRM). VRM has two characteristics: (1) model selection based on the corrected AIC to avoid multicollinearity, and (2) voting by the selected models to alleviate overfitting. Evaluation results on TSC3 and DUC2004 show that our method achieved error reductions of about 17-51% compared with conventional automatic evaluation methods. Moreover, our method obtained the highest correlation coefficients in several different experiments.
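
Below is a minimal sketch of the two VRM ingredients the abstract names, corrected-AIC model selection and voting over the selected models. It is a plain least-squares reconstruction from that description, not the authors' implementation; the exhaustive feature-subset enumeration, the parameter count k, and n_models = 3 are our assumptions.

```python
import itertools
import numpy as np

def aicc(rss: float, n: int, k: int) -> float:
    # Corrected AIC for a Gaussian linear model with k parameters.
    return n * np.log(rss / n) + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

def voted_regression(X: np.ndarray, y: np.ndarray, n_models: int = 3):
    # Fit one least-squares model per feature subset, keep the n_models with
    # the lowest corrected AIC, and average ("vote") their predictions.
    n, d = X.shape
    candidates = []
    for r in range(1, d + 1):
        for cols in itertools.combinations(range(d), r):
            Xc = X[:, cols]
            coef, res, *_ = np.linalg.lstsq(Xc, y, rcond=None)
            rss = res[0] if res.size else float(((Xc @ coef - y) ** 2).sum())
            candidates.append((aicc(rss, n, len(cols) + 1), cols, coef))
    best = sorted(candidates, key=lambda t: t[0])[:n_models]

    def predict(X_new: np.ndarray) -> np.ndarray:
        return np.mean([X_new[:, cols] @ coef for _, cols, coef in best], axis=0)

    return predict
```
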
  13. Martinez-Romo, J.; Araujo, L.; Fernandez, A.D.: SemGraph : extracting keyphrases following a novel semantic graph-based approach (2016) 0.00
    2.0495258E-4 = product of:
      0.0047139092 = sum of:
        0.0047139092 = product of:
          0.0094278185 = sum of:
            0.0094278185 = weight(_text_:1 in 2832) [ClassicSimilarity], result of:
              0.0094278185 = score(doc=2832,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.16284466 = fieldWeight in 2832, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2832)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.71-82
  14. Kim, H.H.; Kim, Y.H.: ERP/MMR algorithm for classifying topic-relevant and topic-irrelevant visual shots of documentary videos (2019) 0.00
    1.707938E-4 = product of:
      0.0039282576 = sum of:
        0.0039282576 = product of:
          0.007856515 = sum of:
            0.007856515 = weight(_text_:1 in 5358) [ClassicSimilarity], result of:
              0.007856515 = score(doc=5358,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.13570388 = fieldWeight in 5358, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5358)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Abstract
    We propose and evaluate a video summarization method based on a topic relevance model, a maximal marginal relevance (MMR), and discriminant analysis to generate a semantically meaningful video skim. The topic relevance model uses event-related potential (ERP) components to describe the process of topic relevance judgment. More specifically, the topic relevance model indicates that N400 and P600, which have been successfully applied to the mismatch process of a stimulus and the discourse-internal reorganization and integration process of a stimulus, respectively, are used for the topic mismatch process of a topic-irrelevant video shot and the topic formation process of a topic-relevant video shot. To evaluate our proposed ERP/MMR-based method, we compared the video skims generated by the ERP/MMR-based, ERP-based, and shot boundary detection (SBD) methods with ground truth skims. The results showed that at a significance level of 0.05, the ROUGE-1 scores of the ERP/MMR method are statistically higher than those of the SBD method, and the diversity scores of the ERP/MMR method are statistically higher than those of the ERP method. This study suggested that the proposed method may be applied to the construction of a video skim without operational intervention, such as the insertion of a black screen between video shots.
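
The MMR half of entry 14 is the standard maximal marginal relevance selection rule: repeatedly pick the item that best trades topic relevance against redundancy with the items already selected. The sketch below implements that generic rule; in the paper the relevance values come from the ERP-based topic relevance model and the items are video shots, whereas here relevance and pairwise_sim are simply inputs and lam = 0.7 is an arbitrary choice.

```python
import numpy as np

def mmr_select(relevance: np.ndarray, pairwise_sim: np.ndarray,
               k: int, lam: float = 0.7) -> list:
    # Maximal marginal relevance: balance relevance to the topic against
    # similarity to already-selected items (Carbonell & Goldstein, 1998).
    selected = []
    candidates = set(range(len(relevance)))
    while candidates and len(selected) < k:
        def mmr(i):
            redundancy = max((pairwise_sim[i, j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy example: 4 shots, relevance scores plus a pairwise similarity matrix.
rel = np.array([0.9, 0.8, 0.4, 0.7])
sim = np.array([[1.0, 0.9, 0.1, 0.2],
                [0.9, 1.0, 0.2, 0.3],
                [0.1, 0.2, 1.0, 0.1],
                [0.2, 0.3, 0.1, 1.0]])
print(mmr_select(rel, sim, k=2))   # picks shot 0, then a dissimilar shot
```
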

Languages

  • e 35
  • d 19

Types

  • a 46
  • m 5
  • el 2
  • r 2
  • s 1
  • x 1