Search (18 results, page 1 of 1)

  • Filter: theme_ss:"Referieren"
  1. Koltay, T.: A hypertext tutorial on abstracting for library science students (1995) 0.02
    0.024223946 = product of:
      0.048447892 = sum of:
        0.016646868 = product of:
          0.06658747 = sum of:
            0.06658747 = weight(_text_:based in 3061) [ClassicSimilarity], result of:
              0.06658747 = score(doc=3061,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.47078028 = fieldWeight in 3061, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3061)
          0.25 = coord(1/4)
        0.031801023 = product of:
          0.063602045 = sum of:
            0.063602045 = weight(_text_:22 in 3061) [ClassicSimilarity], result of:
              0.063602045 = score(doc=3061,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.38690117 = fieldWeight in 3061, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3061)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Briefly discusses the application of hypertext in library user training, with particular reference to a specific hypertext-based tutorial designed to teach library school students the basics of abstracts and the abstracting process.
    Date
    27. 1.1996 18:22:06
    Theme
    Computer Based Training
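
    Note on the score breakdowns shown with each result: they are Lucene "explain" trees for the classic TF-IDF similarity. The minimal Python sketch below reproduces that arithmetic for result 1; the function and variable names are illustrative, and only the numeric inputs are copied from the tree above. It is an aid to reading the trees, not part of the retrieval system.

      import math

      def classic_term_score(freq, idf, query_norm, field_norm):
          """One term's contribution under Lucene ClassicSimilarity:
          queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm."""
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      # Numeric inputs copied from the explain tree of result 1 (doc 3061).
      query_norm = 0.04694356
      w_based = classic_term_score(freq=4.0, idf=3.0129938, query_norm=query_norm, field_norm=0.078125)
      w_22 = classic_term_score(freq=2.0, idf=3.5018296, query_norm=query_norm, field_norm=0.078125)

      # Each clause is scaled by its own coord factor, the results are summed,
      # and the sum is scaled by the outer coord(2/4) for 2 of 4 matching clauses.
      total = 0.5 * (0.25 * w_based + 0.5 * w_22)
      print(w_based, w_22, total)  # ~0.06658747, ~0.06360204, ~0.024223946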
  2. Endres-Niggemeyer, B.: Summarising text for intelligent communication : results of the Dagstuhl seminar (1994) 0.02
    0.023954237 = product of:
      0.09581695 = sum of:
        0.09581695 = weight(_text_:term in 8867) [ClassicSimilarity], result of:
          0.09581695 = score(doc=8867,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.4374403 = fieldWeight in 8867, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.046875 = fieldNorm(doc=8867)
      0.25 = coord(1/4)
    
    Abstract
    As a result of the transition to full-text storage, multimedia and networking, information systems are becoming more efficient but at the same time more difficult to use, in particular because users are confronted with information volumes that increasingly exceed individual processing capacities. Consequently, there is an increase in the demand for user aids such as summarising techniques. Against this background, the interdisciplinary Dagstuhl Seminar 'Summarising Text for Intelligent Communication' (Dec. 1993) outlined the academic state of the art with regard to summarising (abstracting) and proposed future directions for research and system development. Research is currently shifting its attention from text summarising to summarising states of affairs. Recycling solutions are put forward in order to satisfy short-term needs for summarisation products. In the medium and long term, it is necessary to devise concepts and methods of intelligent summarising which have a better formal and empirical grounding and a more modular organisation
  3. Pinto, M.: Abstracting/abstract adaptation to digital environments : research trends (2003) 0.02
    0.023954237 = product of:
      0.09581695 = sum of:
        0.09581695 = weight(_text_:term in 4446) [ClassicSimilarity], result of:
          0.09581695 = score(doc=4446,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.4374403 = fieldWeight in 4446, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4446)
      0.25 = coord(1/4)
    
    Abstract
    The technological revolution is affecting the structure, form and content of documents, reducing the effectiveness of traditional abstracts, which are to some extent inadequate for the new documentary conditions. Aims to show the directions in which abstracting/abstracts can evolve to achieve the necessary adequacy in the new digital environments. Three research trends are proposed: theoretical, methodological and pragmatic. Theoretically, there is a need to expand the document concept, reengineer abstracting and design interdisciplinary models. Methodologically, the trend is toward the structuring, automating and qualifying of abstracts. Pragmatically, abstracts networking, combined with alternative and complementary models, opens a new and promising horizon. Automating, structuring and qualifying abstracting/abstracts offer some short-term prospects for progress. Concludes that reengineering, networking and visualising would be fruitful medium-term areas of research toward the full adequacy of abstracting in the new electronic age.
  4. Wan, X.; Yang, J.; Xiao, J.: Incorporating cross-document relationships between sentences for single document summarizations (2006) 0.02
    0.015656754 = product of:
      0.03131351 = sum of:
        0.0122329 = product of:
          0.0489316 = sum of:
            0.0489316 = weight(_text_:based in 2421) [ClassicSimilarity], result of:
              0.0489316 = score(doc=2421,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.34595144 = fieldWeight in 2421, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2421)
          0.25 = coord(1/4)
        0.019080611 = product of:
          0.038161222 = sum of:
            0.038161222 = weight(_text_:22 in 2421) [ClassicSimilarity], result of:
              0.038161222 = score(doc=2421,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23214069 = fieldWeight in 2421, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2421)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Graph-based ranking algorithms have recently been proposed for single-document summarization; such algorithms evaluate the importance of a sentence by recursively exploiting the relationships between sentences in the document. In this paper, we investigate using other related or relevant documents to improve the summarization of a single document based on the graph-based ranking algorithm. In addition to the within-document relationships between sentences of the specified document, the cross-document relationships between sentences in different documents are also taken into account in the proposed approach. We evaluate the performance of the proposed approach on DUC 2002 data with the ROUGE metric, and the results demonstrate that the cross-document relationships between sentences in different but related documents can significantly improve the performance of single-document summarization.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
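
    To make the graph-based approach of entry 4 concrete, here is a minimal Python sketch of sentence ranking over a similarity graph that includes sentences from related documents as well as the target document. It is a generic PageRank-style illustration under assumed parameters (bag-of-words cosine edges, damping 0.85), not the authors' exact algorithm or evaluation setup.

      import math
      from collections import Counter

      def cosine(a, b):
          """Cosine similarity between two bag-of-words token lists."""
          ca, cb = Counter(a), Counter(b)
          dot = sum(ca[t] * cb[t] for t in ca)
          na = math.sqrt(sum(v * v for v in ca.values()))
          nb = math.sqrt(sum(v * v for v in cb.values()))
          return dot / (na * nb) if na and nb else 0.0

      def rank_sentences(doc, related, d=0.85, iters=50):
          """PageRank-style scores over a graph linking sentences of `doc`
          to each other and to sentences of the `related` documents;
          returns the indices of `doc` sentences, best first."""
          sents = [s.lower().split() for s in doc + related]
          n = len(sents)
          w = [[cosine(sents[i], sents[j]) if i != j else 0.0 for j in range(n)] for i in range(n)]
          out = [sum(row) for row in w]          # total outgoing edge weight per sentence
          score = [1.0 / n] * n
          for _ in range(iters):
              new = []
              for i in range(n):
                  incoming = sum(w[j][i] / out[j] * score[j] for j in range(n) if out[j])
                  new.append((1 - d) / n + d * incoming)
              score = new
          # Only sentences of the target document are summary candidates.
          return sorted(range(len(doc)), key=lambda i: score[i], reverse=True)

    Feeding in sentences from related documents lets their similarity edges reinforce the central sentences of the target document, which is the intuition behind the improvement the abstract reports on DUC 2002.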
  5. Palais, E.S.: Abstracting for reference librarians (1988) 0.01
    0.006360204 = product of:
      0.025440816 = sum of:
        0.025440816 = product of:
          0.05088163 = sum of:
            0.05088163 = weight(_text_:22 in 2832) [ClassicSimilarity], result of:
              0.05088163 = score(doc=2832,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30952093 = fieldWeight in 2832, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2832)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Reference librarian. 1988, no.22, S.297-308
  6. Hartley, J.; Sydes, M.: Which layout do you prefer? : an analysis of readers' preferences for different typographic layouts of structured abstracts (1996) 0.00
    0.0047701527 = product of:
      0.019080611 = sum of:
        0.019080611 = product of:
          0.038161222 = sum of:
            0.038161222 = weight(_text_:22 in 4411) [ClassicSimilarity], result of:
              0.038161222 = score(doc=4411,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23214069 = fieldWeight in 4411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4411)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 22(1996) no.1, S.27-37
  7. Ward, M.L.: The future of the human indexer (1996) 0.00
    0.0047701527 = product of:
      0.019080611 = sum of:
        0.019080611 = product of:
          0.038161222 = sum of:
            0.038161222 = weight(_text_:22 in 7244) [ClassicSimilarity], result of:
              0.038161222 = score(doc=7244,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23214069 = fieldWeight in 7244, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7244)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    9. 2.1997 18:44:22
  8. Hartley, J.; Sydes, M.; Blurton, A.: Obtaining information accurately and quickly : are structured abstracts more efficient? (1996) 0.00
    0.003975128 = product of:
      0.015900511 = sum of:
        0.015900511 = product of:
          0.031801023 = sum of:
            0.031801023 = weight(_text_:22 in 7673) [ClassicSimilarity], result of:
              0.031801023 = score(doc=7673,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19345059 = fieldWeight in 7673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7673)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 22(1996) no.5, S.349-356
  9. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.00
    0.0029427784 = product of:
      0.011771114 = sum of:
        0.011771114 = product of:
          0.047084454 = sum of:
            0.047084454 = weight(_text_:based in 3549) [ClassicSimilarity], result of:
              0.047084454 = score(doc=3549,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.33289194 = fieldWeight in 3549, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3549)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Presents a theoretical model, based on the Flower/Hayes model of expository writing, of the process involved in content analysis for abstracting and indexing.
  10. Molina, M.P.: Documentary abstracting : toward a methodological approach (1995) 0.00
    0.0023542228 = product of:
      0.009416891 = sum of:
        0.009416891 = product of:
          0.037667565 = sum of:
            0.037667565 = weight(_text_:based in 1790) [ClassicSimilarity], result of:
              0.037667565 = score(doc=1790,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.26631355 = fieldWeight in 1790, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1790)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    In the general abstracting process (GAP), there are 2 types of data: textual, within a particular framed trilogy (surface, deep, and rhetoric); and documentary (abstractor, means of production, and user demands). Proposes, for its development, the use of the following disciplines, among others: linguistics (structural, transformational, and textual), logic (formal and fuzzy), and psychology (cognitive). The model for that textual transformation is based on a system of combined strategies with 4 key stages: reading comprehension, selection, interpretation, and synthesis.
  11. Monday, I.: Les processus cognitifs et la rédaction de résumés [Cognitive processes and the writing of abstracts] (1996) 0.00
    0.0023542228 = product of:
      0.009416891 = sum of:
        0.009416891 = product of:
          0.037667565 = sum of:
            0.037667565 = weight(_text_:based in 6917) [ClassicSimilarity], result of:
              0.037667565 = score(doc=6917,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.26631355 = fieldWeight in 6917, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6917)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Attempts to explain, based on the literature of information science, education, cognitive psychology and psychiatry, the intellectual and cognitive processes which govern the understanding and structuring of a text, on the one hand, and the writing of a summary or abstract, on the other.
  12. Koltay, T.: Abstracts and abstracting : a genre and set of skills for the twenty-first century (2010) 0.00
    0.0020808585 = product of:
      0.008323434 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 4125) [ClassicSimilarity], result of:
              0.033293735 = score(doc=4125,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 4125, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4125)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Despite their changing role, abstracts remain useful in the digital world. Aimed at both information professionals and researchers who work and publish in different fields, this book summarizes the most important and up-to-date theory of abstracting, and gives advice and examples for the practice of writing different kinds of abstracts. The book discusses the length, functions and basic structure of abstracts. A new approach is outlined to the questions of informative and indicative abstracts. The abstractor's personality and linguistic and non-linguistic knowledge and skills are also discussed with special attention. The process of abstracting, its steps and models, as well as the recipient's role, are treated in particular detail. Abstracting is presented as a purposeful (goal-directed) understanding of the original text, its interpretation, and then a projection of the information deemed worth abstracting into a new text. Despite the relatively large number of textbooks on the topic, there is no up-to-date book on abstracting in the English language. In addition to providing comprehensive coverage of the topic, the book offers novel views, especially on informative and indicative abstracts. The discussion is based on an interdisciplinary approach, blending the methods of library and information science and linguistics. The book strives for a synthesis of theory and practice. The synthesis is based on a large existing body of knowledge which, however, is often characterised by misleading terminology and flawed beliefs.
  13. Wilson, M.J.; Wilson, M.L.: ¬A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.00
    0.0020808585 = product of:
      0.008323434 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 612) [ClassicSimilarity], result of:
              0.033293735 = score(doc=612,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 612, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=612)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
  14. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.00
    0.002059945 = product of:
      0.00823978 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 2930) [ClassicSimilarity], result of:
              0.03295912 = score(doc=2930,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 2930, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2930)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    4 working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case - 4 working steps where an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract
  15. Armstrong, C.J.; Wheatley, A.: Writing abstracts for online databases : results of database producers' guidelines (1998) 0.00
    0.002059945 = product of:
      0.00823978 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 3295) [ClassicSimilarity], result of:
              0.03295912 = score(doc=3295,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 3295, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3295)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Reports on one area of research in an Electronic Libraries Programme (eLib) MODELS (MOving to Distributed Environments for Library Services) supporting study in 3 investigative areas: examination of current database producers' guidelines for their abstract writers; a brief survey of abstracts in some traditional online databases; and a detailed survey of abstracts from 3 types of electronic database (print sourced online databases, Internet subject trees or directories, and Internet gateways). Examination of database producers' guidelines, reported here, gave a clear view of the intentions behind professionally produced traditional (printed index based) database abstracts and provided a benchmark against which to judge the conclusions of the larger investigations into abstract style, readability and content
  16. Spiteri, L.F.: Library and information science vs business : a comparison of approaches to abstracting (1997) 0.00
    0.002059945 = product of:
      0.00823978 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 3699) [ClassicSimilarity], result of:
              0.03295912 = score(doc=3699,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 3699, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3699)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    The library and information science (LIS) literature on abstracting makes little mention of abstracting conducted in the corporate/business environment, whereas the business literature suggests that abstracting is a very important component of business writing. Examines a variety of publications from LIS and business in order to compare and contrast their approaches to the following aspects of abstracting: definitions of abstracts; types of abstracts; purpose of abstracts; and writing of abstracts. Summarises the results of the examination, which revealed a number of similarities, differences, and inadequacies in the ways in which both fields approach abstracting. Concludes that both fields need to develop more detailed guidelines concerning the cognitive process of abstracting and suggests improvements to the training of abstractors based on these findings.
  17. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017) 0.00
    0.0017656671 = product of:
      0.0070626684 = sum of:
        0.0070626684 = product of:
          0.028250674 = sum of:
            0.028250674 = weight(_text_:based in 3788) [ClassicSimilarity], result of:
              0.028250674 = score(doc=3788,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19973516 = fieldWeight in 3788, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3788)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    We address the challenge of extracting query biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.
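
    As a rough illustration of the query-biased snippet idea in entry 17 (not the authors' crowdsourcing methodology or ASR pipeline), a sliding-window selector over a transcript might look like the sketch below; the window length and term-overlap scoring are assumptions.

      def query_biased_snippet(transcript, query, window=30):
          """Return the `window`-token span of a (possibly noisy ASR) transcript
          that shares the most terms with the query; illustrative only."""
          tokens = transcript.lower().split()
          q_terms = set(query.lower().split())
          best_start, best_hits = 0, -1
          for start in range(max(1, len(tokens) - window + 1)):
              hits = sum(1 for t in tokens[start:start + window] if t in q_terms)
              if hits > best_hits:
                  best_start, best_hits = start, hits
          return " ".join(tokens[best_start:best_start + window])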
  18. Wang, F.L.; Yang, C.C.: The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.00
    0.0014713892 = product of:
      0.005885557 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 5049) [ClassicSimilarity], result of:
              0.023542227 = score(doc=5049,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 5049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5049)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This process supports users in evaluating the relevance of the documents returned by information retrieval systems, so that efficient filtering can be achieved. Indirectly, these systems help to resolve the problem of information overload. Many automatic text summarization systems have been implemented for use with different languages. It has been established that grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of language differences on automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language differences on automatic text summarization. It covers the effect on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using a parallel corpus in English and Chinese as the test object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
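
    For readers unfamiliar with the extractive systems entry 18 analyses, the sketch below shows a generic feature-based sentence scorer (term frequency, position, overlap with the title) and a top-k extractor. The features and weights are illustrative assumptions, not the particular multilingual system evaluated in the paper.

      from collections import Counter

      def score_sentences(sentences, title, w_tf=1.0, w_pos=0.5, w_title=1.0):
          """Score sentences by average term frequency, position, and title overlap."""
          tf = Counter(t for s in sentences for t in s.lower().split())
          title_terms = set(title.lower().split())
          scores = []
          for i, s in enumerate(sentences):
              terms = s.lower().split()
              avg_tf = sum(tf[t] for t in terms) / len(terms) if terms else 0.0
              pos = 1.0 - i / max(1, len(sentences) - 1)   # earlier sentences rank higher
              overlap = len(set(terms) & title_terms) / (len(title_terms) or 1)
              scores.append(w_tf * avg_tf + w_pos * pos + w_title * overlap)
          return scores

      def extract(sentences, title, k=3):
          """Return the k best-scoring sentences, kept in document order."""
          scores = score_sentences(sentences, title)
          top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
          return [sentences[i] for i in sorted(top)]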