Search (3 results, page 1 of 1)

  • author_ss:"Betts, L."
  • theme_ss:"Referieren"
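  The two filters above are Solr-style fielded queries; the _ss suffix conventionally marks a multi-valued string field. As a rough illustration, the underlying request might look like the following sketch, assuming a standard Solr select handler (the host, port, core name, and "id" field are placeholders, not taken from this page):

    import requests

    # Hypothetical Solr endpoint; host and core name are assumptions.
    SOLR_URL = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "*:*",  # match everything, then narrow with filter queries
        "fq": [
            'author_ss:"Betts, L."',
            'theme_ss:"Referieren"',
        ],
        "rows": 10,
        "wt": "json",
    }

    # requests encodes the list value as repeated fq parameters.
    response = requests.get(SOLR_URL, params=params)
    for doc in response.json()["response"]["docs"]:
        print(doc.get("id"))  # "id" is a placeholder field name
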
  1. Hartley, J.; Betts, L.: Common weaknesses in traditional abstracts in the social sciences (2009) 0.00
    0.0039907596 = product of:
      0.015963038 = sum of:
        0.015963038 = weight(_text_:information in 3115) [ClassicSimilarity], result of:
          0.015963038 = score(doc=3115,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2602176 = fieldWeight in 3115, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3115)
      0.25 = coord(1/4)
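
    The indented block above is Lucene's "explain" output for this result's relevance score. Under ClassicSimilarity, a matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, all scaled by a coordination factor for the fraction of query terms matched. A short sketch reproducing the arithmetic from the values printed above (the variable names are ours, not Lucene's API); the same formula accounts for the scores of the two results below:

      import math

      freq = 10.0               # occurrences of "information" in doc 3115
      idf = 1.7554779           # 1 + ln(44218 / (20772 + 1))
      query_norm = 0.034944877
      field_norm = 0.046875     # field-length normalization
      coord = 1 / 4             # 1 of 4 query terms matched

      tf = math.sqrt(freq)                  # 3.1622777
      query_weight = idf * query_norm       # 0.06134496
      field_weight = tf * idf * field_norm  # 0.2602176

      score = coord * query_weight * field_weight
      print(round(score, 10))               # ~0.0039907596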
    
    Abstract
    Detailed checklists and questionnaires have been used in the past to assess the quality of structured abstracts in the medical sciences. The aim of this article is to report the findings when a simpler checklist was used to evaluate the quality of 100 traditional abstracts published in 53 different social science journals. Most of these abstracts contained information about the aims, methods, and results of the studies. However, many did not report details about the sample sizes, ages, or sexes of the participants, or where the research was carried out. The correlation between the lengths of the abstracts and the amount of information present was 0.37 (p < .001), suggesting that word limits for abstracts may restrict the presence of key information to some extent. We conclude that authors can improve the quality of information in traditional abstracts in the social sciences by using the simple checklist provided in this article.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, pp.2010-2018
  2. Hartley, J.; Betts, L.: The effects of spacing and titles on judgments of the effectiveness of structured abstracts (2007) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 1325) [ClassicSimilarity], result of:
          0.010304097 = score(doc=1325,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 1325, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1325)
      0.25 = coord(1/4)
    
    Abstract
    Previous research assessing the effectiveness of structured abstracts has been limited in two respects. First, when comparing structured abstracts with traditional ones, investigators usually have rewritten the original abstracts, and thus confounded changes in the layout with changes in both the wording and the content of the text. Second, investigators have not always included the title of the article together with the abstract when asking participants to judge the quality of the abstracts, yet titles alert readers to the meaning of the materials that follow. The aim of this research was to redress these limitations. Three studies were carried out. Four versions of each of four abstracts were prepared. These versions consisted of structured/traditional abstracts matched in content, with and without titles. In Study 1, 64 undergraduates each rated one of these abstracts on six separate rating scales. In Study 2, 225 academics and research workers rated the abstracts electronically, and in Study 3, 252 information scientists did likewise. In Studies 1 and 3, the respondents rated the structured abstracts significantly more favorably than they did the traditional ones, but the presence or absence of titles had no effect on their judgments. In Study 2, no main effects were observed for structure or for titles. The layout of the text, together with the subheadings, contributed to the higher ratings of effectiveness for structured abstracts, but the presence or absence of titles had no clear effects in these experimental studies. It is likely that this spatial organization, together with the greater amount of information normally provided in structured abstracts, explains why structured abstracts are generally judged to be superior to traditional ones.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.14, pp.2335-2340
  3. Hartley, J.; Betts, L.: Revising and polishing a structured abstract : is it worth the time and effort? (2008) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 2362) [ClassicSimilarity], result of:
          0.008413259 = score(doc=2362,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 2362, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2362)
      0.25 = coord(1/4)
    
    Abstract
    Many writers of structured abstracts spend a good deal of time revising and polishing their texts, but is it worth it? Do readers notice the difference? In this paper we report three studies in which readers used rating scales to judge (electronically) the clarity of an original and a revised abstract, both as a whole and in their constituent parts. In Study 1, with approximately 250 academics and research workers, we found some significant differences in favor of the revised abstract, but in Study 2, with approximately 210 information scientists, we found no significant effects. In Study 3, however, pooling the data from Studies 1 and 2 led to significant differences, at a stronger level of significance, between perceptions of the original and the revised abstract as a whole, and between the same components as those found in Study 1. These results indicate that the revised abstract as a whole, as well as certain specific components of it, was judged significantly clearer than the original. In short, these experiments show that readers can and do perceive differences between original and revised texts, at least sometimes, and that the revisions are therefore worth the time and effort.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.12, pp.1870-1877