Search (2 results, page 1 of 1)

  • author_ss:"Watters, C."
  • theme_ss:"Internet"
  • year_i:[2000 TO 2010}
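The three filters above are Lucene/Solr field queries on the catalog's facet fields; note the half-open year range [2000 TO 2010}, which includes 2000 but excludes 2010. A minimal sketch of how such filters could be passed to a Solr select handler as fq parameters follows; the endpoint URL, core name, and the free-text query are assumptions, not taken from this page.

    # Hypothetical sketch: sending the facet filters above to a Solr select handler.
    # The endpoint URL, core name, and free-text query are assumed; the explain
    # output below only shows a clause on _text_:search, and coord(1/3) suggests
    # the full query had three clauses.
    import requests

    SOLR_URL = "http://localhost:8983/solr/catalog/select"  # assumed endpoint

    params = {
        "q": "search",                       # assumed free-text query term
        "fq": [
            'author_ss:"Watters, C."',       # author facet (exact match)
            'theme_ss:"Internet"',           # theme facet (exact match)
            "year_i:[2000 TO 2010}",         # 2000 inclusive, 2010 exclusive
        ],
        "wt": "json",
    }

    results = requests.get(SOLR_URL, params=params).json()
    print(results["response"]["numFound"])   # 2 for the page shown here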
  1. Jordan, C.; Watters, C.: Addressing gaps in knowledge while reading (2009) 0.02
    0.020393016 = product of:
      0.061179046 = sum of:
        0.061179046 = weight(_text_:search in 3158) [ClassicSimilarity], result of:
          0.061179046 = score(doc=3158,freq=6.0), product of:
            0.1839618 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.052928332 = queryNorm
            0.33256388 = fieldWeight in 3158, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3158)
      0.33333334 = coord(1/3)
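    The indented tree above is Lucene's ClassicSimilarity "explain" output: the clause weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the result is then scaled by coord(1/3) because only one of the three query clauses matched this record. A minimal sketch that recomputes the score from the listed components is shown below; the helper name is illustrative and not part of the catalog software.

      import math

      # Recompute the ClassicSimilarity explanation above from its listed inputs.
      # Illustrative sketch only, not the catalog's actual scoring code.
      def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm,
                             matching_clauses=1, total_clauses=3):
          tf = math.sqrt(freq)                               # tf(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # idf(docFreq, maxDocs)
          query_weight = idf * query_norm                    # queryWeight
          field_weight = tf * idf * field_norm               # fieldWeight
          clause_score = query_weight * field_weight         # weight(_text_:search ...)
          return clause_score * (matching_clauses / total_clauses)  # coord(1/3)

      # Result 1: freq=6.0 in document 3158, docFreq=3718, maxDocs=44218
      print(classic_similarity(6.0, 3718, 44218, 0.052928332, 0.0390625))
      # ~0.020393016, matching the 0.02 displayed for this entry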
    
    Abstract
    Reading is a common everyday activity for most of us. In this article, we examine the potential for using Wikipedia to fill in the gaps in one's own knowledge that may be encountered while reading. If gaps are encountered frequently while reading, then this may detract from the reader's final understanding of the given document. Our goal is to increase access to explanatory text for readers by retrieving a single Wikipedia article that is related to a text passage that has been highlighted. This approach differs from traditional search methods where the users formulate search queries and review lists of possibly relevant results. This explicit search activity can be disruptive to reading. Our approach is to minimize the user interaction involved in finding related information by removing explicit query formulation and providing a single relevant result. To evaluate the feasibility of this approach, we first examined the effectiveness of three contextual algorithms for retrieval. To evaluate the effectiveness for readers, we then developed a functional prototype that uses the text of the abstract being read as context and retrieves a single relevant Wikipedia article in response to a passage the user has highlighted. We conducted a small user study where participants were allowed to use the prototype while reading abstracts. The results from this initial study indicate that users found the prototype easy to use and that using the prototype significantly improved their stated understanding and confidence in that understanding of the academic abstracts they read.
  2. Kellar, M.; Watters, C.; Shepherd, M.: A field study characterizing Web-based information seeking tasks (2007) 0.01
    0.011773913 = product of:
      0.03532174 = sum of:
        0.03532174 = weight(_text_:search in 335) [ClassicSimilarity], result of:
          0.03532174 = score(doc=335,freq=2.0), product of:
            0.1839618 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.052928332 = queryNorm
            0.19200584 = fieldWeight in 335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=335)
      0.33333334 = coord(1/3)
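    The sketch given under result 1 reproduces this score as well; only the term frequency differs.

      # Result 2: freq=2.0 in document 335, all other inputs identical to result 1
      print(classic_similarity(2.0, 3718, 44218, 0.052928332, 0.0390625))
      # ~0.011773913, matching the 0.01 displayed for this entry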
    
    Abstract
    Previous studies have examined various aspects of user behavior on the Web, including general information-seeking patterns, search engine use, and revisitation habits. Little research has been conducted to study how users navigate and interact with their Web browser across different information-seeking tasks. We have conducted a field study of 21 participants, in which we logged detailed Web usage and asked participants to provide task categorizations of their Web usage based on the following categories: Fact Finding, Information Gathering, Browsing, and Transactions. We used implicit measures logged during each task session to provide usage measures such as dwell time, number of pages viewed, and the use of specific browser navigation mechanisms. We also report on differences in how participants interacted with their Web browser across the range of information-seeking tasks. Within each type of task, we found several distinguishing characteristics. In particular, Information Gathering tasks were the most complex; participants spent more time completing this task, viewed more pages, and used the Web browser functions most heavily during this task. The results of this analysis have been used to provide implications for future support of information seeking on the Web as well as direction for future research in this area.