Search (12 results, page 1 of 1)

  • author_ss:"Watters, C."
  1. Watters, C.; Amoudi, A.: Geosearcher : location-based ranking of search engine results (2003) 0.00
    0.0042839246 = product of:
      0.012851773 = sum of:
        0.012851773 = weight(_text_:a in 5152) [ClassicSimilarity], result of:
          0.012851773 = score(doc=5152,freq=30.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.24669915 = fieldWeight in 5152, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5152)
      0.33333334 = coord(1/3)
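The indented numbers above are Lucene's ClassicSimilarity explain() output, a TF-IDF score. A minimal Python check reproduces the trace for result 1, under the assumption that tf = sqrt(freq) and the final score is queryWeight × fieldWeight × coord:

```python
import math

# Values copied from the explain trace for doc 5152 (result 1).
freq = 30.0                 # termFreq of "a" in the field
idf = 1.153047              # idf(docFreq=37942, maxDocs=44218)
query_norm = 0.045180224
field_norm = 0.0390625      # length normalization for this field
coord = 1.0 / 3.0           # 1 of 3 query clauses matched

tf = math.sqrt(freq)                          # 5.477226
query_weight = idf * query_norm               # 0.05209492
field_weight = tf * idf * field_norm          # 0.24669915
score = query_weight * field_weight * coord   # 0.0042839246
```

Every per-entry trace in this listing follows the same pattern; only freq, fieldNorm, and the doc id change.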
    
    Abstract
    Watters and Amoudi describe GeoSearcher, a prototype ranking program that arranges search engine results along a geo-spatial dimension without requiring geo-spatial meta-tags or geo-spatial feature extraction. GeoSearcher uses URL analysis, IptoLL, Whois, and the Getty Thesaurus of Geographic Names to determine site location. It accepts the first 200 sites returned by a search engine, identifies their coordinates, calculates their distance from a reference point, and ranks them in ascending order by this value. For any retrieved site, the system first checks whether it has already been located in the current session, then sends the domain name to Whois to obtain a two-letter country code and an area code. If this fails, the name is stripped one level and resent; if that also fails, the top-level domain is tested for being a country code. Any remaining unmatched names go to IptoLL. Distance is calculated from the center point of the geographic area to a provided reference location. A test run on a set of 100 URLs from a search successfully located 90 sites. Eighty-three pages could be found manually, and 68 had sufficient information to verify the location determination; of these, 65 (95%) had been assigned reasonably correct geographic locations. A random set of URLs, used instead of a search result, yielded 80% success.
    Type
    a
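The paper does not spell out GeoSearcher's distance formula; a minimal sketch of the final ranking step, assuming great-circle (haversine) distance and hypothetical site coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rank_by_distance(sites, ref_lat, ref_lon):
    """Sort located sites ascending by distance from the reference point.
    `sites` is a list of (url, lat, lon); sites that could not be located
    (lat is None) are dropped, as GeoSearcher would skip them."""
    located = [(url, haversine_km(lat, lon, ref_lat, ref_lon))
               for url, lat, lon in sites if lat is not None]
    return sorted(located, key=lambda pair: pair[1])

# Hypothetical example: rank two sites relative to Halifax, Nova Scotia.
sites = [("http://example.org/uk", 51.5, -0.13),   # London
         ("http://example.org/us", 40.7, -74.0)]   # New York
ranked = rank_by_distance(sites, 44.65, -63.57)
```

The URLs and coordinates are illustrative only; in the actual system the coordinates come from the Whois/IptoLL/Getty lookup chain described above.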
  2. Watters, C.; Nizam, N.: Knowledge organization on the Web : the emergent role of social classification (2012) 0.00
    0.003793148 = product of:
      0.011379444 = sum of:
        0.011379444 = weight(_text_:a in 828) [ClassicSimilarity], result of:
          0.011379444 = score(doc=828,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.21843673 = fieldWeight in 828, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=828)
      0.33333334 = coord(1/3)
    
    Abstract
    There are close to a billion websites on the Internet, with approximately 400 million users worldwide [www.internetworldstats.com]. People go to websites for a wide variety of information tasks, from finding a restaurant to serious research. Many of the difficulties with searching the Web, as it is currently structured, can be attributed to increases in scale: the content of the Web is now so large that we have only a rough estimate of the number of sites, and the range of information is extremely diverse, from blogs and photos to research articles and news videos.
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
    Type
    a
  3. Watters, C.; Shepherd, M.A.: Shifting the information paradigm from data-centered to user-centered (1994) 0.00
    0.0036685336 = product of:
      0.011005601 = sum of:
        0.011005601 = weight(_text_:a in 7290) [ClassicSimilarity], result of:
          0.011005601 = score(doc=7290,freq=22.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.21126054 = fieldWeight in 7290, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7290)
      0.33333334 = coord(1/3)
    
    Abstract
    We are seeing a shift of the focus of information access from a data-centered to a user-centered paradigm. By a user-centered paradigm, we refer to information access that is driven not by the structure of the database in the system, but rather by views of the databases needed to satisfy an information need as perceived by the user. This paradigm relies on dynamic views that may be independent of the structure of the accessed databases. The user determines both what data is included in these dynamic views and what structures, if any, are imposed on the data in these views. The Daltext system is a prototype that falls within the user-centered paradigm: it allows the user to define views dynamically. Daltext lets the user determine what data is included in these views by specifying desired attributes and values, and what structures, if any, are imposed on the data by framing the view within a particular model. At the user's discretion, the view may be framed within a datastream model, a set model, a relational model, and/or a hierarchical model.
    Type
    a
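Daltext's actual data model is not given in this abstract; a minimal sketch of the idea of a dynamic view, with hypothetical records and attribute names, where the same selection can be framed in different models at the user's discretion:

```python
# Hypothetical records; Daltext's real attribute set is not specified here.
records = [
    {"author": "Watters", "year": 1994, "topic": "paradigms"},
    {"author": "Shepherd", "year": 1994, "topic": "paradigms"},
    {"author": "Watters", "year": 1999, "topic": "retrieval"},
]

def dynamic_view(records, wanted, frame="datastream"):
    """Select records matching the attribute/value pairs in `wanted`,
    then frame the view in a user-chosen model."""
    hits = [r for r in records
            if all(r.get(k) == v for k, v in wanted.items())]
    if frame == "set":                         # unordered, no duplicates
        return {tuple(sorted(r.items())) for r in hits}
    if frame == "relational":                  # a relation: ordered tuples
        return [(r["author"], r["year"], r["topic"]) for r in hits]
    if frame == "hierarchical":                # grouped under a key attribute
        tree = {}
        for r in hits:
            tree.setdefault(r["author"], []).append(r)
        return tree
    return hits                                # datastream: a plain sequence

view = dynamic_view(records, {"topic": "paradigms"}, frame="relational")
```

The point of the sketch is that the structure (set, relation, hierarchy, stream) is chosen by the user at view time, independently of how the underlying records are stored.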
  4. Jordan, C.; Watters, C.: Addressing gaps in knowledge while reading (2009) 0.00
    0.0033183135 = product of:
      0.0099549405 = sum of:
        0.0099549405 = weight(_text_:a in 3158) [ClassicSimilarity], result of:
          0.0099549405 = score(doc=3158,freq=18.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19109234 = fieldWeight in 3158, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3158)
      0.33333334 = coord(1/3)
    
    Abstract
    Reading is a common everyday activity for most of us. In this article, we examine the potential for using Wikipedia to fill in the gaps in one's own knowledge that may be encountered while reading. If gaps are encountered frequently while reading, then this may detract from the reader's final understanding of the given document. Our goal is to increase access to explanatory text for readers by retrieving a single Wikipedia article that is related to a text passage that has been highlighted. This approach differs from traditional search methods where the users formulate search queries and review lists of possibly relevant results. This explicit search activity can be disruptive to reading. Our approach is to minimize the user interaction involved in finding related information by removing explicit query formulation and providing a single relevant result. To evaluate the feasibility of this approach, we first examined the effectiveness of three contextual algorithms for retrieval. To evaluate the effectiveness for readers, we then developed a functional prototype that uses the text of the abstract being read as context and retrieves a single relevant Wikipedia article in response to a passage the user has highlighted. We conducted a small user study where participants were allowed to use the prototype while reading abstracts. The results from this initial study indicate that users found the prototype easy to use and that using the prototype significantly improved their stated understanding and confidence in that understanding of the academic abstracts they read.
    Type
    a
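The three contextual retrieval algorithms evaluated in the paper are not reproduced here; a minimal bag-of-words sketch of the core interaction, returning a single best article for a highlighted passage plus its surrounding context, with a hypothetical two-article corpus standing in for Wikipedia:

```python
import math
from collections import Counter

def vectorize(text):
    """Naive bag-of-words term vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_article(highlight, context, articles):
    """Return the single most similar article title; the highlighted
    passage and the abstract text being read form the query together."""
    query = vectorize(highlight + " " + context)
    return max(articles, key=lambda t: cosine(query, vectorize(articles[t])))

# Hypothetical mini-corpus.
articles = {
    "Information retrieval": "retrieval of information from document collections",
    "Haversine formula": "great circle distance between points on a sphere",
}
top = best_article("document retrieval", "search and ranking of documents", articles)
```

Unlike a ranked result list, only `top` is ever shown to the reader, which is the design choice that keeps the lookup non-disruptive.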
  5. Carrick, C.; Watters, C.: Automatic association of news items (1997) 0.00
    0.00325127 = product of:
      0.009753809 = sum of:
        0.009753809 = weight(_text_:a in 1549) [ClassicSimilarity], result of:
          0.009753809 = score(doc=1549,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18723148 = fieldWeight in 1549, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1549)
      0.33333334 = coord(1/3)
    
    Abstract
    Examines the problem of the association of related items of different media types, specifically photos and stories, involved in the automatic generation of electronic editions. Determines to what degree any 2 news items refer to the same news event. This metric can be used: to link multimedia items that can be shown together, such as a video, photo, and text story related to a shipwreck or state visit; and to form clusters of very similar items from a variety of sources so that 1 or 2 can be chosen to represent that event in an edition. Discusses the specific association of text and photo news items, although the approach applies to a larger domain of news, including scripted news video clips and scripted radio broadcasts.
    Footnote
    Contribution to a special issue devoted to electronic newspapers
    Type
    a
  6. Shepherd, M.; Watters, C.: Boundary objects and the digital library (2006) 0.00
    0.00296799 = product of:
      0.00890397 = sum of:
        0.00890397 = weight(_text_:a in 1490) [ClassicSimilarity], result of:
          0.00890397 = score(doc=1490,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1709182 = fieldWeight in 1490, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1490)
      0.33333334 = coord(1/3)
    
    Abstract
    Boundary objects are entities shared by different communities but used differently by each group. The paper explores the multifaceted aspects of boundary objects in digital libraries and examines the issue of semantic interoperability from the perspective of 'communities of practice' and 'communities of interest'. While the concept of boundary objects holds some promise of resolving this problem, an efficient solution depends on how knowledge is represented so that it can be shared among various participants in a meaningful manner. Classification schemes can be used as a standard to implement boundary objects that bridge access to shared information resources for different users. The value and utility of adopting "Absolute Syntax" for the representation of subjects as a framework for boundary objects remain to be explored.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Type
    a
  7. MacKay, B.; Watters, C.: ¬An examination of multisession web tasks (2012) 0.00
    0.0029264777 = product of:
      0.008779433 = sum of:
        0.008779433 = weight(_text_:a in 255) [ClassicSimilarity], result of:
          0.008779433 = score(doc=255,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1685276 = fieldWeight in 255, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=255)
      0.33333334 = coord(1/3)
    
    Abstract
    Today, people perform many types of tasks on the web, including those that require multiple web sessions. In this article, we build on research about web tasks and present an in-depth evaluation of the types of tasks people perform on the web over multiple web sessions. Multisession web tasks are goal-based tasks that often contain subtasks requiring more than one web session to complete. We detail the results of two longitudinal studies that we conducted to explore this topic. The first study was a weeklong web-diary study where participants self-reported information on their own multisession tasks. The second study was a monthlong field study where participants used a customized version of Firefox, which logged their interactions for both their own multisession tasks and their other web activity. The results from both studies found that people perform eight different types of multisession tasks, that these tasks often consist of several subtasks, that they lasted different lengths of time, and that users have unique strategies to help continue the tasks, involving a variety of web and browser tools such as search engines and bookmarks, and external applications such as Notepad or Word. Using the results from these studies, we suggest three guidelines for developers to consider when designing browser-tool features to help people perform these types of tasks: (a) to maintain a list of current multisession tasks, (b) to support multitasking, and (c) to manage task-related information between sessions.
    Type
    a
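The three guidelines above are design recommendations, not an implementation; a minimal hypothetical sketch of what a browser tool following them might store (all class and field names are assumptions, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class MultisessionTask:
    """One entry in the list of current multisession tasks (guideline a),
    with state carried between sessions (guideline c)."""
    name: str
    open_urls: list = field(default_factory=list)   # tabs to restore
    notes: str = ""                                  # task-related information
    done: bool = False

class TaskManager:
    """Keeps several tasks active at once (guideline b)."""
    def __init__(self):
        self.tasks = {}

    def save(self, task):
        self.tasks[task.name] = task

    def resume(self, name):
        return self.tasks[name]

mgr = TaskManager()
mgr.save(MultisessionTask("plan trip", open_urls=["https://example.com/flights"]))
resumed = mgr.resume("plan trip")
```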
  8. Shepherd, M.; Duffy, J.F.J.; Watters, C.; Gugle, N.: ¬The role of user profiles for news filtering (2001) 0.00
    0.002473325 = product of:
      0.0074199745 = sum of:
        0.0074199745 = weight(_text_:a in 5585) [ClassicSimilarity], result of:
          0.0074199745 = score(doc=5585,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.14243183 = fieldWeight in 5585, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5585)
      0.33333334 = coord(1/3)
    
    Abstract
    Most on-line news sources are electronic versions of "ink-on-paper" newspapers. These are versions that have been filtered, from the mass of news produced each day, by an editorial board with a given community profile in mind. As readers, we choose the filter rather than choose the stories. New technology, however, provides the potential for personalized versions to be filtered automatically from this mass of news on the basis of user profiles. People read the news for many reasons: to find out "what's going on," to be knowledgeable members of a community, and because the activity itself is pleasurable. Given this, we ask the question, "How much filtering is acceptable to readers?" In this study, an evaluation of user preference for personal editions versus community editions of on-line news was performed. A personalized edition of a local newspaper was created for each subject based on an elliptical model that combined the user profile and the community profile as represented by the full edition of the local newspaper. The amount of emphasis given the user profile and the community profile was varied to test the subjects' reactions to different amounts of personalized filtering. The task was simply, "read the news," rather than any subject-specific information retrieval task. The results indicate that users prefer the coarse-grained community filters to fine-grained personalized filters.
    Type
    a
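The mathematics of the elliptical model is not given in this abstract; a minimal sketch of the underlying idea, assuming a simple linear blend of user-profile and community-profile scores with hypothetical term weights:

```python
def overlap(profile, story_terms):
    """Simple term-overlap score between a weighted profile and a story."""
    return sum(profile.get(t, 0.0) for t in story_terms)

def filter_news(stories, user, community, alpha, k=2):
    """Blend user and community profiles: alpha=0 approximates the
    community (full) edition, alpha=1 a fully personalized edition.
    Varying alpha varies the amount of personalized filtering."""
    def score(story):
        terms = story.lower().split()
        return alpha * overlap(user, terms) + (1 - alpha) * overlap(community, terms)
    return sorted(stories, key=score, reverse=True)[:k]

# Hypothetical profiles and stories.
user = {"hockey": 1.0, "weather": 0.5}
community = {"council": 1.0, "weather": 1.0, "hockey": 0.2}
stories = ["city council votes", "local hockey playoffs", "weather warning issued"]
personal = filter_news(stories, user, community, alpha=1.0, k=1)
```

The study's finding is, in these terms, that readers preferred low-alpha (community-weighted) editions to high-alpha (user-weighted) ones.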
  9. Watters, C.; Wang, H.: Rating new documents for similarity (2000) 0.00
    0.0022989952 = product of:
      0.006896985 = sum of:
        0.006896985 = weight(_text_:a in 4856) [ClassicSimilarity], result of:
          0.006896985 = score(doc=4856,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.13239266 = fieldWeight in 4856, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4856)
      0.33333334 = coord(1/3)
    
    Abstract
    Electronic news has long held the promise of personalized and dynamic delivery of current-event news items, particularly for Web users. Although electronic versions of print news are now widely available, the personalization of that delivery has not yet been accomplished. In this paper, we present a methodology for associating news documents based on the extraction of feature phrases, where feature phrases identify dates, locations, people, and organizations. A news representation is created from these feature phrases to define news objects that can then be compared and ranked to find related news items. Unlike traditional information retrieval, we are much more interested in precision than recall. That is, the user would like to see one or more specifically related articles, rather than all somewhat related articles. The algorithm is designed to work interactively with the user, using regular Web browsers as the interface.
    Type
    a
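The paper's exact comparison and ranking functions are not reproduced in this abstract; a minimal sketch of comparing two news objects by their feature phrases, assuming a per-category Jaccard overlap averaged over the four categories named above:

```python
def phrase_similarity(item_a, item_b):
    """Score two news objects by overlap of their feature phrases,
    one set per category (dates, locations, people, organizations)."""
    categories = ("dates", "locations", "people", "organizations")
    score = 0.0
    for cat in categories:
        a, b = set(item_a.get(cat, ())), set(item_b.get(cat, ()))
        union = a | b
        if union:
            score += len(a & b) / len(union)   # Jaccard overlap per category
    return score / len(categories)

# Hypothetical news objects built from extracted feature phrases.
story = {"dates": {"1997-03-02"}, "locations": {"Halifax"},
         "people": {"Watters"}, "organizations": set()}
photo = {"dates": {"1997-03-02"}, "locations": {"Halifax"},
         "people": set(), "organizations": set()}
unrelated = {"dates": {"1995-01-01"}, "locations": {"Tokyo"},
             "people": set(), "organizations": set()}
```

Ranking candidates by this score and keeping only the top one or two matches reflects the paper's stated preference for precision over recall.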
  10. Watters, C.: Information retrieval and the virtual document (1999) 0.00
    0.002212209 = product of:
      0.0066366266 = sum of:
        0.0066366266 = weight(_text_:a in 4319) [ClassicSimilarity], result of:
          0.0066366266 = score(doc=4319,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.12739488 = fieldWeight in 4319, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=4319)
      0.33333334 = coord(1/3)
    
    Type
    a
  11. Kellar, M.; Watters, C.; Shepherd, M.: ¬A field study characterizing Web-based information seeking tasks (2007) 0.00
    0.0019158293 = product of:
      0.005747488 = sum of:
        0.005747488 = weight(_text_:a in 335) [ClassicSimilarity], result of:
          0.005747488 = score(doc=335,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.11032722 = fieldWeight in 335, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=335)
      0.33333334 = coord(1/3)
    
    Abstract
    Previous studies have examined various aspects of user behavior on the Web, including general information-seeking patterns, search engine use, and revisitation habits. Little research has been conducted to study how users navigate and interact with their Web browser across different information-seeking tasks. We have conducted a field study of 21 participants, in which we logged detailed Web usage and asked participants to provide task categorizations of their Web usage based on the following categories: Fact Finding, Information Gathering, Browsing, and Transactions. We used implicit measures logged during each task session to provide usage measures such as dwell time, number of pages viewed, and the use of specific browser navigation mechanisms. We also report on differences in how participants interacted with their Web browser across the range of information-seeking tasks. Within each type of task, we found several distinguishing characteristics. In particular, Information Gathering tasks were the most complex; participants spent more time completing this task, viewed more pages, and used the Web browser functions most heavily during this task. The results of this analysis have been used to provide implications for future support of information seeking on the Web as well as direction for future research in this area.
    Type
    a
  12. Watters, C.: Extending the multimedia class hierarchy for hypermedia applications (1996) 0.00
    0.0017697671 = product of:
      0.0053093014 = sum of:
        0.0053093014 = weight(_text_:a in 605) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=605,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=605)
      0.33333334 = coord(1/3)
    
    Type
    a