Search (5 results, page 1 of 1)

  • Filter: author_ss:"Brusilovsky, P."
  1. Brusilovsky, P.; Eklund, J.; Schwarz, E.: Web-based education for all : a tool for developing adaptive courseware (1998) 0.03
    
    Date
    1.8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Type
    a
  2. He, D.; Brusilovsky, P.; Ahn, J.; Grady, J.; Farzan, R.; Peng, Y.; Yang, Y.; Rogati, M.: An evaluation of adaptive filtering in the context of realistic task-based information exploration (2008) 0.00
    
    Abstract
    Exploratory search is increasingly becoming an important research topic. Our interests focus on task-based information exploration, a specific type of exploratory search performed by a range of professional users, such as intelligence analysts. In this paper, we present an evaluation framework designed specifically for assessing and comparing the performance of innovative information access tools created to support the work of intelligence analysts in the context of task-based information exploration. The motivation for developing this framework came from our need to test systems for task-based information exploration, which existing frameworks cannot satisfy. The new framework is closely tied to the kind of tasks that intelligence analysts perform: complex, dynamic, multifaceted, and multistage. It views the user rather than the information system as the center of the evaluation, and examines how well users are served by the systems in their tasks. The evaluation framework examines the support the systems provide at users' major information access stages, such as information foraging and sense-making. The framework is accompanied by a reference test collection that has 18 task scenarios and corresponding passage-level ground truth annotations. To demonstrate the usage of the framework and the reference test collection, we present a specific evaluation study of CAFÉ, an adaptive filtering engine designed to support task-based information exploration. This study is a successful use case of the framework, and it revealed various aspects of the information systems and their roles in supporting task-based information exploration.
    Type
    a
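
    The passage-level ground truth mentioned in the abstract above lends itself to set-based scoring. Below is a minimal, hypothetical sketch of how one system run could be scored against such annotations for a single task scenario; the passage identifiers and the choice of plain precision/recall are illustrative assumptions, not the framework's actual measures.

      # Hypothetical sketch: score one system run against passage-level
      # ground truth annotations for a single task scenario.
      def passage_precision_recall(retrieved, relevant):
          """Precision and recall over passage identifiers."""
          retrieved, relevant = set(retrieved), set(relevant)
          hits = retrieved & relevant
          precision = len(hits) / len(retrieved) if retrieved else 0.0
          recall = len(hits) / len(relevant) if relevant else 0.0
          return precision, recall

      # Invented identifiers of the form "document:passage".
      retrieved = ["doc12:p3", "doc12:p7", "doc40:p1"]
      relevant = ["doc12:p3", "doc40:p1", "doc55:p2"]
      print(passage_precision_recall(retrieved, relevant))  # (0.67, 0.67), rounded
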
  3. Lin, Y.-l.; Trattner, C.; Brusilovsky, P.; He, D.: The impact of image descriptions on user tagging behavior : a study of the nature and functionality of crowdsourced tags (2015) 0.00
    
    Abstract
    Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers who perform a series of tasks online. However, little research has been devoted to exploring the impact of factors such as the content of a resource or the design of the crowdsourcing interface on user tagging behavior. Although images' titles and descriptions are frequently available in image digital libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper offers insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence users' tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon Mechanical Turk crowdworkers: with and without image descriptions. Several properties of the generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, and so on. In addition, a study was carried out to examine the impact of image descriptions on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach: tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter has a higher tag reuse rate. A user study also revealed that the two tag sets provided different support for search: tags produced "with description" shortened the path to the target results, whereas tags produced "without description" increased user success in the search task.
    Type
    a
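
    The tag properties compared in the abstract above are easy to make concrete. The following sketch computes two plausible stand-ins, diversity (share of distinct tags) and reuse rate (share of tags assigned more than once); the metric definitions and the toy tag sets are assumptions for illustration, not the paper's exact formulas.

      from collections import Counter

      def tag_stats(tags):
          """Diversity and reuse rate of a list of tag assignments."""
          counts = Counter(tags)
          diversity = len(counts) / len(tags)  # distinct tags per assignment
          reuse = sum(1 for c in counts.values() if c > 1) / len(counts)
          return diversity, reuse

      # Invented tag sets for one image under the two conditions.
      with_description = ["castle", "gothic", "facade", "tower", "gargoyle", "tower"]
      without_description = ["building", "old", "building", "stone", "old", "building"]
      print(tag_stats(with_description))     # more diverse
      print(tag_stats(without_description))  # higher reuse
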
  4. Ahn, J.-w.; Brusilovsky, P.: Adaptive visualization for exploratory information retrieval (2013) 0.00
    
    Abstract
    As the volume and breadth of online information rapidly increase, ad hoc search systems become less and less able to answer the information needs of modern users. To support the growing complexity of search tasks, researchers in the field of information retrieval have developed and explored a range of approaches that extend the traditional ad hoc retrieval paradigm. Among these approaches, personalized search systems and exploratory search systems have attracted many followers. Personalized search explores the power of artificial intelligence techniques to provide search results tailored to different user interests, contexts, and tasks. In contrast, exploratory search capitalizes on the power of human intelligence by providing users with more powerful interfaces to support the search process. As these approaches are not contradictory, we believe that they can reinforce each other. We argue that the effectiveness of personalized search systems may be increased by allowing users to interact with the system and learn/investigate the problem in order to reach the final goal. We also suggest that an interactive visualization approach could offer good ground for combining the strengths of the personalized and exploratory search approaches. This paper proposes a specific way to integrate interactive visualization and personalized search and introduces an adaptive visualization-based search system, Adaptive VIBE, that implements it. We tested the effectiveness of Adaptive VIBE and investigated its strengths and weaknesses by conducting a full-scale user study. The results show that Adaptive VIBE can improve the precision and the productivity of the personalized search system while helping users to discover more diverse sets of information.
    Type
    a
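
    The VIBE layout that Adaptive VIBE builds on has a compact core: each document is placed at the similarity-weighted centroid of the points of interest (POIs), which in Adaptive VIBE come from both the query and the user model. The sketch below is a minimal illustration of that placement rule; the POI coordinates and similarity values are invented.

      import numpy as np

      def vibe_position(similarities, poi_positions):
          """Place a document at the similarity-weighted centroid of the POIs."""
          w = np.asarray(similarities, dtype=float)
          if w.sum() == 0.0:
              return poi_positions.mean(axis=0)  # no signal: fall back to the center
          return (w[:, None] * poi_positions).sum(axis=0) / w.sum()

      # Three POIs, e.g. two query terms and one user-model term (invented).
      pois = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
      print(vibe_position([0.8, 0.1, 0.1], pois))  # pulled toward the first POI
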
  5. Lee, D.H.; Brusilovsky, P.: The first impression of conference papers : does it matter in predicting future citations? (2019) 0.00
    
    Abstract
    This article explores the factors influencing the future citations of conference papers. We concentrated on the explanatory power of early attention to conference papers for citations collected from Google Scholar and Scopus. The early attention data includes users' online activities in a conference support system, CN3. Bookmarks from the bibliographic management system CiteULike were used as a collateral source of early attention. To examine the chronological contributions of 13 factors to citations, a sequential multiple regression analysis was conducted for three timepoints of the publication cycle: paper submission, the time of the conference, and months after the conference. Our results illustrate that online readers' early attention, in the form of CiteULike bookmarks, had the most influence on the future impact of the conference papers. The early attention records from CN3 also made noteworthy improvements in explaining both the Google Scholar and Scopus citations. We also found that the type of paper, the number of papers presented at a conference, and best article award records were significant factors influencing future citations. However, the magnitude of the effects of online readers' early attention from both sources appears to be larger than that of these three traditional factors.
    Type
    a
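
    The sequential regression design described above adds predictor blocks in the order they become available in the publication cycle and inspects how much explained variance each block contributes. A minimal sketch of that design, on synthetic data with invented column names, might look as follows.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 200
      df = pd.DataFrame({
          "paper_type": rng.integers(0, 2, n),       # known at submission time
          "papers_at_conf": rng.poisson(30, n),      # known at conference time
          "citeulike_bookmarks": rng.poisson(3, n),  # early attention, post-conference
      })
      # Synthetic outcome driven mostly by early attention.
      df["citations"] = 2 * df["citeulike_bookmarks"] + rng.normal(0, 2, n)

      predictors = []
      for block in (["paper_type"], ["papers_at_conf"], ["citeulike_bookmarks"]):
          predictors += block
          fit = sm.OLS(df["citations"], sm.add_constant(df[predictors])).fit()
          print(predictors, round(fit.rsquared, 3))  # R^2 jumps at the bookmark block
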