Search (5 results, page 1 of 1)

  • author_ss:"Brusilovsky, P."
  1. He, D.; Brusilovsky, P.; Ahn, J.; Grady, J.; Farzan, R.; Peng, Y.; Yang, Y.; Rogati, M.: An evaluation of adaptive filtering in the context of realistic task-based information exploration (2008) 0.01
    0.007432959 = product of:
      0.029731836 = sum of:
        0.029731836 = weight(_text_:information in 2048) [ClassicSimilarity], result of:
          0.029731836 = score(doc=2048,freq=24.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3359395 = fieldWeight in 2048, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2048)
      0.25 = coord(1/4)
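The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: the final score is coord × queryWeight × fieldWeight, where tf = √freq, queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm. As a sanity check, here is a minimal Python sketch that reproduces the arithmetic of this tree using only the constants the tree itself reports:

```python
import math

# Constants copied verbatim from the explain output for result 1
# (term "information", doc 2048).
freq = 24.0
idf = 1.7554779          # idf(docFreq=20772, maxDocs=44218)
query_norm = 0.050415643
field_norm = 0.0390625
coord = 0.25             # coord(1/4): 1 of 4 query clauses matched

tf = math.sqrt(freq)                  # tf(freq=24.0) = 4.8989797
query_weight = idf * query_norm       # 0.08850355
field_weight = tf * idf * field_norm  # fieldWeight = 0.3359395
score = query_weight * field_weight   # 0.029731836
final = score * coord                 # ~0.007432959

print(final)
```

Small differences in the last digits come from the truncated constants printed in the explain output, not from the formula itself.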
    
    Abstract
    Exploratory search is increasingly becoming an important research topic. Our interests focus on task-based information exploration, a specific type of exploratory search performed by a range of professional users, such as intelligence analysts. In this paper, we present an evaluation framework designed specifically for assessing and comparing the performance of innovative information access tools created to support the work of intelligence analysts in the context of task-based information exploration. The motivation for the development of this framework came from our need to test systems for task-based information exploration, which cannot be satisfied by existing frameworks. The new framework is closely tied to the kind of tasks that intelligence analysts perform: complex, dynamic, multi-faceted, and multi-staged. It views the user rather than the information system as the center of the evaluation, and examines how well users are served by the systems in their tasks. The evaluation framework examines the support the systems provide at users' major information access stages, such as information foraging and sense-making. The framework is accompanied by a reference test collection that has 18 task scenarios and corresponding passage-level ground truth annotations. To demonstrate the usage of the framework and the reference test collection, we present a specific evaluation study of CAFÉ, an adaptive filtering engine designed to support task-based information exploration. This study is a successful use case of the framework, and it revealed various aspects of the information systems and their roles in supporting task-based information exploration.
    Source
    Information processing and management. 44(2008) no.2, S.511-533
  2. Brusilovsky, P.; Eklund, J.; Schwarz, E.: Web-based education for all : a tool for developing adaptive courseware (1998) 0.01
    0.0068306234 = product of:
      0.027322493 = sum of:
        0.027322493 = product of:
          0.054644987 = sum of:
            0.054644987 = weight(_text_:22 in 3620) [ClassicSimilarity], result of:
              0.054644987 = score(doc=3620,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.30952093 = fieldWeight in 3620, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3620)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:08:06
  3. Ahn, J.-w.; Brusilovsky, P.: Adaptive visualization for exploratory information retrieval (2013) 0.01
    0.0056770155 = product of:
      0.022708062 = sum of:
        0.022708062 = weight(_text_:information in 2717) [ClassicSimilarity], result of:
          0.022708062 = score(doc=2717,freq=14.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.256578 = fieldWeight in 2717, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2717)
      0.25 = coord(1/4)
    
    Abstract
    As the volume and breadth of online information rapidly increase, ad hoc search systems become less and less efficient at answering the information needs of modern users. To support the growing complexity of search tasks, researchers in the field of information retrieval developed and explored a range of approaches that extend the traditional ad hoc retrieval paradigm. Among these approaches, personalized search systems and exploratory search systems attracted many followers. Personalized search explored the power of artificial intelligence techniques to provide tailored search results according to different user interests, contexts, and tasks. In contrast, exploratory search capitalized on the power of human intelligence by providing users with more powerful interfaces to support the search process. As these approaches are not contradictory, we believe that they can reinforce each other. We argue that the effectiveness of personalized search systems may be increased by allowing users to interact with the system and learn/investigate the problem in order to reach the final goal. We also suggest that an interactive visualization approach could offer a good ground to combine the strong sides of the personalized and exploratory search approaches. This paper proposes a specific way to integrate interactive visualization and personalized search and introduces an adaptive visualization-based search system, Adaptive VIBE, that implements it. We tested the effectiveness of Adaptive VIBE and investigated its strengths and weaknesses by conducting a full-scale user study. The results show that Adaptive VIBE can improve the precision and the productivity of the personalized search system while helping users to discover more diverse sets of information.
    Footnote
    Paper in a special section on human-computer information retrieval.
    Source
    Information processing and management. 49(2013) no.5, S.1139-1164
  4. Lin, Y.-l.; Trattner, C.; Brusilovsky, P.; He, D.: The impact of image descriptions on user tagging behavior : a study of the nature and functionality of crowdsourced tags (2015) 0.00
    0.002427594 = product of:
      0.009710376 = sum of:
        0.009710376 = weight(_text_:information in 2159) [ClassicSimilarity], result of:
          0.009710376 = score(doc=2159,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.10971737 = fieldWeight in 2159, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2159)
      0.25 = coord(1/4)
    
    Abstract
    Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers to perform a series of tasks online. However, little research has been devoted to exploring the impact of factors such as the content of a resource or crowdsourcing interface design on user tagging behavior. Although images' titles and descriptions are frequently available in digital image libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper focuses on offering insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence users in their tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon Mechanical Turk crowdworkers: with and without image descriptions. Several properties of the generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, and so on. In addition, a study was carried out to examine the impact of image descriptions on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach. Tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter has a higher tag reuse rate. A user study also revealed that different tag sets provided different support for search. Tags produced "with description" shortened the path to the target results, whereas tags produced "without description" increased user success in the search task.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1785-1798
  5. Lee, D.H.; Brusilovsky, P.: The first impression of conference papers : does it matter in predicting future citations? (2019) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 4677) [ClassicSimilarity], result of:
          0.008582841 = score(doc=4677,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 4677, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4677)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.1, S.83-95