Search (7 results, page 1 of 1)

  • author_ss:"Shah, C."
  1. Shah, C.: Social information seeking : leveraging the wisdom of the crowd (2017) 0.09
    0.0891981 = product of:
      0.1783962 = sum of:
        0.15225576 = weight(_text_:social in 4260) [ClassicSimilarity], result of:
          0.15225576 = score(doc=4260,freq=28.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.8242298 = fieldWeight in 4260, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4260)
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 4260) [ClassicSimilarity], result of:
              0.052280862 = score(doc=4260,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 4260, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4260)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
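    The explain tree above is standard Lucene ClassicSimilarity output: each term's contribution is queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = sqrt(termFreq) x idf x fieldNorm, and the coord() factors scale each sum by the fraction of query clauses that matched. As a sanity check, a minimal Python sketch that reproduces the 0.0891981 score for this record from the figures shown above:

      import math

      # Figures copied from the explain output for doc 4260 (result 1).
      QUERY_NORM = 0.046325076
      FIELD_NORM = 0.0390625

      def term_weight(freq, idf):
          """ClassicSimilarity per-term weight = queryWeight * fieldWeight."""
          query_weight = idf * QUERY_NORM                     # idf(t) * queryNorm(q)
          field_weight = math.sqrt(freq) * idf * FIELD_NORM   # tf(t,d) * idf(t) * fieldNorm(d)
          return query_weight * field_weight

      w_social  = term_weight(freq=28.0, idf=3.9875789)       # ~0.15225576
      w_aspects = term_weight(freq=2.0,  idf=4.5198684)       # ~0.052280862

      inner = w_social + 0.5 * w_aspects   # coord(1/2) on the "aspects" sub-query
      score = 0.5 * inner                  # coord(2/4) at the top level
      print(round(score, 7))               # 0.0891981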
    
    Abstract
    This volume summarizes the author's work on social information seeking (SIS) and at the same time serves as an introduction to the topic. Sometimes also referred to as social search or social information retrieval, this is a relatively new area of study concerned with seeking and acquiring information from social spaces on the Internet. It involves studying the situations, motivations, and methods involved in seeking and sharing information in participatory online social sites, such as Yahoo! Answers, WikiAnswers, and Twitter, as well as building systems to support such activities. The first part of the book introduces various foundational concepts, including information seeking, social media, and social networking, and thus provides the necessary basis for discussing how those aspects can intertwine in different ways to create methods, tools, and opportunities for supporting and leveraging SIS. Part II then discusses the social dimension and primarily examines online question-answering activity. Part III emphasizes the collaborative aspect of information seeking and examines what happens when the social and collaborative dimensions are considered together. Lastly, Part IV provides a synthesis by consolidating methods, systems, and evaluation techniques related to social and collaborative information seeking. The book closes with a list of challenges and opportunities for both theoretical and practical SIS work. It is intended mainly for researchers and graduate students looking for an introduction to this new field, as well as for developers and system designers interested in building interactive information retrieval systems or social/community-driven interfaces.
    RSWK
    Social Media / Datenerhebung / Information Retrieval / Kooperation
    Subject
    Social Media / Datenerhebung / Information Retrieval / Kooperation
  2. Shah, C.; Kitzie, V.: Social Q&A and virtual reference : comparing apples and oranges with the help of experts and users (2012) 0.05
    0.05141192 = product of:
      0.10282384 = sum of:
        0.057547275 = weight(_text_:social in 457) [ClassicSimilarity], result of:
          0.057547275 = score(doc=457,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.3115296 = fieldWeight in 457, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=457)
        0.04527656 = product of:
          0.09055312 = sum of:
            0.09055312 = weight(_text_:aspects in 457) [ClassicSimilarity], result of:
              0.09055312 = score(doc=457,freq=6.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.43247548 = fieldWeight in 457, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=457)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Online question-answering (Q&A) services are becoming increasingly popular among information seekers. We divide them into two categories, social Q&A (SQA) and virtual reference (VR), and examine how experts (librarians) and end users (students) evaluate information within both categories. To accomplish this, we first performed an extensive literature review and compiled a list of the aspects found to contribute to a "good" answer. These aspects were divided among three high-level concepts: relevance, quality, and satisfaction. We then interviewed both experts and users, asking them first to reflect on their online Q&A experiences and then to comment on our list of aspects. These interviews uncovered two main disparities: one between users' expectations of these services and how information was actually delivered by them, and the other between users' and experts' perceptions of the three aforementioned characteristics of relevance, quality, and satisfaction. Using qualitative analyses of both the interviews and the relevant literature, we suggest ways to create better hybrid solutions for online Q&A and to bridge the gap between experts' and users' understandings of relevance, quality, and satisfaction, as well as the perceived importance of each in contributing to a good answer.
  3. Wang, Y.; Shah, C.: Investigating failures in information seeking episodes (2017) 0.03
    0.026329588 = product of:
      0.10531835 = sum of:
        0.10531835 = sum of:
          0.073936306 = weight(_text_:aspects in 2922) [ClassicSimilarity], result of:
            0.073936306 = score(doc=2922,freq=4.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.35311472 = fieldWeight in 2922, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2922)
          0.031382043 = weight(_text_:22 in 2922) [ClassicSimilarity], result of:
            0.031382043 = score(doc=2922,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.19345059 = fieldWeight in 2922, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2922)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: People face barriers and failures in various kinds of information seeking experiences. These are often attributed to either the information seeker or the system/service they use. The purpose of this paper is to investigate how and why individuals fail to fulfill their information needs across contexts and situations. It addresses the limitations of existing studies in examining the context of the task and the information seeker's strategy, and seeks a holistic understanding of information seeking barriers and failures.
    Design/methodology/approach: The primary method used for this investigation is a qualitative survey, in which 63 participants provided 208 real-life examples of failures in information seeking. After the survey data were analyzed, ten semi-structured interviews with another group of participants were conducted to further examine the survey findings. Data were analyzed using various theoretical frameworks of tasks, strategies, and barriers.
    Findings: A careful examination of the aspects of tasks, barriers, and strategies identified from the examples revealed that a wide range of external and internal factors caused people's failures. These factors were also caused or affected by multiple aspects of information seekers' tasks and strategies. People's information needs were often too contextual and specific to be fulfilled by the information retrieved. Other barriers, such as time constraints and institutional restrictions, also intensified the problem.
    Originality/value: This paper highlights the importance of taking a holistic approach to the information seeking episodes in which individuals fail to fulfill their needs, analyzing their tasks, information needs, strategies, and obstacles. The modified theoretical frameworks and coding methods used could also be instrumental for future research.
    Date
    20. 1.2015 18:30:22
  4. Radford, M.L.; Connaway, L.S.; Mikitish, S.; Alpert, M.; Shah, C.; Cooke, N.A.: Shared values, new vision : collaboration and communities of practice in virtual reference and SQA (2017) 0.01
    0.014386819 = product of:
      0.057547275 = sum of:
        0.057547275 = weight(_text_:social in 3352) [ClassicSimilarity], result of:
          0.057547275 = score(doc=3352,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.3115296 = fieldWeight in 3352, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3352)
      0.25 = coord(1/4)
    
    Abstract
    This investigation of new approaches to improving collaboration, user/librarian experiences, and sustainability for virtual reference services (VRS) reports findings from a grant project titled "Cyber Synergy: Seeking Sustainability between Virtual Reference and Social Q&A Sites" (Radford, Connaway, & Shah, 2011-2014). In-depth telephone interviews with 50 VRS librarians included questions on collaboration, referral practices, and attitudes toward Social Question and Answer (SQA) services using the Critical Incident Technique (Flanagan, 1954). The Community of Practice (CoP) (Wenger, 1998; Davies, 2005) framework was found to be a useful conceptualization for understanding VRS professionals' approaches to their work. Findings indicate that participants usually refer questions from outside of their area of expertise to other librarians, but occasionally refer them to nonlibrarian experts. These referrals are made possible because participants believe that other VRS librarians are qualified and willing collaborators. Barriers to collaboration include not knowing appropriate librarians/experts for referral, inability to verify credentials, and perceived unwillingness to collaborate. Facilitators to collaboration include knowledge of appropriate collaborators who are qualified and willingness to refer. Answers from SQA services were perceived as less objective and authoritative, but participants were open to collaborating with nonlibrarian experts with confirmation of professional expertise or extensive knowledge.
  5. Hendahewa, C.; Shah, C.: Implicit search feature based approach to assist users in exploratory search tasks (2015) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 2678) [ClassicSimilarity], result of:
              0.052280862 = score(doc=2678,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 2678, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2678)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Analyzing and modeling users' online search behaviors when conducting exploratory search tasks could be instrumental in discovering search behavior patterns that can then be leveraged to assist users in reaching their search task goals. We propose a framework for evaluating exploratory search based on implicit features and user search action sequences extracted from transactional log data to model different aspects of exploratory search, namely uncertainty, creativity, exploration, and knowledge discovery. We show the effectiveness of the proposed framework by demonstrating how it can be used to understand and evaluate user search performance and thereby make meaningful recommendations to improve users' overall search performance. In the experimental analysis, we used data collected from a user study in which 18 users conducted an exploratory search task over two sessions with two different topics. With this analysis we show that we can effectively model user behavior with implicit features to predict future performance levels with above 70% accuracy in most cases. Further, using simulations we demonstrate that our search-process-based recommendations improve the search performance of low-performing users over time, and we validate these findings using both qualitative and quantitative approaches.
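    The abstract above describes implicit features extracted from transactional logs only in general terms. Purely as an illustration of the kind of features involved (not the authors' actual feature set or log schema), a hypothetical sketch that summarizes one session from a toy log:

      from dataclasses import dataclass

      # Hypothetical log entry; field names are illustrative only.
      @dataclass
      class LogEvent:
          action: str       # e.g. "query", "click"
          timestamp: float  # seconds since session start
          target: str = ""

      def session_features(events):
          """Toy implicit features summarizing one search session."""
          queries = [e for e in events if e.action == "query"]
          clicks = [e for e in events if e.action == "click"]
          return {
              "num_queries": len(queries),
              "num_clicks": len(clicks),
              "unique_pages": len({e.target for e in clicks}),
              "session_length_s": events[-1].timestamp - events[0].timestamp if events else 0.0,
          }

      log = [LogEvent("query", 0.0, "solar power"),
             LogEvent("click", 12.5, "doc_17"),
             LogEvent("query", 60.0, "solar power cost"),
             LogEvent("click", 75.0, "doc_42")]
      print(session_features(log))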
  6. Le, L.T.; Shah, C.: Retrieving people : identifying potential answerers in Community Question-Answering (2018) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 4467) [ClassicSimilarity], result of:
              0.052280862 = score(doc=4467,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 4467, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4467)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Community Question-Answering (CQA) sites have become popular venues where people can ask questions, seek information, or share knowledge with a user community. Although responses on CQA sites are obviously slower than information retrieved by a search engine, one of the most frustrating aspects of CQA occurs when an asker's posted question does not receive a reasonable answer or remains unanswered. CQA sites could improve users' experience by identifying potential answerers and routing appropriate questions to them. In this paper, we predict potential answerers based on question content and user profiles. Our approach builds user profiles based on past activity. When a new question is posted, the proposed method computes scores between the question and all user profiles to find the potential answerers. We conduct extensive experimental evaluations on two popular CQA sites - Yahoo! Answers and Stack Overflow - to show the effectiveness of our algorithm. The results show that our technique is able to identify a small group of 1,000 users from which at least one user will answer the question with a probability higher than 50% on both CQA sites. Further analysis indicates that topic interest and activity level can improve the correctness of our approach.
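    The profile-matching step described above is only sketched at a high level in the abstract. A hypothetical illustration of the general idea (scoring a new question against aggregated user profiles and keeping the top 1,000 candidates), assuming simple term-frequency profiles and cosine similarity rather than the authors' actual scoring features:

      import math
      from collections import Counter

      def cosine(a, b):
          """Cosine similarity between two sparse term-frequency vectors (Counters)."""
          dot = sum(a[t] * b[t] for t in a if t in b)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      def build_profile(past_texts):
          """User profile = aggregated terms from the user's past questions and answers."""
          profile = Counter()
          for text in past_texts:
              profile.update(text.lower().split())
          return profile

      def rank_answerers(question, profiles, k=1000):
          """Score every user profile against the new question; keep the top k users."""
          q_vec = Counter(question.lower().split())
          scored = ((user, cosine(q_vec, prof)) for user, prof in profiles.items())
          return sorted(scored, key=lambda item: item[1], reverse=True)[:k]

      profiles = {"alice": build_profile(["how to tune solr boosting", "lucene scoring explained"]),
                  "bob": build_profile(["best pasta recipes", "how long to boil eggs"])}
      print(rank_answerers("why is my lucene score so low", profiles, k=1))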
  7. González-Ibáñez, R.; Esparza-Villamán, A.; Vargas-Godoy, J.C.; Shah, C.: A comparison of unimodal and multimodal models for implicit detection of relevance in interactive IR (2019) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 5417) [ClassicSimilarity], result of:
              0.052280862 = score(doc=5417,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 5417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5417)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Implicit detection of relevance has been approached by many researchers during the last decade. From the use of individual measures to the use of multiple features from different sources (multimodality), studies have shown the feasibility of automatically detecting whether a document is relevant. Despite promising results, it is not yet clear to what extent multimodality constitutes an effective approach compared to unimodality. In this article, we hypothesize that it is possible to build unimodal models capable of outperforming multimodal models in the detection of perceived relevance. To test this hypothesis, we conducted three experiments to compare unimodal and multimodal classification models built using a combination of 24 features. Our classification experiments showed that a univariate unimodal model based on the left-click feature supports our hypothesis. On the other hand, our prediction experiment suggests that multimodality slightly improves early classification compared to the best unimodal models. Based on our results, we argue that the feasibility of practical applications of state-of-the-art multimodal approaches may be strongly constrained by technological, cultural, ethical, and legal aspects, in which case unimodality may offer a better alternative today for supporting relevance detection in interactive information retrieval systems.
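    The "univariate unimodal model based on the left-click feature" mentioned above is not spelled out in the abstract. Purely as a hedged illustration of what such a single-feature model can look like (a learned click-count threshold, not the authors' actual classifier or data), a minimal sketch:

      from collections import namedtuple

      # Hypothetical interaction records: left-click count per document view,
      # plus an explicit relevance judgment (1 = relevant, 0 = not relevant).
      Interaction = namedtuple("Interaction", ["left_clicks", "relevant"])

      def fit_threshold(train):
          """Univariate model: pick the click-count threshold with the best training accuracy."""
          best_t, best_acc = 0, 0.0
          for t in range(max(i.left_clicks for i in train) + 1):
              acc = sum((i.left_clicks >= t) == bool(i.relevant) for i in train) / len(train)
              if acc > best_acc:
                  best_t, best_acc = t, acc
          return best_t

      def predict(threshold, left_clicks):
          return int(left_clicks >= threshold)

      train = [Interaction(3, 1), Interaction(0, 0), Interaction(5, 1), Interaction(1, 0)]
      t = fit_threshold(train)
      print(t, predict(t, left_clicks=4))   # learned threshold, prediction for a new observation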