Search (17 results, page 1 of 1)

  • author_ss:"Shah, C."
  1. Le, L.T.; Shah, C.: Retrieving people : identifying potential answerers in Community Question-Answering (2018) 0.01
    0.008890929 = product of:
      0.026672786 = sum of:
        0.026672786 = weight(_text_:on in 4467) [ClassicSimilarity], result of:
          0.026672786 = score(doc=4467,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 4467, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4467)
      0.33333334 = coord(1/3)
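    The ClassicSimilarity breakdown above is plain tf-idf and can be re-derived from the quantities shown. Only queryNorm cannot be reconstructed here, since it depends on the whole query; it is read off the explain output. A minimal check in Python (variable names are ours, not Lucene's):

```python
import math

# Re-derive the explain tree for hit 1 (doc 4467, term "on", freq = 8).
freq, doc_freq, max_docs = 8.0, 13325, 44218
field_norm = 0.0390625    # encoded length norm, read from the explain output
query_norm = 0.04990557   # depends on the whole query; read from the explain output

tf = math.sqrt(freq)                             # 2.828427
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 2.199415
query_weight = idf * query_norm                  # 0.109763056
field_weight = tf * idf * field_norm             # 0.24300331 (fieldWeight)
term_score = query_weight * field_weight         # 0.026672786 (weight(_text_:on))
final_score = term_score * (1.0 / 3.0)           # coord(1/3) -> 0.008890929
```

    The coord(1/3) factor suggests that one of three query clauses matched this document; the 0.01 shown next to the citation is the final score rounded to two decimals.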
    
    Abstract
    Community Question-Answering (CQA) sites have become popular venues where people can ask questions, seek information, or share knowledge with a user community. Although responses on CQA sites are obviously slower than information retrieved by a search engine, one of the most frustrating aspects of CQAs occurs when an asker's posted question does not receive a reasonable answer or remains unanswered. CQA sites could improve users' experience by identifying potential answerers and routing appropriate questions to them. In this paper, we predict the potential answerers based on question content and user profiles. Our approach builds user profiles based on past activity. When a new question is posted, the proposed method computes scores between the question and all user profiles to find the potential answerers. We conduct extensive experimental evaluations on two popular CQA sites - Yahoo! Answers and Stack Overflow - to show the effectiveness of our algorithm. The results show that our technique is able to predict a small group of 1000 users from which at least one user will answer the question with a probability higher than 50% on both CQA sites. Further analysis indicates that topic interest and activity level can improve the correctness of our approach.
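    The routing idea described in the abstract (profile each user from past activity, then score a new question against every profile) can be sketched as a bag-of-words cosine match. This is an illustrative toy, not Le and Shah's actual model; all names and data below are invented:

```python
import math
from collections import Counter

def profile(texts):
    """Aggregate a user's past answered questions into one term-count profile."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_answerers(question, profiles, k=3):
    """Score a new question against every profile; return the top-k candidates."""
    q = Counter(question.lower().split())
    return sorted(profiles, key=lambda u: cosine(q, profiles[u]), reverse=True)[:k]

profiles = {
    "alice": profile(["how to tune lucene scoring", "solr query boosting"]),
    "bob":   profile(["python pandas dataframe merge", "numpy broadcasting"]),
}
print(rank_answerers("how does lucene compute a query score", profiles, k=1))
```

    A production version would at least add idf weighting and the topic-interest and activity-level signals the authors report as helpful.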
  2. Wang, Y.; Shah, C.: Authentic versus synthetic : an investigation of the influences of study settings and task configurations on search behaviors (2022) 0.01
    
    Abstract
    In information seeking and retrieval research, researchers often collect data about users' behaviors to predict task characteristics and personalize information for users. The reliability of user behavior may be directly influenced by data collection methods. This article reports on a mixed-methods study examining the impact of study setting (laboratory setting vs. remote setting) and task authenticity (authentic task vs. simulated task) on users' online browsing and searching behaviors. Thirty-six undergraduate participants finished one lab session and one remote session in which they completed one authentic and one simulated task. Using log data collected from 144 task sessions, this study demonstrates that the synthetic lab study setting and simulated tasks had significant influences mostly on behaviors related to content pages (e.g., page dwell time, number of pages visited per task). Meanwhile, first-query behaviors were less affected by study settings or task authenticity than whole-session behaviors, indicating the reliability of using first-query behaviors in task prediction. Qualitative interviews reveal why users were influenced. This study addresses methodological limitations in existing research and provides new insights and implications for researchers who collect online user search behavioral data.
  3. González-Ibáñez, R.; Esparza-Villamán, A.; Vargas-Godoy, J.C.; Shah, C.: A comparison of unimodal and multimodal models for implicit detection of relevance in interactive IR (2019) 0.01
    
    Abstract
    Implicit detection of relevance has been approached by many during the last decade. From the use of individual measures to the use of multiple features from different sources (multimodality), studies have shown the feasibility of automatically detecting whether a document is relevant. Despite promising results, it is not yet clear to what extent multimodality constitutes an effective approach compared to unimodality. In this article, we hypothesize that it is possible to build unimodal models capable of outperforming multimodal models in the detection of perceived relevance. To test this hypothesis, we conducted three experiments to compare unimodal and multimodal classification models built using a combination of 24 features. Our classification experiments showed that a univariate unimodal model based on the left-click feature supports our hypothesis. On the other hand, our prediction experiment suggests that multimodality slightly improves early classification compared to the best unimodal models. Based on our results, we argue that the feasibility of practical applications of state-of-the-art multimodal approaches may be strongly constrained by technological, cultural, ethical, and legal aspects, in which case unimodality may offer a better alternative today for supporting relevance detection in interactive information retrieval systems.
  4. Shah, C.; Anderson, T.; Hagen, L.; Zhang, Y.: An iSchool approach to data science : human-centered, socially responsible, and context-driven (2021) 0.01
    
    Abstract
    The Information Schools, also referred to as iSchools, have a unique approach to data science with three distinct components: human-centeredness, social responsibility, and rootedness in context. In this position paper, we highlight and expand on these components and show how they are integrated in various research and educational activities related to data science that are being carried out at iSchools. We argue that the iSchool way of doing data science is not only highly relevant to the current times, but also crucial in solving problems of tomorrow. Specifically, we accentuate the issues of developing insights and solutions that are not only data-driven, but also incorporate human values, including transparency, privacy, ethics, fairness, and equity. This approach to data science has meaningful implications for how we educate students and train the next generation of scholars and policymakers. Here, we provide some of those design decisions, rooted in evidence-based research, along with our perspective on how data science is currently situated and how it should be advanced in iSchools.
  5. Shah, C.; Kitzie, V.: Social Q&A and virtual reference : comparing apples and oranges with the help of experts and users (2012) 0.01
    
    Abstract
    Online question-answering (Q&A) services are becoming increasingly popular among information seekers. We divide them into two categories, social Q&A (SQA) and virtual reference (VR), and examine how experts (librarians) and end users (students) evaluate information within both categories. To accomplish this, we first performed an extensive literature review and compiled a list of the aspects found to contribute to a "good" answer. These aspects were divided among three high-level concepts: relevance, quality, and satisfaction. We then interviewed both experts and users, asking them first to reflect on their online Q&A experiences and then comment on our list of aspects. These interviews uncovered two main disparities. One disparity was found between users' expectations of these services and how information was actually delivered, and the other between the perceptions of users and experts with regard to the aforementioned three characteristics of relevance, quality, and satisfaction. Using qualitative analyses of both the interviews and relevant literature, we suggest ways to create better hybrid solutions for online Q&A and to bridge the gap between experts' and users' understandings of relevance, quality, and satisfaction, as well as the perceived importance of each in contributing to a good answer.
  6. Shah, C.; Hendahewa, C.; González-Ibáñez, R.: Rain or shine? : forecasting search process performance in exploratory search tasks (2016) 0.01
    
    Abstract
    Most information retrieval (IR) systems consider relevance, usefulness, and quality of information objects (documents, queries) for evaluation, prediction, and recommendation, often ignoring the underlying search process of information seeking. This may leave out opportunities for making recommendations that analyze the search process and/or recommend alternative search processes instead of objects. To overcome this limitation, we investigated whether by analyzing a searcher's current processes we could forecast their likelihood of achieving a certain level of success with respect to search performance in the future. We propose a machine-learning-based method to dynamically evaluate and predict search performance several time-steps ahead at each given time point of the search process during an exploratory search task. Our prediction method uses a collection of features extracted from expression of information need and coverage of information. For testing, we used log data collected from 4 user studies that included 216 users (96 individuals and 60 pairs). Our results show 80-90% accuracy in prediction depending on the number of time-steps ahead. In effect, the work reported here provides a framework for evaluating search processes during exploratory search tasks and predicting search performance. Importantly, the proposed approach is based on user processes and is independent of any IR system.
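    The shape of such a forecaster (per-time-step process features in, a success prediction out) can be sketched with a deliberately simple nearest-centroid classifier. The features, labels, and data below are synthetic placeholders, not the paper's actual feature set or model:

```python
import math

def centroid(rows):
    """Mean feature vector of a list of equal-length rows."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(sessions, labels):
    """One centroid per class, pooled over all time steps of all sessions."""
    pos = [f for s, y in zip(sessions, labels) if y for f in s]
    neg = [f for s, y in zip(sessions, labels) if not y for f in s]
    return centroid(pos), centroid(neg)

def forecast(model, features_so_far):
    """Predict eventual success from the latest observed time step."""
    pos_c, neg_c = model
    latest = features_so_far[-1]
    return dist(latest, pos_c) < dist(latest, neg_c)

# toy sessions: [queries issued, fraction of task vocabulary covered] per step
good = [[1, 0.2], [2, 0.5], [3, 0.8]]
poor = [[1, 0.1], [4, 0.1], [6, 0.15]]
model = train([good, poor], [True, False])
print(forecast(model, [[2, 0.6]]))  # high coverage early -> likely success
```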
  7. Shah, C.: Social information seeking : leveraging the wisdom of the crowd (2017) 0.01
    
    Abstract
    This volume summarizes the author's work on social information seeking (SIS), and at the same time serves as an introduction to the topic. Sometimes also referred to as social search or social information retrieval, this is a relatively new area of study concerned with the seeking and acquiring of information from social spaces on the Internet. It involves studying situations, motivations, and methods involved in seeking and sharing of information in participatory online social sites, such as Yahoo! Answers, WikiAnswers, and Twitter, as well as building systems for supporting such activities. The first part of the book introduces various foundational concepts, including information seeking, social media, and social networking. As such it provides the necessary basis to then discuss how those aspects could intertwine in different ways to create methods, tools, and opportunities for supporting and leveraging SIS. Next, Part II discusses the social dimension and primarily examines the online question-answering activity. Part III then emphasizes the collaborative aspect of information seeking, and examines what happens when social and collaborative dimensions are considered together. Lastly, Part IV provides a synthesis by consolidating methods, systems, and evaluation techniques related to social and collaborative information seeking. The book is completed by a list of challenges and opportunities for both theoretical and practical SIS work. The book is intended mainly for researchers and graduate students looking for an introduction to this new field, as well as developers and system designers interested in building interactive information retrieval systems or social/community-driven interfaces.
  8. Zhang, Y.; Wu, D.; Hagen, L.; Song, I.-Y.; Mostafa, J.; Oh, S.; Anderson, T.; Shah, C.; Bishop, B.W.; Hopfgartner, F.; Eckert, K.; Federer, L.; Saltz, J.S.: Data science curriculum in the iField (2023) 0.01
    
    Abstract
    Many disciplines, including the broad Field of Information (iField), offer Data Science (DS) programs. There have been significant efforts exploring an individual discipline's identity and unique contributions to the broader DS education landscape. To advance DS education in the iField, the iSchool Data Science Curriculum Committee (iDSCC) was formed and charged with building and recommending a DS education framework for iSchools. This paper reports on the research process and findings of a series of studies to address important questions: What is the iField identity in the multidisciplinary DS education landscape? What is the status of DS education in iField schools? What knowledge and skills should be included in the core curriculum for iField DS education? What are the jobs available for DS graduates from the iField? What are the differences between graduate-level and undergraduate-level DS education? Answers to these questions will not only distinguish an iField approach to DS education but also define critical components of DS curriculum. The results will inform individual DS programs in the iField to develop curriculum to support undergraduate and graduate DS education in their local context.
    Footnote
    Contribution to a special issue on "Data Science in the iField".
  9. Shah, C.: Collaborative information seeking : the art and science of making the whole greater than the sum of all (2012) 0.01
    
    Abstract
    Today's complex, information-intensive problems often require people to work together. Mostly these tasks go far beyond simply searching together; they include information lookup, sharing, synthesis, and decision-making. In addition, they all have an end-goal that is mutually beneficial to all parties involved. Such "collaborative information seeking" (CIS) projects typically last several sessions and the participants all share an intention to contribute and benefit. Not surprisingly, these processes are highly interactive. Shah focuses on two individually well-understood notions: collaboration and information seeking, with the goal of bringing them together to show how it is a natural tendency for humans to work together on complex tasks. The first part of his book introduces the general notions of collaboration and information seeking, as well as related concepts, terminology, and frameworks; and thus provides the reader with a comprehensive treatment of the concepts underlying CIS. The second part of the book details CIS as a standalone domain. A series of frameworks, theories, and models are introduced to provide a conceptual basis for CIS. The final part describes several systems and applications of CIS, along with their broader implications for other fields such as computer-supported cooperative work (CSCW) and human-computer interaction (HCI). With this first comprehensive overview of an exciting new research field, Shah delivers to graduate students and researchers in academia and industry an encompassing description of the technologies involved, state-of-the-art results, and open challenges as well as research opportunities.
  10. Wang, Y.; Shah, C.: Investigating failures in information seeking episodes (2017) 0.01
    
    Date
    20. 1.2015 18:30:22
  11. Shah, C.; Marchionini, G.: Awareness in collaborative information seeking (2010) 0.01
    
    Abstract
    Support for explicit collaboration in information-seeking activities is increasingly recognized as a desideratum for search systems. Several tools have emerged recently that help groups of people with the same information-seeking goals to work together. Many issues for these collaborative information-seeking (CIS) environments remain understudied. The authors identified awareness as one of these issues in CIS, and they presented a user study that involved 42 pairs of participants, who worked in collaboration over 2 sessions with 3 instances of the authors' CIS system for exploratory search. They showed that while having awareness of personal actions and history is important for exploratory search tasks spanning multiple sessions, support for group awareness is even more significant for effective collaboration. In addition, they showed that support for such group awareness can be provided without compromising usability or introducing additional load on the users.
  12. Shah, C.: Effects of awareness on coordination in collaborative information seeking (2013) 0.00
    
  13. Shah, C.: Collaborative information seeking (2014) 0.00
    
    Abstract
    The notions that information seeking is not always a solitary activity and that people working in collaboration for information intensive tasks should be studied and supported have become more prevalent in recent years. Several new research questions, methodologies, and systems have emerged around these notions that may prove to be useful beyond the field of collaborative information seeking (CIS), with relevance to the broader area of information seeking and behavior. This article provides an overview of such key research work from a variety of domains, including library and information science, computer-supported cooperative work, human-computer interaction, and information retrieval. It starts with explanations of collaboration and how CIS fits in different contexts, emphasizing the interactive, intentional, and mutually beneficial nature of CIS activities. Relations to similar and related fields such as collaborative information retrieval, collaborative information behavior, and collaborative filtering are also clarified. Next, the article presents a synthesis of various frameworks and models that exist in the field today, along with a new synthesis of 12 different dimensions of group activities. A discussion on issues and approaches relating to evaluating various parameters in CIS follows. Finally, a list of known issues and challenges is presented to provide an overview of research opportunities in this field.
  14. González-Ibáñez, R.; Shah, C.; White, R.W.: Capturing 'Collabportunities' : a method to evaluate collaboration opportunities in information search using pseudocollaboration (2015) 0.00
    
    Abstract
    In explicit collaborative search, two or more individuals coordinate their efforts toward a shared goal. Every day, Internet users with similar information needs have the potential to collaborate. However, online search is typically performed in solitude. Existing search systems do not promote explicit collaborations, and collaboration opportunities (collabportunities) are missed. In this article, we describe a method to evaluate the feasibility of transforming these collabportunities into recommendations for explicit collaboration. We developed a technique called pseudocollaboration to evaluate the benefits and costs of collabportunities through simulations. We evaluate the performance of our method using three data sets: (a) data from single users' search sessions, (b) data with collaborative search sessions between pairs of searchers, and (c) logs from a large-scale search engine with search sessions of thousands of searchers. Our results establish when and how collabportunities would significantly help or hinder the search process versus searches conducted individually. The method that we describe has implications for the design and implementation of recommendation systems for explicit collaboration. It also connects system-mediated and user-mediated collaborative search, whereby the system evaluates the likely benefits of collaborating for a search task and helps searchers make more informed decisions on initiating and executing such a collaboration.
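    The cost-benefit intuition behind evaluating a collabportunity (would merging two searchers' efforts yield enough extra relevant material to justify the coordination overhead?) can be caricatured in a few lines. All names, numbers, and the flat cost term below are invented for illustration, not the paper's actual measure:

```python
def collabportunity_gain(results_a, results_b, relevant, cost=0.1):
    """Net benefit of pairing two searchers: the relevant items each would
    gain from the other (as a fraction of all relevant items), minus a flat
    per-person coordination cost. Positive -> recommend collaborating."""
    rel_a = results_a & relevant
    rel_b = results_b & relevant
    gain_a = len(rel_b - rel_a) / max(len(relevant), 1)  # what A gains from B
    gain_b = len(rel_a - rel_b) / max(len(relevant), 1)  # what B gains from A
    return gain_a + gain_b - 2 * cost

relevant = {"d1", "d2", "d3", "d4"}
a = {"d1", "d2", "x"}   # searcher A's results
b = {"d3", "x", "y"}    # searcher B's results
print(collabportunity_gain(a, b, relevant))
```

    A simulation-based evaluation like the paper's would replay logged sessions through such a scoring function to decide when a recommendation to collaborate is worthwhile.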
  15. Hendahewa, C.; Shah, C.: Implicit search feature based approach to assist users in exploratory search tasks (2015) 0.00
    
    Abstract
    Analyzing and modeling users' online search behaviors when conducting exploratory search tasks could be instrumental in discovering search behavior patterns that can then be leveraged to assist users in reaching their search task goals. We propose a framework for evaluating exploratory search based on implicit features and user search action sequences extracted from the transactional log data to model different aspects of exploratory search namely uncertainty, creativity, exploration, and knowledge discovery. We show the effectiveness of the proposed framework by demonstrating how it can be used to understand and evaluate user search performance and thereby make meaningful recommendations to improve the overall search performance of users. We used data collected from a user study consisting of 18 users conducting an exploratory search task for two sessions with two different topics in the experimental analysis. With this analysis we show that we can effectively model their behavior using implicit features to predict the user's future performance level with above 70% accuracy in most cases. Further, using simulations we demonstrate that our search process based recommendations improve the search performance of low performing users over time and validate these findings using both qualitative and quantitative approaches.
  16. Choi, E.; Shah, C.: User motivations for asking questions in online Q&A services (2016) 0.00
    
    Abstract
    Online Q&A services are information sources where people identify their information need, formulate the need in natural language, and interact with one another to satisfy their needs. Even though in recent years online Q&A has considerably grown in popularity and impacted information-seeking behaviors, we still lack knowledge about what motivates people to ask a question in online Q&A environments. Yahoo! Answers and WikiAnswers were selected as the test beds in the study, and a sequential mixed method employing an Internet-based survey, a diary method, and interviews was used to investigate user motivations for asking a question in online Q&A services. Cognitive needs were found to be the most significant motivation driving people to ask a question. Yet, it was found that other motivational factors (e.g., tension-free needs) also played an important role in user motivations for asking a question, depending on askers' contexts and situations. Understanding motivations for asking a question could provide a general framework of conceptualizing different contexts and situations of information needs in online Q&A. The findings have several implications not only for developing better question-answering processes in online Q&A environments, but also for gaining insights into the broader understanding of online information-seeking behaviors.
  17. Radford, M.L.; Connaway, L.S.; Mikitish, S.; Alpert, M.; Shah, C.; Cooke, N.A.: Shared values, new vision : collaboration and communities of practice in virtual reference and SQA (2017) 0.00
    
    Abstract
    This investigation of new approaches to improving collaboration, user/librarian experiences, and sustainability for virtual reference services (VRS) reports findings from a grant project titled "Cyber Synergy: Seeking Sustainability between Virtual Reference and Social Q&A Sites" (Radford, Connaway, & Shah, 2011-2014). In-depth telephone interviews with 50 VRS librarians included questions on collaboration, referral practices, and attitudes toward Social Question and Answer (SQA) services using the Critical Incident Technique (Flanagan, 1954). The Community of Practice (CoP) (Wenger, 1998; Davies, 2005) framework was found to be a useful conceptualization for understanding VRS professionals' approaches to their work. Findings indicate that participants usually refer questions from outside of their area of expertise to other librarians, but occasionally refer them to nonlibrarian experts. These referrals are made possible because participants believe that other VRS librarians are qualified and willing collaborators. Barriers to collaboration include not knowing appropriate librarians/experts for referral, inability to verify credentials, and perceived unwillingness to collaborate. Facilitators to collaboration include knowledge of appropriate, qualified collaborators and a willingness to refer. Answers from SQA services were perceived as less objective and authoritative, but participants were open to collaborating with nonlibrarian experts with confirmation of professional expertise or extensive knowledge.