Search (10 results, page 1 of 1)

  • author_ss:"He, D."
  1. Li, L.; He, D.; Zhang, C.; Geng, L.; Zhang, K.: Characterizing peer-judged answer quality on academic Q&A sites : a cross-disciplinary case study on ResearchGate (2018) 0.02
    0.020074995 = product of:
      0.04014999 = sum of:
        0.04014999 = sum of:
          0.00894975 = weight(_text_:a in 4637) [ClassicSimilarity], result of:
            0.00894975 = score(doc=4637,freq=14.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1685276 = fieldWeight in 4637, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4637)
          0.03120024 = weight(_text_:22 in 4637) [ClassicSimilarity], result of:
            0.03120024 = score(doc=4637,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 4637, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4637)
      0.5 = coord(1/2)
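    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a minimal sketch, assuming the usual practical scoring formula and copying the factor values straight from the tree (the function and variable names below are illustrative, not part of the catalog), the two per-term scores and the final 0.020074995 can be reproduced as follows:

    ```python
    # Minimal sketch: recompute the explain tree for result 1 (doc 4637) from its
    # reported factors. Illustrative arithmetic only; the values are taken
    # verbatim from the explain output above.

    def term_score(freq, idf, query_norm, field_norm):
        """queryWeight * fieldWeight, with queryWeight = idf * queryNorm
        and fieldWeight = sqrt(freq) * idf * fieldNorm."""
        query_weight = idf * query_norm
        field_weight = (freq ** 0.5) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.046056706
    FIELD_NORM = 0.0390625

    score_a  = term_score(freq=14.0, idf=1.153047,  query_norm=QUERY_NORM, field_norm=FIELD_NORM)
    score_22 = term_score(freq=2.0,  idf=3.5018296, query_norm=QUERY_NORM, field_norm=FIELD_NORM)

    # coord(1/2): only one of the two top-level query clauses matched this document.
    total = (score_a + score_22) * 0.5
    print(score_a, score_22, total)  # ~0.00894975, ~0.03120024, ~0.020074995
    ```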
    
    Abstract
    Purpose: Academic social question and answer (Q&A) sites are now utilised by millions of scholars and researchers for seeking and sharing discipline-specific information. However, little is known about the factors that affect peers' votes on the quality of an answer, nor about how the discipline might influence these factors. The paper aims to discuss this issue. Design/methodology/approach: Using 1,021 answers collected from three disciplines (library and information services, history of art, and astrophysics) on ResearchGate, statistical analysis was performed to identify the characteristics of high-quality academic answers, and comparisons were made across the three disciplines. In particular, two major categories of characteristics, those of the answer provider and those of the answer content, were extracted and examined. Findings: The results reveal that high-quality answers on academic social Q&A sites tend to possess two characteristics: first, they are provided by scholars with higher academic reputations (e.g. more followers); and second, they provide objective information (e.g. longer answers with fewer subjective opinions). However, the impact of these factors varies across disciplines, e.g., objectivity is more favourable in physics than in other disciplines. Originality/value: The study is envisioned to help academic Q&A sites select and recommend high-quality answers across different disciplines, especially in a cold-start scenario where an answer has not yet received enough judgements from peers.
    Date
    20.01.2015 18:30:22
    Type
    a
  2. Chi, Y.; He, D.; Jeng, W.: Laypeople's source selection in online health information-seeking process (2020) 0.02
    0.01938208 = product of:
      0.03876416 = sum of:
        0.03876416 = sum of:
          0.0075639198 = weight(_text_:a in 34) [ClassicSimilarity], result of:
            0.0075639198 = score(doc=34,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.14243183 = fieldWeight in 34, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
          0.03120024 = weight(_text_:22 in 34) [ClassicSimilarity], result of:
            0.03120024 = score(doc=34,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 34, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
      0.5 = coord(1/2)
    
    Abstract
    For laypeople, searching online health information resources can be challenging due to topic complexity and the large number of online sources of differing quality. The goal of this article is to examine which online sources, among all those available, laypeople select to address their health-related information needs, and whether and how much the severity of a health condition influences their selection. Twenty-four participants were recruited individually, and each was asked (using a retrieval system called HIS) to search for information regarding both a severe and a mild health condition. The selected online health information sources were automatically captured by the HIS system and classified at both the website and webpage levels. Participants' selection behavior patterns were then plotted across the whole information-seeking process. Our results demonstrate that laypeople's source selection fluctuates during the health information-seeking process and also varies with the severity of the health condition. This study reveals laypeople's real usage of different types of online health information sources and has implications for the design of search engines as well as for the development of health literacy programs.
    Date
    12.11.2020 13:22:09
    Type
    a
  3. Xie, B.; He, D.; Mercer, T.; Wang, Y.; Wu, D.; Fleischmann, K.R.; Zhang, Y.; Yoder, L.H.; Stephens, K.K.; Mackert, M.; Lee, M.K.: Global health crises are also information crises : a call to action (2020) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 32) [ClassicSimilarity], result of:
              0.009471525 = score(doc=32,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 32, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=32)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this opinion paper, we argue that global health crises are also information crises. Using as an example the coronavirus disease 2019 (COVID-19) epidemic, we (a) examine challenges associated with what we term "global information crises"; (b) recommend changes needed for the field of information science to play a leading role in such crises; and (c) propose actionable items for short- and long-term research, education, and practice in information science.
    Type
    a
  4. Jeng, W.; DesAutels, S.; He, D.; Li, L.: Information exchange on an academic social networking site : a multidiscipline comparison on ResearchGate Q&A (2017) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 3431) [ClassicSimilarity], result of:
              0.00894975 = score(doc=3431,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 3431, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The increasing popularity of academic social networking sites (ASNSs) calls for studies of how scholars use ASNSs and evaluations of how effective these sites are. However, it is unclear whether current ASNSs have fulfilled their design goals, as scholars' actual online interactions on these platforms remain unexplored. To fill this gap, this article presents a study based on data collected from ResearchGate. Adopting a mixed-method design that combines qualitative content analysis and statistical analysis of 1,128 posts collected from ResearchGate Q&A, we examine how scholars exchange information and resources, and how their practices vary across three distinct disciplines: library and information services, history of art, and astrophysics. Our results show that the effect of a questioner's intention (i.e., seeking information or discussion) is greater than that of disciplinary factors in some circumstances. Across the three disciplines, responses to questions provide various resources, including experts' contact details, citations, links to Wikipedia, images, and so on. We further discuss several implications for the understanding of scholarly information exchange and for the design of better academic social networking interfaces, which should stimulate scholarly interactions by minimizing confusion, improving the clarity of questions, and promoting scholarly content management.
    Type
    a
  5. He, D.; Brusilovsky, P.; Ahn, J.; Grady, J.; Farzan, R.; Peng, Y.; Yang, Y.; Rogati, M.: An evaluation of adaptive filtering in the context of realistic task-based information exploration (2008) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2048) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2048,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2048, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2048)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Exploratory search is increasingly becoming an important research topic. Our interest is in task-based information exploration, a specific type of exploratory search performed by a range of professional users, such as intelligence analysts. In this paper, we present an evaluation framework designed specifically for assessing and comparing the performance of innovative information access tools created to support the work of intelligence analysts in the context of task-based information exploration. The motivation for developing this framework came from our need to test systems for task-based information exploration, a need that existing frameworks cannot satisfy. The new framework is closely tied to the kinds of tasks that intelligence analysts perform: complex, dynamic, multi-faceted, and multi-staged. It views the user rather than the information system as the center of the evaluation, and examines how well users are served by the systems in their tasks. The framework examines the systems' support at users' major information access stages, such as information foraging and sense-making. It is accompanied by a reference test collection comprising 18 task scenarios and corresponding passage-level ground-truth annotations. To demonstrate the use of the framework and the reference test collection, we present a specific evaluation study of CAFÉ, an adaptive filtering engine designed to support task-based information exploration. This study is a successful use case of the framework and revealed various aspects of the information systems and their roles in supporting task-based information exploration.
    Type
    a
  6. Jeng, W.; He, D.; Jiang, J.: User participation in an academic social networking service : a survey of open group users on Mendeley (2015) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 1815) [ClassicSimilarity], result of:
              0.008285859 = score(doc=1815,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 1815, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1815)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Although there are a number of social networking services that specifically target scholars, little has been published about the actual practices and the usage of these so-called academic social networking services (ASNSs). To fill this gap, we explore the populations of academics who engage in social activities using an ASNS; as an indicator of further engagement, we also determine their various motivations for joining a group in ASNSs. Using groups and their members in Mendeley as the platform for our case study, we obtained 146 participant responses from our online survey about users' common activities, usage habits, and motivations for joining groups. Our results show that (a) participants did not engage with social-based features as frequently and actively as they engaged with research-based features, and (b) users who joined more groups seemed to have a stronger motivation to increase their professional visibility and to contribute the research articles that they had read to the group reading list. Our results generate interesting insights into Mendeley's user populations, their activities, and their motivations relative to the social features of Mendeley. We also argue that further design of ASNSs is needed to take greater account of disciplinary differences in scholarly communication and to establish incentive mechanisms for encouraging user participation.
    Type
    a
  7. Oard, D.W.; He, D.; Wang, J.: User-assisted query translation for interactive cross-language information retrieval (2008) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 2030) [ClassicSimilarity], result of:
              0.008118451 = score(doc=2030,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 2030, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2030)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Interactive Cross-Language Information Retrieval (CLIR), a process in which searcher and system collaborate to find documents that satisfy an information need regardless of the language in which those documents are written, calls for designs in which synergies between searcher and system can be leveraged so that the strengths of one can cover the weaknesses of the other. This paper describes an approach that employs user-assisted query translation to help searchers better understand the system's operation. Supporting interaction and interface designs are introduced, and results from three user studies are presented. The results indicate that experienced searchers presented with this new system evolve new search strategies that make effective use of the new capabilities, that they achieve retrieval effectiveness comparable to results obtained using fully automatic techniques, and that their reported satisfaction with support for cross-language searching increased. The paper concludes with a description of a freely available interactive CLIR system that incorporates lessons learned from this research.
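    As an illustration of the interaction pattern described above, the following is a hedged sketch (not the authors' system; all function and variable names are hypothetical) in which the searcher prunes the candidate translations before the cross-language query is issued:

    ```python
    # Hedged sketch (not the paper's system): user-assisted query translation.
    # The system proposes candidate translations, the searcher deselects unwanted
    # candidates, and the final cross-language query is built only from the
    # approved translations.

    from typing import Dict, List

    def build_translated_query(
        candidates: Dict[str, List[str]],   # query term -> candidate translations
        approved: Dict[str, List[str]],     # query term -> translations the user kept
    ) -> str:
        clauses = []
        for term, translations in candidates.items():
            kept = [t for t in translations if t in approved.get(term, translations)]
            # Each source term becomes a disjunction of its approved translations.
            clauses.append("(" + " OR ".join(kept) + ")")
        return " AND ".join(clauses)

    # Example: the user dropped the ambiguous candidate "bank_riverside".
    candidates = {"bank": ["bank_financial", "bank_riverside"], "loan": ["credit", "loan"]}
    approved = {"bank": ["bank_financial"]}   # "loan" untouched -> keep all candidates
    print(build_translated_query(candidates, approved))
    # (bank_financial) AND (credit OR loan)
    ```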
    Type
    a
  8. Lin, Y.-l.; Trattner, C.; Brusilovsky, P.; He, D.: The impact of image descriptions on user tagging behavior : a study of the nature and functionality of crowdsourced tags (2015) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 2159) [ClassicSimilarity], result of:
              0.008118451 = score(doc=2159,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 2159, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2159)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers performing a series of tasks online. However, little research has been devoted to exploring the impact of factors such as the content of a resource or the crowdsourcing interface design on user tagging behavior. Although image titles and descriptions are frequently available in digital image libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper offers insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence users' tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon Mechanical Turk crowdworkers: with and without image descriptions. Several properties of the generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, and so on. In addition, a study was carried out to examine the impact of image descriptions on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach: tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter have a higher tag reuse rate. A user study also revealed that the different tag sets provided different support for search: tags produced with descriptions shortened the path to the target results, whereas tags produced without descriptions increased user success in the search task.
    Type
    a
  9. He, D.; Wu, D.: Enhancing query translation with relevance feedback in translingual information retrieval : a study of the medication process (2011) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 4244) [ClassicSimilarity], result of:
              0.006765375 = score(doc=4244,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 4244, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4244)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As an effective technique for improving retrieval effectiveness, relevance feedback (RF) has been widely studied in both monolingual and translingual information retrieval (TLIR). Studies of RF in TLIR have focused on query expansion (QE), in which queries are reformulated before and/or after they are translated. However, RF in TLIR can not only help select better query terms but also enhance query translation by adjusting translation probabilities and even resolving some out-of-vocabulary terms. In this paper, we propose a novel relevance feedback method called translation enhancement (TE), which uses translation relationships extracted from relevant documents to revise the translation probabilities of query terms and to identify additional translation alternatives, so that the translated queries are better tuned to the current search. We studied TE using pseudo-relevance feedback (PRF) and interactive relevance feedback (IRF). Our results show that TE can significantly improve TLIR with both types of relevance feedback, and that the improvement is comparable to that of query expansion. More importantly, the effects of translation enhancement and query expansion are complementary: their integration produces further improvement and makes TLIR more robust for a variety of queries.
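    To make the idea concrete, here is a simplified sketch (not the paper's implementation; the interpolation scheme, all names, and the constant alpha are assumptions made for illustration) of revising a query term's translation probabilities from pseudo-relevant feedback documents:

    ```python
    # Hedged sketch (not the paper's method): revise the translation probabilities
    # of one query term using pseudo-relevant feedback documents. A translation
    # candidate that occurs often in the feedback set gets its probability boosted.

    from collections import Counter
    from typing import Dict, List

    def revise_translations(
        initial: Dict[str, float],          # candidate translation -> dictionary probability
        feedback_docs: List[List[str]],     # tokenized top-ranked (pseudo-relevant) documents
        alpha: float = 0.5,                 # interpolation weight, an assumed constant
    ) -> Dict[str, float]:
        # Count how often each candidate appears in the feedback documents.
        counts = Counter()
        for doc in feedback_docs:
            for token in doc:
                if token in initial:
                    counts[token] += 1

        total = sum(counts.values()) or 1
        evidence = {t: counts[t] / total for t in initial}

        # Interpolate dictionary probabilities with feedback evidence, then renormalize.
        revised = {t: (1 - alpha) * p + alpha * evidence[t] for t, p in initial.items()}
        norm = sum(revised.values()) or 1
        return {t: p / norm for t, p in revised.items()}

    # Example: two candidate translations for one query term.
    print(revise_translations(
        {"medicine": 0.6, "drug": 0.4},
        feedback_docs=[["drug", "dosage", "drug"], ["medicine", "drug"]],
    ))
    ```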
    Type
    a
  10. Xiao, F.; Chi, Y.; He, D.: Promoting data use through understanding user behaviors : a model for human open government data interaction (2023) 0.00
    0.0014647468 = product of:
      0.0029294936 = sum of:
        0.0029294936 = product of:
          0.005858987 = sum of:
            0.005858987 = weight(_text_:a in 1190) [ClassicSimilarity], result of:
              0.005858987 = score(doc=1190,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.11032722 = fieldWeight in 1190, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1190)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contribution to: JASIST special issue on 'Who tweets scientific publications? A large-scale study of tweeting audiences in all areas of research'. Cf.: https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.24831.
    Type
    a