Search (43 results, page 1 of 3)

  • author_ss:"Zhang, Y."
  1. Zhang, Y.: Developing a holistic model for digital library evaluation (2010) 0.03
    0.025645267 = product of:
      0.0384679 = sum of:
        0.019957317 = weight(_text_:to in 2360) [ClassicSimilarity], result of:
          0.019957317 = score(doc=2360,freq=8.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.24104178 = fieldWeight in 2360, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2360)
        0.018510582 = product of:
          0.037021164 = sum of:
            0.037021164 = weight(_text_:22 in 2360) [ClassicSimilarity], result of:
              0.037021164 = score(doc=2360,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.23214069 = fieldWeight in 2360, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2360)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
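The explain tree above can be reproduced by hand. The sketch below recomputes the score of result 1 from Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, clause score = queryWeight * fieldWeight, with the coord factors applied at the end); the only values taken from the page are queryNorm, fieldNorm, and the docFreq/maxDocs counts.

```python
import math

# Recompute the ClassicSimilarity explain tree for result 1 (doc 2360).
# Formulas are Lucene's classic TF-IDF; numeric inputs come from the page.

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.045541126   # queryNorm reported above
field_norm = 0.046875      # fieldNorm(doc=2360)

# Clause 1: _text_:to, freq=8
idf_to = idf(19512, 44218)                               # ~1.818051
query_weight_to = idf_to * query_norm                    # ~0.08279609
field_weight_to = math.sqrt(8.0) * idf_to * field_norm   # ~0.24104178
score_to = query_weight_to * field_weight_to             # ~0.019957317

# Clause 2: _text_:22, freq=2, scaled by coord(1/2)
idf_22 = idf(3622, 44218)                                # ~3.5018296
score_22 = (idf_22 * query_norm) * (math.sqrt(2.0) * idf_22 * field_norm) * 0.5

# Document score: sum of the clause scores, scaled by coord(2/3)
score = (score_to + score_22) * (2.0 / 3.0)
print(f"{score:.9f}")   # ~0.025645267, the score reported for result 1
```

The same arithmetic, with different freq, docFreq, and fieldNorm inputs, reproduces every other tree on this page.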
    
    Abstract
    This article reports the author's recent research in developing a holistic model for various levels of digital library (DL) evaluation in which perceived important criteria from heterogeneous stakeholder groups are organized and presented. To develop such a model, the author applied a three-stage research approach: exploration, confirmation, and verification. During the exploration stage, a literature review was conducted followed by an interview, along with a card sorting technique, to collect important criteria perceived by DL experts. Then the criteria identified were used for developing an online survey during the confirmation stage. Survey respondents (431 in total) from 22 countries rated the importance of the criteria. A holistic DL evaluation model was constructed using statistical techniques. Eventually, the verification stage was devised to test the reliability of the model in the context of searching and evaluating an operational DL. The proposed model fills two lacunae in the DL domain: (a) the lack of a comprehensive and flexible framework to guide and benchmark evaluations, and (b) the uncertainty about what divergence exists among heterogeneous DL stakeholders, including general users.
  2. Zhang, Y.: ¬The impact of Internet-based electronic resources on formal scholarly communication in the area of library and information science : a citation analysis (1998) 0.03
    0.025630686 = product of:
      0.038446028 = sum of:
        0.016631098 = weight(_text_:to in 2808) [ClassicSimilarity], result of:
          0.016631098 = score(doc=2808,freq=8.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.20086816 = fieldWeight in 2808, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2808)
        0.02181493 = product of:
          0.04362986 = sum of:
            0.04362986 = weight(_text_:22 in 2808) [ClassicSimilarity], result of:
              0.04362986 = score(doc=2808,freq=4.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.27358043 = fieldWeight in 2808, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2808)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Internet-based electronic resources are growing dramatically, but there have been no empirical studies evaluating the impact of e-sources, as a whole, on formal scholarly communication. Reports the results of an investigation into how much e-sources have been used in formal scholarly communication, using a case study in the area of Library and Information Science (LIS) during the period 1994 to 1996. 4 citation-based indicators were used in the impact measurement. Concludes that, compared with the impact of print sources, the impact of e-sources on formal scholarly communication in LIS is small, as measured by e-sources cited, and does not increase significantly by year, even though there is observable growth of this impact across the years. It is found that periodical format is related to the rate of citing e-sources: electronic periodical articles are more likely to cite e-sources than are print periodical articles. However, once authors cite electronic resources, there is no significant difference in the number of references per article by periodical format or by year. Suggests that, at this stage, citing e-sources may depend on authors rather than on the periodical format in which authors choose to publish.
    Date
    30. 1.1999 17:22:22
  3. Zhang, Y.; Liu, J.; Song, S.: ¬The design and evaluation of a nudge-based interface to facilitate consumers' evaluation of online health information credibility (2023) 0.02
    0.023862889 = product of:
      0.035794333 = sum of:
        0.02036885 = weight(_text_:to in 993) [ClassicSimilarity], result of:
          0.02036885 = score(doc=993,freq=12.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.24601223 = fieldWeight in 993, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=993)
        0.015425485 = product of:
          0.03085097 = sum of:
            0.03085097 = weight(_text_:22 in 993) [ClassicSimilarity], result of:
              0.03085097 = score(doc=993,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.19345059 = fieldWeight in 993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=993)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Evaluating the quality of online health information (OHI) is a major challenge facing consumers. We designed PageGraph, an interface that displays quality indicators and associated values for a webpage, based on credibility evaluation models, nudge theory, and existing empirical research concerning professionals' and consumers' evaluation of OHI quality. A qualitative evaluation of the interface with 16 participants revealed that PageGraph rendered the information and presentation nudges as intended. It provided the participants with easier access to quality indicators, encouraged fresh angles to assess information credibility, provided an evaluation framework, and encouraged validation of initial judgments. We then conducted a quantitative evaluation of the interface involving 60 participants using a between-subject experimental design. The control group used a regular web browser and evaluated the credibility of 12 preselected webpages, whereas the experimental group evaluated the same webpages with the assistance of PageGraph. PageGraph did not significantly influence participants' evaluation results, which may be attributed to the insufficient salience and structure of the nudges implemented and to the webpage stimuli's lack of sensitivity to the intervention. Future directions for applying nudges to support OHI evaluation are discussed.
    Date
    22. 6.2023 18:18:34
  4. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.02
    0.023862753 = product of:
      0.035794128 = sum of:
        0.017283546 = weight(_text_:to in 2742) [ClassicSimilarity], result of:
          0.017283546 = score(doc=2742,freq=6.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.20874833 = fieldWeight in 2742, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.018510582 = product of:
          0.037021164 = sum of:
            0.037021164 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.037021164 = score(doc=2742,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction with search engine results using the number of links visited, the number of queries a user submits, and the rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  5. Zhang, Y.; Wu, M.; Zhang, G.; Lu, J.: Stepping beyond your comfort zone : diffusion-based network analytics for knowledge trajectory recommendation (2023) 0.02
    0.019885626 = product of:
      0.029828439 = sum of:
        0.014402954 = weight(_text_:to in 994) [ClassicSimilarity], result of:
          0.014402954 = score(doc=994,freq=6.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.17395693 = fieldWeight in 994, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=994)
        0.015425485 = product of:
          0.03085097 = sum of:
            0.03085097 = weight(_text_:22 in 994) [ClassicSimilarity], result of:
              0.03085097 = score(doc=994,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.19345059 = fieldWeight in 994, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=994)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Predicting a researcher's knowledge trajectories beyond their current foci can leverage potential inter-/cross-/multi-disciplinary interactions to achieve exploratory innovation. In this study, we present a method of diffusion-based network analytics for knowledge trajectory recommendation. The method begins by constructing a heterogeneous bibliometric network consisting of a co-topic layer and a co-authorship layer. A novel link prediction approach with a diffusion strategy is then used to capture the interactions between social elements (e.g., collaboration) and knowledge elements (e.g., technological similarity) in the process of exploratory innovation. This diffusion strategy differentiates the interactions occurring among homogeneous and heterogeneous nodes in the heterogeneous bibliometric network and weights the strengths of these interactions. Two sets of experiments, one with a local dataset and the other with a global dataset, demonstrate that the proposed method is superior to 10 selected baselines in link prediction, recommender systems, and upstream graph representation learning. A case study recommending knowledge trajectories of information scientists with topical hierarchy and explainable mediators reveals the proposed method's reliability and potential practical uses in broad scenarios.
    Date
    22. 6.2023 18:07:12
  6. Zhang, Y.: Searching for specific health-related information in MedlinePlus : behavioral patterns and user experience (2014) 0.01
    0.009601969 = product of:
      0.028805908 = sum of:
        0.028805908 = weight(_text_:to in 1180) [ClassicSimilarity], result of:
          0.028805908 = score(doc=1180,freq=24.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.34791386 = fieldWeight in 1180, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1180)
      0.33333334 = coord(1/3)
    
    Abstract
    Searches for specific factual health information constitute a significant part of consumer health information requests, but little is known about how users search for such information. This study attempts to fill this gap by observing users' behavior while using MedlinePlus to search for specific health information. Nineteen students participated in the study, and each performed 12 specific tasks. During the search process, they submitted short queries or complete questions, and they examined fewer than one result per search. Participants rarely reformulated queries; when they did, they tended to make a query more specific or more general, or iterate in different ways. Participants also browsed, primarily relying on the alphabetical list and the anatomical classification, to navigate to specific health topics. Participants overall had a positive experience with MedlinePlus, and the experience was significantly correlated with task difficulty and participants' spatial abilities. The results suggest that, to better support specific item search in the health domain, systems could provide a more "natural" interface to encourage users to ask questions; effective conceptual hierarchies could be implemented to help users reformulate queries; and the search results page should be reconceptualized as a place for accessing answers rather than documents. Moreover, multiple schemas should be provided to help users navigate to a health topic. The results also suggest that users' experience with information systems in general and health-related systems in particular should be evaluated in relation to contextual factors, such as task features and individual differences.
  7. Trace, C.B.; Zhang, Y.; Yi, S.; Williams-Brown, M.Y.: Information practices around genetic testing for ovarian cancer patients (2023) 0.01
    0.009601969 = product of:
      0.028805908 = sum of:
        0.028805908 = weight(_text_:to in 1071) [ClassicSimilarity], result of:
          0.028805908 = score(doc=1071,freq=24.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.34791386 = fieldWeight in 1071, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1071)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge of ovarian cancer patients' information practices around cancer genetic testing (GT) is needed to inform interventions that promote patient access to GT-related information. We interviewed 21 ovarian cancer patients and survivors who had GT as part of the treatment process and analyzed the transcripts using the qualitative content analysis method. We found that patients' information practices, manifested in their information-seeking mode, information sources utilized, information assessment, and information use, showed three distinct styles: passive, semi-active, and active. Patients with the passive style primarily received information from clinical sources, encountered information, or delegated information-seeking to family members; they were not inclined to assess information themselves and seldom used it to learn or influence others. Women with semi-active and active styles adopted more active information-seeking modes to approach information, utilized information sources beyond clinical settings, attempted to assess the information found, and actively used it to learn, educate others, or advocate GT to family and friends. Guided by the social ecological model, we found multiple levels of influences, including personal, interpersonal, organizational, community, and societal, acting as motivators or barriers to patients' information practice. Based on these findings, we discussed strategies to promote patient access to GT-related information.
  8. Ku, Y.; Chiu, C.; Zhang, Y.; Chen, H.; Su, H.: Text mining self-disclosing health information for public health service (2014) 0.01
    0.0081475405 = product of:
      0.02444262 = sum of:
        0.02444262 = weight(_text_:to in 1262) [ClassicSimilarity], result of:
          0.02444262 = score(doc=1262,freq=12.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.29521468 = fieldWeight in 1262, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=1262)
      0.33333334 = coord(1/3)
    
    Abstract
    Understanding specific patterns or knowledge of self-disclosing health information could support public health surveillance and healthcare. This study aimed to develop an analytical framework to identify self-disclosing health information with unusual messages on web forums by leveraging advanced text-mining techniques. To demonstrate the performance of the proposed analytical framework, we conducted an experimental study on 2 major human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) forums in Taiwan. The experimental results show that the classification accuracy increased significantly (up to 83.83%) when using features selected by the information gain technique. The results also show the importance of adopting domain-specific features in analyzing unusual messages on web forums. This study has practical implications for the prevention and support of HIV/AIDS healthcare. For example, public health agencies can re-allocate resources and deliver services to people who need help via social media sites. In addition, individuals can also join a social media site to get better suggestions and support from each other.
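The information-gain feature selection credited with the accuracy boost above can be sketched compactly. The counts and the `info_gain` helper below are invented for illustration (the study's HIV/AIDS forum data are not reproduced here); the function scores a binary term feature against a binary "unusual vs. normal" message class.

```python
import math

# Information gain of a binary term feature for a binary message class.
# Toy contingency counts, not the study's data.

def entropy(counts):
    # Shannon entropy (bits) of a class-count distribution
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def info_gain(n11, n10, n01, n00):
    # n11: term present & "unusual"; n10: present & normal;
    # n01: absent & "unusual";  n00: absent & normal
    total = n11 + n10 + n01 + n00
    h_class = entropy([n11 + n01, n10 + n00])
    present, absent = n11 + n10, n01 + n00
    h_cond = (present / total) * entropy([n11, n10]) \
           + (absent / total) * entropy([n01, n00])
    return h_class - h_cond

# A term concentrated in "unusual" messages scores high...
print(round(info_gain(40, 5, 10, 45), 3))   # ~0.397
# ...while a term independent of the class scores zero.
print(round(info_gain(25, 25, 25, 25), 3))
```

Ranking all candidate terms by this score and keeping the top fraction is the selection step; the classifier is then trained on the surviving features.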
  9. Dang, Y.; Zhang, Y.; Chen, H.; Hu, P.J.-H.; Brown, S.A.; Larson, C.: Arizona Literature Mapper : an integrated approach to monitor and analyze global bioterrorism research literature (2009) 0.01
    0.007839975 = product of:
      0.023519924 = sum of:
        0.023519924 = weight(_text_:to in 2943) [ClassicSimilarity], result of:
          0.023519924 = score(doc=2943,freq=16.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28407046 = fieldWeight in 2943, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2943)
      0.33333334 = coord(1/3)
    
    Abstract
    Biomedical research is critical to biodefense, which is drawing increasing attention from governments globally as well as from various research communities. The U.S. government has been closely monitoring and regulating biomedical research activities, particularly those studying or involving bioterrorism agents or diseases. Effective surveillance requires comprehensive understanding of extant biomedical research and timely detection of new developments or emerging trends. The rapid knowledge expansion, technical breakthroughs, and spiraling collaboration networks demand greater support for literature search and sharing, which cannot be effectively supported by conventional literature search mechanisms or systems. In this study, we propose an integrated approach that integrates advanced techniques for content analysis, network analysis, and information visualization. We design and implement Arizona Literature Mapper, a Web-based portal that allows users to gain timely, comprehensive understanding of bioterrorism research, including leading scientists, research groups, institutions as well as insights about current mainstream interests or emerging trends. We conduct two user studies to evaluate Arizona Literature Mapper and include a well-known system for benchmarking purposes. According to our results, Arizona Literature Mapper is significantly more effective for supporting users' search of bioterrorism publications than PubMed. Users consider Arizona Literature Mapper more useful and easier to use than PubMed. Users are also more satisfied with Arizona Literature Mapper and show stronger intentions to use it in the future. Assessments of Arizona Literature Mapper's analysis functions are also positive, as our subjects consider them useful, easy to use, and satisfactory. Our results have important implications that are also discussed in the article.
  10. Zhang, Y.; Sun, Y.; Xie, B.: Quality of health information for consumers on the web : a systematic review of indicators, criteria, tools, and evaluation results (2015) 0.01
    0.007839975 = product of:
      0.023519924 = sum of:
        0.023519924 = weight(_text_:to in 2218) [ClassicSimilarity], result of:
          0.023519924 = score(doc=2218,freq=16.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28407046 = fieldWeight in 2218, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2218)
      0.33333334 = coord(1/3)
    
    Abstract
    The quality of online health information for consumers has been a critical issue that concerns all stakeholders in healthcare. To gain an understanding of how quality is evaluated, this systematic review examined 165 articles in which researchers evaluated the quality of consumer-oriented health information on the web against predefined criteria. It was found that studies typically evaluated quality in relation to the substance and formality of content, as well as to the design of technological platforms. Attention to design, particularly interactivity, privacy, and social and cultural appropriateness is on the rise, which suggests the permeation of a user-centered perspective into the evaluation of health information systems, and a growing recognition of the need to study these systems from a social-technical perspective. Researchers used many preexisting instruments to facilitate evaluation of the formality of content; however, only a few were used in multiple studies, and their validity was questioned. The quality of content (i.e., accuracy and completeness) was always evaluated using proprietary instruments constructed based on medical guidelines or textbooks. The evaluation results revealed that the quality of health information varied across medical domains and across websites, and that the overall quality remained problematic. Future research is needed to examine the quality of user-generated content and to explore opportunities offered by emerging new media that can facilitate the consumer evaluation of health information.
  11. Zhang, Y.; Zhang, C.: Enhancing keyphrase extraction from microblogs using human reading time (2021) 0.01
    0.007839975 = product of:
      0.023519924 = sum of:
        0.023519924 = weight(_text_:to in 237) [ClassicSimilarity], result of:
          0.023519924 = score(doc=237,freq=16.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28407046 = fieldWeight in 237, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
      0.33333334 = coord(1/3)
    
    Abstract
    The premise of manual keyphrase annotation is to read the corresponding content of an annotated object. Intuitively, when we read, more important words will occupy a longer reading time. Hence, by leveraging human reading time, we can find the salient words in the corresponding content. However, previous studies on keyphrase extraction ignore human reading features. In this article, we aim to leverage human reading time to extract keyphrases from microblog posts. There are two main tasks in this study. One is to determine how to measure the time spent by a human on reading a word. We use eye fixation durations (FDs) extracted from an open source eye-tracking corpus. Moreover, we propose strategies to make eye FDs more effective for keyphrase extraction. The other task is to determine how to integrate human reading time into keyphrase extraction models. We propose two novel neural network models. The first is a model in which the human reading time is used as the ground truth of the attention mechanism. In the second model, we use human reading time as the external feature. Quantitative and qualitative experiments show that our proposed models yield better performance than the baseline models on two microblog datasets.
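As a toy illustration of the "external feature" idea, where reading time weights candidate words, one can rank a post's words by term frequency times average fixation duration. The fixation table and post below are invented for illustration; the article's actual models are neural networks trained on eye-tracking data, not this heuristic.

```python
from collections import Counter

# Hypothetical per-word average fixation durations in milliseconds
# (unlisted words fall back to a 100 ms default).
fixation_ms = {"vaccine": 310, "rollout": 280, "the": 90, "delayed": 250,
               "city": 180, "in": 70, "our": 95}

post = "the vaccine rollout in our city delayed the vaccine delivery"
tf = Counter(post.split())

# Salience = term frequency * average fixation duration
scores = {w: tf[w] * fixation_ms.get(w, 100) for w in tf}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:3])   # ['vaccine', 'rollout', 'delayed']
```

Note how the high-frequency stopword "the" is demoted by its short fixation time, which is the intuition the article builds on.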
  12. Zhang, Y.: Using the Internet for survey research : a case study (2000) 0.01
    0.0076815756 = product of:
      0.023044726 = sum of:
        0.023044726 = weight(_text_:to in 4294) [ClassicSimilarity], result of:
          0.023044726 = score(doc=4294,freq=6.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.2783311 = fieldWeight in 4294, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0625 = fieldNorm(doc=4294)
      0.33333334 = coord(1/3)
    
    Abstract
    The Internet provides opportunities to conduct surveys more efficiently and effectively than traditional means. This article reviews previous studies that use the Internet for survey research. It discusses the methodological issues and problems associated with this new approach. By presenting a case study, it seeks possible solutions to some of the problems and explores the potential the Internet can offer to survey researchers.
  13. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.01
    0.0074376534 = product of:
      0.02231296 = sum of:
        0.02231296 = weight(_text_:to in 5704) [ClassicSimilarity], result of:
          0.02231296 = score(doc=5704,freq=10.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.26949292 = fieldWeight in 5704, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=5704)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a neural network approach to document semantic indexing. Reports results of a study to apply a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that 3 parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by solving discrepancies between indexer vocabularies and end user vocabularies.
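A minimal Hopfield net illustrates the associative-memory mechanism the study relies on. The term-activation patterns below are invented, not the INSPEC setup; the net stores patterns by Hebbian learning and recovers the nearest stored pattern from a noisy cue, which is how discrepant indexer and user vocabularies can be reconciled.

```python
import numpy as np

def train(patterns):
    # Hebbian learning: sum of outer products, zero diagonal
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, max_iters=10):
    # Synchronous updates until a fixed point (or the iteration cap)
    for _ in range(max_iters):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two stored "concept activation" patterns over 8 index terms (+1 = active)
patterns = np.array([
    [1, 1, 1, -1, -1, -1, 1, -1],
    [-1, -1, 1, 1, 1, -1, -1, 1],
])
W = train(patterns)

# A noisy cue (one term flipped) converges back to the first stored pattern
cue = np.array([1, 1, 1, -1, -1, -1, -1, -1])
print(recall(W, cue))
```

The study's parameters (maximum activated nodes, allowable error, iteration cap) correspond to bounds on this update loop; `max_iters` above plays the role of the iteration cap.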
  14. Zhang, Y.: ¬The influence of mental models on undergraduate students' searching behavior on the Web (2008) 0.01
    0.0074376534 = product of:
      0.02231296 = sum of:
        0.02231296 = weight(_text_:to in 2097) [ClassicSimilarity], result of:
          0.02231296 = score(doc=2097,freq=10.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.26949292 = fieldWeight in 2097, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2097)
      0.33333334 = coord(1/3)
    
    Abstract
    This article explores the effects of undergraduate students' mental models of the Web on their online searching behavior. Forty-four undergraduate students, mainly freshmen and sophomores, participated in the study. Subjects' mental models of the Web were treated as equally good styles and operationalized as drawings of their perceptions of the Web. Four types of mental models of the Web were identified based on the drawings and the associated descriptions: technical view, functional view, process view, and connection view. In the study, subjects were required to finish two search tasks. Searching behavior was measured from four aspects: navigation and performance, subjects' feelings about the tasks and their own performance, query construction, and search patterns. The four mental model groups showed different navigation and querying behaviors, but the differences were not significant. Subjects' satisfaction with their own performance was found to be significantly correlated with the time to complete the task. The results also showed that subjects' familiarity with the task had a major effect on how they started the interaction, constructed queries, and patterned their searches.
  15. Zhang, Y.: Complex adaptive filtering user profile using graphical models (2008) 0.01
    Abstract
    This article explores how to develop complex data-driven user models that go beyond the bag-of-words model and topical relevance. We propose to learn from rich user-specific information and to satisfy complex user criteria under the graphical-modelling framework. We carried out a user study with a web-based personal news-filtering system and collected extensive user information, including explicit user feedback, implicit user feedback, and contextual information. Experimental results on the collected data set demonstrate that the graphical-modelling approach helps us better understand the complex domain. The results also show that the complex data-driven user-modelling approach can improve adaptive information-filtering performance. We also discuss some practical issues in learning complex user models, including how to handle data noise and the missing-data problem.
  16. Zhang, Y.: Dimensions and elements of people's mental models of an information-rich Web space (2010) 0.01
    Abstract
    Although considered proxies through which people interact with a system, mental models have produced limited practical implications for system design. This might be due to the lack of exploration of the elements of mental models, which results from the methodological challenge of measuring them. This study employed a new method, concept listing, to elicit people's mental models of an information-rich space, MedlinePlus, after they interacted with the system for 5 minutes. Thirty-eight undergraduate students participated in the study. The results showed that, in this short period of time, participants perceived MedlinePlus from many different aspects in relation to four components: the system as a whole, its content, information organization, and interface. Meanwhile, participants expressed evaluations of, or emotions about, the four components. In terms of procedural knowledge, an integral part of people's mental models, only one participant identified a strategy aligned with the capabilities of MedlinePlus to solve a hypothetical task; the rest planned to use general search and browse strategies. The composition of participants' mental models of MedlinePlus was consistent with that of their models of information-rich Web spaces in general.
  17. Zhang, Y.: Beyond quality and accessibility : source selection in consumer health information searching (2014) 0.01
    Abstract
    A systematic understanding of the factors and criteria that affect consumers' selection of sources for health information is necessary for the design of effective health information services and systems. However, current studies have focused overly on source attributes as indicators for 2 criteria, source quality and accessibility, and have overlooked the role of other factors and criteria in determining source selection. To fill this gap, guided by decision-making theories and the cognitive perspective on information search, we interviewed 30 participants about their reasons for using a wide range of sources for health information. Additionally, we asked each of them to report a critical incident in which sources were selected to fulfill a specific information need. Based on the analysis of the transcripts, 5 categories of factors were identified as influential in source selection: source-related factors, user-related factors, user-source relationships, characteristics of the problematic situation, and social influences. In addition, about a dozen criteria that mediate the influence of these factors on source-selection decisions were identified, including accessibility, quality, usability, interactivity, relevance, usefulness, familiarity, affection, anonymity, and appropriateness. These results significantly expand the current understanding of the costs and benefits involved in source-selection decisions, and strongly indicate that a personalized approach is needed for information services and systems to provide consumers with effective access to health information sources.
  18. Zhang, Y.; Broussard, R.; Ke, W.; Gong, X.: Evaluation of a scatter/gather interface for supporting distinct health information search tasks (2014) 0.01
    Abstract
    Web search engines are important gateways for users to access health information. This study explored whether a search interface based on the Bing API and enabled by Scatter/Gather, a well-known document-clustering technique, can improve health information searches. Forty participants without medical backgrounds were randomly assigned to two interfaces: a baseline interface resembling typical web search engines and a Scatter/Gather interface. Both groups performed two lookup and two exploratory health-related tasks. It was found that, when completing exploratory tasks, the baseline group was more likely to rephrase queries and less likely to access general-purpose sites than the Scatter/Gather group. Otherwise, the two groups did not differ in behavior or task performance, with participants in the Scatter/Gather group largely overlooking the features (key words, clusters, and the recluster function) designed to facilitate the exploration of semantic relationships between information objects, a potentially useful means for users in the rather unfamiliar domain of health. The results suggest a strong effect of users' mental models of search on their use of search interfaces, and a high cognitive cost associated with using the Scatter/Gather features. It follows that novel features of a search interface should not only be compatible with users' mental models but also provide sufficient affordance to inform users of how they can be used. Compared with the interface, tasks showed more significant impacts on search behavior. In future studies, more effort should be devoted to identifying salient features of health-related information needs.
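The Scatter/Gather interaction evaluated above can be sketched as a simple loop: scatter documents into clusters, gather the clusters the user selects, and recluster the pooled documents. The following minimal Python sketch uses plain k-means over invented 2-term document vectors; it illustrates the technique only and does not reproduce the study's Bing-API-based interface.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means over dense vectors (squared Euclidean distance)."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, centroids[c])))
            clusters[nearest].append(v)
        # keep the old centroid when a cluster comes up empty
        centroids = [[sum(col) / len(cl) for col in zip(*cl)] if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return clusters

def scatter_gather(docs, k, chosen):
    """One loop of the interaction: scatter into k clusters, gather the
    clusters the user selected, then scatter the pooled documents again."""
    clusters = kmeans(docs, k)
    gathered = [d for i in chosen for d in clusters[i]]
    return kmeans(gathered, min(k, len(gathered)))

# six "documents" as 2-term weight vectors, forming two obvious themes
docs = [[1.0, 0.0], [0.9, 0.1], [1.0, 0.2],
        [0.0, 1.0], [0.1, 0.9], [0.2, 1.0]]
first_pass = kmeans(docs, 2)
```

The recluster function the participants overlooked corresponds to the second `kmeans` call inside `scatter_gather`: narrowing to chosen clusters before reclustering is what exposes finer-grained semantic relationships.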
  19. Zhang, Y.; Zhang, G.; Zhu, D.; Lu, J.: Scientific evolutionary pathways : identifying and visualizing relationships for scientific topics (2017) 0.01
    Abstract
    Whereas traditional science maps emphasize citation statistics and static relationships, this paper presents a term-based method to identify and visualize the evolutionary pathways of scientific topics across a series of time slices. First, we create a data-preprocessing model for accurate term cleaning, consolidation, and clustering. Then we construct a simulated data-streaming function and introduce a learning process to train a relationship-identification function to adapt to changing environments in real time, identifying relationships of topic evolution, fusion, death, and novelty. The main result of the method is a map of scientific evolutionary pathways. The visual routines indicate the interactions among scientific subjects, and a version rendered over a series of time slices illustrates such evolutionary pathways in further detail. The detailed outline offers sufficient statistical information to delve into scientific topics and routines, and helps surface meaningful insights with the assistance of expert knowledge. This empirical study focuses on scientific proposals granted by the United States National Science Foundation and demonstrates the method's feasibility and reliability. Our method could be widely applied in science, technology, and innovation policy research, offering insight into the evolutionary pathways of scientific activities.
  20. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.01
    Abstract
    Millions of messages are produced on microblog platforms every day, creating a pressing need for the automatic identification of key points in these massive texts. To absorb salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. Most previous work treats messages as independent documents and may suffer from the data-sparsity problem exhibited by short and informal microblog posts. In contrast, we propose to enrich contexts by exploiting conversations initialized by target posts and formed by their replies, which are generally centered on topics relevant to the target posts and therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase-extraction framework with 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures indicative representations from the conversation contexts and feeds them into the keyphrase tagger, which extracts salient words from target posts. The 2 modules were trained jointly to optimize the conversation-context encoding and keyphrase-extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture word-level and message-level indicative representations. In both modules, we apply character-level representations, which enable the model to exploit morphological features and handle the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
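The core intuition above, that a short post's replies help decide which of its words are keyphrases, can be shown with a crude frequency-based stand-in for the neural tagger. The example post, replies, and stopword list below are invented for illustration; the stand-in names nothing from the paper's actual model.

```python
from collections import Counter

# Invented stopword list for the toy example.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "is", "my",
             "it", "on", "so", "for", "me", "too", "same", "after"}

def keyphrases(post, replies, top_n=3):
    """Rank candidate words from the target post by their frequency in the
    post plus its conversation context, so words the replies keep returning
    to outrank incidental ones."""
    context = Counter(w for text in [post, *replies]
                      for w in text.lower().split() if w not in STOPWORDS)
    # candidates come from the target post only, in order of appearance
    candidates = dict.fromkeys(w for w in post.lower().split()
                               if w not in STOPWORDS)
    ranked = sorted(candidates, key=lambda w: -context[w])
    return ranked[:top_n]

post = "new phone battery dies so fast"
replies = ["battery drains overnight for me too",
           "same battery issue after the update"]
print(keyphrases(post, replies))  # → ['battery', 'new', 'phone']
```

Treated as an isolated document, the post gives every content word equal weight; pooling the replies is what lifts "battery" to the top, which is the sparsity-reducing effect the conversation context encoder learns in a far richer representation space.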
