Search (23 results, page 1 of 2)

  • Active filter: author_ss:"Zhang, Y."
  1. Zhang, Y.: Complex adaptive filtering user profile using graphical models (2008) 0.05
    0.047419276 = product of:
      0.09483855 = sum of:
        0.06940313 = weight(_text_:data in 2445) [ClassicSimilarity], result of:
          0.06940313 = score(doc=2445,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46871632 = fieldWeight in 2445, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2445)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2445) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2445,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2445)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article explores how to develop complex data-driven user models that go beyond the bag-of-words model and topical relevance. We propose to learn from rich user-specific information and to satisfy complex user criteria under the graphical modelling framework. We carried out a user study with a web-based personal news filtering system and collected extensive user information, including explicit user feedback, implicit user feedback and some contextual information. Experimental results on the collected data set demonstrate that the graphical modelling approach helps us to better understand the complex domain. The results also show that the complex data-driven user modelling approach can improve adaptive information filtering performance. We also discuss some practical issues in learning complex user models, including how to handle data noise and the missing-data problem.
    Source
    Information processing and management. 44(2008) no.6, S.1886-1900
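  A note on the scores: the indented breakdowns beneath each title are Lucene ClassicSimilarity (TF-IDF) explain trees. As a rough check, the minimal Python sketch below reproduces the arithmetic for result 1 (doc 2445) from the numbers in its explain output; the function and variable names are illustrative, not Lucene API calls.
```python
# Minimal sketch: re-deriving the ClassicSimilarity score for doc 2445 from
# the explain tree above (query terms "data" and "processing").
import math

def term_score(freq, idf, query_norm, field_norm):
    """Per-term score = queryWeight * fieldWeight."""
    query_weight = idf * query_norm                    # idf(t) * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf(t,d) * idf(t) * fieldNorm(d)
    return query_weight * field_weight

QUERY_NORM, FIELD_NORM = 0.046827413, 0.046875

data = term_score(freq=10.0, idf=3.1620505, query_norm=QUERY_NORM, field_norm=FIELD_NORM)
processing = term_score(freq=2.0, idf=4.048147, query_norm=QUERY_NORM, field_norm=FIELD_NORM)

# "processing" is nested one clause deeper and scaled by coord(1/2);
# the overall sum is scaled by coord(2/4) = matching clauses / total clauses.
total = (data + processing * 0.5) * 0.5
print(data, processing, total)  # ~0.06940313, ~0.05087085, ~0.047419276
```
  The same decomposition (per-term score = queryWeight x fieldWeight, scaled by the coord factors) applies to every explain tree in this result list.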
  2. Zhang, Y.; Xu, W.: Fast exact maximum likelihood estimation for mixture of language model (2008) 0.03
    0.03466491 = product of:
      0.06932982 = sum of:
        0.043894395 = weight(_text_:data in 2082) [ClassicSimilarity], result of:
          0.043894395 = score(doc=2082,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 2082, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2082)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2082) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2082,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2082)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Language modeling is an effective and theoretically attractive probabilistic framework for text information retrieval. The basic idea of this approach is to estimate a language model for a given document (or document set) and then perform retrieval or classification based on this model. A common language modeling approach assumes the data D is generated from a mixture of several language models. The core problem is to find the maximum likelihood estimate of one mixture component, given the fixed mixture weights and the other mixture components. The EM algorithm is usually used to find the solution. In this paper, we prove that an exact maximum likelihood estimate of the unknown mixture component exists and can be calculated using the new algorithm we propose. We further improve the algorithm and provide an efficient algorithm of O(k) complexity to find the exact solution, where k is the number of words occurring at least once in data D. Furthermore, we prove that the probabilities of many words are exactly zero, so the MLE acts explicitly as a feature selection technique.
    Source
    Information processing and management. 44(2008) no.3, S.1076-1085
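  The abstract contrasts the usual EM solution with the paper's exact O(k) algorithm. For orientation only, here is a minimal sketch of that standard EM baseline for a two-component mixture with a fixed background model and fixed mixture weight; the toy data and names are invented, and this is not the authors' exact algorithm.
```python
# Minimal sketch of the standard EM baseline the abstract refers to:
# words in data D are assumed generated by lam * p_topic(w) + (1 - lam) * p_background(w);
# p_background and lam are fixed, and p_topic is the unknown mixture component.
from collections import Counter

def em_mixture_component(tokens, p_background, lam, iterations=50):
    counts = Counter(tokens)
    total = sum(counts.values())
    # start from the empirical distribution of D
    p_topic = {w: c / total for w, c in counts.items()}
    for _ in range(iterations):
        # E-step: posterior probability that an occurrence of w came from p_topic
        z = {w: lam * p_topic[w] / (lam * p_topic[w] + (1 - lam) * p_background.get(w, 1e-9))
             for w in counts}
        # M-step: re-estimate p_topic from the expected topic counts
        norm = sum(counts[w] * z[w] for w in counts)
        p_topic = {w: counts[w] * z[w] / norm for w in counts}
    return p_topic

# toy usage
tokens = "language model mixture model estimation data data model".split()
background = {w: 1 / len(set(tokens)) for w in set(tokens)}
print(em_mixture_component(tokens, background, lam=0.5))
```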
  3. Zhang, X.; Fang, Y.; He, W.; Zhang, Y.; Liu, X.: Epistemic motivation, task reflexivity, and knowledge contribution behavior on team wikis : a cross-level moderation model (2019) 0.03
    0.028236724 = product of:
      0.05647345 = sum of:
        0.031038022 = weight(_text_:data in 5245) [ClassicSimilarity], result of:
          0.031038022 = score(doc=5245,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 5245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5245)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 5245) [ClassicSimilarity], result of:
              0.05087085 = score(doc=5245,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 5245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5245)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A cross-level model based on the information processing perspective and trait activation theory was developed and tested in order to investigate the effects of individual-level epistemic motivation and team-level task reflexivity on three different individual contribution behaviors (i.e., adding, deleting, and revising) in the process of knowledge creation on team wikis. Using the Hierarchical Linear Modeling software package and the 2-wave data from 166 individuals in 51 wiki-based teams, we found cross-level interaction effects between individual epistemic motivation and team task reflexivity on different knowledge contribution behaviors on wikis. Epistemic motivation exerted a positive effect on adding, which was strengthened by team task reflexivity. The effect of epistemic motivation on deleting was positive only when task reflexivity was high. In addition, epistemic motivation was strongly positively related to revising, regardless of the level of task reflexivity involved.
  4. Zhang, Y.; Trace, C.B.: The quality of health and wellness self-tracking data : a consumer perspective (2022) 0.02
    0.019398764 = product of:
      0.077595055 = sum of:
        0.077595055 = weight(_text_:data in 459) [ClassicSimilarity], result of:
          0.077595055 = score(doc=459,freq=18.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.52404076 = fieldWeight in 459, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=459)
      0.25 = coord(1/4)
    
    Abstract
    Information quality (IQ) is key to users' satisfaction with information systems. Understanding what IQ means to users can effectively inform system improvement. Existing inquiries into self-tracking data quality primarily focus on accuracy. Interviewing 20 consumers who had self-tracked health indicators for at least 6 months, we identified eight dimensions that consumers apply to evaluate self-tracking data quality: value-added, accuracy, completeness, accessibility, ease of understanding, trustworthiness, aesthetics, and invasiveness. These dimensions fell into four categories (intrinsic, contextual, representational, and accessibility), suggesting that consumers judge self-tracking data quality not only based on the data's inherent quality but also considering tasks at hand, the clarity of data representation, and data accessibility. We also found that consumers' self-tracking data quality judgments are shaped primarily by their goals or motivations, subjective experience with tracked activities, mental models of how systems work, self-tracking tools' reputation, cost, and design, and domain knowledge and intuition, but less by more objective criteria such as scientific research results, validated devices, or consultation with experts. Future studies should develop and validate a scale for measuring consumers' perceptions of self-tracking data quality and commit efforts to develop technologies and training materials to enhance consumers' ability to evaluate data quality.
  5. Shah, C.; Anderson, T.; Hagen, L.; Zhang, Y.: An iSchool approach to data science : human-centered, socially responsible, and context-driven (2021) 0.02
    0.017108103 = product of:
      0.06843241 = sum of:
        0.06843241 = weight(_text_:data in 244) [ClassicSimilarity], result of:
          0.06843241 = score(doc=244,freq=14.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46216056 = fieldWeight in 244, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=244)
      0.25 = coord(1/4)
    
    Abstract
    The Information Schools, also referred to as iSchools, take a unique approach to data science with three distinct components: it is human-centered, socially responsible, and rooted in context. In this position paper, we highlight and expand on these components and show how they are integrated into various research and educational activities related to data science that are being carried out at iSchools. We argue that the iSchool way of doing data science is not only highly relevant to the current times, but also crucial in solving the problems of tomorrow. Specifically, we emphasize the importance of developing insights and solutions that are not only data-driven but also incorporate human values, including transparency, privacy, ethics, fairness, and equity. This approach to data science has meaningful implications for how we educate students and train the next generation of scholars and policymakers. Here, we provide some of those design decisions, rooted in evidence-based research, along with our perspective on how data science is currently situated and how it should be advanced in iSchools.
  6. Zhang, Y.; Wu, D.; Hagen, L.; Song, I.-Y.; Mostafa, J.; Oh, S.; Anderson, T.; Shah, C.; Bishop, B.W.; Hopfgartner, F.; Eckert, K.; Federer, L.; Saltz, J.S.: Data science curriculum in the iField (2023) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 964) [ClassicSimilarity], result of:
          0.05173004 = score(doc=964,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 964, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=964)
      0.25 = coord(1/4)
    
    Abstract
    Many disciplines, including the broad Field of Information (iField), offer Data Science (DS) programs. There have been significant efforts exploring an individual discipline's identity and unique contributions to the broader DS education landscape. To advance DS education in the iField, the iSchool Data Science Curriculum Committee (iDSCC) was formed and charged with building and recommending a DS education framework for iSchools. This paper reports on the research process and findings of a series of studies to address important questions: What is the iField identity in the multidisciplinary DS education landscape? What is the status of DS education in iField schools? What knowledge and skills should be included in the core curriculum for iField DS education? What are the jobs available for DS graduates from the iField? What are the differences between graduate-level and undergraduate-level DS education? Answers to these questions will not only distinguish an iField approach to DS education but also define critical components of DS curriculum. The results will inform individual DS programs in the iField to develop curriculum to support undergraduate and graduate DS education in their local context.
    Footnote
    Contribution to a special issue on "Data Science in the iField".
  7. Zhang, Y.; Ren, P.; Rijke, M. de: A taxonomy, data set, and benchmark for detecting and classifying malevolent dialogue responses (2021) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 356) [ClassicSimilarity], result of:
          0.043894395 = score(doc=356,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 356, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=356)
      0.25 = coord(1/4)
    
    Abstract
    Conversational interfaces are increasingly popular as a way of connecting people to information. With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses that are inappropriate in terms of content and dialogue acts. Previous studies on the topic of detecting and classifying inappropriate content are mostly focused on a specific category of malevolence or on single sentences instead of an entire dialogue. We make three contributions to advance research on the malevolent dialogue response detection and classification (MDRDC) task. First, we define the task and present a hierarchical malevolent dialogue taxonomy. Second, we create a labeled multiturn dialogue data set and formulate the MDRDC task as a hierarchical classification task. Last, we apply state-of-the-art text classification methods to the MDRDC task, and report on experiments aimed at assessing the performance of these approaches.
  8. Zhang, Y.: Undergraduate students' mental models of the Web as an information retrieval system (2008) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 2385) [ClassicSimilarity], result of:
          0.03657866 = score(doc=2385,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 2385, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2385)
      0.25 = coord(1/4)
    
    Abstract
    This study explored undergraduate students' mental models of the Web as an information retrieval system. Mental models play an important role in people's interaction with information systems. Better understanding of people's mental models could inspire better interface design and user instruction. Multiple data-collection methods, including questionnaire, semistructured interview, drawing, and participant observation, were used to elicit students' mental models of the Web from different perspectives, though only data from interviews and drawing descriptions are reported in this article. Content analysis of the transcripts showed that students had utilitarian rather than structural mental models of the Web. The majority of participants saw the Web as a huge information resource where everything can be found rather than an infrastructure consisting of hardware and computer applications. Students had different mental models of how information is organized on the Web, and the models varied in correctness and complexity. Students' mental models of search on the Web were illustrated from three points of view: avenues of getting information, understanding of search engines' working mechanisms, and search tactics. The research results suggest that there are mainly three sources contributing to the construction of mental models: personal observation, communication with others, and class instruction. In addition to structural and functional aspects, mental models have an emotional dimension.
  9. Zhang, Y.; Zhang, G.; Zhu, D.; Lu, J.: Scientific evolutionary pathways : identifying and visualizing relationships for scientific topics (2017) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 3758) [ClassicSimilarity], result of:
          0.03657866 = score(doc=3758,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 3758, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3758)
      0.25 = coord(1/4)
    
    Abstract
    Whereas traditional science maps emphasize citation statistics and static relationships, this paper presents a term-based method to identify and visualize the evolutionary pathways of scientific topics in a series of time slices. First, we create a data preprocessing model for accurate term cleaning, consolidating, and clustering. Then we construct a simulated data streaming function and introduce a learning process to train a relationship identification function to adapt to changing environments in real time, where relationships of topic evolution, fusion, death, and novelty are identified. The main result of the method is a map of scientific evolutionary pathways. The visual routines provide a way to indicate the interactions among scientific subjects and a version in a series of time slices helps further illustrate such evolutionary pathways in detail. The detailed outline offers sufficient statistical information to delve into scientific topics and routines and then helps address meaningful insights with the assistance of expert knowledge. This empirical study focuses on scientific proposals granted by the United States National Science Foundation, and demonstrates the feasibility and reliability. Our method could be widely applied to a range of science, technology, and innovation policy research, and offer insight into the evolutionary pathways of scientific activities.
  10. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 5816) [ClassicSimilarity], result of:
          0.03657866 = score(doc=5816,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 5816, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5816)
      0.25 = coord(1/4)
    
    Abstract
    Millions of messages are produced on microblog platforms every day, leading to the pressing need for automatic identification of key points from the massive texts. To absorb salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. Most previous efforts treat messages as independent documents and might suffer from the data sparsity problem exhibited in short and informal microblog posts. On the contrary, we propose to enrich contexts via exploiting conversations initialized by target posts and formed by their replies, which are generally centered around topics relevant to the target posts and therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase extraction framework, which has 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures indicative representation from the conversation contexts and feeds the representation into the keyphrase tagger, and the keyphrase tagger extracts salient words from target posts. The 2 modules were trained jointly to optimize the conversation context encoding and keyphrase extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture word-level and message-level indicative representations. In both modules, we apply character-level representations, which enables the model to explore morphological features and deal with the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
  11. Zhang, Y.; Li, Y.: ¬A user-centered functional metadata evaluation of moving image collections (2008) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 1884) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1884,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1884, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1884)
      0.25 = coord(1/4)
    
    Abstract
    In this article, the authors report a series of evaluations of two metadata schemes developed for Moving Image Collections (MIC), an integrated online catalog of moving images. Through two online surveys and one experiment spanning various stages of metadata implementation, the MIC evaluation team explored a user-centered approach in which the four generic user tasks suggested by IFLA FRBR (the International Federation of Library Associations and Institutions' Functional Requirements for Bibliographic Records) were embedded in data collection and analyses. Diverse groups of users rated usefulness of individual metadata fields for finding, identifying, selecting, and obtaining moving images. The results demonstrate a consistency across these evaluations with respect to (a) identification of a set of useful metadata fields highly rated by target users for each of the FRBR generic tasks, and (b) indication of a significant interaction between MIC metadata fields and the FRBR generic tasks. The findings provide timely feedback for the MIC implementation specifically, and valuable suggestions to other similar metadata application settings in general. They also suggest the feasibility of using the four IFLA FRBR generic tasks as a framework for user-centered functional metadata evaluations.
  12. Zhang, X.; Li, Y.; Liu, J.; Zhang, Y.: Effects of interaction design in digital libraries on user interactions (2008) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 1898) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1898,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1898, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1898)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This study aims to investigate the effects of different search and browse features in digital libraries (DLs) on task interactions, and what features would lead to poor user experience. Design/methodology/approach - Three operational DLs: ACM, IEEE CS, and IEEE Xplore are used in this study. These three DLs present different features in their search and browsing designs. Two information-seeking tasks are constructed: one search task and one browsing task. An experiment was conducted in a usability laboratory. Data from 35 participants are collected on a set of measures for user interactions. Findings - The results demonstrate significant differences in many aspects of the user interactions between the three DLs. For both search and browse designs, the features that lead to poor user interactions are identified. Research limitations/implications - User interactions are affected by specific design features in DLs. Some of the design features may lead to poor user performance and should be improved. The study was limited mainly in the variety and the number of tasks used. Originality/value - The study provided empirical evidence to the effects of interaction design features in DLs on user interactions and performance. The results contribute to our knowledge about DL designs in general and about the three operational DLs in particular.
  13. Zhang, Y.; Kudva, S.: E-books versus print books : readers' choices and preferences across contexts (2014) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 1335) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1335,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1335)
      0.25 = coord(1/4)
    
    Abstract
    With electronic book (e-book) sales and readership rising, are e-books positioned to replace print books? This study examines the preference for e-books and print books in the contexts of reading purpose, reading situation, and contextual variables such as age, gender, education level, race/ethnicity, income, community type, and Internet use. In addition, this study aims to identify factors that contribute to e-book adoption. Participants were a nationally representative sample of 2,986 people in the United States from the Reading Habits Survey, conducted by the Pew Research Center's Internet & American Life Project (http://pewinternet.org/Shared-Content/Data-Sets/2011/December-2011--Reading-Habits.aspx). While the results of this study support the notion that e-books have firmly established a place in people's lives, due to their convenience of access, e-books are not yet positioned to replace print books. Both print books and e-books have unique attributes and serve irreplaceable functions to meet people's reading needs, which may vary by individual demographic, contextual, and situational factors. At this point, the leading significant predictors of e-book adoption are the number of books read, the individual's income, the occurrence and frequency of reading for research topics of interest, and the individual's Internet use, followed by other variables such as race/ethnicity, reading for work/school, age, and education.
  14. Lu, C.; Zhang, Y.; Ahn, Y.-Y.; Ding, Y.; Zhang, C.; Ma, D.: Co-contributorship network and division of labor in individual scientific collaborations (2020) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 5963) [ClassicSimilarity], result of:
          0.02586502 = score(doc=5963,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 5963, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5963)
      0.25 = coord(1/4)
    
    Abstract
    Collaborations are pervasive in current science. Collaborations have been studied and encouraged in many disciplines. However, little is known about how a team actually functions in terms of the detailed division of labor within it. In this research, we investigate the patterns of scientific collaboration and division of labor within individual scholarly articles by analyzing their co-contributorship networks. Co-contributorship networks are constructed by performing the one-mode projection of the author-task bipartite networks obtained from 138,787 articles published in PLoS journals. Given an article, we define 3 types of contributors: Specialists, Team-players, and Versatiles. Specialists are those who contribute to all their tasks alone; team-players are those who contribute to every task with other collaborators; and versatiles are those who do both. We find that team-players are the majority and, as expected, they tend to contribute to the 5 most common tasks, such as "data analysis" and "performing experiments." Specialists and versatiles are more prevalent than expected under the 2 null models we designed. Versatiles tend to be senior authors associated with funding and supervision. Specialists are associated with 2 contrasting roles: supervising team leaders, or marginal and specialized contributors.
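  The one-mode projection of an author-task bipartite network described above can be illustrated with a small sketch; the article and task names below are invented, and this is not the authors' PLoS processing pipeline.
```python
# Minimal sketch of a co-contributorship projection for one article:
# two authors are linked when they contributed to the same task, with the
# edge weight equal to the number of shared tasks.
from itertools import combinations
from collections import Counter

def co_contributorship(author_tasks):
    """author_tasks: dict mapping author -> set of tasks for a single article."""
    edges = Counter()
    for a, b in combinations(sorted(author_tasks), 2):
        shared = len(author_tasks[a] & author_tasks[b])
        if shared:
            edges[(a, b)] = shared
    return edges

article = {
    "A1": {"conceived the study", "wrote the paper"},
    "A2": {"performed experiments", "data analysis"},
    "A3": {"data analysis", "wrote the paper"},
}
print(co_contributorship(article))  # Counter({('A1', 'A3'): 1, ('A2', 'A3'): 1})
```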
  15. Zhang, Y.; Zheng, G.; Yan, H.: Bridging information and communication technology and older adults by social network : an action research in Sichuan, China (2023) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 1089) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1089,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1089)
      0.25 = coord(1/4)
    
    Abstract
    The extant literature demonstrates that the age-related digital divide prevents older adults from enhancing their quality of life. To bridge this gap and promote active aging, this study explores the interplay between social networks and older adults' use of information and communication technology (ICT). Using an action-oriented field research approach, we offered technical help (29 help sessions) to older adult participants recruited from western China. Then, we conducted content analysis to examine the obtained video, audio, and text data. Our results show that, first, different types of social networks significantly influence older adults' ICT use in terms of digital skills, engagement, and attitudes; however, these effects vary from person to person. In particular, our results highlight the crucial role of a stable and long-term supportive social network in learning and mastering ICT for older residents. Second, technical help facilitates the building and reinforcing of such a social network for the participants. Our study has strong implications in that policymakers can foster the digital inclusion of older people through supportive social networks.
  16. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.01
    0.0063588563 = product of:
      0.025435425 = sum of:
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 5704) [ClassicSimilarity], result of:
              0.05087085 = score(doc=5704,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 5704, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5704)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Presents a neural network approach to document semantic indexing. Reports results of a study applying a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that three parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by resolving discrepancies between indexer vocabularies and end-user vocabularies
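  As a rough illustration of the associative-memory mechanism the abstract describes, the sketch below trains a small Hopfield net on two term patterns and recalls one of them from a noisy probe; the patterns, parameters, and stopping rule are assumptions, not the authors' INSPEC configuration.
```python
# Minimal Hopfield-net sketch: Hebbian weights over bipolar (+1/-1) patterns,
# synchronous updates until the state stops changing or iterations run out.
import numpy as np

def hopfield_train(patterns):
    p = np.asarray(patterns, dtype=float)
    w = p.T @ p / len(p)          # Hebbian outer-product rule
    np.fill_diagonal(w, 0.0)      # no self-connections
    return w

def hopfield_recall(w, probe, max_iterations=20, max_error=0.0):
    state = np.asarray(probe, dtype=float)
    for _ in range(max_iterations):
        new_state = np.sign(w @ state)
        new_state[new_state == 0] = 1.0
        if np.mean(new_state != state) <= max_error:  # converged
            return new_state
        state = new_state
    return state

# toy usage: two stored "concept" patterns over 8 index terms
stored = [[1, 1, 1, -1, -1, -1, 1, -1],
          [-1, -1, 1, 1, 1, -1, -1, 1]]
weights = hopfield_train(stored)
noisy_probe = [1, -1, 1, -1, -1, -1, 1, -1]   # first pattern with one bit flipped
print(hopfield_recall(weights, noisy_probe))  # recovers the first stored pattern
```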
  17. Zhang, Y.: The influence of mental models on undergraduate students' searching behavior on the Web (2008) 0.01
    0.0063588563 = product of:
      0.025435425 = sum of:
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2097) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2097,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2097, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2097)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 44(2008) no.3, S.1330-1345
  18. Zhang, Y.: The impact of Internet-based electronic resources on formal scholarly communication in the area of library and information science : a citation analysis (1998) 0.01
    0.0056077703 = product of:
      0.022431081 = sum of:
        0.022431081 = product of:
          0.044862162 = sum of:
            0.044862162 = weight(_text_:22 in 2808) [ClassicSimilarity], result of:
              0.044862162 = score(doc=2808,freq=4.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.27358043 = fieldWeight in 2808, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2808)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 1.1999 17:22:22
  19. Tenopir, C.; Wang, P.; Zhang, Y.; Simmons, B.; Pollard, R.: Academic users' interactions with ScienceDirect in search tasks : affective and cognitive behaviors (2008) 0.01
    0.005299047 = product of:
      0.021196188 = sum of:
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 2027) [ClassicSimilarity], result of:
              0.042392377 = score(doc=2027,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 2027, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2027)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 44(2008) no.1, S.105-121
  20. Zhang, Y.: Developing a holistic model for digital library evaluation (2010) 0.00
    0.0047583506 = product of:
      0.019033402 = sum of:
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 2360) [ClassicSimilarity], result of:
              0.038066804 = score(doc=2360,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 2360, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2360)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article reports the author's recent research in developing a holistic model for various levels of digital library (DL) evaluation in which perceived important criteria from heterogeneous stakeholder groups are organized and presented. To develop such a model, the author applied a three-stage research approach: exploration, confirmation, and verification. During the exploration stage, a literature review was conducted followed by an interview, along with a card sorting technique, to collect important criteria perceived by DL experts. Then the criteria identified were used for developing an online survey during the confirmation stage. Survey respondents (431 in total) from 22 countries rated the importance of the criteria. A holistic DL evaluation model was constructed using statistical techniques. Finally, the verification stage was devised to test the reliability of the model in the context of searching and evaluating an operational DL. The proposed model fills two lacunae in the DL domain: (a) the lack of a comprehensive and flexible framework to guide and benchmark evaluations, and (b) the uncertainty about what divergence exists among heterogeneous DL stakeholders, including general users.