Search (14 results, page 1 of 1)

  • Filter: author_ss:"Zhang, Y."
  1. Zhang, Y.; Salaba, A.: Implementing FRBR in libraries : key issues and future directions (2009) 0.03
    0.029655844 = product of:
      0.13839394 = sum of:
        0.03364573 = weight(_text_:classification in 345) [ClassicSimilarity], result of:
          0.03364573 = score(doc=345,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.35186368 = fieldWeight in 345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.078125 = fieldNorm(doc=345)
        0.07110247 = weight(_text_:bibliographic in 345) [ClassicSimilarity], result of:
          0.07110247 = score(doc=345,freq=4.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.6082881 = fieldWeight in 345, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.078125 = fieldNorm(doc=345)
        0.03364573 = weight(_text_:classification in 345) [ClassicSimilarity], result of:
          0.03364573 = score(doc=345,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.35186368 = fieldWeight in 345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.078125 = fieldNorm(doc=345)
      0.21428572 = coord(3/14)
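The explain tree above can be checked by hand: under Lucene's ClassicSimilarity, each leaf weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with tf = √freq. A minimal sketch reproducing the first leaf and the final document score (all constants are copied from the tree; the idf formula shown is Lucene's standard one):

```python
import math

# Hand-check of the first leaf of the explain tree above (term
# "classification" in doc 345) under Lucene's ClassicSimilarity.
# Constants are copied from the tree; idf = ln(maxDocs/(docFreq+1)) + 1,
# so ln(44218/4975) + 1 ~= 3.1847067.
freq = 2.0
idf = 3.1847067
query_norm = 0.03002521
field_norm = 0.078125          # ~1/sqrt(field length), quantized to a byte

tf = math.sqrt(freq)                     # 1.4142135
query_weight = idf * query_norm          # 0.09562149
field_weight = tf * idf * field_norm     # 0.35186368
weight = query_weight * field_weight     # 0.03364573

# Document score: the sum of the three leaf weights, scaled by the
# coordination factor coord(3/14) = 3 matching query clauses out of 14.
score = (0.03364573 + 0.07110247 + 0.03364573) * (3 / 14)   # 0.029655844
```

The same recipe reproduces every other tree on this page; only freq, idf, and fieldNorm vary per leaf.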
    
    Footnote
    Rev. in: Cataloging & Classification Quarterly, Volume 49(2011) no.1, pp.47-49 (William Denton).
    RSWK
    Functional Requirements for Bibliographic Records (BVB)
    Subject
    Functional Requirements for Bibliographic Records (BVB)
  2. Zhang, Y.; Ren, P.; Rijke, M. de: ¬A taxonomy, data set, and benchmark for detecting and classifying malevolent dialogue responses (2021) 0.01
    0.009990192 = product of:
      0.06993134 = sum of:
        0.03496567 = weight(_text_:classification in 356) [ClassicSimilarity], result of:
          0.03496567 = score(doc=356,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.3656675 = fieldWeight in 356, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=356)
        0.03496567 = weight(_text_:classification in 356) [ClassicSimilarity], result of:
          0.03496567 = score(doc=356,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.3656675 = fieldWeight in 356, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=356)
      0.14285715 = coord(2/14)
    
    Abstract
    Conversational interfaces are increasingly popular as a way of connecting people to information. With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses that are inappropriate in terms of content and dialogue acts. Previous studies on the topic of detecting and classifying inappropriate content are mostly focused on a specific category of malevolence or on single sentences instead of an entire dialogue. We make three contributions to advance research on the malevolent dialogue response detection and classification (MDRDC) task. First, we define the task and present a hierarchical malevolent dialogue taxonomy. Second, we create a labeled multiturn dialogue data set and formulate the MDRDC task as a hierarchical classification task. Last, we apply state-of-the-art text classification methods to the MDRDC task, and report on experiments aimed at assessing the performance of these approaches.
  3. Zhang, Y.; Li, Y.: ¬A user-centered functional metadata evaluation of moving image collections (2008) 0.01
    0.0069839032 = product of:
      0.04888732 = sum of:
        0.0237488 = product of:
          0.0474976 = sum of:
            0.0474976 = weight(_text_:schemes in 1884) [ClassicSimilarity], result of:
              0.0474976 = score(doc=1884,freq=2.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.2956176 = fieldWeight in 1884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1884)
          0.5 = coord(1/2)
        0.02513852 = weight(_text_:bibliographic in 1884) [ClassicSimilarity], result of:
          0.02513852 = score(doc=1884,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.21506234 = fieldWeight in 1884, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1884)
      0.14285715 = coord(2/14)
    
    Abstract
    In this article, the authors report a series of evaluations of two metadata schemes developed for Moving Image Collections (MIC), an integrated online catalog of moving images. Through two online surveys and one experiment spanning various stages of metadata implementation, the MIC evaluation team explored a user-centered approach in which the four generic user tasks suggested by IFLA FRBR (the International Federation of Library Associations and Institutions' Functional Requirements for Bibliographic Records) were embedded in data collection and analyses. Diverse groups of users rated the usefulness of individual metadata fields for finding, identifying, selecting, and obtaining moving images. The results demonstrate a consistency across these evaluations with respect to (a) identification of a set of useful metadata fields highly rated by target users for each of the FRBR generic tasks, and (b) indication of a significant interaction between MIC metadata fields and the FRBR generic tasks. The findings provide timely feedback for the MIC implementation specifically, and valuable suggestions for other similar metadata application settings in general. They also suggest the feasibility of using the four IFLA FRBR generic tasks as a framework for user-centered functional metadata evaluations.
  4. Zhang, Y.; Salaba, A.: What do users tell us about FRBR-based catalogs? (2012) 0.01
    0.0067291465 = product of:
      0.047104023 = sum of:
        0.023552012 = weight(_text_:classification in 1924) [ClassicSimilarity], result of:
          0.023552012 = score(doc=1924,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 1924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1924)
        0.023552012 = weight(_text_:classification in 1924) [ClassicSimilarity], result of:
          0.023552012 = score(doc=1924,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 1924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1924)
      0.14285715 = coord(2/14)
    
    Source
    Cataloging and classification quarterly. 50(2012) no.5/7, pp.705-723
  5. Zhang, Y.; Xu, W.: Fast exact maximum likelihood estimation for mixture of language model (2008) 0.01
    0.00576784 = product of:
      0.04037488 = sum of:
        0.02018744 = weight(_text_:classification in 2082) [ClassicSimilarity], result of:
          0.02018744 = score(doc=2082,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 2082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=2082)
        0.02018744 = weight(_text_:classification in 2082) [ClassicSimilarity], result of:
          0.02018744 = score(doc=2082,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 2082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=2082)
      0.14285715 = coord(2/14)
    
    Abstract
    Language modeling is an effective and theoretically attractive probabilistic framework for text information retrieval. The basic idea of this approach is to estimate a language model of a given document (or document set), and then do retrieval or classification based on this model. A common language modeling approach assumes the data D is generated from a mixture of several language models. The core problem is to find the maximum likelihood estimate of one mixture component, given the fixed mixture weights and the other mixture components. The EM algorithm is usually used to find the solution. In this paper, we prove that an exact maximum likelihood estimate of the unknown mixture component exists and can be calculated using the new algorithm we propose. We further improve the algorithm and provide an efficient algorithm of O(k) complexity to find the exact solution, where k is the number of words occurring at least once in data D. Furthermore, we prove that the probabilities of many words are exactly zero, so the MLE explicitly acts as a feature selection technique.
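The abstract notes that EM is the usual way to estimate the unknown mixture component when the mixture weights and the other component are held fixed. A minimal, hypothetical sketch of that EM baseline (function and variable names are illustrative; the paper's own contribution, an exact O(k) solution, is not reproduced here):

```python
def em_mixture_component(counts, p_background, lam, iters=50):
    """EM estimate of the unknown component p in the mixture
    lam * p(w) + (1 - lam) * p_background(w), with lam and the
    background model held fixed. `counts` maps word -> frequency in D.
    Hypothetical baseline sketch, not the paper's exact algorithm."""
    total = sum(counts.values())
    p = {w: c / total for w, c in counts.items()}   # start from empirical
    for _ in range(iters):
        # E-step: posterior that an occurrence of w was drawn from p
        post = {w: lam * p[w] / (lam * p[w] + (1 - lam) * p_background[w])
                for w in counts}
        # M-step: re-estimate p from the fractional counts
        norm = sum(counts[w] * post[w] for w in counts)
        p = {w: counts[w] * post[w] / norm for w in counts}
    return p
```

With a uniform background and lam = 0.5, words whose empirical frequency exceeds the background share get boosted, and below-background words are pushed toward zero, which is the sparsity effect the abstract describes.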
  6. Ku, Y.; Chiu, C.; Zhang, Y.; Chen, H.; Su, H.: Text mining self-disclosing health information for public health service (2014) 0.01
    0.00576784 = product of:
      0.04037488 = sum of:
        0.02018744 = weight(_text_:classification in 1262) [ClassicSimilarity], result of:
          0.02018744 = score(doc=1262,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 1262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=1262)
        0.02018744 = weight(_text_:classification in 1262) [ClassicSimilarity], result of:
          0.02018744 = score(doc=1262,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 1262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=1262)
      0.14285715 = coord(2/14)
    
    Abstract
    Understanding specific patterns or knowledge of self-disclosing health information could support public health surveillance and healthcare. This study aimed to develop an analytical framework to identify self-disclosing health information in unusual messages on web forums by leveraging advanced text-mining techniques. To demonstrate the performance of the proposed analytical framework, we conducted an experimental study on 2 major human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) forums in Taiwan. The experimental results show that classification accuracy increased significantly (up to 83.83%) when using features selected by the information gain technique. The results also show the importance of adopting domain-specific features in analyzing unusual messages on web forums. This study has practical implications for the prevention and support of HIV/AIDS healthcare. For example, public health agencies can reallocate resources and deliver services to people who need help via social media sites. In addition, individuals can also join a social media site to get better suggestions and support from each other.
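The feature-selection step the abstract credits with the accuracy gain ranks features by information gain: the drop in class-label entropy from conditioning on a feature's presence. A generic sketch for binary term-presence features (function names and toy data are illustrative, not from the study):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(docs, labels, term):
    """Information gain of a binary term-presence feature: how much
    knowing whether `term` occurs reduces class-label entropy.
    `docs` is a list of word sets; `labels` the parallel class labels."""
    n = len(docs)
    def class_entropy(idx):
        if not idx:
            return 0.0
        counts = {}
        for i in idx:
            counts[labels[i]] = counts.get(labels[i], 0) + 1
        return entropy([c / len(idx) for c in counts.values()])
    with_term = [i for i in range(n) if term in docs[i]]
    without = [i for i in range(n) if term not in docs[i]]
    h_cond = (len(with_term) / n * class_entropy(with_term)
              + len(without) / n * class_entropy(without))
    return class_entropy(list(range(n))) - h_cond
```

A term that perfectly separates the classes scores the full class entropy (1 bit for balanced binary labels); uninformative terms score near zero, so keeping the top-scoring terms yields the kind of feature subset the study reports.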
  7. Zhang, Y.: Searching for specific health-related information in MedlinePlus : behavioral patterns and user experience (2014) 0.00
    0.004806533 = product of:
      0.03364573 = sum of:
        0.016822865 = weight(_text_:classification in 1180) [ClassicSimilarity], result of:
          0.016822865 = score(doc=1180,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 1180, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1180)
        0.016822865 = weight(_text_:classification in 1180) [ClassicSimilarity], result of:
          0.016822865 = score(doc=1180,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 1180, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1180)
      0.14285715 = coord(2/14)
    
    Abstract
    Searches for specific factual health information constitute a significant part of consumer health information requests, but little is known about how users search for such information. This study attempts to fill this gap by observing users' behavior while using MedlinePlus to search for specific health information. Nineteen students participated in the study, and each performed 12 specific tasks. During the search process, they submitted short queries or complete questions, and they examined less than 1 result per search. Participants rarely reformulated queries; when they did, they tended to make a query more specific or more general, or iterate in different ways. Participants also browsed, primarily relying on the alphabetical list and the anatomical classification, to navigate to specific health topics. Participants overall had a positive experience with MedlinePlus, and the experience was significantly correlated with task difficulty and participants' spatial abilities. The results suggest that, to better support specific item search in the health domain, systems could provide a more "natural" interface to encourage users to ask questions; effective conceptual hierarchies could be implemented to help users reformulate queries; and the search results page should be reconceptualized as a place for accessing answers rather than documents. Moreover, multiple schemas should be provided to help users navigate to a health topic. The results also suggest that users' experience with information systems in general and health-related systems in particular should be evaluated in relation to contextual factors, such as task features and individual differences.
  8. Zhang, Y.; Liu, J.; Song, S.: ¬The design and evaluation of a nudge-based interface to facilitate consumers' evaluation of online health information credibility (2023) 0.00
    0.0044839755 = product of:
      0.03138783 = sum of:
        0.021217827 = weight(_text_:subject in 993) [ClassicSimilarity], result of:
          0.021217827 = score(doc=993,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.19758089 = fieldWeight in 993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=993)
        0.010170003 = product of:
          0.020340007 = sum of:
            0.020340007 = weight(_text_:22 in 993) [ClassicSimilarity], result of:
              0.020340007 = score(doc=993,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.19345059 = fieldWeight in 993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=993)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    Evaluating the quality of online health information (OHI) is a major challenge facing consumers. We designed PageGraph, an interface that displays quality indicators and associated values for a webpage, based on credibility evaluation models, nudge theory, and existing empirical research on professionals' and consumers' evaluation of OHI quality. A qualitative evaluation of the interface with 16 participants revealed that PageGraph rendered the information and presentation nudges as intended. It provided the participants with easier access to quality indicators, encouraged fresh angles for assessing information credibility, provided an evaluation framework, and encouraged validation of initial judgments. We then conducted a quantitative evaluation of the interface involving 60 participants using a between-subjects experimental design. The control group used a regular web browser and evaluated the credibility of 12 preselected webpages, whereas the experimental group evaluated the same webpages with the assistance of PageGraph. PageGraph did not significantly influence participants' evaluation results. This may be attributable to the insufficient salience and structure of the implemented nudges and to the webpage stimuli's lack of sensitivity to the intervention. Future directions for applying nudges to support OHI evaluation are discussed.
    Date
    22. 6.2023 18:18:34
  9. Zhang, Y.: ¬The effect of open access on citation impact : a comparison study based on Web citation analysis (2006) 0.00
    0.0017956087 = product of:
      0.02513852 = sum of:
        0.02513852 = weight(_text_:bibliographic in 5071) [ClassicSimilarity], result of:
          0.02513852 = score(doc=5071,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.21506234 = fieldWeight in 5071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5071)
      0.071428575 = coord(1/14)
    
    Abstract
    The academic impact advantage of Open Access (OA) is a prominent topic of debate in the library and publishing communities. Web citations have been proposed as comparable to, or even replacements for, bibliographic citations in assessing the academic impact of journals. In our study, we compare Web citations to articles in an OA journal, the Journal of Computer-Mediated Communication (JCMC), and a traditional access journal, New Media & Society (NMS), in the communication discipline. Web citation counts for JCMC are significantly higher than those for NMS. Furthermore, JCMC receives significantly more Web citations from formal scholarly publications posted on the Web than NMS does. The types of Web citations to journal articles were also examined. In the Web context, the impact of a journal can be assessed using more than one type of source: citations from scholarly articles, teaching materials, and non-authoritative documents. The OA journal has higher percentages of citations of the third type, which suggests that, in addition to the research community, the impact advantage of open access is also detectable among ordinary users participating in Web-based academic communication. Moreover, our study also shows that the OA journal has an impact advantage in developing countries: compared with NMS, JCMC receives more Web citations from developing countries.
  10. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.00
    0.001780432 = product of:
      0.024926046 = sum of:
        0.024926046 = product of:
          0.04985209 = sum of:
            0.04985209 = weight(_text_:texts in 5816) [ClassicSimilarity], result of:
              0.04985209 = score(doc=5816,freq=2.0), product of:
                0.16460659 = queryWeight, product of:
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.03002521 = queryNorm
                0.302856 = fieldWeight in 5816, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5816)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Abstract
    Millions of messages are produced on microblog platforms every day, leading to a pressing need for automatic identification of key points from the massive texts. To extract salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. In previous work, most efforts treat messages as independent documents and can suffer from the data sparsity problem exhibited in short and informal microblog posts. On the contrary, we propose to enrich contexts by exploiting conversations initialized by target posts and formed by their replies, which generally center on topics relevant to the target posts and are therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase extraction framework with 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures an indicative representation from the conversation contexts and feeds it into the keyphrase tagger, which extracts salient words from the target posts. The 2 modules are trained jointly to optimize the conversation context encoding and keyphrase extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture word-level and message-level indicative representations. In both modules, we apply character-level representations, which enable the model to exploit morphological features and deal with the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
  11. Zhang, Y.: ¬The impact of Internet-based electronic resources on formal scholarly communication in the area of library and information science : a citation analysis (1998) 0.00
    0.0010273255 = product of:
      0.014382556 = sum of:
        0.014382556 = product of:
          0.028765112 = sum of:
            0.028765112 = weight(_text_:22 in 2808) [ClassicSimilarity], result of:
              0.028765112 = score(doc=2808,freq=4.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.27358043 = fieldWeight in 2808, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2808)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    30. 1.1999 17:22:22
  12. Zhang, Y.: Developing a holistic model for digital library evaluation (2010) 0.00
    8.7171455E-4 = product of:
      0.0122040035 = sum of:
        0.0122040035 = product of:
          0.024408007 = sum of:
            0.024408007 = weight(_text_:22 in 2360) [ClassicSimilarity], result of:
              0.024408007 = score(doc=2360,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.23214069 = fieldWeight in 2360, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2360)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Abstract
    This article reports the author's recent research in developing a holistic model for various levels of digital library (DL) evaluation, in which important criteria perceived by heterogeneous stakeholder groups are organized and presented. To develop such a model, the author applied a three-stage research approach: exploration, confirmation, and verification. During the exploration stage, a literature review was conducted, followed by an interview study along with a card-sorting technique, to collect important criteria perceived by DL experts. The criteria identified were then used to develop an online survey during the confirmation stage. Survey respondents (431 in total) from 22 countries rated the importance of the criteria. A holistic DL evaluation model was constructed using statistical techniques. Finally, the verification stage was devised to test the reliability of the model in the context of searching and evaluating an operational DL. The proposed model fills two lacunae in the DL domain: (a) the lack of a comprehensive and flexible framework to guide and benchmark evaluations, and (b) the uncertainty about what divergence exists among heterogeneous DL stakeholders, including general users.
  13. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.00
    8.7171455E-4 = product of:
      0.0122040035 = sum of:
        0.0122040035 = product of:
          0.024408007 = sum of:
            0.024408007 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.024408007 = score(doc=2742,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 3.2009 17:49:11
  14. Zhang, Y.; Wu, M.; Zhang, G.; Lu, J.: Stepping beyond your comfort zone : diffusion-based network analytics for knowledge trajectory recommendation (2023) 0.00
    7.264289E-4 = product of:
      0.010170003 = sum of:
        0.010170003 = product of:
          0.020340007 = sum of:
            0.020340007 = weight(_text_:22 in 994) [ClassicSimilarity], result of:
              0.020340007 = score(doc=994,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.19345059 = fieldWeight in 994, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=994)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 6.2023 18:07:12