Search (3 results, page 1 of 1)

  • author_ss:"Sun, A."
  • year_i:[2020 TO 2030}
  1. Lee, G.E.; Sun, A.: Understanding the stability of medical concept embeddings (2021) 0.00
    0.002277429 = product of:
      0.02049686 = sum of:
        0.02049686 = weight(_text_:of in 159) [ClassicSimilarity], result of:
          0.02049686 = score(doc=159,freq=30.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33457235 = fieldWeight in 159, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=159)
      0.11111111 = coord(1/9)
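    The indented breakdown above is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) score of the term "of" in this record. A minimal sketch that reproduces the arithmetic in Python, with every constant copied from the explanation; the idf formula is the standard ClassicSimilarity one and is an assumption, not part of the output itself:

    import math

    # Constants copied from the explanation above (doc 159, term "of")
    term_freq = 30.0          # termFreq in the field
    doc_freq, max_docs = 25162, 44218
    query_norm = 0.03917671   # queryNorm
    field_norm = 0.0390625    # fieldNorm(doc=159)
    coord = 1.0 / 9.0         # coord(1/9): one of nine query terms matched

    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # ~1.5637573
    tf = math.sqrt(term_freq)                            # ~5.477226
    field_weight = tf * idf * field_norm                 # ~0.33457235
    query_weight = idf * query_norm                      # ~0.061262865
    score = query_weight * field_weight * coord          # ~0.002277429
    print(score)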
    
    Abstract
    Frequency is one of the major factors in training quality word embeddings. Several studies have recently discussed the stability of word embeddings in the general domain and suggested factors influencing that stability. In this work, we conduct a detailed analysis of the stability of concept embeddings in the medical domain, particularly in relation to concept frequency. The analysis reveals a surprisingly high stability of low-frequency concepts: low-frequency (<100) concepts are as stable as high-frequency (>1,000) concepts. To develop a deeper understanding of this finding, we propose a new factor, the noisiness of context words, which influences the stability of medical concept embeddings regardless of frequency. We evaluate the proposed factor by showing its linear correlation with the stability of medical concept embeddings. The correlations are clear and consistent across various groups of medical concepts. Based on these linear relations, we make suggestions on how to adjust the noisiness of context words to improve stability. Finally, we demonstrate that the linear relation of the proposed factor extends to word embedding stability in the general domain.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.3, S.346-356
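    A common way to quantify the stability discussed in the abstract above is the overlap between a concept's nearest neighbors in embeddings trained with different random seeds. The sketch below uses that neighbor-overlap notion, with randomly generated matrices standing in for two training runs; it illustrates the general idea and is not necessarily the exact metric used by Lee and Sun.

    import numpy as np

    def nearest_neighbors(emb, idx, k=10):
        """Indices of the k rows most cosine-similar to row idx (excluding idx)."""
        normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = normed @ normed[idx]
        sims[idx] = -np.inf
        return set(np.argsort(-sims)[:k])

    def stability(emb_a, emb_b, idx, k=10):
        """Fraction of shared nearest neighbors of concept idx across two runs."""
        shared = nearest_neighbors(emb_a, idx, k) & nearest_neighbors(emb_b, idx, k)
        return len(shared) / k

    # Random embeddings standing in for two training runs over the same vocabulary
    rng = np.random.default_rng(0)
    run_a, run_b = rng.normal(size=(1000, 100)), rng.normal(size=(1000, 100))
    print(stability(run_a, run_b, idx=42))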
  2. Yu, M.; Sun, A.: Dataset versus reality : understanding model performance from the perspective of information need (2023) 0.00
    0.0016631988 = product of:
      0.014968789 = sum of:
        0.014968789 = weight(_text_:of in 1073) [ClassicSimilarity], result of:
          0.014968789 = score(doc=1073,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24433708 = fieldWeight in 1073, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1073)
      0.11111111 = coord(1/9)
    
    Abstract
    Deep learning technologies have brought us many models that outperform human beings on a few benchmarks. An interesting question is: can these models solve real-world problems with settings similar to those of the benchmark datasets (e.g., identical input/output)? We argue that a model is trained to answer the same information need, in a similar context (e.g., the information available), for which its training dataset was created. The trained model may then be used to solve real-world problems for a similar information need in a similar context. However, an information need is independent of the format of a dataset's input/output. Although some datasets may share high structural similarity, they may represent different research tasks aiming to answer different information needs. Examples are question-answer pairs for the question answering (QA) task and image-caption pairs for the image captioning (IC) task. In this paper, we use the QA task and the IC task as two case studies and compare their widely used benchmark datasets. From the perspective of information need in the context of information retrieval, we show the differences in the dataset creation processes and in the morphosyntactic properties of the datasets. These differences can be attributed to the different information needs and contexts of the specific research tasks. We encourage all researchers to consider the information need perspective of a research task when selecting the appropriate datasets to train a model. Likewise, while creating a dataset, researchers may incorporate the information need perspective as a factor to determine the degree to which the dataset accurately reflects the real-world problem or research task they intend to tackle.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.11, S.1293-1306
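    One concrete example of the morphosyntactic comparison mentioned in the abstract above is contrasting the part-of-speech distributions of two datasets. The sketch below does this with spaCy on two tiny hypothetical samples; the actual datasets and the specific properties analyzed in the paper may differ.

    from collections import Counter
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def pos_distribution(texts):
        """Relative frequency of coarse POS tags over a collection of texts."""
        counts = Counter(tok.pos_ for doc in nlp.pipe(texts) for tok in doc)
        total = sum(counts.values())
        return {pos: round(n / total, 3) for pos, n in counts.items()}

    # Hypothetical samples standing in for a QA dataset and an IC dataset
    qa_questions = ["Who wrote the paper?", "When was the dataset released?"]
    ic_captions = ["A dog running on the beach.", "Two people riding bicycles."]
    print(pos_distribution(qa_questions))
    print(pos_distribution(ic_captions))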
  3. Phan, M.C.; Sun, A.: Collective named entity recognition in user comments via parameterized label propagation (2020) 0.00
    0.0011760591 = product of:
      0.010584532 = sum of:
        0.010584532 = weight(_text_:of in 5815) [ClassicSimilarity], result of:
          0.010584532 = score(doc=5815,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17277241 = fieldWeight in 5815, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5815)
      0.11111111 = coord(1/9)
    
    Abstract
    Named entity recognition (NER) has in the past focused on extracting mentions within a local region, such as a sentence or short paragraph. When dealing with user-generated text, the diverse and informal writing style makes traditional approaches much less effective. On the other hand, in many types of social media text such as user comments, tweets, or question-answer posts, contextual connections between documents do exist. Examples include posts in a thread discussing the same topic and tweets that share a hashtag about the same entity. Our idea in this work is to utilize these related contexts across documents to perform mention recognition in a collective manner. Intuitively, within a mention coreference graph, labels are expected to propagate from more confident mentions to less confident ones. To this end, we propose a novel semisupervised inference algorithm named parameterized label propagation. In our model, the propagation weights between mentions are learned by an attention-like mechanism, given their local contexts and the initial labels as input. We study the performance of our approach on the Yahoo! News dataset, where comments and articles within a thread share similar context. The results show that our model significantly outperforms all other noncollective NER baselines.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.5, S.568-577
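    The propagation step described in the abstract above can be illustrated with plain label propagation over a mention graph: label scores spread along edge weights while high-confidence mentions stay clamped. In the authors' model the propagation weights are learned by an attention-like mechanism; the sketch below uses a fixed affinity matrix instead.

    import numpy as np

    def label_propagation(W, Y0, clamp, iters=20, alpha=0.9):
        """
        W     : (n, n) nonnegative affinities between mentions
        Y0    : (n, c) initial label scores from a local NER model
        clamp : (n,) bool, True for high-confidence mentions kept fixed
        """
        P = W / W.sum(axis=1, keepdims=True)        # row-normalized propagation weights
        Y = Y0.copy()
        for _ in range(iters):
            Y = alpha * (P @ Y) + (1 - alpha) * Y0  # spread scores along the graph
            Y[clamp] = Y0[clamp]                    # keep confident labels fixed
        return Y

    # Toy example: three mentions, two label classes
    W = np.array([[0.0, 1.0, 0.5],
                  [1.0, 0.0, 0.2],
                  [0.5, 0.2, 0.0]])
    Y0 = np.array([[0.9, 0.1],    # confident mention
                   [0.5, 0.5],    # uncertain
                   [0.4, 0.6]])   # uncertain
    print(label_propagation(W, Y0, clamp=np.array([True, False, False])))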

Authors