Search (7 results, page 1 of 1)

  • Filter: author_ss:"Li, C."
  1. Cheang, B.; Chu, S.K.W.; Li, C.; Lim, A.: A multidimensional approach to evaluating management journals : refining PageRank via the differentiation of citation types and identifying the roles that management journals play (2014) 0.04
    0.041486606 = product of:
      0.06222991 = sum of:
        0.01058886 = weight(_text_:information in 1551) [ClassicSimilarity], result of:
          0.01058886 = score(doc=1551,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.116372846 = fieldWeight in 1551, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1551)
        0.05164105 = product of:
          0.1032821 = sum of:
            0.1032821 = weight(_text_:management in 1551) [ClassicSimilarity], result of:
              0.1032821 = score(doc=1551,freq=14.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.59117234 = fieldWeight in 1551, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1551)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
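    The tree above is standard Lucene ClassicSimilarity "explain" output, and the same arithmetic applies to every result below: each matched term contributes queryWeight × fieldWeight, partial sums are scaled by coord() for the fraction of query clauses that matched, and the result is the document's score. As a sanity check, this short Python sketch (variable names are ours) re-derives result 1's score of 0.041486606 from the leaf values shown:

      from math import sqrt, isclose

      query_norm = 0.0518325   # shared queryNorm from the tree
      field_norm = 0.046875    # fieldNorm(doc=1551)

      def term_weight(tf, idf):
          query_weight = idf * query_norm              # 0.09099081 for "information"
          field_weight = sqrt(tf) * idf * field_norm   # tf is the raw term frequency
          return query_weight * field_weight

      information = term_weight(tf=2.0, idf=1.7554779)    # -> 0.01058886
      management = term_weight(tf=14.0, idf=3.3706124)    # -> 0.1032821
      # "management" sits under an inner sum scaled by coord(1/2); the outer
      # sum is scaled by coord(2/3) because 2 of 3 query clauses matched.
      score = (information + management * 0.5) * (2.0 / 3.0)
      assert isclose(score, 0.041486606, rel_tol=1e-5)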
    
    Abstract
    In this article, the authors introduce two citation-based approaches to facilitate a multidimensional evaluation of 39 selected management journals. The first is a refined application of PageRank via the differentiation of citation types. The second is a form of mathematical manipulation to identify the roles that the selected management journals play. Their findings reveal that Academy of Management Journal, Academy of Management Review, and Administrative Science Quarterly are the top three management journals, in that order. They also discovered that these three journals play the role of a knowledge hub in the domain. Finally, when compared with Journal Citation Reports (Thomson Reuters, Philadelphia, PA), their results closely match expert opinions.
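    The first approach is, in essence, PageRank over a journal citation graph whose edges are weighted by citation type. A minimal sketch of that idea follows; the type weights and adjacency format are our placeholders, not the authors' calibration:

      # edges: {source_journal: [(cited_journal, citation_type), ...]}
      TYPE_WEIGHT = {"extension": 1.0, "background": 0.5, "critique": 0.25}

      def weighted_pagerank(edges, n_iter=50, d=0.85):
          nodes = set(edges) | {t for outs in edges.values() for t, _ in outs}
          rank = {v: 1.0 / len(nodes) for v in nodes}
          for _ in range(n_iter):
              nxt = {v: (1 - d) / len(nodes) for v in nodes}
              for src, outs in edges.items():
                  total = sum(TYPE_WEIGHT[c] for _, c in outs) or 1.0
                  for tgt, c in outs:
                      nxt[tgt] += d * rank[src] * TYPE_WEIGHT[c] / total
              rank = nxt
          return rank  # dangling-node mass is ignored; acceptable for a sketch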
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.12, S.2581-2591
  2. Toms, E.G.; Freund, L.; Li, C.: WiIRE: the Web interactive information retrieval experimentation system prototype (2004) 0.03
    0.025239285 = product of:
      0.037858926 = sum of:
        0.018340444 = weight(_text_:information in 2534) [ClassicSimilarity], result of:
          0.018340444 = score(doc=2534,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.20156369 = fieldWeight in 2534, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2534)
        0.019518482 = product of:
          0.039036963 = sum of:
            0.039036963 = weight(_text_:management in 2534) [ClassicSimilarity], result of:
              0.039036963 = score(doc=2534,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.22344214 = fieldWeight in 2534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2534)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We introduce WiIRE, a prototype system for conducting interactive information retrieval (IIR) experiments via the Internet. We conceived WiIRE to increase validity while streamlining procedures and adding efficiencies to the conduct of IIR experiments. The system incorporates password-controlled access, online questionnaires, study instructions and tutorials, conditional interface assignment, and conditional query assignment, as well as provision for data collection. As an initial evaluation, we used WiIRE in-house to conduct a Web-based IIR experiment using an external search engine with customized search interfaces and the TREC 11 Interactive Track search queries. Our evaluation of the prototype indicated significant cost efficiencies in the conduct of IIR studies and also yielded some novel findings about the human perspective: about half of the participants would have preferred some personal contact with the researcher, and participants spent a significantly decreasing amount of time on tasks over the course of a session.
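    The conditional interface and query assignment mentioned above amounts to balanced rotation of participants across experimental conditions. A minimal sketch with placeholder condition names, not WiIRE's actual configuration:

      from itertools import product

      INTERFACES = ["baseline", "custom_a", "custom_b"]
      QUERY_SETS = ["trec11_set1", "trec11_set2", "trec11_set3"]
      CONDITIONS = list(product(INTERFACES, QUERY_SETS))

      def assign(participant_id):
          # round-robin over the 9 interface x query-set combinations
          interface, queries = CONDITIONS[participant_id % len(CONDITIONS)]
          return {"participant": participant_id,
                  "interface": interface, "queries": queries}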
    Source
    Information processing and management. 40(2004) no.4, S.655-676
  3. Li, X.; Zhang, A.; Li, C.; Ouyang, J.; Cai, Y.: Exploring coherent topics by topic modeling with term weighting (2018) 0.02
    0.0167263 = product of:
      0.02508945 = sum of:
        0.0088240495 = weight(_text_:information in 5045) [ClassicSimilarity], result of:
          0.0088240495 = score(doc=5045,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.09697737 = fieldWeight in 5045, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5045)
        0.016265402 = product of:
          0.032530803 = sum of:
            0.032530803 = weight(_text_:management in 5045) [ClassicSimilarity], result of:
              0.032530803 = score(doc=5045,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.18620178 = fieldWeight in 5045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5045)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 54(2018) no.6, S.1345-1358
  4. Li, C.; Sun, A.; Datta, A.: TSDW: Two-stage word sense disambiguation using Wikipedia (2013) 0.01
    0.0050945682 = product of:
      0.015283704 = sum of:
        0.015283704 = weight(_text_:information in 956) [ClassicSimilarity], result of:
          0.015283704 = score(doc=956,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.16796975 = fieldWeight in 956, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=956)
      0.33333334 = coord(1/3)
    
    Abstract
    The semantic knowledge of Wikipedia has proved to be useful for many tasks, for example, named entity disambiguation. Among these applications, the task of identifying the word sense based on Wikipedia is a crucial component because the output of this component is often used in subsequent tasks. In this article, we present a two-stage framework (called TSDW) for word sense disambiguation using knowledge latent in Wikipedia. The disambiguation of a given phrase is applied through a two-stage disambiguation process: (a) the first-stage disambiguation explores the contextual semantic information, where the noisy information is pruned for better effectiveness and efficiency; and (b) the second-stage disambiguation explores the disambiguated phrases of high confidence from the first stage to achieve better re-disambiguation decisions for the phrases that are difficult to disambiguate in the first stage. Moreover, existing studies have addressed the disambiguation problem for English text only. Considering the popular usage of Wikipedia in different languages, we study the performance of TSDW and the existing state-of-the-art approaches over both English and Traditional Chinese articles. The experimental results show that TSDW generalizes well to different semantic relatedness measures and text in different languages. More importantly, TSDW significantly outperforms the state-of-the-art approaches with both better effectiveness and efficiency.
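    The two-stage control flow described above reduces to: resolve what the pruned context alone decides confidently, then reuse those high-confidence senses as extra evidence for the remainder. A hedged skeleton, where the sense-scoring function and threshold are our placeholders:

      def tsdw(phrases, context, best_sense, threshold=0.9):
          # best_sense(phrase, evidence) -> (sense, confidence), supplied by caller;
          # context is a list of evidence items (e.g., surrounding phrases)
          resolved, pending = {}, []
          for p in phrases:                      # stage 1: pruned context only
              sense, conf = best_sense(p, context)
              if conf >= threshold:
                  resolved[p] = sense
              else:
                  pending.append(p)
          anchors = list(resolved.values())      # high-confidence senses
          for p in pending:                      # stage 2: re-disambiguation
              resolved[p], _ = best_sense(p, context + anchors)
          return resolved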
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1203-1223
  5. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.01
    0.0050945682 = product of:
      0.015283704 = sum of:
        0.015283704 = weight(_text_:information in 4048) [ClassicSimilarity], result of:
          0.015283704 = score(doc=4048,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.16796975 = fieldWeight in 4048, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose: Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach: The DSP-PROV model is developed by applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, the paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change-log description in English.
    Findings: Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.
    Research limitations/implications: The DSP-PROV model is applicable for keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
    Originality/value: This study proposes a simple model for provenance description of structural features of metadata schemas, based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.
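    At its core, a DSP-PROV record links a schema version to its predecessor and to the change activity that produced it, using W3C PROV terms. A minimal sketch using Python's rdflib; the URIs are invented and the encoding is ours, not the paper's serialization:

      from rdflib import Graph, Namespace

      PROV = Namespace("http://www.w3.org/ns/prov#")
      EX = Namespace("http://example.org/dsp/")

      g = Graph()
      v1, v2, change = EX["profile-v1"], EX["profile-v2"], EX["change-001"]
      g.add((v2, PROV.wasDerivedFrom, v1))      # structural revision chain
      g.add((v2, PROV.wasGeneratedBy, change))  # the activity that changed it
      g.add((change, PROV.used, v1))
      print(g.serialize(format="turtle"))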
  6. Li, C.; Sun, A.: Extracting fine-grained location with temporal awareness in tweets : a two-stage approach (2017) 0.00
    0.0033277576 = product of:
      0.009983272 = sum of:
        0.009983272 = weight(_text_:information in 3686) [ClassicSimilarity], result of:
          0.009983272 = score(doc=3686,freq=4.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.10971737 = fieldWeight in 3686, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3686)
      0.33333334 = coord(1/3)
    
    Abstract
    Twitter has attracted billions of users for life logging and sharing activities and opinions. In their tweets, users often reveal their location information and short-term visiting histories or plans. Capturing user's short-term activities could benefit many applications for providing the right context at the right time and location. In this paper we are interested in extracting locations mentioned in tweets at fine-grained granularity, with temporal awareness. Specifically, we recognize the points-of-interest (POIs) mentioned in a tweet and predict whether the user has visited, is currently at, or will soon visit the mentioned POIs. A POI can be a restaurant, a shopping mall, a bookstore, or any other fine-grained location. Our proposed framework, named TS-Petar (Two-Stage POI Extractor with Temporal Awareness), consists of two main components: a POI inventory and a two-stage time-aware POI tagger. The POI inventory is built by exploiting the crowd wisdom of the Foursquare community. It contains both POIs' formal names and their informal abbreviations, commonly observed in Foursquare check-ins. The time-aware POI tagger, based on the Conditional Random Field (CRF) model, is devised to disambiguate the POI mentions and to resolve their associated temporal awareness accordingly. Three sets of contextual features (linguistic, temporal, and inventory features) and two labeling schemas (OP and BILOU) are explored for the time-aware POI extraction task. Our empirical study shows that the subtask of POI disambiguation and the subtask of temporal awareness resolution call for different feature settings for best performance. We have also evaluated the proposed TS-Petar against several strong baseline methods. The experimental results demonstrate that the two-stage approach achieves the best accuracy and outperforms all baseline methods in terms of both effectiveness and efficiency.
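    The three feature families listed above map naturally onto the per-token feature dictionaries that CRF toolkits consume. A hedged sketch in Python; the feature names and cue list are ours, not the paper's feature set:

      TEMPORAL_CUES = {"yesterday": "past", "now": "present", "tonight": "future"}

      def token_features(tokens, i, poi_inventory):
          tok = tokens[i]
          lowered = [w.lower() for w in tokens]
          return {
              "lower": tok.lower(),                          # linguistic
              "is_title": tok.istitle(),
              "prev": lowered[i - 1] if i > 0 else "<s>",
              "cue": next((v for t, v in TEMPORAL_CUES.items()
                           if t in lowered), "none"),        # temporal
              "in_inventory": tok.lower() in poi_inventory,  # inventory
          }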
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.7, S.1652-1670
  7. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: An evaluation of classification models for question topic categorization (2012) 0.00
    0.00294135 = product of:
      0.0088240495 = sum of:
        0.0088240495 = weight(_text_:information in 237) [ClassicSimilarity], result of:
          0.0088240495 = score(doc=237,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.09697737 = fieldWeight in 237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.5, S.889-903