Search (8 results, page 1 of 1)

  • author_ss:"Yang, J."
  1. Tang, X.-B.; Liu, G.-C.; Yang, J.; Wei, W.: Knowledge-based financial statement fraud detection system : based on an ontology and a decision tree (2018) 0.02
    0.023258494 = product of:
      0.04651699 = sum of:
        0.04651699 = sum of:
          0.009076704 = weight(_text_:a in 4306) [ClassicSimilarity], result of:
            0.009076704 = score(doc=4306,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1709182 = fieldWeight in 4306, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4306)
          0.037440285 = weight(_text_:22 in 4306) [ClassicSimilarity], result of:
            0.037440285 = score(doc=4306,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 4306, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4306)
      0.5 = coord(1/2)
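     The score explanation above follows Lucene's ClassicSimilarity (TF-IDF) formula: the document score is coord times the sum, over matching query terms, of queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm. A minimal Python sketch re-computing this first hit's score from the values in the listing (the variable and function names are ours, not Lucene API calls):

       import math

       QUERY_NORM = 0.046056706
       COORD = 0.5                      # coord(1/2): one of two query clauses matched

       def idf(doc_freq, max_docs=44218):
           # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       def term_score(freq, doc_freq, field_norm=0.046875):
           term_idf = idf(doc_freq)
           query_weight = term_idf * QUERY_NORM                  # e.g. 0.16128273 for "22"
           field_weight = math.sqrt(freq) * term_idf * field_norm
           return query_weight * field_weight

       # Terms "a" (tf=10, docFreq=37942) and "22" (tf=2, docFreq=3622) in doc 4306.
       score = COORD * (term_score(10.0, 37942) + term_score(2.0, 3622))
       print(round(score, 9))           # ~0.023258494, as shown in the listing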
    
    Abstract
     Financial statement fraud has seriously affected investors' confidence in the stock market and economic stability. Several serious financial statement fraud events have caused huge economic losses. Intelligent financial statement fraud detection has thus become the topic of recent studies. In this paper, we developed a knowledge-based financial statement fraud detection system based on a financial statement detection ontology and detection rules extracted from a C4.5 decision tree. By discovering the patterns of financial statement fraud activity, we defined the scope of our financial statement domain ontology. By applying SWRL rules and the Pellet inference engine to the domain ontology, we detected financial statement fraud activities and discovered implicit knowledge. This system can be used to support investors' decision-making and to provide early warning to regulators.
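     A minimal sketch of the rule-extraction step described above, not the authors' system: scikit-learn's CART classifier stands in for C4.5, and the financial ratios and fraud labels are synthetic placeholders.

       import numpy as np
       from sklearn.tree import DecisionTreeClassifier, export_text

       rng = np.random.default_rng(0)
       n = 500
       # Hypothetical financial-statement ratios (receivables ratio, gross margin, leverage).
       X = rng.normal(size=(n, 3))
       # Synthetic "fraud" labels loosely tied to extreme ratio values, for illustration only.
       y = ((X[:, 0] > 1.0) & (X[:, 2] > 0.5)).astype(int)

       tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
       # The induced tree flattens into IF-THEN rules; in the paper's setup such rules
       # would then be expressed as SWRL rules over the fraud-detection ontology.
       print(export_text(tree, feature_names=["receivables_ratio", "gross_margin", "leverage"]))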
    Date
    21. 6.2018 10:22:43
    Type
    a
  2. Wan, X.; Yang, J.; Xiao, J.: Incorporating cross-document relationships between sentences for single document summarizations (2006) 0.02
    0.022235535 = product of:
      0.04447107 = sum of:
        0.04447107 = sum of:
          0.007030784 = weight(_text_:a in 2421) [ClassicSimilarity], result of:
            0.007030784 = score(doc=2421,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.13239266 = fieldWeight in 2421, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2421)
          0.037440285 = weight(_text_:22 in 2421) [ClassicSimilarity], result of:
            0.037440285 = score(doc=2421,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 2421, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2421)
      0.5 = coord(1/2)
    
    Abstract
     Graph-based ranking algorithms have recently been proposed for single-document summarization; such algorithms evaluate the importance of a sentence by recursively exploiting the relationships between sentences in the document. In this paper, we investigate using other related or relevant documents to improve the summarization of a single document with a graph-based ranking algorithm. In addition to the within-document relationships between sentences of the specified document, the proposed approach also takes into account the cross-document relationships between sentences in different documents. We evaluate the performance of the proposed approach on the DUC 2002 data with the ROUGE metric, and the results demonstrate that the cross-document relationships between sentences in different but related documents can significantly improve the performance of single-document summarization.
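     A minimal sketch of this idea, not the paper's exact model: sentences from the target document and a related document are put into one similarity graph and ranked with PageRank; the toy sentences, the TF-IDF similarity measure and the default damping factor are illustrative choices.

       import networkx as nx
       from sklearn.feature_extraction.text import TfidfVectorizer
       from sklearn.metrics.pairwise import cosine_similarity

       target_doc = ["Graph ranking scores sentences recursively.",
                     "Cross-document links add evidence from related articles."]
       related_doc = ["Related documents describe the same topic.",
                      "Sentence similarity across documents reinforces central sentences."]
       sentences = target_doc + related_doc

       tfidf = TfidfVectorizer().fit_transform(sentences)
       sim = cosine_similarity(tfidf)

       # Within-document and cross-document similarities both become weighted edges.
       graph = nx.Graph()
       for i in range(len(sentences)):
           for j in range(i + 1, len(sentences)):
               if sim[i, j] > 0:
                   graph.add_edge(i, j, weight=float(sim[i, j]))

       scores = nx.pagerank(graph, weight="weight")
       # Only sentences of the target document are kept for its summary.
       ranked = sorted(range(len(target_doc)), key=lambda i: scores.get(i, 0.0), reverse=True)
       print([target_doc[i] for i in ranked])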
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
    Type
    a
  3. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.02
    0.018529613 = product of:
      0.037059225 = sum of:
        0.037059225 = sum of:
          0.005858987 = weight(_text_:a in 883) [ClassicSimilarity], result of:
            0.005858987 = score(doc=883,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.11032722 = fieldWeight in 883, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=883)
          0.03120024 = weight(_text_:22 in 883) [ClassicSimilarity], result of:
            0.03120024 = score(doc=883,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 883, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=883)
      0.5 = coord(1/2)
    
    Abstract
     In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging two authoritative academic resources, ORCID and DOI. Using the method, we built LAGOS-AND, two large gold-standard sub-datasets for author name disambiguation (AND): LAGOS-AND-BLOCK, created for clustering-based AND research, and LAGOS-AND-PAIRWISE, created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degrees of last-name variation in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host to the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as MAG's author ID system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
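     A minimal sketch of the blocking step that underlies clustering-based AND, with ORCID iDs serving as gold cluster labels as in the dataset construction described above; the records and identifiers below are made-up placeholders, not taken from LAGOS-AND.

       from collections import defaultdict

       records = [
           {"name": "Yang, J.",   "orcid": "0000-0000-0000-0001", "title": "Paper A"},
           {"name": "Yang, Jing", "orcid": "0000-0000-0000-0001", "title": "Paper B"},
           {"name": "Yang, Jun",  "orcid": "0000-0000-0000-0002", "title": "Paper C"},
       ]

       def block_key(name):
           # Block on last name plus first initial, a common AND blocking scheme.
           last, _, first = name.partition(",")
           return f"{last.strip().lower()}_{first.strip()[:1].lower()}"

       blocks = defaultdict(list)
       for rec in records:
           blocks[block_key(rec["name"])].append(rec)

       # Inside each block, the ORCID iD provides the gold clustering of papers.
       for key, recs in blocks.items():
           clusters = defaultdict(list)
           for rec in recs:
               clusters[rec["orcid"]].append(rec["title"])
           print(key, dict(clusters))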
    Date
    22. 1.2023 18:40:36
    Type
    a
  4. Wan, X.; Yang, J.; Xiao, J.: Towards a unified approach to document similarity search using manifold-ranking of blocks (2008) 0.00
    0.0031642143 = product of:
      0.0063284286 = sum of:
        0.0063284286 = product of:
          0.012656857 = sum of:
            0.012656857 = weight(_text_:a in 2081) [ClassicSimilarity], result of:
              0.012656857 = score(doc=2081,freq=28.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.23833402 = fieldWeight in 2081, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2081)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Document similarity search (i.e. query by example) aims to retrieve a ranked list of documents similar to a query document in a text corpus or on the Web. Most existing approaches to similarity search first compute the pairwise similarity score between each document and the query using a retrieval function or similarity measure (e.g. cosine), and then rank the documents by these scores. In this paper, we propose a novel retrieval approach based on manifold-ranking of document blocks (i.e. blocks of coherent text about a subtopic) to re-rank a small set of documents initially retrieved by some existing retrieval function. The proposed approach can make full use of the intrinsic global manifold structure of the document blocks by propagating ranking scores between the blocks on a weighted graph. First, the TextTiling algorithm and the VIPS algorithm are employed to segment text documents and web pages, respectively, into blocks. Then, each block is assigned a ranking score by the manifold-ranking algorithm. Lastly, a document gets its final ranking score by fusing the scores of its blocks. Experimental results on the TDT data and the ODP data demonstrate that the proposed approach can significantly improve retrieval performance over baseline approaches. The document block is validated to be a better unit than the whole document in the manifold-ranking process.
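     A minimal sketch of manifold-ranking score propagation over text blocks, following the generic manifold-ranking recipe rather than the paper's exact TextTiling/VIPS pipeline; the block affinities, the query vector and the document-to-block assignment are toy values.

       import numpy as np

       # Toy affinity matrix between 4 blocks (blocks 0-1 from doc 0, blocks 2-3 from doc 1).
       W = np.array([[0.0, 0.6, 0.2, 0.1],
                     [0.6, 0.0, 0.3, 0.2],
                     [0.2, 0.3, 0.0, 0.7],
                     [0.1, 0.2, 0.7, 0.0]])
       y = np.array([1.0, 0.0, 0.0, 0.0])      # block most similar to the query document

       # Symmetric normalization S = D^(-1/2) W D^(-1/2), as in standard manifold ranking.
       d = W.sum(axis=1)
       S = W / np.sqrt(np.outer(d, d))

       alpha, f = 0.85, np.zeros_like(y)
       for _ in range(100):                     # iterate f <- alpha * S @ f + (1 - alpha) * y
           f = alpha * S @ f + (1 - alpha) * y

       # Fuse block scores into document-level scores.
       doc_scores = [float(f[:2].sum()), float(f[2:].sum())]
       print(doc_scores)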
    Type
    a
  5. Huang, S.; Qian, J.; Huang, Y.; Lu, W.; Bu, Y.; Yang, J.; Cheng, Q.: Disclosing the relationship between citation structure and future impact of a publication (2022) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 621) [ClassicSimilarity], result of:
              0.00894975 = score(doc=621,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 621, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=621)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Each section header of an article has a distinct communicative function, and citations from distinct sections may differ in citing motivation. In this paper, we grouped section headers with similar functions into a structural function and defined the distribution of a paper's citations over structural functions as its citation structure. We aim to explore the relationship between citation structure and the future impact of a publication and to disclose the relative importance of citations from different structural functions. Specifically, we proposed two citation counting methods and a citation life cycle identification method, by which the regression data were built. Subsequently, we employed a ridge regression model to predict the future impact of a paper and analyzed the relative weights of the regressors. Based on documents collected from the Association for Computational Linguistics Anthology website, our empirical experiments showed that functional structure features improve the accuracy of citation count prediction and that there are differences among citations from different structural functions. Specifically, at the early stage of the citation lifetime, citations from the Introduction and Method sections are particularly important for perceiving the future impact of papers, and citations from the Result and Conclusion sections are also vital. However, early accumulation of citations from the Background section seems less important.
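     A minimal sketch of the regression step named above: ridge regression on per-section citation counts as predictors of later impact, with the fitted coefficients playing the role of the relative weights of the regressors. The feature matrix and targets are synthetic placeholders, not the ACL Anthology data.

       import numpy as np
       from sklearn.linear_model import Ridge

       rng = np.random.default_rng(1)
       n = 200
       # Columns: citations received from Introduction, Method, Result, Background sections.
       X = rng.poisson(lam=[4, 3, 2, 1], size=(n, 4)).astype(float)
       # Synthetic "future citation count", weighting Introduction/Method citations more heavily.
       y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(0, 1, n)

       model = Ridge(alpha=1.0).fit(X, y)
       print(dict(zip(["Introduction", "Method", "Result", "Background"],
                      model.coef_.round(2))))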
    Type
    a
  6. Gachot, D.A.; Lange, E.; Yang, J.: ¬The SYSTRAN NLP browser : an application of machine translation technology in cross-language information retrieval (1998) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 6213) [ClassicSimilarity], result of:
              0.008118451 = score(doc=6213,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 6213, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6213)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  7. Wang, J.; Clements, M.; Yang, J.; Vries, A.P. de; Reinders, M.J.T.: Personalization of tagging systems (2010) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 4229) [ClassicSimilarity], result of:
              0.007030784 = score(doc=4229,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 4229, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4229)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Social media systems have encouraged end-user participation in the Internet for the purpose of storing and distributing Internet content, sharing opinions and maintaining relationships. Collaborative tagging allows users to annotate the resulting user-generated content and enables effective retrieval of otherwise uncategorised data. However, compared to professional web content production, collaborative tagging systems face the challenge that end users assign tags in an uncontrolled manner, resulting in unsystematic and inconsistent metadata. This paper introduces a framework for the personalization of social media systems. We pinpoint three tasks that would benefit from personalization: collaborative tagging, collaborative browsing and collaborative search. We propose a ranking model for each task that integrates the individual user's tagging history into the recommendation of tags and content, to align its suggestions with the individual user's preferences. We demonstrate on two real data sets that, for all three tasks, the personalized ranking should take into account both the user's own preference and the opinion of others.
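     A minimal sketch of the core idea of blending a user's own tagging history with the crowd's tags when ranking suggestions; the linear interpolation, its weight and the toy tag counts are illustrative assumptions, not the paper's ranking model.

       from collections import Counter

       user_history = Counter({"ir": 5, "tagging": 3, "python": 1})              # this user's past tags
       crowd_tags_for_item = Counter({"tagging": 40, "folksonomy": 25, "ir": 10})  # everyone's tags on the item

       def personalized_score(tag, lam=0.5):
           # Interpolate the user's own preference with the crowd's opinion for one tag.
           user_total = sum(user_history.values()) or 1
           crowd_total = sum(crowd_tags_for_item.values()) or 1
           return (lam * user_history[tag] / user_total
                   + (1 - lam) * crowd_tags_for_item[tag] / crowd_total)

       candidates = set(user_history) | set(crowd_tags_for_item)
       print(sorted(candidates, key=personalized_score, reverse=True))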
    Type
    a
  8. Wang, F.; Yang, J.; Wu, Y.: Non-synchronism in theoretical research of information science (2021) 0.00
    0.0011959607 = product of:
      0.0023919214 = sum of:
        0.0023919214 = product of:
          0.0047838427 = sum of:
            0.0047838427 = weight(_text_:a in 602) [ClassicSimilarity], result of:
              0.0047838427 = score(doc=602,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.090081796 = fieldWeight in 602, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=602)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Purpose: This paper aims to reveal the global non-synchronism that exists in the theoretical research of information science (IS) by analyzing and comparing the distribution of theory use, creation and borrowing in four representative journals from the USA, the UK and China.
     Design/methodology/approach: Quantitative content analysis is adopted as the research method. First, an analytical framework for non-synchronism in the theoretical research of IS is constructed. Second, theories mentioned in the full texts of the research papers of the four journals are extracted according to a theory dictionary constructed beforehand. Third, the non-synchronism in the theoretical research of IS is analyzed.
     Findings: Non-synchronism exists in many aspects of the theoretical research of IS, between journals, subject areas and countries/regions. Theoretical underdevelopment still exists in some subject areas of IS. IS presents obvious interdisciplinary characteristics. The theoretical distance from IS to the social sciences is shorter than that to the natural sciences.
     Research limitations/implications: This study investigates the theoretical research of IS from the perspective of non-synchronism theory, reveals the theoretical distance from IS to other sciences, deepens the communication between different subject and regional sub-communities of IS, and provides new evidence for the necessity of developing domestic theories and theorists of IS.
     Originality/value: This study introduces the theory of non-synchronism to IS research for the first time, investigates new advances in the theoretical research of IS, and provides new quantitative evidence for the understanding of the interdisciplinary characteristics of IS and the necessity of better communication between sub-communities of IS.
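     A minimal sketch of the dictionary-based extraction step in the methodology above; the theory dictionary entries and the sample text are invented for illustration.

       import re
       from collections import Counter

       theory_dictionary = ["information foraging theory", "activity theory", "game theory"]
       full_text = ("We draw on activity theory and, to a lesser extent, on game theory "
                    "to interpret the findings. Activity theory frames the analysis.")

       # Case-insensitive whole-phrase matching of dictionary entries against the full text.
       counts = Counter({theory: len(re.findall(re.escape(theory), full_text, flags=re.IGNORECASE))
                         for theory in theory_dictionary})
       print(counts.most_common())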
    Type
    a