Search (7 results, page 1 of 1)

  • Filter: theme_ss:"Data Mining"
  • Filter: year_i:[2010 TO 2020}
  1. Suakkaphong, N.; Zhang, Z.; Chen, H.: Disease named entity recognition using semisupervised learning and conditional random fields (2011) 0.03
    0.030565115 = product of:
      0.12226046 = sum of:
        0.12226046 = weight(_text_:fields in 4367) [ClassicSimilarity], result of:
          0.12226046 = score(doc=4367,freq=4.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38684773 = fieldWeight in 4367, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4367)
      0.25 = coord(1/4)
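    The tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring; the same formula governs every entry below, only with different term statistics. As a minimal sketch, the listed score can be reproduced from the printed components using Lucene's documented tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); queryNorm is simply copied from the tree, since it depends on the full query:

    ```python
    import math

    # Components copied from the explain tree above (entry 1, term "fields").
    freq = 4.0              # occurrences of the term in doc 4367
    doc_freq = 849          # documents containing the term
    max_docs = 44218        # documents in the index
    query_norm = 0.06382575 # taken as given; depends on the whole query
    field_norm = 0.0390625  # length norm, quantized to one byte by Lucene
    coord = 1 / 4           # 1 of 4 query clauses matched

    tf = math.sqrt(freq)                           # 2.0
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~4.951651
    query_weight = idf * query_norm                # ~0.31604284
    field_weight = tf * idf * field_norm           # ~0.38684773
    score = query_weight * field_weight * coord
    print(score)  # ~0.030565115; Lucene computes in float32, so the
                  # last digit may differ from the listed value
    ```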
    
    Abstract
Information extraction is an important text-mining task that aims at extracting prespecified types of information from large text collections and making them available in structured representations such as databases. In the biomedical domain, information extraction can help biologists make the best use of their digital literature archives. Large amounts of biomedical literature currently contain rich information about biomedical substances, and extracting such knowledge requires a good named entity recognition technique. In this article, we combine conditional random fields (CRFs), a state-of-the-art sequence-labeling algorithm, with two semisupervised learning techniques, bootstrapping and feature sampling, to recognize disease names in biomedical literature. Two data-processing strategies were also analyzed for each technique: processing the unlabeled data partitions sequentially, and processing them in a round-robin fashion. The experimental results showed the advantage of semisupervised learning techniques given limited labeled training data: specifically, CRFs with bootstrapping implemented in the sequential fashion outperformed strictly supervised CRFs for disease name recognition. The project was supported by NIH/NLM Grant R33 LM07299-01, 2002-2005.
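    The semisupervised setup described in this abstract, a CRF self-trained over unlabeled partitions, can be sketched roughly as follows. This illustrates the sequential bootstrapping loop only, not the authors' code: the sklearn-crfsuite package, the confidence threshold, and the per-sentence acceptance rule are all assumptions.

    ```python
    import sklearn_crfsuite  # third-party; pip install sklearn-crfsuite

    def bootstrap_crf(X_labeled, y_labeled, unlabeled_partitions, threshold=0.9):
        """Sequential bootstrapping around a CRF: train on the labeled data,
        then walk the unlabeled partitions one at a time, self-labeling and
        keeping only sentences whose every token label is confident."""
        X, y = list(X_labeled), list(y_labeled)
        crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1,
                                   max_iterations=100)
        for partition in unlabeled_partitions:   # sequential strategy; a
            crf.fit(X, y)                        # round-robin variant would
            labels = crf.predict(partition)      # interleave the partitions
            marginals = crf.predict_marginals(partition)
            for sent, sent_labels, sent_marg in zip(partition, labels, marginals):
                # accept only if every token's predicted label is high-confidence
                if all(m[t] >= threshold for t, m in zip(sent_labels, sent_marg)):
                    X.append(sent)
                    y.append(sent_labels)
        crf.fit(X, y)                            # final model on augmented data
        return crf
    ```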
  2. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.03
    0.030565115 = product of:
      0.12226046 = sum of:
        0.12226046 = weight(_text_:fields in 3059) [ClassicSimilarity], result of:
          0.12226046 = score(doc=3059,freq=4.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38684773 = fieldWeight in 3059, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3059)
      0.25 = coord(1/4)
    
    Abstract
This study proposes a new method to construct and trace the trajectory of conceptual development of a research field by combining main path analysis, citation analysis, and text-mining techniques. Main path analysis, a method commonly used to trace the most critical path in a citation network, helps describe the developmental trajectory of a research field. This study extends main path analysis with text-mining techniques so that the new method reflects the trajectory of conceptual development of an academic research field more accurately than citation frequency alone, which represents only the articles examined. Articles can be merged based on the similarity of their concepts, and by merging concepts the history of a research field can be described more precisely. The new method was applied to the "h-index" and "text mining" fields. The precision, recall, and F-measures for the h-index field were 0.738, 0.652, and 0.658, and those for the text-mining field were 0.501, 0.653, and 0.551, respectively. Finally, this study not only establishes the conceptual trajectory map of a research field but also recommends keywords that are more precise than those currently used by researchers; these more precise keywords could enable researchers to gather related works more quickly than before.
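    As background for the abstract above, the baseline that this paper extends is main path analysis over SPC (search path count) edge weights in a citation DAG. A greedy variant might look like the sketch below; the networkx usage and the edge-direction convention are assumptions, and the paper's text-mining extensions are not shown.

    ```python
    import networkx as nx  # third-party graph library; pip install networkx

    def greedy_main_path(g: nx.DiGraph) -> list:
        """Greedy main path over SPC weights on a citation DAG.

        Convention assumed: an edge u -> v means knowledge flows from u to v
        (i.e., v cites u). Assumes g is acyclic with at least one edge."""
        order = list(nx.topological_sort(g))
        # n_minus[n]: number of paths reaching n from any source
        n_minus = {n: 1 for n in order}
        for n in order:
            preds = list(g.predecessors(n))
            if preds:
                n_minus[n] = sum(n_minus[p] for p in preds)
        # n_plus[n]: number of paths from n to any sink
        n_plus = {n: 1 for n in order}
        for n in reversed(order):
            succs = list(g.successors(n))
            if succs:
                n_plus[n] = sum(n_plus[s] for s in succs)
        spc = {(u, v): n_minus[u] * n_plus[v] for u, v in g.edges}

        # start at the source whose best outgoing edge carries the most traffic,
        # then follow the highest-SPC edge until reaching a sink
        sources = [n for n in g if g.in_degree(n) == 0 and g.out_degree(n) > 0]
        node = max(sources,
                   key=lambda u: max(spc[(u, v)] for v in g.successors(u)))
        path = [node]
        while g.out_degree(node) > 0:
            node = max(g.successors(node), key=lambda v: spc[(node, v)])
            path.append(node)
        return path
    ```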
  3. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B. de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.03
    0.02593536 = product of:
      0.10374144 = sum of:
        0.10374144 = weight(_text_:fields in 3464) [ClassicSimilarity], result of:
          0.10374144 = score(doc=3464,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.32825118 = fieldWeight in 3464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
      0.25 = coord(1/4)
    
    Abstract
We propose a new hybrid clustering framework that integrates text mining with bibliometrics for journal set analysis. The framework combines two different approaches: clustering ensembles and kernel-fusion clustering. To improve the flexibility and efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended with the proposed weighting scheme and applied to a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods and cross-compared with alternative approaches. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to the other methods in both clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
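    A minimal sketch of the weighted-fusion idea described in this abstract: two precomputed journal similarity matrices are combined with a convex weight and the fused kernel is clustered. The paper derives the weight from an information-based scheme and evaluates several extended algorithms; here a fixed weight and scikit-learn's spectral clustering stand in as assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def weighted_hybrid_clusters(text_sim: np.ndarray, cite_sim: np.ndarray,
                                 w_text: float, n_clusters: int) -> np.ndarray:
        """Fuse a text-based and a citation-based journal similarity matrix
        with a convex weight, then cluster the fused kernel. Both inputs are
        assumed symmetric, nonnegative n-by-n similarity matrices."""
        fused = w_text * text_sim + (1.0 - w_text) * cite_sim
        model = SpectralClustering(n_clusters=n_clusters,
                                   affinity='precomputed', random_state=0)
        return model.fit_predict(fused)  # cluster label per journal
    ```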
  4. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.010809385 = product of:
      0.04323754 = sum of:
        0.04323754 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
          0.04323754 = score(doc=668,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.19345059 = fieldWeight in 668, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=668)
      0.25 = coord(1/4)
    
    Date
    22. 3.2013 19:43:01
  5. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.01
    0.010809385 = product of:
      0.04323754 = sum of:
        0.04323754 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
          0.04323754 = score(doc=1605,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.19345059 = fieldWeight in 1605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  6. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.010809385 = product of:
      0.04323754 = sum of:
        0.04323754 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
          0.04323754 = score(doc=5011,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.19345059 = fieldWeight in 5011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5011)
      0.25 = coord(1/4)
    
    Date
    7. 3.2019 16:32:22
  7. Jäger, L.: Von Big Data zu Big Brother (2018) 0.01
    0.008647508 = product of:
      0.034590032 = sum of:
        0.034590032 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
          0.034590032 = score(doc=5234,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.15476047 = fieldWeight in 5234, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=5234)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:33:49