Search (7 results, page 1 of 1)

  • × theme_ss:"Data Mining"
  • × year_i:[2010 TO 2020}
  1. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.02
    0.016444964 = product of:
      0.04933489 = sum of:
        0.04933489 = product of:
          0.14800467 = sum of:
            0.14800467 = weight(_text_:objects in 3884) [ClassicSimilarity], result of:
              0.14800467 = score(doc=3884,freq=4.0), product of:
                0.2970264 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.055883802 = queryNorm
                0.49828792 = fieldWeight in 3884, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3884)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
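The ClassicSimilarity explain tree above can be reproduced by hand. A minimal sketch, with the constants copied from the tree and helper names of our own choosing (Lucene's ClassicSimilarity uses tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1))):

```python
import math

# Reproduce the explain tree for result 1 (term "objects", doc 3884).
# Constants are copied from the tree above; names are illustrative.

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

tf = math.sqrt(4.0)                    # tf(freq=4.0) = 2.0
idf_objects = idf(590, 44218)          # ~5.315071
query_norm = 0.055883802
field_norm = 0.046875

query_weight = idf_objects * query_norm              # ~0.2970264
field_weight = tf * idf_objects * field_norm         # ~0.49828792
raw_weight = query_weight * field_weight             # ~0.14800467
score = raw_weight * (1 / 3) * (1 / 3)               # two coord(1/3) factors
# score ~0.016444964, matching the top of the tree
```

The same recipe, with freq=2.0 and the per-document fieldNorm, reproduces the trees for the other results.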
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
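The metric limitation described in the abstract can be illustrated with made-up distances (the word triple is a standard example of intransitive similarity; the numbers are ours): if "tie" sits close to both "suit" and "knot" in a metric map, the triangle inequality forces "suit" and "knot" together as well, even when the data says they are unrelated.

```python
# Hypothetical map distances: "tie" is close to both "suit" (necktie
# sense) and "knot" (rope sense), two words that are otherwise unrelated.
d = {("suit", "tie"): 1.0, ("tie", "knot"): 1.0}

# In any metric space, d(suit, knot) <= d(suit, tie) + d(tie, knot),
# so no single map can place "suit" and "knot" far apart (say, at 10.0).
bound = d[("suit", "tie")] + d[("tie", "knot")]
```

Multiple maps t-SNE sidesteps the constraint by letting "tie" appear in more than one map, near a different neighbor in each.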
  2. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.01
    0.013566403 = product of:
      0.040699206 = sum of:
        0.040699206 = product of:
          0.12209761 = sum of:
            0.12209761 = weight(_text_:objects in 3886) [ClassicSimilarity], result of:
              0.12209761 = score(doc=3886,freq=2.0), product of:
                0.2970264 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.055883802 = queryNorm
                0.41106653 = fieldWeight in 3886, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3886)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper investigates the acceleration of t-SNE, an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
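The Barnes-Hut idea can be sketched in one dimension, which is enough to show the mechanism: distant groups of points are summarized by their center of mass, so a pairwise sum such as the t-SNE-style repulsive term sum_j 1/(1 + (x - x_j)^2) costs O(N log N) instead of O(N^2). This is an illustrative sketch, not the paper's implementation; all names are ours.

```python
# 1-D Barnes-Hut sketch: a binary tree over points, where each cell
# stores its point count and center of mass. A cell far from the query
# point (width/distance < theta) is treated as one mass at its COM.

class Node:
    def __init__(self, points, lo, hi):
        self.n = len(points)
        self.com = sum(points) / self.n      # center of mass of the cell
        self.width = hi - lo
        self.children = []
        if self.n > 1 and self.width > 1e-9:
            mid = (lo + hi) / 2.0
            left = [p for p in points if p < mid]
            right = [p for p in points if p >= mid]
            if left:
                self.children.append(Node(left, lo, mid))
            if right:
                self.children.append(Node(right, mid, hi))

def bh_sum(node, x, theta=0.5):
    """Approximate sum over points p of 1 / (1 + (x - p)^2)."""
    d = abs(x - node.com)
    if not node.children or (d > 0 and node.width / d < theta):
        # Leaf, or cell far enough away: collapse it to its COM.
        return node.n / (1.0 + (x - node.com) ** 2)
    return sum(bh_sum(c, x, theta) for c in node.children)
```

With theta = 0 the recursion descends to the leaves and the sum is exact; larger theta trades accuracy for speed, exactly the knob the Barnes-Hut family exposes.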
  3. Suakkaphong, N.; Zhang, Z.; Chen, H.: Disease named entity recognition using semisupervised learning and conditional random fields (2011) 0.01
    0.009454172 = product of:
      0.028362514 = sum of:
        0.028362514 = product of:
          0.05672503 = sum of:
            0.05672503 = weight(_text_:2002 in 4367) [ClassicSimilarity], result of:
              0.05672503 = score(doc=4367,freq=2.0), product of:
                0.23954816 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.055883802 = queryNorm
                0.2368001 = fieldWeight in 4367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4367)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Information extraction is an important text-mining task that aims at extracting prespecified types of information from large text collections and making it available in structured representations such as databases. In the biomedical domain, information extraction can be applied to help biologists make the best use of their digital literature archives. Currently, large amounts of biomedical literature contain rich information about biomedical substances. Extracting such knowledge requires a good named entity recognition technique. In this article, we combine conditional random fields (CRFs), a state-of-the-art sequence-labeling algorithm, with two semisupervised learning techniques, bootstrapping and feature sampling, to recognize disease names in biomedical literature. Two data-processing strategies for each technique were also analyzed: one processing unlabeled data partitions sequentially and the other processing them in a round-robin fashion. The experimental results showed the advantage of semisupervised learning techniques given limited labeled training data. Specifically, CRFs with bootstrapping implemented in sequential fashion outperformed strictly supervised CRFs for disease name recognition. The project was supported by NIH/NLM Grant R33 LM07299-01, 2002-2005.
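The bootstrapping strategy described in the abstract is a self-training loop: train on labeled data, pseudo-label the unlabeled pool, keep only confident predictions, retrain. A minimal sketch follows; the toy 1-D threshold classifier stands in for the CRF, and every name here is illustrative, not the authors' code.

```python
# Generic self-training (bootstrapping) loop. `fit` trains a model on
# (x, label) pairs; `predict` returns (label, confidence) for one x.

def bootstrap(labeled, unlabeled, fit, predict, threshold=0.5, rounds=3):
    train, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit(train)
        confident, rest = [], []
        for x in pool:
            label, conf = predict(model, x)
            (confident if conf >= threshold else rest).append((x, label))
        if not confident:
            break
        train += confident            # add pseudo-labeled examples
        pool = [x for x, _ in rest]   # keep the rest for the next round
    return fit(train)

# Toy stand-in model: a decision boundary halfway between class means.
def fit(train):
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def predict(model, x):
    label = 1 if x > model else 0
    conf = min(1.0, abs(x - model) / 5.0)  # crude distance-based confidence
    return label, conf
```

The round-robin variant analyzed in the article differs only in how the unlabeled pool is partitioned and revisited across rounds.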
  4. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.00630957 = product of:
      0.018928709 = sum of:
        0.018928709 = product of:
          0.037857417 = sum of:
            0.037857417 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.037857417 = score(doc=668,freq=2.0), product of:
                0.19569555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.055883802 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2013 19:43:01
  5. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.01
    0.00630957 = product of:
      0.018928709 = sum of:
        0.018928709 = product of:
          0.037857417 = sum of:
            0.037857417 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.037857417 = score(doc=1605,freq=2.0), product of:
                0.19569555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.055883802 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  6. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.00630957 = product of:
      0.018928709 = sum of:
        0.018928709 = product of:
          0.037857417 = sum of:
            0.037857417 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.037857417 = score(doc=5011,freq=2.0), product of:
                0.19569555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.055883802 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    7. 3.2019 16:32:22
  7. Jäger, L.: Von Big Data zu Big Brother (2018) 0.01
    0.0050476557 = product of:
      0.015142967 = sum of:
        0.015142967 = product of:
          0.030285934 = sum of:
            0.030285934 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.030285934 = score(doc=5234,freq=2.0), product of:
                0.19569555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.055883802 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2018 11:33:49