Search (117 results, page 1 of 6)

  • theme_ss:"Data Mining"
  1. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.04
    0.0448773 = product of:
      0.0897546 = sum of:
        0.04717497 = weight(_text_:processing in 3464) [ClassicSimilarity], result of:
          0.04717497 = score(doc=3464,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 3464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
        0.04257962 = product of:
          0.06386943 = sum of:
            0.02824782 = weight(_text_:science in 3464) [ClassicSimilarity], result of:
              0.02824782 = score(doc=3464,freq=4.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.24694869 = fieldWeight in 3464, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
            0.03562161 = weight(_text_:29 in 3464) [ClassicSimilarity], result of:
              0.03562161 = score(doc=3464,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.23319192 = fieldWeight in 3464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended by the proposed weighting scheme and employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods and cross-compared with alternative methods. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
    Date
    1. 6.2010 9:29:57
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1105-1119
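    The relevance figures above each entry follow Lucene's ClassicSimilarity explain format. Below is a minimal sketch of how the numbers for result 1 compose, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(numDocs/(docFreq + 1)), per-term score = queryWeight * fieldWeight, coord scaling for partially matched clauses); the function and variable names are illustrative, not part of the catalog:
      import math

      def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
          # ClassicSimilarity per-term score: queryWeight * fieldWeight
          tf = math.sqrt(freq)                               # 1.4142135 for freq=2
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 4.048147 for "processing"
          query_weight = idf * query_norm                    # 0.175792
          field_weight = tf * idf * field_norm               # 0.26835677
          return query_weight * field_weight                 # 0.04717497

      QN, FN, N = 0.043425296, 0.046875, 44218               # queryNorm, fieldNorm, maxDocs
      processing = term_score(2.0, 2097, N, FN, QN)
      science = term_score(4.0, 8627, N, FN, QN)
      term_29 = term_score(2.0, 3565, N, FN, QN)

      inner = (science + term_29) * (2.0 / 3.0)              # coord(2/3) -> 0.04257962
      total = (processing + inner) * (2.0 / 4.0)             # coord(2/4) -> 0.0448773
      print(round(total, 7))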
  2. Cardie, C.: Empirical methods in information extraction (1997) 0.04
    0.039365895 = product of:
      0.07873179 = sum of:
        0.06289996 = weight(_text_:processing in 3246) [ClassicSimilarity], result of:
          0.06289996 = score(doc=3246,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.35780904 = fieldWeight in 3246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0625 = fieldNorm(doc=3246)
        0.015831826 = product of:
          0.047495477 = sum of:
            0.047495477 = weight(_text_:29 in 3246) [ClassicSimilarity], result of:
              0.047495477 = score(doc=3246,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.31092256 = fieldWeight in 3246, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3246)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    6. 3.1999 13:50:29
    Footnote
    Contribution to a special section reviewing recent research in empirical methods in speech recognition, syntactic parsing, semantic processing, information extraction and machine translation
  3. Suakkaphong, N.; Zhang, Z.; Chen, H.: Disease named entity recognition using semisupervised learning and conditional random fields (2011) 0.04
    0.0368198 = product of:
      0.0736396 = sum of:
        0.068091206 = weight(_text_:processing in 4367) [ClassicSimilarity], result of:
          0.068091206 = score(doc=4367,freq=6.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.38733965 = fieldWeight in 4367, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4367)
        0.005548396 = product of:
          0.016645188 = sum of:
            0.016645188 = weight(_text_:science in 4367) [ClassicSimilarity], result of:
              0.016645188 = score(doc=4367,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 4367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4367)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Information extraction is an important text-mining task that aims at extracting prespecified types of information from large text collections and making them available in structured representations such as databases. In the biomedical domain, information extraction can be applied to help biologists make the most of their digital-literature archives. Currently, there are large amounts of biomedical literature that contain rich information about biomedical substances. Extracting such knowledge requires a good named entity recognition technique. In this article, we combine conditional random fields (CRFs), a state-of-the-art sequence-labeling algorithm, with two semisupervised learning techniques, bootstrapping and feature sampling, to recognize disease names from biomedical literature. Two data-processing strategies for each technique were also analyzed: one processing unlabeled data partitions sequentially and the other processing them in a round-robin fashion. The experimental results showed the advantage of semisupervised learning techniques given limited labeled training data. Specifically, CRFs with bootstrapping implemented in sequential fashion outperformed strictly supervised CRFs for disease name recognition. The project was supported by NIH/NLM Grant R33 LM07299-01, 2002-2005.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.4, S.727-737
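    The two data-processing strategies described in the abstract above differ only in how the unlabeled partitions are scheduled through the bootstrapping loop. A rough sketch follows, with trivial stand-in train/label functions rather than the authors' CRF (partition contents and helper names are made up):
      def bootstrap(labeled, partitions, train, label, schedule="sequential", passes=2):
          model = train(labeled)
          if schedule == "sequential":
              # exhaust all passes over one partition before moving to the next
              order = [p for p in partitions for _ in range(passes)]
          else:
              # round-robin: one pass over each partition, then repeat
              order = [p for _ in range(passes) for p in partitions]
          for part in order:
              labeled = labeled + label(model, part)   # add self-labeled examples
              model = train(labeled)                   # retrain on the enlarged set
          return model

      # toy stand-ins so the sketch runs end to end
      train = lambda data: {"examples_seen": len(data)}
      label = lambda model, part: [(tok, "B-DISEASE") for tok in part]
      partitions = [["p1a", "p1b"], ["p2a"], ["p3a", "p3b"]]
      print(bootstrap([("x", "O")], partitions, train, label, schedule="round-robin"))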
  4. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.04
    0.035010517 = product of:
      0.07002103 = sum of:
        0.03931248 = weight(_text_:processing in 1605) [ClassicSimilarity], result of:
          0.03931248 = score(doc=1605,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 1605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.030708551 = product of:
          0.046062827 = sum of:
            0.016645188 = weight(_text_:science in 1605) [ClassicSimilarity], result of:
              0.016645188 = score(doc=1605,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
            0.029417641 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.029417641 = score(doc=1605,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
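    The central empirical claim of the abstract above, that query-volume series from Google Trends and Baidu Index are highly correlated, comes down to a correlation between two time series. A minimal sketch with made-up weekly volumes (the numbers are illustrative only, not the study's data):
      import numpy as np

      google_trends = np.array([42, 55, 61, 58, 70, 66, 73, 80], dtype=float)
      baidu_index = np.array([40, 53, 65, 60, 68, 69, 75, 78], dtype=float)

      # Pearson correlation between the two weekly query-volume series
      r = np.corrcoef(google_trends, baidu_index)[0, 1]
      print(f"Pearson r = {r:.3f}")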
  5. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.03
    0.028295456 = product of:
      0.05659091 = sum of:
        0.04717497 = weight(_text_:processing in 3015) [ClassicSimilarity], result of:
          0.04717497 = score(doc=3015,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 3015, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=3015)
        0.00941594 = product of:
          0.02824782 = sum of:
            0.02824782 = weight(_text_:science in 3015) [ClassicSimilarity], result of:
              0.02824782 = score(doc=3015,freq=4.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.24694869 = fieldWeight in 3015, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3015)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question whether these disciplines develop a distinctive language use, both individually and collectively, over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1668-1678
  6. Galal, G.M.; Cook, D.J.; Holder, L.B.: Exploiting parallelism in a structural scientific discovery system to improve scalability (1999) 0.03
    0.026916523 = product of:
      0.053833045 = sum of:
        0.04717497 = weight(_text_:processing in 2952) [ClassicSimilarity], result of:
          0.04717497 = score(doc=2952,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 2952, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=2952)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 2952) [ClassicSimilarity], result of:
              0.019974224 = score(doc=2952,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 2952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2952)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The large amount of data collected today is quickly overwhelming researchers' abilities to interpret the data and discover interesting patterns. Knowledge discovery and data mining approaches hold the potential to automate the interpretation process, but these approaches frequently utilize computationally expensive algorithms. In particular, scientific discovery systems focus on the utilization of richer data representation, sometimes without regard for scalability. This research investigates approaches for scaling a particular knowledge discovery in databases (KDD) system, SUBDUE, using parallel and distributed resources. SUBDUE has been used to discover interesting and repetitive concepts in graph-based databases from a variety of domains, but requires a substantial amount of processing time. Experiments that demonstrate scalability of parallel versions of the SUBDUE system are performed using CAD circuit databases and artificially-generated databases, and potential achievements and obstacles are discussed
    Source
    Journal of the American Society for Information Science. 50(1999) no.1, S.65-73
  7. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.03
    0.026916523 = product of:
      0.053833045 = sum of:
        0.04717497 = weight(_text_:processing in 2563) [ClassicSimilarity], result of:
          0.04717497 = score(doc=2563,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 2563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=2563)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 2563) [ClassicSimilarity], result of:
              0.019974224 = score(doc=2563,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 2563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2563)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Topic discovery is an important means for marketing, e-Business and social science studies. It can also be applied for various purposes, such as identifying a group with certain properties and observing the emergence and diminishment of a certain cyber community. Previous topic discovery work (J.M. Kleinberg, Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, p. 668) requires manual judgment of the usefulness of outcomes and is thus incapable of handling the explosive growth of the Internet. In this paper, we propose the Automatic Topic Discovery (ATD) method, which combines a method of base set construction, a clustering algorithm and an iterative principal eigenvector computation method to discover the topics relevant to a given query without using manual examination. Given a query, ATD returns the topics associated with the query and top representative pages for each topic. Our experiments show that the ATD method performs better than the traditional eigenvector method in terms of computation time and topic discovery quality.
    Source
    Information processing and management. 40(2004) no.2, S.239-255
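    The "iterative principal eigenvector computation" mentioned in the ATD abstract is not spelled out in the record; the usual way to obtain a principal eigenvector over a link matrix is power iteration. Below is a generic sketch on a made-up adjacency matrix (HITS-style authority scores; all values illustrative, not taken from the paper):
      import numpy as np

      def principal_eigenvector(A, iterations=100, tol=1e-9):
          # power iteration: repeatedly apply A and renormalize
          v = np.ones(A.shape[0]) / A.shape[0]
          for _ in range(iterations):
              v_next = A @ v
              norm = np.linalg.norm(v_next)
              if norm == 0.0:
                  return v_next
              v_next /= norm
              if np.linalg.norm(v_next - v) < tol:
                  break
              v = v_next
          return v_next

      # toy link matrix: links[i, j] = 1 if page i links to page j
      links = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
      authority = principal_eigenvector(links.T @ links)
      print(authority.round(3))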
  8. Wang, W.M.; Cheung, C.F.; Lee, W.B.; Kwok, S.K.: Mining knowledge from natural language texts using fuzzy associated concept mapping (2008) 0.02
    0.024457855 = product of:
      0.04891571 = sum of:
        0.044476993 = weight(_text_:processing in 2121) [ClassicSimilarity], result of:
          0.044476993 = score(doc=2121,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.2530092 = fieldWeight in 2121, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.03125 = fieldNorm(doc=2121)
        0.004438717 = product of:
          0.01331615 = sum of:
            0.01331615 = weight(_text_:science in 2121) [ClassicSimilarity], result of:
              0.01331615 = score(doc=2121,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.11641272 = fieldWeight in 2121, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2121)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Natural Language Processing (NLP) techniques have been successfully used to automatically extract information from unstructured text through a detailed analysis of its content, often to satisfy particular information needs. In this paper, an automatic concept map construction technique, Fuzzy Association Concept Mapping (FACM), is proposed for the conversion of abstracted short texts into concept maps. The approach consists of a linguistic module and a recommendation module. The linguistic module is a text mining method that does not require the user to have any prior knowledge about using NLP techniques. It incorporates rule-based reasoning (RBR) and case-based reasoning (CBR) for anaphoric resolution. It aims at extracting the propositions in text so as to construct a concept map automatically. The recommendation module is arrived at by adopting fuzzy set theories. It is an interactive process which provides suggestions of propositions for further human refinement of the automatically generated concept maps. The suggested propositions are relationships among the concepts which are not explicitly found in the paragraphs. This technique helps to stimulate individual reflection and generate new knowledge. Evaluation was carried out by using the Science Citation Index (SCI) abstract database and CNET News as test data, which are well-known databases whose text quality is assured. Experimental results show that the automatically generated concept maps conform to the outputs generated manually by domain experts, since the degree of difference between them is proportionally small. The method provides users with the ability to convert scientific and short texts into a structured format which can be easily processed by computer. Moreover, it provides knowledge workers with extra time to re-think their written text and to view their knowledge from another angle.
    Source
    Information processing and management. 44(2008) no.5, S.1707-1719
  9. Analytische Informationssysteme : Data Warehouse, On-Line Analytical Processing, Data Mining (1999) 0.02
    0.023831924 = product of:
      0.0953277 = sum of:
        0.0953277 = weight(_text_:processing in 1381) [ClassicSimilarity], result of:
          0.0953277 = score(doc=1381,freq=6.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.54227555 = fieldWeight in 1381, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1381)
      0.25 = coord(1/4)
    
    Abstract
    Alongside the operational information systems that support the handling of day-to-day business, information systems for the analytical tasks of specialists and managers are increasingly coming to the fore. In almost all companies, terms and concepts such as Data Warehouse, On-Line Analytical Processing and Data Mining are currently being discussed and the corresponding products evaluated. Against this background, this edited volume aims to provide an up-to-date overview of technologies, products and trends. The various contributions from industry and academia can offer practitioners valuable guidance as a basis for decisions when building and deploying such analytical information systems.
    Content
    Fundamentals.- Data Warehouse.- On-line Analytical Processing.- Data Mining.- Business and strategic aspects.
  10. Intelligent information processing and web mining : Proceedings of the International IIS: IIPWM'03 Conference held in Zakopane, Poland, June 2-5, 2003 (2003) 0.02
    0.023587486 = product of:
      0.09434994 = sum of:
        0.09434994 = weight(_text_:processing in 4642) [ClassicSimilarity], result of:
          0.09434994 = score(doc=4642,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.53671354 = fieldWeight in 4642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.09375 = fieldNorm(doc=4642)
      0.25 = coord(1/4)
    
  11. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.02
    0.023579547 = product of:
      0.047159094 = sum of:
        0.03931248 = weight(_text_:processing in 5997) [ClassicSimilarity], result of:
          0.03931248 = score(doc=5997,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.007846616 = product of:
          0.023539849 = sum of:
            0.023539849 = weight(_text_:science in 5997) [ClassicSimilarity], result of:
              0.023539849 = score(doc=5997,freq=4.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20579056 = fieldWeight in 5997, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Given the huge amount of information on the internet and in practically every domain of knowledge that we face today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
  12. Tonkin, E.L.; Tourte, G.J.L.: Working with text : tools, techniques and approaches for text mining (2016) 0.02
    0.023579547 = product of:
      0.047159094 = sum of:
        0.03931248 = weight(_text_:processing in 4019) [ClassicSimilarity], result of:
          0.03931248 = score(doc=4019,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 4019, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4019)
        0.007846616 = product of:
          0.023539849 = sum of:
            0.023539849 = weight(_text_:science in 4019) [ClassicSimilarity], result of:
              0.023539849 = score(doc=4019,freq=4.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20579056 = fieldWeight in 4019, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4019)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    What is text mining, and how can it be used? What relevance do these methods have to everyday work in information science and the digital humanities? How does one develop competences in text mining? Working with Text provides a series of cross-disciplinary perspectives on text mining and its applications. As text mining raises legal and ethical issues, the legal background of text mining and the responsibilities of the engineer are discussed in this book. Chapters provide an introduction to the use of the popular GATE text mining package with data drawn from social media, the use of text mining to support semantic search, the development of an authority system to support content tagging, and recent techniques in automatic language evaluation. Focused studies describe text mining on historical texts, automated indexing using constrained vocabularies, and the use of natural language processing to explore the climate science literature. Interviews are included that offer a glimpse into the real-life experience of working within commercial and academic text mining.
  13. Varathan, K.D.; Giachanou, A.; Crestani, F.: Comparative opinion mining : a review (2017) 0.02
    0.022430437 = product of:
      0.044860873 = sum of:
        0.03931248 = weight(_text_:processing in 3540) [ClassicSimilarity], result of:
          0.03931248 = score(doc=3540,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 3540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3540)
        0.005548396 = product of:
          0.016645188 = sum of:
            0.016645188 = weight(_text_:science in 3540) [ClassicSimilarity], result of:
              0.016645188 = score(doc=3540,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 3540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3540)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Opinion mining refers to the use of natural language processing, text analysis, and computational linguistics to identify and extract subjective information in textual material. Opinion mining, also known as sentiment analysis, has received a lot of attention in recent times, as it provides a number of tools to analyze public opinion on a number of different topics. Comparative opinion mining is a subfield of opinion mining which deals with identifying and extracting information that is expressed in a comparative form (e.g., "paper X is better than paper Y"). Comparative opinion mining plays a very important role when one tries to evaluate something because it provides a reference point for the comparison. This paper provides a review of the area of comparative opinion mining. It is the first review that covers this topic specifically, as all previous reviews dealt mostly with general opinion mining. This survey covers comparative opinion mining from two different angles: one from the perspective of techniques and the other from the perspective of comparative opinion elements. It also incorporates preprocessing tools as well as data sets that were used by past researchers and that can be useful to future researchers in the field of comparative opinion mining.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.811-829
  14. Analytische Informationssysteme : Data Warehouse, On-Line Analytical Processing, Data Mining (1998) 0.02
    0.022238497 = product of:
      0.08895399 = sum of:
        0.08895399 = weight(_text_:processing in 1380) [ClassicSimilarity], result of:
          0.08895399 = score(doc=1380,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.5060184 = fieldWeight in 1380, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0625 = fieldNorm(doc=1380)
      0.25 = coord(1/4)
    
    Abstract
    Alongside the operational information systems, information systems for the analytical tasks of specialists and managers are increasingly coming to the fore. In almost all companies, terms and concepts such as Data Warehouse, On-Line Analytical Processing and Data Mining are currently being discussed and the corresponding products evaluated. Against this background, this edited volume aims to provide an up-to-date overview of technologies, products and trends. The various contributions from industry and academia can offer practitioners valuable guidance as a basis for decisions when building and deploying such analytical information systems.
  15. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.02
    0.017944349 = product of:
      0.035888698 = sum of:
        0.03144998 = weight(_text_:processing in 1178) [ClassicSimilarity], result of:
          0.03144998 = score(doc=1178,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.17890452 = fieldWeight in 1178, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.03125 = fieldNorm(doc=1178)
        0.004438717 = product of:
          0.01331615 = sum of:
            0.01331615 = weight(_text_:science in 1178) [ClassicSimilarity], result of:
              0.01331615 = score(doc=1178,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.11641272 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
  16. Gaizauskas, R.; Wilks, Y.: Information extraction : beyond document retrieval (1998) 0.02
    0.016678872 = product of:
      0.06671549 = sum of:
        0.06671549 = weight(_text_:processing in 4716) [ClassicSimilarity], result of:
          0.06671549 = score(doc=4716,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3795138 = fieldWeight in 4716, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=4716)
      0.25 = coord(1/4)
    
    Abstract
    In this paper we give a synoptic view of the growth of the text processing technology of information extraction (IE) whose function is to extract information about a pre-specified set of entities, relations or events from natural language texts and to record this information in structured representations called templates. Here we describe the nature of the IE task, review the history of the area from its origins in AI work in the 1960s and 70s till the present, discuss the techniques being used to carry out the task, describe application areas where IE systems are or are about to be at work, and conclude with a discussion of the challenges facing the area. What emerges is a picture of an exciting new text processing technology with a host of new applications, both on its own and in conjunction with other technologies, such as information retrieval, machine translation and data mining.
  17. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.02
    0.016678872 = product of:
      0.06671549 = sum of:
        0.06671549 = weight(_text_:processing in 3205) [ClassicSimilarity], result of:
          0.06671549 = score(doc=3205,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3795138 = fieldWeight in 3205, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
      0.25 = coord(1/4)
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
  18. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.02
    0.01576062 = product of:
      0.06304248 = sum of:
        0.06304248 = product of:
          0.09456371 = sum of:
            0.047495477 = weight(_text_:29 in 1270) [ClassicSimilarity], result of:
              0.047495477 = score(doc=1270,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.31092256 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
            0.047068227 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.047068227 = score(doc=1270,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  19. Survey of text mining : clustering, classification, and retrieval (2004) 0.01
    0.01389906 = product of:
      0.05559624 = sum of:
        0.05559624 = weight(_text_:processing in 804) [ClassicSimilarity], result of:
          0.05559624 = score(doc=804,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3162615 = fieldWeight in 804, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
      0.25 = coord(1/4)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
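    As a pointer to what the vector space approaches surveyed in this book look like in practice, here is a minimal TF-IDF plus k-means sketch (the documents and cluster count are made up; this is not an example taken from the book):
      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = [
          "data mining of large text collections",
          "clustering documents in a vector space model",
          "retrieval of relevant documents by keyword search",
          "search engines rank documents for retrieval",
      ]
      vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
      print(labels)   # cluster assignment per document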
  20. Saggi, M.K.; Jain, S.: ¬A survey towards an integration of big data analytics to big insights for value-creation (2018) 0.01
    0.01389906 = product of:
      0.05559624 = sum of:
        0.05559624 = weight(_text_:processing in 5053) [ClassicSimilarity], result of:
          0.05559624 = score(doc=5053,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3162615 = fieldWeight in 5053, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5053)
      0.25 = coord(1/4)
    
    Abstract
    Big Data Analytics (BDA) is increasingly becoming a trending practice that generates an enormous amount of data and provides a new opportunity that is helpful in relevant decision-making. The developments in Big Data Analytics provide a new paradigm and solutions for big data sources, storage, and advanced analytics. BDA provides a nuanced view of big data development and insights into how it can truly create value for firms and customers. This article presents a comprehensive, well-informed examination and realistic analysis of deploying big data analytics successfully in companies. It provides an overview of the architecture of BDA, including six components, namely: (i) data generation, (ii) data acquisition, (iii) data storage, (iv) advanced data analytics, (v) data visualization, and (vi) decision-making for value-creation. In this paper, the seven V's of BDA, namely Volume, Velocity, Variety, Valence, Veracity, Variability, and Value, are explored. The various big data analytics tools, techniques and technologies are described. Furthermore, the article presents a methodical analysis of the usage of Big Data Analytics in applications such as agriculture, healthcare, cyber security, and smart cities. It also highlights previous research, challenges, current status, and future directions of big data analytics for various application platforms. This overview highlights three issues, namely (i) concepts, characteristics and processing paradigms of Big Data Analytics; (ii) the state-of-the-art framework for decision-making in BDA that helps companies derive value-creating insights; and (iii) the current challenges of Big Data Analytics as well as possible future directions.
    Source
    Information processing and management. 54(2018) no.5, S.758-790

Languages

  • e 101
  • d 16

Types

  • a 101
  • m 12
  • s 12
  • el 4