Search (24 results, page 1 of 2)

  • theme_ss:"Data Mining"
  1. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.10
    0.09513286 = product of:
      0.14269929 = sum of:
        0.10535134 = weight(_text_:citation in 3059) [ClassicSimilarity], result of:
          0.10535134 = score(doc=3059,freq=6.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.44868594 = fieldWeight in 3059, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3059)
        0.03734795 = product of:
          0.0746959 = sum of:
            0.0746959 = weight(_text_:index in 3059) [ClassicSimilarity], result of:
              0.0746959 = score(doc=3059,freq=4.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3413878 = fieldWeight in 3059, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
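    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and coord() scales a sum by the fraction of query clauses that matched. A minimal sketch reproducing the arithmetic of this first result (the function names are ours, not Lucene's API):

      from math import sqrt

      def term_score(freq, idf, query_norm, field_norm):
          """One term's contribution under Lucene ClassicSimilarity."""
          query_weight = idf * query_norm               # e.g. 0.23479973
          field_weight = sqrt(freq) * idf * field_norm  # e.g. 0.44868594
          return query_weight * field_weight

      citation = term_score(6.0, 4.6892867, 0.050071523, 0.0390625)  # 0.10535134
      index    = term_score(4.0, 4.369764,  0.050071523, 0.0390625)  # 0.0746959

      # "index" sits in a nested clause, so it carries its own coord(1/2);
      # the top level applies coord(2/3) because 2 of 3 query clauses matched.
      score = (citation + index * 0.5) * (2.0 / 3.0)
      print(f"{score:.8f}")  # ~0.09513286, displayed above as 0.10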
    
    Abstract
    This study proposes a new method to construct and trace the trajectory of conceptual development of a research field by combining main path analysis, citation analysis, and text-mining techniques. Main path analysis, a method used commonly to trace the most critical path in a citation network, helps describe the developmental trajectory of a research field. This study extends the main path analysis method and applies text-mining techniques in the new method, which reflects the trajectory of conceptual development in an academic research field more accurately than citation frequency, which represents only the articles examined. Articles can be merged based on similarity of concepts, and by merging concepts the history of a research field can be described more precisely. The new method was applied to the "h-index" and "text mining" fields. The precision, recall, and F-measures for the h-index field were 0.738, 0.652, and 0.658, and those for text mining were 0.501, 0.653, and 0.551, respectively. Finally, this study not only establishes the conceptual trajectory map of a research field, but also recommends keywords that are more precise than those currently used by researchers. These precise keywords could enable researchers to gather related works more quickly than before.
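    Main path analysis, as named in the abstract, typically weights each arc of the citation network by a traversal count such as the Search Path Count (SPC) and then follows the heaviest arcs. A sketch under those assumptions (greedy forward search; the paper's extension, merging conceptually similar articles via text mining, is not shown):

      import networkx as nx

      def spc_weights(g):
          """Search Path Count: for arc (u, v), the number of source-to-sink
          paths of the citation DAG that pass through it."""
          order = list(nx.topological_sort(g))
          down = {}  # number of paths reaching n from any source
          for n in order:
              preds = list(g.predecessors(n))
              down[n] = 1 if not preds else sum(down[p] for p in preds)
          up = {}    # number of paths from n to any sink
          for n in reversed(order):
              succs = list(g.successors(n))
              up[n] = 1 if not succs else sum(up[s] for s in succs)
          return {(u, v): down[u] * up[v] for u, v in g.edges}

      def greedy_main_path(g):
          """Start at the heaviest arc out of a source, then keep following
          the heaviest outgoing arc: an approximation of the critical path."""
          w = spc_weights(g)
          sources = [n for n in g if g.in_degree(n) == 0]
          u, v = max(((s, t) for s in sources for t in g.successors(s)), key=w.get)
          path = [u, v]
          while list(g.successors(v)):
              v = max(g.successors(v), key=lambda s: w[(v, s)])
              path.append(v)
          return path

      # Arcs point from an earlier paper to the later papers building on it.
      g = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])
      print(greedy_main_path(g))  # ['A', 'B', 'D', 'E'] (ties broken arbitrarily)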
  2. Leydesdorff, L.; Persson, O.: Mapping the geography of science : distribution patterns and networks of relations among cities and institutes (2010) 0.09
    0.089942396 = product of:
      0.1349136 = sum of:
        0.1032228 = weight(_text_:citation in 3704) [ClassicSimilarity], result of:
          0.1032228 = score(doc=3704,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.4396206 = fieldWeight in 3704, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=3704)
        0.031690784 = product of:
          0.06338157 = sum of:
            0.06338157 = weight(_text_:index in 3704) [ClassicSimilarity], result of:
              0.06338157 = score(doc=3704,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.28967714 = fieldWeight in 3704, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3704)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Using Google Earth, Google Maps, and/or network visualization programs such as Pajek, one can overlay the network of relations among addresses in scientific publications onto the geographic map. The authors discuss the pros and cons of various options, and provide software (freeware) for bridging existing gaps between the Science Citation Indices (Thomson Reuters) and Scopus (Elsevier), on the one hand, and these various visualization tools on the other. At the level of city names, the global map can be drawn reliably on the basis of the available address information. At the level of the names of organizations and institutes, there are problems of unification both in the ISI databases and in Scopus. Pajek enables a combination of visualization and statistical analysis, whereas Google Maps and its derivatives provide superior tools on the Internet.
    Object
    Science Citation Index
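    The overlay described in the abstract above can be produced with nothing more than plain KML, which Google Earth reads directly. A hedged sketch (city coordinates and counts are invented placeholders, and this is not the authors' released freeware):

      cities = {  # city: ((longitude, latitude), publication count)
          "Amsterdam": ((4.90, 52.37), 120),
          "Leiden":    ((4.49, 52.16), 95),
          "Umea":      ((20.26, 63.83), 40),
      }
      links = [("Amsterdam", "Leiden"), ("Amsterdam", "Umea")]

      def to_kml(cities, links):
          parts = ['<?xml version="1.0" encoding="UTF-8"?>',
                   '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>']
          for name, ((lon, lat), n) in cities.items():
              parts.append(f"<Placemark><name>{name} ({n})</name>"
                           f"<Point><coordinates>{lon},{lat},0</coordinates>"
                           f"</Point></Placemark>")
          for a, b in links:  # one line per collaboration relation
              (lon1, lat1), _ = cities[a]
              (lon2, lat2), _ = cities[b]
              parts.append(f"<Placemark><LineString><coordinates>"
                           f"{lon1},{lat1},0 {lon2},{lat2},0"
                           f"</coordinates></LineString></Placemark>")
          parts.append("</Document></kml>")
          return "\n".join(parts)

      with open("science_map.kml", "w") as f:  # open this file in Google Earth
          f.write(to_kml(cities, links))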
  3. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.08
    0.08028441 = product of:
      0.24085322 = sum of:
        0.24085322 = weight(_text_:citation in 3940) [ClassicSimilarity], result of:
          0.24085322 = score(doc=3940,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            1.0257815 = fieldWeight in 3940, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.109375 = fieldNorm(doc=3940)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
  4. Schwartz, F.; Fang, Y.C.: Citation data analysis on hydrogeology (2007) 0.05
    0.053795323 = product of:
      0.16138597 = sum of:
        0.16138597 = weight(_text_:citation in 433) [ClassicSimilarity], result of:
          0.16138597 = score(doc=433,freq=22.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.68733454 = fieldWeight in 433, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=433)
      0.33333334 = coord(1/3)
    
    Abstract
    This article explores the status of research in hydrogeology using data mining techniques. First, we try to explain what citation analysis is and review some of the previous work on citation analysis. The main idea in this article is to address some common issues about citation numbers and the use of these data. To validate the use of citation numbers, we compare the citation patterns for Water Resources Research papers in the 1980s with those in the 1990s. The citation growth of highly cited authors from the 1980s is used to examine whether it is possible to predict the citation patterns of highly cited authors in the 1990s. If the citation data prove to be steady and stable, these numbers can then be used to explore the evolution of science in hydrogeology. The famous quotation, "If you are not the lead dog, the scenery never changes," attributed to Lee Iacocca, points to the importance of an entrepreneurial spirit in all forms of endeavor. In the case of hydrogeological research, impact analysis makes it clear how important it is to be a pioneer. Statistical correlation coefficients are used to retrieve, from a collection of 2,847 papers published before and after 1991, the papers sharing the same topics as 273 papers published in 1991 in Water Resources Research. The numbers of papers before and after 1991 are then plotted against various levels of citations for papers in 1991 to compare the distributions of the paper population before and after that year. The similarity metrics based on word counts ensure that the "before" papers are like ancestors and the "after" papers are like descendants in the same type of research. This exercise gives us an idea of how many papers appeared before and after 1991 (1991 was chosen because the numbers of papers before and after that year are balanced). In addition, the impact of papers is measured in terms of citations, presented as a "percentile," a relative measure based on rankings within one year, in order to minimize the effect of time.
    Theme
    Citation indexing
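    The year-relative "percentile" measure described in the abstract above can be computed directly from a cohort's citation counts; a minimal sketch (the cohort numbers are invented):

      from bisect import bisect_left

      def citation_percentiles(counts):
          """Percentile rank (0-100) of each paper's citation count within
          its own publication year, minimizing the effect of time."""
          ranked = sorted(counts)
          n = len(ranked)
          return [100.0 * bisect_left(ranked, c) / n for c in counts]

      # Five hypothetical 1991 papers: 40 citations ranks at the 80th percentile.
      print(citation_percentiles([3, 11, 0, 40, 7]))  # [20.0, 60.0, 0.0, 80.0, 40.0]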
  5. Wang, W.M.; Cheung, C.F.; Lee, W.B.; Kwok, S.K.: Mining knowledge from natural language texts using fuzzy associated concept mapping (2008) 0.05
    0.04652459 = product of:
      0.069786884 = sum of:
        0.048659697 = weight(_text_:citation in 2121) [ClassicSimilarity], result of:
          0.048659697 = score(doc=2121,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.20723915 = fieldWeight in 2121, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=2121)
        0.021127189 = product of:
          0.042254377 = sum of:
            0.042254377 = weight(_text_:index in 2121) [ClassicSimilarity], result of:
              0.042254377 = score(doc=2121,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.1931181 = fieldWeight in 2121, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2121)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Natural Language Processing (NLP) techniques have been successfully used to automatically extract information from unstructured text through a detailed analysis of its content, often to satisfy particular information needs. In this paper, an automatic concept map construction technique, Fuzzy Association Concept Mapping (FACM), is proposed for the conversion of abstracted short texts into concept maps. The approach consists of a linguistic module and a recommendation module. The linguistic module is a text mining method that does not require the user to have any prior knowledge of NLP techniques. It incorporates rule-based reasoning (RBR) and case-based reasoning (CBR) for anaphora resolution. It aims at extracting the propositions in text so as to construct a concept map automatically. The recommendation module is arrived at by adopting fuzzy set theories. It is an interactive process which provides suggestions of propositions for further human refinement of the automatically generated concept maps. The suggested propositions are relationships among the concepts which are not explicitly found in the paragraphs. This technique helps to stimulate individual reflection and generate new knowledge. Evaluation was carried out using the Science Citation Index (SCI) abstract database and CNET News as test data, which are well-known databases whose text quality is assured. Experimental results show that the automatically generated concept maps conform to the outputs generated manually by domain experts, since the degree of difference between them is proportionally small. The method provides users with the ability to convert short scientific texts into a structured format which can be easily processed by computer. Moreover, it provides knowledge workers with extra time to re-think their written text and to view their knowledge from another angle.
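    The abstract does not specify how the recommendation module grades fuzzy associations; one common construction, shown here purely as an illustrative assumption rather than the authors' FACM algorithm, grades the association of two concepts by their co-occurrence normalized by the rarer concept's frequency:

      from collections import Counter
      from itertools import combinations

      sentences = [  # concepts extracted per sentence (invented toy data)
          {"text mining", "concept map"},
          {"concept map", "fuzzy set"},
          {"text mining", "fuzzy set", "concept map"},
      ]
      freq, co = Counter(), Counter()
      for s in sentences:
          freq.update(s)
          co.update(frozenset(p) for p in combinations(sorted(s), 2))

      def association(a, b):
          """Membership degree in [0, 1] of the fuzzy relation 'a relates to b'."""
          return co[frozenset((a, b))] / min(freq[a], freq[b])

      # Propositions above a threshold could be suggested for human refinement.
      print(association("text mining", "concept map"))  # 1.0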
  6. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.04
    0.041801147 = product of:
      0.12540343 = sum of:
        0.12540343 = sum of:
          0.09148343 = weight(_text_:index in 1605) [ClassicSimilarity], result of:
            0.09148343 = score(doc=1605,freq=6.0), product of:
              0.21880072 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.050071523 = queryNorm
              0.418113 = fieldWeight in 1605, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
          0.03392 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
            0.03392 = score(doc=1605,freq=2.0), product of:
              0.17534193 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050071523 = queryNorm
              0.19345059 = fieldWeight in 1605, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
      0.33333334 = coord(1/3)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries, but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documentation and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated, and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability: Baidu Index was able to provide more search volume data than Google Trends. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China: Google's user base in many countries is smaller than its user base in China, so search volume data related to those countries could suffer from the same issue.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
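    The study's central comparison reduces to correlating query volume series from the two services; a sketch with invented weekly volumes:

      import numpy as np

      google_trends = np.array([55, 60, 58, 72, 80, 77, 90])  # placeholder data
      baidu_index   = np.array([50, 57, 60, 70, 83, 75, 88])

      r = np.corrcoef(google_trends, baidu_index)[0, 1]
      print(f"Pearson r = {r:.3f}")  # values near 1 mean the sources agree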
  7. Berendt, B.; Krause, B.; Kolbe-Nusser, S.: Intelligent scientific authoring tools : interactive data mining for constructive uses of citation networks (2010) 0.03
    0.0344076 = product of:
      0.1032228 = sum of:
        0.1032228 = weight(_text_:citation in 4226) [ClassicSimilarity], result of:
          0.1032228 = score(doc=4226,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.4396206 = fieldWeight in 4226, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=4226)
      0.33333334 = coord(1/3)
    
    Abstract
    Many powerful methods and tools exist for extracting meaning from scientific publications, their texts, and their citation links. However, existing proposals often neglect a fundamental aspect of learning: that understanding and learning require an active and constructive exploration of a domain. In this paper, we describe a new method and a tool that use data mining and interactivity to turn the typical search and retrieve dialogue, in which the user asks questions and a system gives answers, into a dialogue that also involves sense-making, in which the user has to become active by constructing a bibliography and a domain model of the search term(s). This model starts from an automatically generated and annotated clustering solution that is iteratively modified by users. The tool is part of an integrated authoring system covering all phases from search through reading and sense-making to writing. Two evaluation studies demonstrate the usability of this interactive and constructive approach, and they show that clusters and groups represent identifiable sub-topics.
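    The "automatically generated and annotated clustering solution" that seeds the interaction could start from something as simple as TF-IDF vectors plus k-means; a hedged sketch (the titles are invented, and the paper's own pipeline is richer):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import KMeans

      titles = [
          "main path analysis of citation networks",
          "text mining for concept extraction",
          "clustering scientific publications by topic",
          "citation-based mapping of research fields",
      ]
      X = TfidfVectorizer().fit_transform(titles)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(labels)  # initial cluster assignment, to be refined by the user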
  8. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.02
    0.015829332 = product of:
      0.047487997 = sum of:
        0.047487997 = product of:
          0.09497599 = sum of:
            0.09497599 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.09497599 = score(doc=4577,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    2. 4.2000 18:01:22
  9. KDD : techniques and applications (1998) 0.01
    0.013567999 = product of:
      0.040703997 = sum of:
        0.040703997 = product of:
          0.081407994 = sum of:
            0.081407994 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.081407994 = score(doc=6783,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  10. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.01
    0.012449317 = product of:
      0.03734795 = sum of:
        0.03734795 = product of:
          0.0746959 = sum of:
            0.0746959 = weight(_text_:index in 5997) [ClassicSimilarity], result of:
              0.0746959 = score(doc=5997,freq=4.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3413878 = fieldWeight in 5997, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
  11. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.011565063 = product of:
      0.03469519 = sum of:
        0.03469519 = sum of:
          0.021127189 = weight(_text_:index in 1789) [ClassicSimilarity], result of:
            0.021127189 = score(doc=1789,freq=2.0), product of:
              0.21880072 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.050071523 = queryNorm
              0.09655905 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.013568 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.013568 = score(doc=1789,freq=2.0), product of:
              0.17534193 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050071523 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.33333334 = coord(1/3)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization to help users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. The text concludes with a thorough and useful 17-page index and a lengthy 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL suggests that readers may view all the book's figures in color online, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or to give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
  12. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.009045334 = product of:
      0.027136 = sum of:
        0.027136 = product of:
          0.054272 = sum of:
            0.054272 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.054272 = score(doc=1737,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22.11.1998 18:57:22
  13. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    0.009045334 = product of:
      0.027136 = sum of:
        0.027136 = product of:
          0.054272 = sum of:
            0.054272 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.054272 = score(doc=4261,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    17. 7.2002 19:22:06
  14. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.01
    0.009045334 = product of:
      0.027136 = sum of:
        0.027136 = product of:
          0.054272 = sum of:
            0.054272 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.054272 = score(doc=1270,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  15. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.007914666 = product of:
      0.023743998 = sum of:
        0.023743998 = product of:
          0.047487997 = sum of:
            0.047487997 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.047487997 = score(doc=2908,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  16. Mining text data (2012) 0.01
    0.0070423963 = product of:
      0.021127189 = sum of:
        0.021127189 = product of:
          0.042254377 = sum of:
            0.042254377 = weight(_text_:index in 362) [ClassicSimilarity], result of:
              0.042254377 = score(doc=362,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.1931181 = fieldWeight in 362, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=362)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Contents: An Introduction to Text Mining.- Information Extraction from Text.- A Survey of Text Summarization Techniques.- A Survey of Text Clustering Algorithms.- Dimensionality Reduction and Topic Modeling.- A Survey of Text Classification Algorithms.- Transfer Learning for Text Mining.- Probabilistic Models for Text Mining.- Mining Text Streams.- Translingual Mining from Text Data.- Text Mining in Multimedia.- Text Analytics in Social Media.- A Survey of Opinion Mining and Sentiment Analysis.- Biomedical Text Mining: A Survey of Recent Progress.- Index.
  17. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.01
    0.0067839995 = product of:
      0.020351999 = sum of:
        0.020351999 = product of:
          0.040703997 = sum of:
            0.040703997 = weight(_text_:22 in 1383) [ClassicSimilarity], result of:
              0.040703997 = score(doc=1383,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.23214069 = fieldWeight in 1383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1383)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:46:06
  18. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.005653334 = product of:
      0.01696 = sum of:
        0.01696 = product of:
          0.03392 = sum of:
            0.03392 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.03392 = score(doc=668,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2013 19:43:01
  19. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.005653334 = product of:
      0.01696 = sum of:
        0.01696 = product of:
          0.03392 = sum of:
            0.03392 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.03392 = score(doc=5011,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    7. 3.2019 16:32:22
  20. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.00
    0.004522667 = product of:
      0.013568 = sum of:
        0.013568 = product of:
          0.027136 = sum of:
            0.027136 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.027136 = score(doc=1507,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 4.2003 11:45:36

Languages

  • e 17
  • d 7

Types