Search (54 results, page 1 of 3)

  • theme_ss:"Data Mining"
  1. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.04
    0.038064104 = product of:
      0.07612821 = sum of:
        0.07612821 = sum of:
          0.040714126 = weight(_text_:research in 668) [ClassicSimilarity], result of:
            0.040714126 = score(doc=668,freq=6.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.2729826 = fieldWeight in 668, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=668)
          0.03541408 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
            0.03541408 = score(doc=668,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.19345059 = fieldWeight in 668, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=668)
      0.5 = coord(1/2)
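
    The indented blocks throughout this list are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) ranking formula. As a sanity check on the arithmetic, here is a minimal sketch that recomputes this first entry's score from the values shown above; the function mirrors the trace structure, and it reproduces the listed numbers up to small floating-point differences.

    ```python
    import math

    def term_score(freq, idf, query_norm, field_norm):
        """One term's contribution in Lucene ClassicSimilarity:
        queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm."""
        tf = math.sqrt(freq)                      # tf(freq) = sqrt(freq)
        query_weight = idf * query_norm           # e.g. 0.1491455 for "research"
        field_weight = tf * idf * field_norm      # e.g. 0.2729826 for "research"
        return query_weight * field_weight

    QUERY_NORM = 0.05227703                       # shared queryNorm for this query
    research = term_score(6.0, 2.8529835, QUERY_NORM, 0.0390625)
    term_22  = term_score(2.0, 3.5018296, QUERY_NORM, 0.0390625)

    coord = 1 / 2                                 # 1 of 2 query clauses matched
    print((research + term_22) * coord)           # ~0.0381, the listed 0.038064104
    ```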
    
    Abstract
    The 20th-century massification of higher education and research in academia is said to have produced structurally stratified higher education systems in many countries. Most manifestly, the research mission of universities appears to be divisive. Authors have claimed that the Swedish system, while formally unified, has developed into a binary state, and statistics seem to support this conclusion. This article makes use of a comprehensive statistical data source on Swedish higher education institutions to illustrate stratification, and uses literature on Swedish research policy history to contextualize the statistics. Highlighting the opportunities as well as constraints of the data, the article argues that there is great merit in combining statistics with a qualitative analysis when studying the structural characteristics of national higher education systems. Not least, the article shows that it is an over-simplification to describe the Swedish system as binary; the stratification is more complex. On the basis of the analysis, the article also argues that while global trends certainly influence national developments, higher education systems have country-specific features that may enrich the understanding of how systems evolve and therefore should be analyzed as part of a broader study of the increasingly globalized academic system.
    Date
    22. 3.2013 19:43:01
  2. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.03
    0.029460195 = product of:
      0.05892039 = sum of:
        0.05892039 = sum of:
          0.023506312 = weight(_text_:research in 5011) [ClassicSimilarity], result of:
            0.023506312 = score(doc=5011,freq=2.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.15760657 = fieldWeight in 5011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5011)
          0.03541408 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
            0.03541408 = score(doc=5011,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.19345059 = fieldWeight in 5011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5011)
      0.5 = coord(1/2)
    
    Abstract
    The present challenge faced by scientists working with Big Data comes in the overwhelming volume and level of detail provided by current data sets. Exceeding traditional empirical approaches, Big Data opens a new perspective on scientific work in which data comes to play a role in the development of the scientific problematic to be developed. Addressing this reconfiguration of our relationship with data through readings of Wittgenstein, Macherey, and Popper, we propose a picture of science that encourages scientists to engage with the data in a direct way, using the data itself as an instrument for scientific investigation. Using GIS as a theme, we develop the concept of cyber-human systems of thought and understanding to bridge the divide between representative (theoretical) thinking and (non-theoretical) data-driven science. At the foundation of these systems, we invoke the concept of the "semantic pixel" to establish a logical and virtual space linking data and the work of scientists. It is with this discussion of the relationship between analysts in their pursuit of knowledge and the rise of Big Data that this present discussion of the philosophical foundations of Big Data addresses the central questions raised by social informatics research.
    Date
    7. 3.2019 16:32:22
  3. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.02
    0.024789855 = product of:
      0.04957971 = sum of:
        0.04957971 = product of:
          0.09915942 = sum of:
            0.09915942 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.09915942 = score(doc=4577,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 4.2000 18:01:22
  4. Information visualization in data mining and knowledge discovery (2002) 0.02
    0.024673326 = product of:
      0.049346652 = sum of:
        0.049346652 = sum of:
          0.03518102 = weight(_text_:research in 1789) [ClassicSimilarity], result of:
            0.03518102 = score(doc=1789,freq=28.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.23588389 = fieldWeight in 1789, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.0141656315 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.0141656315 = score(doc=1789,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.5 = coord(1/2)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Review in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement their goals of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  5. KDD : techniques and applications (1998) 0.02
    0.021248447 = product of:
      0.042496894 = sum of:
        0.042496894 = product of:
          0.08499379 = sum of:
            0.08499379 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.08499379 = score(doc=6783,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  6. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.01
    0.014394616 = product of:
      0.028789232 = sum of:
        0.028789232 = product of:
          0.057578463 = sum of:
            0.057578463 = weight(_text_:research in 3059) [ClassicSimilarity], result of:
              0.057578463 = score(doc=3059,freq=12.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.38605565 = fieldWeight in 3059, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study proposes a new method to construct and trace the trajectory of conceptual development of a research field by combining main path analysis, citation analysis, and text-mining techniques. Main path analysis, a method used commonly to trace the most critical path in a citation network, helps describe the developmental trajectory of a research field. This study extends the main path analysis method and applies text-mining techniques in the new method, which reflects the trajectory of conceptual development in an academic research field more accurately than citation frequency, which represents only the articles examined. Articles can be merged based on similarity of concepts, and by merging concepts the history of a research field can be described more precisely. The new method was applied to the "h-index" and "text mining" fields. The precision, recall, and F-measures of the h-index were 0.738, 0.652, and 0.658 and those of text-mining were 0.501, 0.653, and 0.551, respectively. Last, this study not only establishes the conceptual trajectory map of a research field, but also recommends keywords that are more precise than those used currently by researchers. These precise keywords could enable researchers to gather related works more quickly than before.
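    Main path analysis, the first technique named above, is commonly implemented with search path count (SPC) weights on a citation network. The sketch below is a generic, minimal formulation using networkx with an invented toy network; it is not the authors' code, and the greedy forward traversal is only one of several main path variants.

    ```python
    import networkx as nx

    def spc_weights(g):
        """Search Path Count per edge (u, v): paths(source->u) * paths(v->sink)."""
        order = list(nx.topological_sort(g))
        n_minus, n_plus = {}, {}
        for v in order:                    # forward pass: paths arriving from sources
            preds = list(g.predecessors(v))
            n_minus[v] = 1 if not preds else sum(n_minus[p] for p in preds)
        for v in reversed(order):          # backward pass: paths leaving to sinks
            succs = list(g.successors(v))
            n_plus[v] = 1 if not succs else sum(n_plus[s] for s in succs)
        return {(u, v): n_minus[u] * n_plus[v] for u, v in g.edges()}

    def greedy_main_path(g, spc):
        """Start at the strongest edge leaving a source; follow max-SPC edges."""
        sources = [v for v in g if g.in_degree(v) == 0]
        start = max((e for s in sources for e in g.out_edges(s)), key=spc.get)
        path = [start[0], start[1]]
        while g.out_degree(path[-1]) > 0:
            path.append(max(g.out_edges(path[-1]), key=lambda e: spc[e])[1])
        return path

    # toy citation network: edges point from cited paper to citing paper
    g = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])
    print(greedy_main_path(g, spc_weights(g)))   # ['A', 'B', 'D', 'E']
    ```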
  7. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.0141656315 = product of:
      0.028331263 = sum of:
        0.028331263 = product of:
          0.056662526 = sum of:
            0.056662526 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.056662526 = score(doc=1737,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.11.1998 18:57:22
  8. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    0.0141656315 = product of:
      0.028331263 = sum of:
        0.028331263 = product of:
          0.056662526 = sum of:
            0.056662526 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.056662526 = score(doc=4261,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17. 7.2002 19:22:06
  9. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.01
    0.0141656315 = product of:
      0.028331263 = sum of:
        0.028331263 = product of:
          0.056662526 = sum of:
            0.056662526 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.056662526 = score(doc=1270,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  10. Lowe, D.B.; Dollinger, I.; Koster, T.; Herbert, B.E.: Text mining for type of research classification (2021) 0.01
    0.014103786 = product of:
      0.028207572 = sum of:
        0.028207572 = product of:
          0.056415144 = sum of:
            0.056415144 = weight(_text_:research in 720) [ClassicSimilarity], result of:
              0.056415144 = score(doc=720,freq=8.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.37825575 = fieldWeight in 720, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=720)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This project brought together undergraduate students in Computer Science with librarians to mine abstracts of articles from the Texas A&M University Libraries' institutional repository, OAKTrust, in order to probe the creation of new metadata to improve discovery and use. The mining operation task consisted simply of classifying the articles into two categories of research type: basic research ("for understanding," "curiosity-based," or "knowledge-based") and applied research ("use-based"). These categories are fundamental especially for funders but are also important to researchers. The mining-to-classification steps took several iterations, but ultimately, we achieved good results with the toolkit BERT (Bidirectional Encoder Representations from Transformers). The project and its workflows represent a preview of what may lie ahead in the future of crafting metadata using text mining techniques to enhance discoverability.
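    The classification step described above follows a standard fine-tuning recipe; below is one plausible setup using the Hugging Face transformers library. The model name, the 0/1 label scheme, and the two inline stand-in abstracts are illustrative assumptions, not the project's actual data or code.

    ```python
    # pip install transformers datasets torch
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)
    from datasets import Dataset

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)   # 0 = basic, 1 = applied (hypothetical)

    # stand-in abstracts; the project mined OAKTrust repository abstracts instead
    data = Dataset.from_dict({
        "text": ["We investigate the fundamental structure of aquifer systems ...",
                 "We apply clustering to optimize irrigation schedules ..."],
        "label": [0, 1],
    }).map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                         max_length=128), batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="research-type", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=data,
    )
    trainer.train()   # afterwards, trainer.predict() labels unseen abstracts
    ```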
  11. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.012394927 = product of:
      0.024789855 = sum of:
        0.024789855 = product of:
          0.04957971 = sum of:
            0.04957971 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.04957971 = score(doc=2908,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  12. Fayyad, U.; Piatetsky-Shapiro, G.; Smyth, P.: From data mining to knowledge discovery in databases (1996) 0.01
    0.011753156 = product of:
      0.023506312 = sum of:
        0.023506312 = product of:
          0.047012623 = sum of:
            0.047012623 = weight(_text_:research in 7458) [ClassicSimilarity], result of:
              0.047012623 = score(doc=7458,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31521314 = fieldWeight in 7458, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7458)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Gives an overview of data mining and knowledge discovery in databases. Clarifies how they are related both to each other and to related fields. Mentions real-world applications of data mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions.
  13. Nicholson, S.: Bibliomining for automated collection development in a digital library setting : using data mining to discover Web-based scholarly research works (2003) 0.01
    0.011753156 = product of:
      0.023506312 = sum of:
        0.023506312 = product of:
          0.047012623 = sum of:
            0.047012623 = weight(_text_:research in 1867) [ClassicSimilarity], result of:
              0.047012623 = score(doc=1867,freq=8.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31521314 = fieldWeight in 1867, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1867)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This research creates an intelligent agent for automated collection development in a digital library setting. It uses a predictive model based on facets of each Web page to select scholarly works. The criteria came from the academic library selection literature, and a Delphi study was used to refine the list to 41 criteria. A Perl program was designed to analyze a Web page for each criterion and applied to a large collection of scholarly and nonscholarly Web pages. Bibliomining, or data mining for libraries, was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. Accuracy and return were used to judge the effectiveness of each model on test datasets. In addition, a set of problematic pages that were difficult to classify because of their similarity to scholarly research was gathered and classified using the models. The resulting models could be used in the selection process to automatically create a digital library of Web-based scholarly research works. In addition, the technique can be extended to create a digital library of any type of structured electronic information.
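    The four modeling techniques named here map naturally onto scikit-learn estimators; a minimal sketch with synthetic stand-ins for the 41 page-level criteria follows. scikit-learn offers no nonparametric discriminant analysis as such, so k-nearest neighbors stands in for it; the data and the scholarly/nonscholarly labeling rule are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.random((500, 41))                       # 41 criteria per page (synthetic)
    y = (X[:, :5].sum(axis=1) > 2.5).astype(int)    # 1 = scholarly (toy rule)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "classification tree": DecisionTreeClassifier(),
        "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
        "k-NN (nonparametric stand-in)": KNeighborsClassifier(),
    }
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        print(name, round(accuracy_score(y_te, m.predict(X_te)), 3))
    ```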
  14. Borgman, C.L.; Wofford, M.F.; Golshan, M.S.; Darch, P.T.: Collaborative qualitative research at scale : reflections on 20 years of acquiring global data and making data global (2021) 0.01
    0.011753156 = product of:
      0.023506312 = sum of:
        0.023506312 = product of:
          0.047012623 = sum of:
            0.047012623 = weight(_text_:research in 239) [ClassicSimilarity], result of:
              0.047012623 = score(doc=239,freq=8.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31521314 = fieldWeight in 239, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A 5-year project to study scientific data uses in geography, starting in 1999, evolved into 20 years of research on data practices in sensor networks, environmental sciences, biology, seismology, undersea science, biomedicine, astronomy, and other fields. By emulating the "team science" approaches of the scientists studied, the UCLA Center for Knowledge Infrastructures accumulated a comprehensive collection of qualitative data about how scientists generate, manage, use, and reuse data across domains. Building upon Paul N. Edwards's model of "making global data" (collecting signals via consistent methods, technologies, and policies) to "make data global" (comparing and integrating those data), the research team has managed and exploited these data as a collaborative resource. This article reflects on the social, technical, organizational, economic, and policy challenges the team has encountered in creating new knowledge from data old and new. We reflect on continuity over generations of students and staff, transitions between grants, transfer of legacy data between software tools, research methods, and the role of professional data managers in the social sciences.
  15. Zhou, L.; Chaovalit, P.: Ontology-supported polarity mining (2008) 0.01
    0.01163503 = product of:
      0.02327006 = sum of:
        0.02327006 = product of:
          0.04654012 = sum of:
            0.04654012 = weight(_text_:research in 1343) [ClassicSimilarity], result of:
              0.04654012 = score(doc=1343,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31204507 = fieldWeight in 1343, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1343)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Polarity mining provides an in-depth analysis of semantic orientations of text information. Motivated by its success in the area of topic mining, we propose an ontology-supported polarity mining (OSPM) approach. The approach aims to enhance polarity mining with ontology by providing detailed topic-specific information. OSPM was evaluated in the movie review domain using both supervised and unsupervised techniques. Results revealed that OSPM outperformed the baseline method without ontology support. The findings of this study not only advance the state of polarity mining research but also shed light on future research directions.
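    As a toy illustration of the idea (not the authors' OSPM system): a domain ontology assigns nearby opinion words to movie aspects, so polarity is scored per topic rather than per document. The ontology fragment, lexicon, and context-window size below are all invented.

    ```python
    ontology = {                 # hypothetical movie-domain ontology fragment
        "acting": {"performance", "actor", "cast"},
        "plot": {"story", "script", "ending"},
    }
    lexicon = {"brilliant": 1, "moving": 1, "dull": -1, "predictable": -1}

    def aspect_polarity(review):
        """Score each ontology aspect by opinion words occurring near its terms."""
        tokens = review.lower().split()
        scores = {aspect: 0 for aspect in ontology}
        for i, tok in enumerate(tokens):
            if tok in lexicon:
                window = set(tokens[max(0, i - 2): i + 3])   # +/- 2 words of context
                for aspect, terms in ontology.items():
                    if window & terms:
                        scores[aspect] += lexicon[tok]
        return scores

    print(aspect_polarity("the cast was brilliant but the ending felt predictable"))
    # {'acting': 1, 'plot': -1}
    ```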
  16. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.01
    0.010624223 = product of:
      0.021248447 = sum of:
        0.021248447 = product of:
          0.042496894 = sum of:
            0.042496894 = weight(_text_:22 in 1383) [ClassicSimilarity], result of:
              0.042496894 = score(doc=1383,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.23214069 = fieldWeight in 1383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1383)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2008 14:46:06
  17. Schwartz, F.; Fang, Y.C.: Citation data analysis on hydrogeology (2007) 0.01
    0.010512341 = product of:
      0.021024682 = sum of:
        0.021024682 = product of:
          0.042049363 = sum of:
            0.042049363 = weight(_text_:research in 433) [ClassicSimilarity], result of:
              0.042049363 = score(doc=433,freq=10.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2819352 = fieldWeight in 433, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.03125 = fieldNorm(doc=433)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article explores the status of research in hydrogeology using data mining techniques. First we try to explain what citation analysis is and review some of the previous work on citation analysis. The main idea in this article is to address some common issues about citation numbers and the use of these data. To validate the use of citation numbers, we compare the citation patterns for Water Resources Research papers in the 1980s with those in the 1990s. The citation growths for highly cited authors from the 1980s are used to examine whether it is possible to predict the citation patterns for highly-cited authors in the 1990s. If the citation data prove to be steady and stable, these numbers then can be used to explore the evolution of science in hydrogeology. The famous quotation, "If you are not the lead dog, the scenery never changes," attributed to Lee Iacocca, points to the importance of an entrepreneurial spirit in all forms of endeavor. In the case of hydrogeological research, impact analysis makes it clear how important it is to be a pioneer. Statistical correlation coefficients are used to retrieve papers among a collection of 2,847 papers before and after 1991 sharing the same topics with 273 papers in 1991 in Water Resources Research. The numbers of papers before and after 1991 are then plotted against various levels of citations for papers in 1991 to compare the distributions of paper population before and after that year. The similarity metrics based on word counts can ensure that the "before" papers are like ancestors and "after" papers are descendants in the same type of research. This exercise gives us an idea of how many papers are populated before and after 1991 (1991 is chosen based on balanced numbers of papers before and after that year). In addition, the impact of papers is measured in terms of citation presented as "percentile," a relative measure based on rankings in one year, in order to minimize the effect of time.
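    The correlation-based retrieval described here reduces to comparing word-count vectors; a minimal sketch with invented stand-in texts in place of the Water Resources Research corpus:

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer

    papers = {   # stand-in titles; the study compared full paper collections
        "p1991": "groundwater flow model aquifer transport",
        "a": "aquifer groundwater transport simulation flow",
        "b": "streamflow forecasting neural network rainfall",
    }
    counts = CountVectorizer().fit_transform(papers.values()).toarray()

    # Pearson correlation between the 1991 paper's count vector and the others
    target = counts[0]
    for name, vec in zip(list(papers)[1:], counts[1:]):
        r = np.corrcoef(target, vec)[0, 1]
        print(name, round(r, 2))   # high r -> treated as the same line of research
    ```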
  18. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.01
    0.0101785315 = product of:
      0.020357063 = sum of:
        0.020357063 = product of:
          0.040714126 = sum of:
            0.040714126 = weight(_text_:research in 3603) [ClassicSimilarity], result of:
              0.040714126 = score(doc=3603,freq=6.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2729826 = fieldWeight in 3603, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper addresses a bibliometric methodology to discover the structure of the scientific 'landscape' in order to gain detailed insight into the development of MD fields, their interaction, and the transfer of knowledge between them. This methodology is appropriate to visualize the position of MD activities in relation to interdisciplinary MD developments, and particularly in relation to socio-economic problems. Furthermore, it allows the identification of the major actors. It even provides the possibility of foresight. We describe a first approach to apply bibliometric mapping as an instrument to investigate characteristics of knowledge transfer. In this paper we discuss the creation of 'maps of science' with help of advanced bibliometric methods. This 'bibliometric cartography' can be seen as a specific type of data-mining, applied to large amounts of scientific publications. As an example we describe the mapping of the field neuroscience, one of the largest and fast growing fields in the life sciences. The number of publications covered by this database is about 80,000 per year, the period covered is 1995-1998. Current research is going on to update the mapping for the years 1999-2002. This paper addresses the main lines of the methodology and its application in the study of knowledge transfer.
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
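    The kernel of such 'bibliometric cartography' is a keyword co-occurrence computation over many publications; the sketch below shows only that counting step, on four invented keyword sets. A real map of science would then project the resulting similarity matrix with clustering or multidimensional scaling.

    ```python
    from itertools import combinations
    from collections import Counter

    # keyword sets of publications (stand-ins for ~80,000 neuroscience papers/year)
    papers = [
        {"synapse", "plasticity", "memory"},
        {"memory", "hippocampus", "plasticity"},
        {"dopamine", "reward", "addiction"},
        {"reward", "dopamine", "memory"},
    ]

    cooc = Counter()
    for kws in papers:
        for a, b in combinations(sorted(kws), 2):   # canonical pair order
            cooc[(a, b)] += 1

    # the strongest keyword pairs sketch the structure of the 'map'
    for pair, n in cooc.most_common(5):
        print(pair, n)
    ```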
  19. Chen, Y.-L.; Liu, Y.-H.; Ho, W.-L.: ¬A text mining approach to assist the general public in the retrieval of legal documents (2013) 0.01
    0.009972882 = product of:
      0.019945765 = sum of:
        0.019945765 = product of:
          0.03989153 = sum of:
            0.03989153 = weight(_text_:research in 521) [ClassicSimilarity], result of:
              0.03989153 = score(doc=521,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2674672 = fieldWeight in 521, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Applying text mining techniques to legal issues has been an emerging research topic in recent years. Although some previous studies focused on assisting professionals in the retrieval of related legal documents, they did not take into account the general public and their difficulty in describing legal problems in professional legal terms. Because this problem has not been addressed by previous research, this study aims to design a text-mining-based method that allows the general public to use everyday vocabulary to search for and retrieve criminal judgments. The experimental results indicate that our method can help the general public, who are not familiar with professional legal terms, to acquire relevant criminal judgments more accurately and effectively.
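    A minimal sketch of the retrieval idea: bridge everyday vocabulary to legal terms, then rank judgments by TF-IDF cosine similarity. The lay-to-legal lookup, the two judgment snippets, and the query are hypothetical; the paper's actual text-mining method is more elaborate.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    judgments = [   # stand-in snippets of criminal judgments
        "defendant convicted of larceny after taking property without consent",
        "assault occasioning bodily harm during an altercation",
    ]
    lay_to_legal = {"stole": "larceny", "hit": "assault"}   # hypothetical bridge

    def search(query):
        """Map lay words to legal terms, then rank judgments by TF-IDF similarity."""
        expanded = " ".join(lay_to_legal.get(w, w) for w in query.lower().split())
        vec = TfidfVectorizer().fit(judgments)
        sims = cosine_similarity(vec.transform([expanded]),
                                 vec.transform(judgments))[0]
        return sorted(zip(sims, judgments), reverse=True)

    for score, doc in search("someone stole my phone"):
        print(round(score, 2), doc)   # the larceny judgment ranks first
    ```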
  20. Cardie, C.: Empirical methods in information extraction (1997) 0.01
    0.009402524 = product of:
      0.018805047 = sum of:
        0.018805047 = product of:
          0.037610095 = sum of:
            0.037610095 = weight(_text_:research in 3246) [ClassicSimilarity], result of:
              0.037610095 = score(doc=3246,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2521705 = fieldWeight in 3246, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3246)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Contribution to a special section reviewing recent research in empirical methods in speech recognition, syntactic parsing, semantic processing, information extraction and machine translation

Languages

  • e 46
  • d 8
