Search (27 results, page 1 of 2)

  • Filter: theme_ss:"Data Mining"
  1. Borgman, C.L.; Wofford, M.F.; Golshan, M.S.; Darch, P.T.: Collaborative qualitative research at scale : reflections on 20 years of acquiring global data and making data global (2021) 0.02
    0.023581749 = product of:
      0.082536116 = sum of:
        0.05024958 = weight(_text_:networks in 239) [ClassicSimilarity], result of:
          0.05024958 = score(doc=239,freq=2.0), product of:
            0.19231078 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.04065836 = queryNorm
            0.26129362 = fieldWeight in 239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=239)
        0.03228654 = product of:
          0.06457308 = sum of:
            0.06457308 = weight(_text_:policy in 239) [ClassicSimilarity], result of:
              0.06457308 = score(doc=239,freq=2.0), product of:
                0.21800333 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.04065836 = queryNorm
                0.29620224 = fieldWeight in 239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=239)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
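    The score breakdown above is Lucene's ClassicSimilarity explain output. As a minimal sketch, the weight of the "networks" term can be reproduced from the printed constants (queryNorm and fieldNorm depend on the full query and index, so they are taken as given from the tree; the tf and idf formulas are the standard ClassicSimilarity ones):

```python
import math

# Reproduces Lucene ClassicSimilarity arithmetic for the "networks" term above.
# query_norm and field_norm are query/index-dependent and are copied verbatim
# from the printed explain tree.
freq, doc_freq, max_docs = 2.0, 1060, 44218
query_norm, field_norm = 0.04065836, 0.0390625

tf = math.sqrt(freq)                                # ~ 1.4142135 = tf(freq=2.0)
idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # ~ 4.72992   = idf(docFreq=1060)
query_weight = idf * query_norm                     # ~ 0.19231078
field_weight = tf * idf * field_norm                # ~ 0.26129362 = fieldWeight
score = query_weight * field_weight                 # ~ 0.05024958 = weight(_text_:networks)

print(f"{score:.8f}")
```

    The final document score then sums such term weights and multiplies by the coord() factor, as the tree shows.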
    
    Abstract
    A 5-year project to study scientific data uses in geography, starting in 1999, evolved into 20 years of research on data practices in sensor networks, environmental sciences, biology, seismology, undersea science, biomedicine, astronomy, and other fields. By emulating the "team science" approaches of the scientists studied, the UCLA Center for Knowledge Infrastructures accumulated a comprehensive collection of qualitative data about how scientists generate, manage, use, and reuse data across domains. Building upon Paul N. Edwards's model of "making global data" (collecting signals via consistent methods, technologies, and policies) to "make data global" (comparing and integrating those data), the research team has managed and exploited these data as a collaborative resource. This article reflects on the social, technical, organizational, economic, and policy challenges the team has encountered in creating new knowledge from data old and new. We reflect on continuity over generations of students and staff, transitions between grants, transfer of legacy data between software tools, research methods, and the role of professional data managers in the social sciences.
  2. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.02
    Abstract
    The 20th-century massification of higher education and research in academia is said to have produced structurally stratified higher education systems in many countries. Most manifestly, the research mission of universities appears to be divisive. Authors have claimed that the Swedish system, while formally unified, has developed into a binary state, and statistics seem to support this conclusion. This article makes use of a comprehensive statistical data source on Swedish higher education institutions to illustrate stratification, and uses literature on Swedish research policy history to contextualize the statistics. Highlighting the opportunities as well as constraints of the data, the article argues that there is great merit in combining statistics with a qualitative analysis when studying the structural characteristics of national higher education systems. Not least, the article shows that it is an over-simplification to describe the Swedish system as binary; the stratification is more complex. On the basis of the analysis, the article also argues that while global trends certainly influence national developments, higher education systems have country-specific features that may enrich the understanding of how systems evolve and therefore should be analyzed as part of a broader study of the increasingly globalized academic system.
    Date
    22. 3.2013 19:43:01
  3. Mining text data (2012) 0.01
    Abstract
    Text mining applications have experienced tremendous advances because of web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book covers a wide swath of topics across social networks & data mining. Each chapter contains a comprehensive survey including the key research content on the topic, and the future directions of research in the field. There is a special focus on text embedded with heterogeneous and multimedia data, which makes the mining process much more challenging. A number of methods have been designed for such cases, such as transfer learning and cross-lingual mining. Mining Text Data simplifies the content, so that advanced-level students, practitioners, and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science readers focused on information security, electronic commerce, databases, data mining, machine learning, and statistics, are the primary buyers for this reference book.
    LCSH
    Computer Communication Networks
    Subject
    Computer Communication Networks
  4. Methodologies for knowledge discovery and data mining : Third Pacific-Asia Conference, PAKDD'99, Beijing, China, April 26-28, 1999, Proceedings (1999) 0.01
    Abstract
    The 29 revised full papers presented together with 37 short papers were carefully selected from a total of 158 submissions. The book is divided into sections on emerging KDD technology; association rules; feature selection and generation; mining in semi-unstructured data; interestingness, surprisingness, and exceptions; rough sets, fuzzy logic, and neural networks; induction, classification, and clustering; visualization, causal models and graph-based methods; agent-based and distributed data mining; and advanced topics and new methodologies
  5. Information visualization in data mining and knowledge discovery (2002) 0.01
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). 
It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of data visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, its relation to perceptual and cognitive science, and a discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
  6. Whittle, M.; Eaglestone, B.; Ford, N.; Gillet, V.J.; Madden, A.: Data mining of search engine logs (2007) 0.01
    Abstract
    This article reports on the development of a novel method for the analysis of Web logs. The method uses techniques that look for similarities between queries and identify sequences of query transformation. It allows sequences of query transformations to be represented as graphical networks, thereby giving a richer view of search behavior than is possible with the usual sequential descriptions. We also perform a basic analysis to study the correlations between observed transformation codes, with results that appear to show evidence of behavior habits. The method was developed using transaction logs from the Excite search engine to provide a tool for an ongoing research project that is endeavoring to develop a greater understanding of Web-based searching by the general public.
  7. Leydesdorff, L.; Persson, O.: Mapping the geography of science : distribution patterns and networks of relations among cities and institutes (2010) 0.01
  8. Berendt, B.; Krause, B.; Kolbe-Nusser, S.: Intelligent scientific authoring tools : interactive data mining for constructive uses of citation networks (2010) 0.01
  9. Winterhalter, C.: Licence to mine : ein Überblick über Rahmenbedingungen von Text and Data Mining und den aktuellen Stand der Diskussion (2016) 0.01
    Abstract
    The article gives an overview of the options for applying text and data mining (TDM) and similar techniques on the basis of existing provisions in license agreements for fee-based electronic resources, of the debate about additional licenses for TDM using Elsevier's TDM Policy as an example, and of the state of the discussion about introducing copyright exceptions for TDM for non-commercial scholarly purposes.
  10. Nicholson, S.: Bibliomining for automated collection development in a digital library setting : using data mining to discover Web-based scholarly research works (2003) 0.01
    Abstract
    This research creates an intelligent agent for automated collection development in a digital library setting. It uses a predictive model based on facets of each Web page to select scholarly works. The criteria came from the academic library selection literature, and a Delphi study was used to refine the list to 41 criteria. A Perl program was designed to analyze a Web page for each criterion and applied to a large collection of scholarly and nonscholarly Web pages. Bibliomining, or data mining for libraries, was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. Accuracy and return were used to judge the effectiveness of each model on test datasets. In addition, a set of problematic pages that were difficult to classify because of their similarity to scholarly research was gathered and classified using the models. The resulting models could be used in the selection process to automatically create a digital library of Web-based scholarly research works. In addition, the technique can be extended to create a digital library of any type of structured electronic information.
  11. Haravu, L.J.; Neelameghan, A.: Text mining and data mining in knowledge organization and discovery : the making of knowledge-based products (2003) 0.01
    Abstract
    Discusses the importance of knowledge organization in the context of the information overload caused by the vast quantities of data and information accessible on internal and external networks of an organization. Defines the characteristics of a knowledge-based product. Elaborates on the techniques and applications of text mining in developing knowledge products. Presents two approaches, as case studies, to the making of knowledge products: (1) steps and processes in the planning, designing and development of a composite multilingual multimedia CD product, with the potential international, inter-cultural end users in view, and (2) application of natural language processing software in text mining. Using a text mining software, it is possible to link concept terms from a processed text to a related thesaurus, glossary, schedules of a classification scheme, and facet structured subject representations. Concludes that the products of text mining and data mining could be made more useful if the features of a faceted scheme for subject classification are incorporated into text mining techniques and products.
  12. Ekbia, H.; Mattioli, M.; Kouper, I.; Arave, G.; Ghazinejad, A.; Bowman, T.; Suri, V.R.; Tsou, A.; Weingart, S.; Sugimoto, C.R.: Big data, bigger dilemmas : a critical review (2015) 0.01
    Abstract
    The recent interest in Big Data has generated a broad range of new academic, corporate, and policy practices along with an evolving debate among its proponents, detractors, and skeptics. While the practices draw on a common set of tools, techniques, and technologies, most contributions to the debate come either from a particular disciplinary perspective or with a focus on a domain-specific issue. A close examination of these contributions reveals a set of common problematics that arise in various guises and in different places. It also demonstrates the need for a critical synthesis of the conceptual and practical dilemmas surrounding Big Data. The purpose of this article is to provide such a synthesis by drawing on relevant writings in the sciences, humanities, policy, and trade literature. In bringing these diverse literatures together, we aim to shed light on the common underlying issues that concern and affect all of these areas. By contextualizing the phenomenon of Big Data within larger socioeconomic developments, we also seek to provide a broader understanding of its drivers, barriers, and challenges. This approach allows us to identify attributes of Big Data that require more attention-autonomy, opacity, generativity, disparity, and futurity-leading to questions and ideas for moving beyond dilemmas.
  13. Kantardzic, M.: Data mining : concepts, models, methods, and algorithms (2003) 0.01
    Abstract
    This book offers a comprehensive introduction to the exploding field of data mining. We are surrounded by data, numerical and otherwise, which must be analyzed and processed to convert it into information that informs, instructs, answers, or otherwise aids understanding and decision-making. Due to the ever-increasing complexity and size of today's data sets, a new term, data mining, was created to describe the indirect, automatic data analysis techniques that utilize more complex and sophisticated tools than those which analysts used in the past to do mere data analysis. "Data Mining: Concepts, Models, Methods, and Algorithms" discusses data mining principles and then describes representative state-of-the-art methods and algorithms originating from different disciplines such as statistics, machine learning, neural networks, fuzzy logic, and evolutionary computation. Detailed algorithms are provided with necessary explanations and illustrative examples. This text offers guidance on how and when to use a particular software tool (with its companion data sets) from among the hundreds offered when faced with a data set to mine. This allows analysts to create and perform their own data mining experiments using their knowledge of the methodologies and techniques provided. This book emphasizes the selection of appropriate methodologies and data analysis software, as well as parameter tuning. These critically important, qualitative decisions can only be made with the deeper understanding of parameter meaning and its role in the technique that is offered here. Data mining is an exploding field, and this book offers much-needed guidance in selecting among the numerous analysis programs that are available.
  14. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    Date
    2. 4.2000 18:01:22
  15. KDD : techniques and applications (1998) 0.00
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held Singapore, 22-23 Feb 1997
  16. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.00
    Date
    22.11.1998 18:57:22
  17. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.00
    Date
    17. 7.2002 19:22:06
  18. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.00
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  19. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.00
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  20. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.00
    Date
    22. 3.2008 14:46:06

Languages

  • e 19
  • d 8