Search (36 results, page 1 of 2)

  • language_ss:"e"
  • theme_ss:"Data Mining"
  1. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.01
    0.011251533 = product of:
      0.056257665 = sum of:
        0.056257665 = product of:
          0.0843865 = sum of:
            0.042383887 = weight(_text_:29 in 1270) [ClassicSimilarity], result of:
              0.042383887 = score(doc=1270,freq=2.0), product of:
                0.13631654 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038751747 = queryNorm
                0.31092256 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
            0.042002615 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.042002615 = score(doc=1270,freq=2.0), product of:
                0.13570201 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038751747 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
    
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
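The explain tree attached to each hit follows Lucene's ClassicSimilarity (TF-IDF) scoring. As a sketch, the figures for result 1 can be reproduced from the inputs stated in the tree (freq, docFreq, maxDocs, queryNorm, fieldNorm); the function names below are illustrative, not Lucene API:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm         # idf * queryNorm
    field_weight = tf * i * field_norm    # tf * idf * fieldNorm
    return query_weight * field_weight

# Inputs read off the explain tree for doc 1270
QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.038751747, 0.0625, 44218
s29 = term_score(2.0, 3565, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # weight(_text_:29)
s22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # weight(_text_:22)

# coord(2/3) and coord(1/5) scale for the fraction of matched query clauses
total = (s29 + s22) * (2.0 / 3.0) * (1.0 / 5.0)
```

This recovers the per-term weights 0.042383887 and 0.042002615 and the headline score 0.011251533 to floating-point precision.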
  2. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.009845092 = product of:
      0.04922546 = sum of:
        0.04922546 = product of:
          0.07383819 = sum of:
            0.037085902 = weight(_text_:29 in 2908) [ClassicSimilarity], result of:
              0.037085902 = score(doc=2908,freq=2.0), product of:
                0.13631654 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038751747 = queryNorm
                0.27205724 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
            0.036752287 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.036752287 = score(doc=2908,freq=2.0), product of:
                0.13570201 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038751747 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
    
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  3. Survey of text mining : clustering, classification, and retrieval (2004) 0.01
    0.008425091 = product of:
      0.042125456 = sum of:
        0.042125456 = weight(_text_:management in 804) [ClassicSimilarity], result of:
          0.042125456 = score(doc=804,freq=6.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.32251096 = fieldWeight in 804, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
      0.2 = coord(1/5)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
    Classification
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
    RVK
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
  4. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.006903333 = product of:
      0.017258333 = sum of:
        0.0137581155 = weight(_text_:management in 1789) [ClassicSimilarity], result of:
          0.0137581155 = score(doc=1789,freq=4.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.10533164 = fieldWeight in 1789, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.003500218 = product of:
          0.010500654 = sum of:
            0.010500654 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.010500654 = score(doc=1789,freq=2.0), product of:
                0.13570201 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038751747 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided.
    This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a provided URL suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    Series
    Morgan Kaufmann series in data management systems
  5. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.01
    0.0068790577 = product of:
      0.03439529 = sum of:
        0.03439529 = weight(_text_:management in 5997) [ClassicSimilarity], result of:
          0.03439529 = score(doc=5997,freq=4.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.2633291 = fieldWeight in 5997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.2 = coord(1/5)
    
    Abstract
    Given the huge amount of information in the internet and in practically every domain of knowledge that we are facing today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
  6. Knowledge management in fuzzy databases (2000) 0.01
    0.0068099196 = product of:
      0.034049597 = sum of:
        0.034049597 = weight(_text_:management in 4260) [ClassicSimilarity], result of:
          0.034049597 = score(doc=4260,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.2606825 = fieldWeight in 4260, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4260)
      0.2 = coord(1/5)
    
  7. Liu, W.; Weichselbraun, A.; Scharl, A.; Chang, E.: Semi-automatic ontology extension using spreading activation (2005) 0.01
    0.0068099196 = product of:
      0.034049597 = sum of:
        0.034049597 = weight(_text_:management in 3028) [ClassicSimilarity], result of:
          0.034049597 = score(doc=3028,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.2606825 = fieldWeight in 3028, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3028)
      0.2 = coord(1/5)
    
    Source
    Journal of universal knowledge management. 0(2005) no.1, S.50-58
  8. Mining text data (2012) 0.01
    0.006740073 = product of:
      0.033700366 = sum of:
        0.033700366 = weight(_text_:management in 362) [ClassicSimilarity], result of:
          0.033700366 = score(doc=362,freq=6.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.25800878 = fieldWeight in 362, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=362)
      0.2 = coord(1/5)
    
    Abstract
    Text mining applications have experienced tremendous advances because of web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book contains a wide swath of topics across social networks & data mining. Each chapter contains a comprehensive survey including the key research content on the topic, and the future directions of research in the field. There is a special focus on Text Embedded with Heterogeneous and Multimedia Data which makes the mining process much more challenging. A number of methods have been designed such as transfer learning and cross-lingual mining for such cases. Mining Text Data simplifies the content, so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science audiences focused on information security, electronic commerce, databases, data mining, machine learning, and statistics are the primary buyers for this reference book.
    LCSH
    Database management
    Subject
    Database management
  9. Relational data mining (2001) 0.01
    0.005837074 = product of:
      0.02918537 = sum of:
        0.02918537 = weight(_text_:management in 1303) [ClassicSimilarity], result of:
          0.02918537 = score(doc=1303,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.22344214 = fieldWeight in 1303, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=1303)
      0.2 = coord(1/5)
    
    Theme
    Information Resources Management
  10. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.01
    0.005837074 = product of:
      0.02918537 = sum of:
        0.02918537 = weight(_text_:management in 2563) [ClassicSimilarity], result of:
          0.02918537 = score(doc=2563,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.22344214 = fieldWeight in 2563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2563)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 40(2004) no.2, S.239-255
  11. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.01
    0.005837074 = product of:
      0.02918537 = sum of:
        0.02918537 = weight(_text_:management in 4242) [ClassicSimilarity], result of:
          0.02918537 = score(doc=4242,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.22344214 = fieldWeight in 4242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
      0.2 = coord(1/5)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, Web server logs' information about user access patterns can be used for information personalization or improving Web page design.
  12. Pons-Porrata, A.; Berlanga-Llavori, R.; Ruiz-Shulcloper, J.: Topic discovery based on text mining techniques (2007) 0.01
    0.005837074 = product of:
      0.02918537 = sum of:
        0.02918537 = weight(_text_:management in 916) [ClassicSimilarity], result of:
          0.02918537 = score(doc=916,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.22344214 = fieldWeight in 916, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=916)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 43(2007) no.3, S.752-768
  13. Sánchez, D.; Chamorro-Martínez, J.; Vila, M.A.: Modelling subjectivity in visual perception of orientation for image retrieval (2003) 0.01
    0.005837074 = product of:
      0.02918537 = sum of:
        0.02918537 = weight(_text_:management in 1067) [ClassicSimilarity], result of:
          0.02918537 = score(doc=1067,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.22344214 = fieldWeight in 1067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=1067)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 39(2003) no.2, S.251-266
  14. Berendt, B.; Krause, B.; Kolbe-Nusser, S.: Intelligent scientific authoring tools : interactive data mining for constructive uses of citation networks (2010) 0.01
    0.005837074 = product of:
      0.02918537 = sum of:
        0.02918537 = weight(_text_:management in 4226) [ClassicSimilarity], result of:
          0.02918537 = score(doc=4226,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.22344214 = fieldWeight in 4226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4226)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 46(2010) no.1, S.1-10
  15. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.00
    0.0049447874 = product of:
      0.024723936 = sum of:
        0.024723936 = product of:
          0.074171804 = sum of:
            0.074171804 = weight(_text_:29 in 3835) [ClassicSimilarity], result of:
              0.074171804 = score(doc=3835,freq=2.0), product of:
                0.13631654 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038751747 = queryNorm
                0.5441145 = fieldWeight in 3835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3835)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 3.2002 17:31:17
  16. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.00
    0.004900305 = product of:
      0.024501525 = sum of:
        0.024501525 = product of:
          0.073504575 = sum of:
            0.073504575 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.073504575 = score(doc=4577,freq=2.0), product of:
                0.13570201 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038751747 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    2. 4.2000 18:01:22
  17. Hereth, J.; Stumme, G.; Wille, R.; Wille, U.: Conceptual knowledge discovery and data analysis (2000) 0.00
    0.0048642284 = product of:
      0.02432114 = sum of:
        0.02432114 = weight(_text_:management in 5083) [ClassicSimilarity], result of:
          0.02432114 = score(doc=5083,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.18620178 = fieldWeight in 5083, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5083)
      0.2 = coord(1/5)
    
    Abstract
    In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin
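The Formal Concept Analysis at the base of this approach can be illustrated independently of TOSCANA: a formal concept of a binary object-attribute context is a pair (extent, intent) in which each part determines the other. A minimal, exponential-time sketch over an invented toy context (not the paper's data or software):

```python
from itertools import combinations

def concepts(context):
    """Enumerate all formal concepts (extent, intent) of a binary context.

    context maps each object to its set of attributes.
    """
    objs = list(context)
    all_attrs = set().union(*context.values()) if context else set()
    # Every concept intent is an intersection of object intents,
    # plus the full attribute set (the intent of the empty extent).
    intents = {frozenset(all_attrs)}
    for r in range(1, len(objs) + 1):
        for group in combinations(objs, r):
            intents.add(frozenset(set.intersection(*(set(context[o]) for o in group))))
    # The extent of an intent is every object whose attributes include it.
    return [({o for o in objs if intent <= set(context[o])}, intent)
            for intent in intents]
```

For the toy context `{"duck": {"flies", "swims"}, "eagle": {"flies"}, "carp": {"swims"}}` this yields four concepts, e.g. the extent {duck, eagle} paired with the intent {flies}.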
  18. Liu, Y.; Huang, X.; An, A.: Personalized recommendation with adaptive mixture of markov models (2007) 0.00
    0.0048642284 = product of:
      0.02432114 = sum of:
        0.02432114 = weight(_text_:management in 606) [ClassicSimilarity], result of:
          0.02432114 = score(doc=606,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.18620178 = fieldWeight in 606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
      0.2 = coord(1/5)
    
    Abstract
    With more and more information available on the Internet, the task of making personalized recommendations to assist the user's navigation has become increasingly important. Considering there might be millions of users with different backgrounds accessing a Web site every day, it is infeasible to build a separate recommendation system for each user. To address this problem, clustering techniques can first be employed to discover user groups. Then, user navigation patterns for each group can be discovered, to allow the adaptation of a Web site to the interest of each individual group. In this paper, we propose to model user access sequences as stochastic processes, and a mixture of Markov models based approach is taken to cluster users and to capture the sequential relationships inherent in user access histories. Several important issues that arise in constructing the Markov models are also addressed. The first issue lies in the complexity of the mixture of Markov models. To improve the efficiency of building/maintaining the mixture of Markov models, we develop a lightweight adaptive algorithm to update the model parameters without recomputing model parameters from scratch. The second issue concerns the proper selection of training data for building the mixture of Markov models. We investigate two different training data selection strategies and perform extensive experiments to compare their effectiveness on a real dataset that is generated by a Web-based knowledge management system, Livelink.
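The mixture-of-Markov-models idea in the abstract can be sketched with hard cluster assignments and add-delta smoothing. This is a toy illustration of clustering sessions by first-order Markov chains, not the authors' adaptive algorithm, and all identifiers are invented:

```python
import math
from collections import defaultdict

def transition_counts(seqs):
    """First-order Markov transition counts for one cluster's sessions."""
    counts = defaultdict(lambda: defaultdict(float))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1.0
    return counts

def seq_log_prob(seq, counts, states, smooth=0.5):
    """Add-delta smoothed log-likelihood of one session under one chain."""
    lp = 0.0
    for a, b in zip(seq, seq[1:]):
        row = counts.get(a, {})
        total = sum(row.values())
        lp += math.log((row.get(b, 0.0) + smooth) / (total + smooth * len(states)))
    return lp

def cluster_sessions(sessions, k=2, iters=10):
    """Hard-assignment EM over a mixture of first-order Markov chains."""
    states = sorted({page for s in sessions for page in s})
    # Deterministic init: session i seeds cluster min(i, k - 1).
    assign = [min(i, k - 1) for i in range(len(sessions))]
    for _ in range(iters):
        # M-step: refit each cluster's transition counts.
        models = [transition_counts([s for s, c in zip(sessions, assign) if c == j])
                  for j in range(k)]
        # E-step: reassign each session to its best-scoring chain.
        assign = [max(range(k), key=lambda j: seq_log_prob(s, models[j], states))
                  for s in sessions]
    return assign
```

On sessions drawn from two distinct navigation patterns, the loop separates the two groups within a few iterations; the real system would additionally update parameters incrementally rather than refitting from scratch.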
  19. Lihui, C.; Lian, C.W.: Using Web structure and summarisation techniques for Web content mining (2005) 0.00
    0.0048642284 = product of:
      0.02432114 = sum of:
        0.02432114 = weight(_text_:management in 1046) [ClassicSimilarity], result of:
          0.02432114 = score(doc=1046,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.18620178 = fieldWeight in 1046, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1046)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 41(2005) no.5, S.1225-1242
  20. Saggi, M.K.; Jain, S.: A survey towards an integration of big data analytics to big insights for value-creation (2018) 0.00
    0.0048642284 = product of:
      0.02432114 = sum of:
        0.02432114 = weight(_text_:management in 5053) [ClassicSimilarity], result of:
          0.02432114 = score(doc=5053,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.18620178 = fieldWeight in 5053, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5053)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 54(2018) no.5, S.758-790
