Search (3333 results, page 1 of 167)

  • Active filter: type_ss:"a"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.27
    0.27286285 = sum of:
      0.08046506 = product of:
        0.24139518 = sum of:
          0.24139518 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24139518 = score(doc=562,freq=2.0), product of:
              0.429515 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05066224 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.19239777 = sum of:
        0.15121357 = weight(_text_:mining in 562) [ClassicSimilarity], result of:
          0.15121357 = score(doc=562,freq=4.0), product of:
            0.28585905 = queryWeight, product of:
              5.642448 = idf(docFreq=425, maxDocs=44218)
              0.05066224 = queryNorm
            0.5289795 = fieldWeight in 562, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.642448 = idf(docFreq=425, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.0411842 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.0411842 = score(doc=562,freq=2.0), product of:
            0.17741053 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.05066224 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
  2. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.22
    0.22446406 = product of:
      0.44892812 = sum of:
        0.44892812 = sum of:
          0.35283166 = weight(_text_:mining in 4577) [ClassicSimilarity], result of:
            0.35283166 = score(doc=4577,freq=4.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              1.2342855 = fieldWeight in 4577, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.109375 = fieldNorm(doc=4577)
          0.09609647 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
            0.09609647 = score(doc=4577,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.5416616 = fieldWeight in 4577, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=4577)
      0.5 = coord(1/2)
    
    Date
    2. 4.2000 18:01:22
    Theme
    Data Mining
  3. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.13
    0.12826519 = product of:
      0.25653037 = sum of:
        0.25653037 = sum of:
          0.2016181 = weight(_text_:mining in 1737) [ClassicSimilarity], result of:
            0.2016181 = score(doc=1737,freq=4.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.705306 = fieldWeight in 1737, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0625 = fieldNorm(doc=1737)
          0.054912273 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
            0.054912273 = score(doc=1737,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.30952093 = fieldWeight in 1737, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1737)
      0.5 = coord(1/2)
    
    Abstract
    Defines digital libraries and discusses the effects of new technology on librarians. Examines the different viewpoints of librarians and information technologists on digital libraries. Describes the development of a digital library at the National Drug Intelligence Center, USA, which was carried out in collaboration with information technology experts. The system is based on Web-enabled search technology to find information, data visualization and data mining to visualize it, and the use of SGML as an information standard to store it
    Date
    22.11.1998 18:57:22
    Theme
    Data Mining
  4. Keim, D.A.: Data Mining mit bloßem Auge (2002) 0.12
    0.11954483 = product of:
      0.23908965 = sum of:
        0.23908965 = product of:
          0.4781793 = sum of:
            0.4781793 = weight(_text_:mining in 1086) [ClassicSimilarity], result of:
              0.4781793 = score(doc=1086,freq=10.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.67278 = fieldWeight in 1086, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1086)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Visualization, i.e. the most instructive possible graphical presentation of data, is an essential component of data mining
    Footnote
    Part of a special issue on 'Data Mining'
    Series
    Data Mining
    Theme
    Data Mining
  5. Saz, J.T.: Perspectivas en recuperacion y explotacion de informacion electronica : el 'data mining' (1997) 0.11
    0.109129 = product of:
      0.218258 = sum of:
        0.218258 = product of:
          0.436516 = sum of:
            0.436516 = weight(_text_:mining in 3723) [ClassicSimilarity], result of:
              0.436516 = score(doc=3723,freq=12.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.5270323 = fieldWeight in 3723, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3723)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents the concept and the techniques identified by the term data mining. Explains the principles and phases of developing a data mining process, and the main types of data mining tools
    Footnote
    Translation of the title: Perspectives on the retrieval and exploitation of electronic information: data mining
    Theme
    Data Mining
  6. Wrobel, S.: Lern- und Entdeckungsverfahren (2002) 0.11
    0.10692415 = product of:
      0.2138483 = sum of:
        0.2138483 = product of:
          0.4276966 = sum of:
            0.4276966 = weight(_text_:mining in 1105) [ClassicSimilarity], result of:
              0.4276966 = score(doc=1105,freq=8.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.4961799 = fieldWeight in 1105, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1105)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Tracking down fraudulent credit card purchases, exceptionally able basketball players and environmentally conscious juice sellers: data mining methods learn what matters on their own
    Footnote
    Part of a special issue on 'Data Mining'
    Series
    Data Mining
    Theme
    Data Mining
  7. Peters, G.; Gaese, V.: ¬Das DocCat-System in der Textdokumentation von G+J (2003) 0.10
    0.101031266 = product of:
      0.20206253 = sum of:
        0.20206253 = sum of:
          0.1746064 = weight(_text_:mining in 1507) [ClassicSimilarity], result of:
            0.1746064 = score(doc=1507,freq=12.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.6108129 = fieldWeight in 1507, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
          0.027456136 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
            0.027456136 = score(doc=1507,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.15476047 = fieldWeight in 1507, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
      0.5 = coord(1/2)
    
    Abstract
    We will first present the fundamentals of IBM's text mining system and then describe our own project in more breadth and detail, since that is what we know best. The talk therefore has two parts, one on Heidelberg and one on Hamburg. Once more on the technology: text mining is a technology developed by IBM that was configured and programmed in a special form for us. For a long time the project was called DocText Miner; for some time now, at IBM's suggestion, it has been called DocCat, which is meant to be an abbreviation of Document Categoriser and which is also a nice, vivid name. We begin with text mining as developed at IBM in Heidelberg. There, automatic indexing is understood as one instance, i.e. one part, of text mining. Problems are pointed out along the way; text mining is a method for structuring and searching large document collections, for extracting information and, this is the more ambitious claim, implicit relationships. Whether the latter succeeds may be left open. IBM does this quantitatively, empirically, approximately and fast, that has to be acknowledged. The goal, and this was very important for our project, is not to understand the text; rather, the result of these procedures is what they call, in modern jargon, a bundle of words or a bag of words: a set of meaning-bearing terms extracted from a text by means of algorithms, i.e. essentially by computation. There is a good deal of preliminary linguistic work, and a little linguistics is involved, but it is not the foundation of the whole approach. What they did for us was the annotation of press articles for our press database. For those not yet familiar with it: Gruner + Jahr has run a text documentation department, which maintains a database, since the early 1970s; it currently holds about 6.5 million documents, of which somewhat over 1 million are full texts from 1993 onwards. For a long time the principle was that we assigned subject headings to the documents stored in the database, and we continued this principle in a slimmed-down form when full text was introduced. These 6.5 million documents also include roughly 10 million facsimile pages, because we still archive the facsimiles as a matter of course.
    Date
    22. 4.2003 11:45:36
    Theme
    Data Mining
  8. Tunbridge, N.: Semiology put to data mining (1999) 0.10
    0.10080905 = product of:
      0.2016181 = sum of:
        0.2016181 = product of:
          0.4032362 = sum of:
            0.4032362 = weight(_text_:mining in 6782) [ClassicSimilarity], result of:
              0.4032362 = score(doc=6782,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.410612 = fieldWeight in 6782, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.125 = fieldNorm(doc=6782)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  9. Spertus, E.: ParaSite : mining structural information on the Web (1997) 0.10
    0.098738894 = product of:
      0.19747779 = sum of:
        0.19747779 = sum of:
          0.14256552 = weight(_text_:mining in 2740) [ClassicSimilarity], result of:
            0.14256552 = score(doc=2740,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.49872664 = fieldWeight in 2740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0625 = fieldNorm(doc=2740)
          0.054912273 = weight(_text_:22 in 2740) [ClassicSimilarity], result of:
            0.054912273 = score(doc=2740,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.30952093 = fieldWeight in 2740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2740)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  10. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.10
    0.098738894 = product of:
      0.19747779 = sum of:
        0.19747779 = sum of:
          0.14256552 = weight(_text_:mining in 1270) [ClassicSimilarity], result of:
            0.14256552 = score(doc=1270,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.49872664 = fieldWeight in 1270, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0625 = fieldNorm(doc=1270)
          0.054912273 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
            0.054912273 = score(doc=1270,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.30952093 = fieldWeight in 1270, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1270)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
    Theme
    Data Mining
  11. Lawson, M.: Automatic extraction of citations from the text of English-language patents : an example of template mining (1996) 0.10
    0.09619889 = product of:
      0.19239777 = sum of:
        0.19239777 = sum of:
          0.15121357 = weight(_text_:mining in 2654) [ClassicSimilarity], result of:
            0.15121357 = score(doc=2654,freq=4.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.5289795 = fieldWeight in 2654, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.046875 = fieldNorm(doc=2654)
          0.0411842 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.0411842 = score(doc=2654,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.23214069 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2654)
      0.5 = coord(1/2)
    
    Abstract
    Describes and evaluates methods for automatically isolating and extracting bibliographic references from the full texts of patents, designed to facilitate the work of patent examiners who currently perform this task manually. These references include citations both to patents and to other bibliographic sources. Notes that patents are unusual as citing documents in that the citations occur mainly in the body of the text, rather than as footnotes or in separate sections. Describes the natural language processing technique of template mining, used to extract data directly from the text where either the data or the text surrounding the data form recognizable patterns. When text matches a template, the system extracts data according to instructions associated with that template. Examines the sublanguages of citations and the development of templates for the extraction of citations to patents. Reports results of running 2 reference extraction systems against a sample of 100 European Patent Office patent documents, with recall and precision data for patent and non-patent citations, and concludes with suggestions for future improvements
    Source
    Journal of information science. 22(1996) no.6, S.423-436
  12. Li, D.: Knowledge representation and discovery based on linguistic atoms (1998) 0.10
    0.09619889 = product of:
      0.19239777 = sum of:
        0.19239777 = sum of:
          0.15121357 = weight(_text_:mining in 3836) [ClassicSimilarity], result of:
            0.15121357 = score(doc=3836,freq=4.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.5289795 = fieldWeight in 3836, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.046875 = fieldNorm(doc=3836)
          0.0411842 = weight(_text_:22 in 3836) [ClassicSimilarity], result of:
            0.0411842 = score(doc=3836,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.23214069 = fieldWeight in 3836, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3836)
      0.5 = coord(1/2)
    
    Abstract
    Describes a new concept of linguistic atoms with 3 digital characteristics: expected value Ex, entropy En, and deviation D. The mathematical description has effectively integrated the fuzziness and randomness of linguistic terms in a unified way. Develops a method of knowledge representation in KDD, which bridges the gap between quantitative and qualitative knowledge. Mapping between quantities and qualities becomes much easier and interchangeable. In order to discover generalised knowledge from a database, uses virtual linguistic terms and cloud transfer for the auto-generation of concept hierarchies for attributes. Predictive data mining with the cloud model is given for implementation. Illustrates the advantages of this linguistic model in KDD
    Footnote
    Contribution to a special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held Singapore, 22-23 Feb 1997
  13. Sun, A.; Lim, E.-P.: Web unit-based mining of homepage relationships (2006) 0.09
    0.09432595 = product of:
      0.1886519 = sum of:
        0.1886519 = sum of:
          0.15433173 = weight(_text_:mining in 5274) [ClassicSimilarity], result of:
            0.15433173 = score(doc=5274,freq=6.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.5398875 = fieldWeight in 5274, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5274)
          0.034320172 = weight(_text_:22 in 5274) [ClassicSimilarity], result of:
            0.034320172 = score(doc=5274,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.19345059 = fieldWeight in 5274, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5274)
      0.5 = coord(1/2)
    
    Abstract
    Homepages usually describe important semantic information about conceptual or physical entities; hence, they are the main targets for searching and browsing. To facilitate semantic-based information retrieval (IR) at a Web site, homepages can be identified and classified under some predefined concepts and these concepts are then used in query or browsing criteria, e.g., finding professor homepages containing "information retrieval". In some Web sites, relationships may also exist among homepages. These relationship instances (also known as homepage relationships) enrich our knowledge about these Web sites and allow more expressive semantic-based IR. In this article, we investigate the features to be used in mining homepage relationships. We systematically develop different classes of inter-homepage features, namely, navigation, relative-location, and common-item features. We also propose deriving for each homepage a set of support pages to obtain richer and more complete content about the entity described by the homepage. The homepage together with its support pages is known as a Web unit. By extracting inter-homepage features from Web units, our experiments on the WebKB dataset show that better homepage relationship mining accuracies can be achieved.
    Date
    22. 7.2006 16:18:25
  14. Kruse, R.; Borgelt, C.: Suche im Datendschungel (2002) 0.09
    0.09259903 = product of:
      0.18519805 = sum of:
        0.18519805 = product of:
          0.3703961 = sum of:
            0.3703961 = weight(_text_:mining in 1087) [ClassicSimilarity], result of:
              0.3703961 = score(doc=1087,freq=6.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.2957299 = fieldWeight in 1087, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1087)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Part of a special issue on 'Data Mining'
    Series
    Data Mining
    Theme
    Data Mining
  15. Fayyad, U.; Piatetsky-Shapiro, G.; Smyth, P.: From data mining to knowledge discovery in databases (1996) 0.09
    0.08910345 = product of:
      0.1782069 = sum of:
        0.1782069 = product of:
          0.3564138 = sum of:
            0.3564138 = weight(_text_:mining in 7458) [ClassicSimilarity], result of:
              0.3564138 = score(doc=7458,freq=8.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.2468166 = fieldWeight in 7458, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7458)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Gives an overview of data mining and knowledge discovery in databases. Clarifies how they are related both to each other and to related fields. Mentions real-world applications of data mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions
    Theme
    Data Mining
  16. Schmid, J.: Data mining : wie finde ich in Datensammlungen entscheidungsrelevante Muster? (1999) 0.09
    0.088207915 = product of:
      0.17641583 = sum of:
        0.17641583 = product of:
          0.35283166 = sum of:
            0.35283166 = weight(_text_:mining in 4540) [ClassicSimilarity], result of:
              0.35283166 = score(doc=4540,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.2342855 = fieldWeight in 4540, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4540)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  17. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.09
    0.088207915 = product of:
      0.17641583 = sum of:
        0.17641583 = product of:
          0.35283166 = sum of:
            0.35283166 = weight(_text_:mining in 3940) [ClassicSimilarity], result of:
              0.35283166 = score(doc=3940,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.2342855 = fieldWeight in 3940, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3940)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  18. Blake, C.: Text mining (2011) 0.09
    0.088207915 = product of:
      0.17641583 = sum of:
        0.17641583 = product of:
          0.35283166 = sum of:
            0.35283166 = weight(_text_:mining in 1599) [ClassicSimilarity], result of:
              0.35283166 = score(doc=1599,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.2342855 = fieldWeight in 1599, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1599)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  19. Koczkodaj, W.: ¬A note on using a consistency-driven approach to CD-ROM selection (1997) 0.09
    0.08639653 = product of:
      0.17279306 = sum of:
        0.17279306 = sum of:
          0.12474483 = weight(_text_:mining in 7893) [ClassicSimilarity], result of:
            0.12474483 = score(doc=7893,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.4363858 = fieldWeight in 7893, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7893)
          0.048048235 = weight(_text_:22 in 7893) [ClassicSimilarity], result of:
            0.048048235 = score(doc=7893,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.2708308 = fieldWeight in 7893, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7893)
      0.5 = coord(1/2)
    
    Abstract
    As with print collections, the evaluation and selection of CD-ROMs should be based on established guidelines. Attributes such as computer network compatibility and platform are exclusively applicable to CD-ROMs. Presents a knowledge-based system to prioritize and select CD-ROMs for a library collection, operating on consistency-driven pairwise comparisons. The computer system indicates the most inconsistent judgements and allows librarians to reconsider their position. After consistency analysis is completed, the software computes the weights of all criteria used in the evaluation process. The system includes a subsystem for evaluating CD-ROM titles. Offers a CD-ROM evaluation form. Discusses cost considerations; the use of pairwise comparisons in knowledge-based systems with reference to data mining; the CD-ROM selection process; and consistency analysis of experts' judgements
    Date
    6. 3.1997 16:22:15
  20. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.09
    0.08639653 = product of:
      0.17279306 = sum of:
        0.17279306 = sum of:
          0.12474483 = weight(_text_:mining in 2908) [ClassicSimilarity], result of:
            0.12474483 = score(doc=2908,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.4363858 = fieldWeight in 2908, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2908)
          0.048048235 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
            0.048048235 = score(doc=2908,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.2708308 = fieldWeight in 2908, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2908)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
    Theme
    Data Mining
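
The score shown for each hit above is a Lucene ClassicSimilarity "explain" trace. Each leaf is one term's weight, the product of a query-side factor (queryWeight = idf × queryNorm) and a document-side factor (fieldWeight = tf × idf × fieldNorm, with tf = sqrt(termFreq)); the surrounding sum-of and coord() nodes combine these weights into the final document score, where coord(m/n) scales a sub-query in which only m of its n clauses matched. As a minimal sketch, the following Python recomputes the figures reported for hit 1 (doc 562) from the values in its trace; the helper names are illustrative, not part of the retrieval system, and the tf/idf formulas assumed are the standard ClassicSimilarity ones.

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm, tf = sqrt(freq)
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm
        field_weight = math.sqrt(freq) * i * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.05066224   # taken from the explain traces above
    MAX_DOCS = 44218

    # Hit 1 (doc 562): only 1 of the 3 clauses of the "3a" sub-query matched, hence coord(1/3)
    w_3a     = term_weight(freq=2.0, doc_freq=24,   max_docs=MAX_DOCS, query_norm=QUERY_NORM, field_norm=0.046875)
    w_mining = term_weight(freq=4.0, doc_freq=425,  max_docs=MAX_DOCS, query_norm=QUERY_NORM, field_norm=0.046875)
    w_22     = term_weight(freq=2.0, doc_freq=3622, max_docs=MAX_DOCS, query_norm=QUERY_NORM, field_norm=0.046875)

    score_562 = w_3a * (1.0 / 3.0) + (w_mining + w_22)
    print(round(w_3a, 8), round(w_mining, 8), round(w_22, 8))  # ~0.24139518, ~0.15121357, ~0.0411842
    print(round(score_562, 8))                                 # ~0.27286285

The same pattern explains the other hits; for example, the coord(1/2) factors in hits 2 onwards simply halve the matching sub-query's sum.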
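
Hit 7 (Peters/Gaese) describes the output of IBM's text mining as a "bag of words": a set of meaning-bearing terms extracted from a text purely by computation rather than by understanding it. A toy sketch of that kind of output, using nothing more than tokenisation, a stopword filter and term counting (the real DocCat pipeline described in the abstract is of course far more elaborate), might look like this:

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "for", "on", "from"}

    def bag_of_words(text, top_n=10):
        # Lowercase, tokenize, drop stopwords and very short tokens, count the rest.
        tokens = re.findall(r"[a-zäöüß]+", text.lower())
        counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
        return counts.most_common(top_n)

    article = ("Data mining methods structure large document collections and "
               "extract information from press articles in the press database.")
    print(bag_of_words(article))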
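
Hits 2 (Chowdhury) and 11 (Lawson) both concern template mining: where either the data or the text surrounding the data form recognizable patterns, a matching template triggers extraction instructions. The sketch below illustrates the idea with two hypothetical citation templates; the patterns and output fields are invented for illustration and are much simpler than the patent-citation templates Lawson evaluates.

    import re

    # One template = a regex with named groups plus an instruction mapping the
    # captured groups onto an output record.
    TEMPLATES = [
        (re.compile(r"(?:US|EP)\s*Pat(?:ent)?\.?\s*No\.?\s*(?P<number>[\d,]+)", re.I),
         lambda m: {"type": "patent citation", "number": m.group("number").replace(",", "")}),
        (re.compile(r"(?P<journal>[A-Z][\w .]+?),\s*vol\.\s*(?P<vol>\d+),\s*pp\.\s*(?P<pages>\d+-\d+)"),
         lambda m: {"type": "non-patent citation", "journal": m.group("journal"),
                    "volume": m.group("vol"), "pages": m.group("pages")}),
    ]

    def extract_citations(text):
        # Apply every template; wherever the text matches, run the attached instruction.
        hits = []
        for pattern, instruction in TEMPLATES:
            for match in pattern.finditer(text):
                hits.append(instruction(match))
        return hits

    sample = "as disclosed in US Pat. No. 5,123,456 and in J. Doc., vol. 52, pp. 3-15"
    print(extract_citations(sample))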
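
Hit 19 (Koczkodaj) selects CD-ROMs with consistency-driven pairwise comparisons: criteria are compared two at a time, the most inconsistent judgements are flagged for reconsideration, and weights for all criteria are then computed. The sketch below is only a rough illustration of that workflow, using geometric-mean weights and a simple triad check; the criteria, the numbers and the exact consistency measure are assumptions, not taken from the paper.

    from itertools import combinations
    from math import prod

    def weights_from_pairwise(matrix):
        # Geometric-mean row weights from a reciprocal pairwise-comparison matrix.
        n = len(matrix)
        gm = [prod(row) ** (1.0 / n) for row in matrix]
        total = sum(gm)
        return [g / total for g in gm]

    def most_inconsistent_triad(matrix):
        # Judgements are consistent when a[i][k] == a[i][j] * a[j][k];
        # report the triad that violates this most, so it can be reconsidered.
        worst, worst_dev = None, 0.0
        n = len(matrix)
        for i, j, k in combinations(range(n), 3):
            expected = matrix[i][j] * matrix[j][k]
            dev = abs(1.0 - matrix[i][k] / expected)
            if dev > worst_dev:
                worst, worst_dev = (i, j, k), dev
        return worst, worst_dev

    # Hypothetical judgements for three CD-ROM selection criteria
    # (coverage vs. cost, coverage vs. network compatibility, cost vs. compatibility).
    A = [[1.0, 3.0, 5.0],
         [1 / 3.0, 1.0, 2.0],
         [1 / 5.0, 1 / 2.0, 1.0]]
    print(weights_from_pairwise(A))
    print(most_inconsistent_triad(A))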

Types

  • el 88
  • b 34
  • p 1
