Search (44 results, page 1 of 3)

  • theme_ss:"Data Mining"
  1. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.06
    0.059352882 = product of:
      0.1483822 = sum of:
        0.04893632 = weight(_text_:1 in 4577) [ClassicSimilarity], result of:
          0.04893632 = score(doc=4577,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.37997085 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
        0.09944589 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
          0.09944589 = score(doc=4577,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.5416616 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
      0.4 = coord(2/5)
    
    Date
    2. 4.2000 18:01:22
    Source
    Library trends. 48(1999) no.1, S.182-208
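  A note on the scores: every ranking value in this list is a Lucene ClassicSimilarity "explain" trace. Each matching query term contributes weight = queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf(freq) x idf x fieldNorm; the per-term weights are summed and scaled by the coordination factor coord (here 2/5, since two of the five query terms occur in the record). The sketch below recomputes the score of entry 1 from the values printed above. It is a minimal illustration, not Lucene code: term_score is our own helper, and Lucene's float32 arithmetic can differ from Python's in the last digits.

      import math

      query_norm = 0.052428056  # queryNorm, shared by every term of the query

      def term_score(freq, idf, field_norm):
          # ClassicSimilarity per-term weight: queryWeight * fieldWeight
          tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
          query_weight = idf * query_norm       # 2.4565027 * queryNorm = 0.12878966
          field_weight = tf * idf * field_norm  # 1.4142135 * 2.4565027 * 0.109375 = 0.37997085
          return query_weight * field_weight

      w_1  = term_score(2.0, 2.4565027, 0.109375)  # term "1"  -> 0.04893632
      w_22 = term_score(2.0, 3.5018296, 0.109375)  # term "22" -> 0.09944589
      print((w_1 + w_22) * 2 / 5)                  # coord(2/5) -> ~0.059352882

  fieldNorm is Lucene's length normalization (roughly 1/sqrt(number of terms in the field), quantized to a single byte), which is why the same term frequencies score higher in the shorter records nearer the top of this list.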
  2. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.02
    0.021197459 = product of:
      0.052993648 = sum of:
        0.017477257 = weight(_text_:1 in 1605) [ClassicSimilarity], result of:
          0.017477257 = score(doc=1605,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.13570388 = fieldWeight in 1605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.03551639 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
          0.03551639 = score(doc=1605,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.19345059 = fieldWeight in 1605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
      0.4 = coord(2/5)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  3. KDD : techniques and applications (1998) 0.02
    0.017047867 = product of:
      0.085239336 = sum of:
        0.085239336 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
          0.085239336 = score(doc=6783,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.46428138 = fieldWeight in 6783, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=6783)
      0.2 = coord(1/5)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  4. Peters, G.; Gaese, V.: ¬Das DocCat-System in der Textdokumentation von G+J (2003) 0.02
    0.016957967 = product of:
      0.042394917 = sum of:
        0.013981805 = weight(_text_:1 in 1507) [ClassicSimilarity], result of:
          0.013981805 = score(doc=1507,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.1085631 = fieldWeight in 1507, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.03125 = fieldNorm(doc=1507)
        0.028413111 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
          0.028413111 = score(doc=1507,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.15476047 = fieldWeight in 1507, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=1507)
      0.4 = coord(2/5)
    
    Abstract
    We will first present the basics of IBM's text-mining system and then describe our project in more breadth and detail, since that is the part we know best. So there are two parts: Heidelberg and Hamburg. Once more on the technology: text mining is a technology developed by IBM that was put together for us in a special configuration and programming. Our project was long called DocText Miner and, at IBM's suggestion, has for some time been called DocCat, short for Document Categoriser, a name that is nicely descriptive. We begin with text mining as developed at IBM in Heidelberg. There, automatic indexing is understood as one instance, that is one part, of text mining. The problems involved are pointed out along the way: text mining is a method for structuring and searching large document collections, for extracting information and, this is the high ambition, implicit relationships. The latter remains to be seen. IBM does this quantitatively, empirically, approximately, and fast, that really must be said. The goal, and this was crucial for our project, is not to understand the text; rather, the result of these procedures is what is called, in fashionable English, a bundle of words, a bag of words: a set of meaning-bearing terms extracted from a text by means of algorithms, that is, essentially by means of arithmetic operations. There are a number of preliminary linguistic studies, and a little linguistics is involved, but it is not the foundation of the whole thing. What they did for us is annotate press texts for our press database. For those who do not know it yet: Gruner + Jahr runs a text-documentation department that has maintained a database since the early 1970s; it currently holds about 6.5 million documents, of which just over 1 million are full texts from 1993 onwards. For a long time the principle was that we indexed the documents stored in the database with descriptors, and we carried this principle on, in a slimmed-down form, when full text was introduced. These 6.5 million documents are also accompanied by roughly 10 million facsimile pages, because we still archive the facsimiles as standard.
    Date
    22. 4.2003 11:45:36
  5. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.01
    0.012718475 = product of:
      0.031796187 = sum of:
        0.010486354 = weight(_text_:1 in 1833) [ClassicSimilarity], result of:
          0.010486354 = score(doc=1833,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.08142233 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.021309834 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
          0.021309834 = score(doc=1833,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.116070345 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
      0.4 = coord(2/5)
    
    Content
    Contains, among others, the following contributions (documentation aspects): Günter Peters/Volker Gaese: Das DocCat-System in der Textdokumentation von G+J (Weimar 2000); Thomas Gerick: Finden statt suchen. Knowledge Retrieval in Wissensbanken. Mit organisiertem Wissen zu mehr Erfolg (Weimar 2000); Winfried Gödert: Aufbereitung und Rezeption von Information (Weimar 2000); Elisabeth Damen: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (Köln 2001); Clemens Schlenkrich: Aspekte neuer Regelwerksarbeit - Multimediales Datenmodell für ARD und ZDF (Köln 2001); Josef Wandeler: 'Comprenez-vous only Bahnhof?' - Mehrsprachigkeit in der Mediendokumentation (Köln 2001)
    Date
    11. 5.2008 19:49:22
  6. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.011365245 = product of:
      0.056826223 = sum of:
        0.056826223 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
          0.056826223 = score(doc=1737,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.30952093 = fieldWeight in 1737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=1737)
      0.2 = coord(1/5)
    
    Date
    22.11.1998 18:57:22
  7. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    0.011365245 = product of:
      0.056826223 = sum of:
        0.056826223 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
          0.056826223 = score(doc=4261,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.30952093 = fieldWeight in 4261, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4261)
      0.2 = coord(1/5)
    
    Date
    17. 7.2002 19:22:06
  8. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.01
    0.011365245 = product of:
      0.056826223 = sum of:
        0.056826223 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
          0.056826223 = score(doc=1270,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.30952093 = fieldWeight in 1270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=1270)
      0.2 = coord(1/5)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  9. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.009944589 = product of:
      0.049722943 = sum of:
        0.049722943 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
          0.049722943 = score(doc=2908,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.2708308 = fieldWeight in 2908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2908)
      0.2 = coord(1/5)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  10. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.01
    0.009787264 = product of:
      0.04893632 = sum of:
        0.04893632 = weight(_text_:1 in 3835) [ClassicSimilarity], result of:
          0.04893632 = score(doc=3835,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.37997085 = fieldWeight in 3835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.109375 = fieldNorm(doc=3835)
      0.2 = coord(1/5)
    
    Source
    Knowledge-based systems. 14(2001) nos.1/2, S.37-53
  11. Blake, C.: Text mining (2011) 0.01
    0.009787264 = product of:
      0.04893632 = sum of:
        0.04893632 = weight(_text_:1 in 1599) [ClassicSimilarity], result of:
          0.04893632 = score(doc=1599,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.37997085 = fieldWeight in 1599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.109375 = fieldNorm(doc=1599)
      0.2 = coord(1/5)
    
    Source
    Annual review of information science and technology. 45(2011) no.1, S.121-155
  12. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.01
    0.008523934 = product of:
      0.042619668 = sum of:
        0.042619668 = weight(_text_:22 in 1383) [ClassicSimilarity], result of:
          0.042619668 = score(doc=1383,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.23214069 = fieldWeight in 1383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1383)
      0.2 = coord(1/5)
    
    Date
    22. 3.2008 14:46:06
  13. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.008478983 = product of:
      0.021197459 = sum of:
        0.0069909026 = weight(_text_:1 in 1789) [ClassicSimilarity], result of:
          0.0069909026 = score(doc=1789,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.05428155 = fieldWeight in 1789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.014206556 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
          0.014206556 = score(doc=1789,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.07738023 = fieldWeight in 1789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
      0.4 = coord(2/5)
    
    Date
    23. 3.2008 19:10:22
    Isbn
    1-55860-689-0
  14. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.007103278 = product of:
      0.03551639 = sum of:
        0.03551639 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
          0.03551639 = score(doc=668,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.19345059 = fieldWeight in 668, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=668)
      0.2 = coord(1/5)
    
    Date
    22. 3.2013 19:43:01
  15. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.007103278 = product of:
      0.03551639 = sum of:
        0.03551639 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
          0.03551639 = score(doc=5011,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.19345059 = fieldWeight in 5011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5011)
      0.2 = coord(1/5)
    
    Date
    7. 3.2019 16:32:22
  16. Wiegmann, S.: Hättest du die Titanic überlebt? : Eine kurze Einführung in das Data Mining mit freier Software (2023) 0.01
    0.006920641 = product of:
      0.034603205 = sum of:
        0.034603205 = weight(_text_:1 in 876) [ClassicSimilarity], result of:
          0.034603205 = score(doc=876,freq=4.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.26867998 = fieldWeight in 876, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0546875 = fieldNorm(doc=876)
      0.2 = coord(1/5)
    
    Date
    1. 2.2023 14:03:14
    Source
    API Magazin. 4(2023), Nr.1 [https://journals.sub.uni-hamburg.de/hup3/apimagazin/article/view/130]
  17. Keim, D.A.: Datenvisualisierung und Data Mining (2004) 0.01
    0.0060543 = product of:
      0.030271498 = sum of:
        0.030271498 = weight(_text_:1 in 2931) [ClassicSimilarity], result of:
          0.030271498 = score(doc=2931,freq=6.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.23504603 = fieldWeight in 2931, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2931)
      0.2 = coord(1/5)
    
    Abstract
    The rapid technological development of the past two decades now allows computers to store enormous volumes of data persistently. Researchers at the University of Berkeley have calculated that roughly 1 exabyte (= 1 million terabytes) of data is generated every year, a large part of it in digital form. This means that more data will be generated in the next three years than in all of human history before. The data are often recorded automatically by sensors and monitoring systems; everyday activities such as paying by credit card or using the telephone, for example, are logged by computers. Usually all available parameters are stored, which produces high-dimensional data sets. The data are collected because they contain valuable information that can yield a competitive advantage. Finding that valuable information in such large volumes of data, however, is no easy task. Today's database management systems can display only small subsets of these huge data sets: if the data are output in textual form, for example, at most a few hundred rows fit on the screen. With millions of records, that is a drop in the ocean.
    Source
    Grundlagen der praktischen Information und Dokumentation. 5., völlig neu gefaßte Ausgabe. 2 Bde. Hrsg. von R. Kuhlen, Th. Seeger u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried. Bd.1: Handbuch zur Einführung in die Informationswissenschaft und -praxis
  18. Haravu, L.J.; Neelameghan, A.: Text mining and data mining in knowledge organization and discovery : the making of knowledge-based products (2003) 0.01
    0.0060543 = product of:
      0.030271498 = sum of:
        0.030271498 = weight(_text_:1 in 5653) [ClassicSimilarity], result of:
          0.030271498 = score(doc=5653,freq=6.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.23504603 = fieldWeight in 5653, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5653)
      0.2 = coord(1/5)
    
    Abstract
    Discusses the importance of knowledge organization in the context of the information overload caused by the vast quantities of data and information accessible on internal and external networks of an organization. Defines the characteristics of a knowledge-based product. Elaborates on the techniques and applications of text mining in developing knowledge products. Presents two approaches, as case studies, to the making of knowledge products: (1) steps and processes in the planning, designing and development of a composite multilingual multimedia CD product, with the potential international, inter-cultural end users in view, and (2) application of natural language processing software in text mining. Using a text mining software, it is possible to link concept terms from a processed text to a related thesaurus, glossary, schedules of a classification scheme, and facet structured subject representations. Concludes that the products of text mining and data mining could be made more useful if the features of a faceted scheme for subject classification are incorporated into text mining techniques and products.
    Date
    1. 8.2006 18:34:03
    Source
    Cataloging and classification quarterly. 37(2003) nos.1/2, S.96-114
  19. Tonkin, E.L.; Tourte, G.J.L.: Working with text. tools, techniques and approaches for text mining (2016) 0.01
    0.0060543 = product of:
      0.030271498 = sum of:
        0.030271498 = weight(_text_:1 in 4019) [ClassicSimilarity], result of:
          0.030271498 = score(doc=4019,freq=6.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.23504603 = fieldWeight in 4019, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4019)
      0.2 = coord(1/5)
    
    Footnote
    Review in: JASIST 69(2018) no.1, S.181-184 (Jacques Savoy).
    Isbn
    978-1-84334-749-1
  20. Berendt, B.; Krause, B.; Kolbe-Nusser, S.: Intelligent scientific authoring tools : interactive data mining for constructive uses of citation networks (2010) 0.01
    0.0059319776 = product of:
      0.029659888 = sum of:
        0.029659888 = weight(_text_:1 in 4226) [ClassicSimilarity], result of:
          0.029659888 = score(doc=4226,freq=4.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.23029712 = fieldWeight in 4226, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.046875 = fieldNorm(doc=4226)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 46(2010) no.1, S.1-10

Languages

  • e 30
  • d 14
