Search (158 results, page 1 of 8)

  • theme_ss:"Data Mining"
  1. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    0.008313378 = product of:
      0.024940135 = sum of:
        0.006121026 = weight(_text_:a in 4577) [ClassicSimilarity], result of:
          0.006121026 = score(doc=4577,freq=2.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.17835285 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
        0.018819109 = product of:
          0.056457322 = sum of:
            0.056457322 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.056457322 = score(doc=4577,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    2. 4.2000 18:01:22
    Type
    a
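  The nested breakdown above is Lucene's ClassicSimilarity explanation: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and coord() scales for the fraction of query clauses matched. A minimal Python sketch, reproducing entry 1's total from the constants shown above (the helper function is ours, not Lucene code):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity: score = queryWeight * fieldWeight
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm   # tf = sqrt(termFreq)
          return query_weight * field_weight

      QUERY_NORM = 0.029764405                                # queryNorm above
      s_a  = term_score(2.0, 1.153047,  QUERY_NORM, 0.109375)            # ~0.006121
      s_22 = term_score(2.0, 3.5018296, QUERY_NORM, 0.109375) * (1 / 3)  # coord(1/3)
      print((s_a + s_22) * (2 / 6))                           # coord(2/6) -> ~0.008313378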
  2. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.01
    0.0074834004 = product of:
      0.014966801 = sum of:
        0.0030291225 = weight(_text_:a in 1507) [ClassicSimilarity], result of:
          0.0030291225 = score(doc=1507,freq=6.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.088261776 = fieldWeight in 1507, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1507)
        0.006560791 = product of:
          0.026243163 = sum of:
            0.026243163 = weight(_text_:g in 1507) [ClassicSimilarity], result of:
              0.026243163 = score(doc=1507,freq=4.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.23474671 = fieldWeight in 1507, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.25 = coord(1/4)
        0.005376888 = product of:
          0.016130663 = sum of:
            0.016130663 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.016130663 = score(doc=1507,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
    We will first present the basics of IBM's text mining system and then describe our own project in more breadth and detail, since that is the part we know best. We therefore have two parts: one on Heidelberg, one on Hamburg. Once more on the technology: text mining is a technology developed by IBM that was assembled for us in a special configuration and programming. For a long time our project was called DocText Miner; for some time now, at IBM's suggestion, it has been called DocCat, short for Document Categoriser, which is a nice and descriptive name. We begin with text mining as developed at IBM in Heidelberg. There, automatic indexing is understood as one instance, that is, one part, of text mining. Problems are pointed out along the way: text mining is a method for structuring and searching large document collections, for extracting information and, this is the ambitious claim, implicit relationships. The latter remains to be seen. IBM does this quantitatively, empirically, approximately, and fast; that really has to be said. The goal, and this was crucial for our project, is not to understand the text; rather, the result of these procedures is what they call, in fashionable jargon, a bundle of words or a bag of words: a set of meaning-bearing terms extracted from a text by means of algorithms, that is, essentially by means of computation. There are quite a few linguistic preliminary studies, and a little linguistics is involved, but it is not the foundation of the whole enterprise. What they did for us, then, was the annotation of press texts for our press database. For those who do not know it yet: Gruner + Jahr has operated a text documentation unit, which maintains a database, since the early 1970s; it currently holds about 6.5 million documents, of which somewhat over 1 million are full texts from 1993 onward. For a long time the principle was that we assigned subject keywords to the documents stored in the database, and we continued this principle in a slimmed-down form when full text was introduced. These 6.5 million documents are also accompanied by roughly 10 million facsimile pages, because we additionally archive the facsimiles as standard practice.
    Date
    22. 4.2003 11:45:36
    Type
    a
  3. KDD : techniques and applications (1998) 0.01
    0.007125753 = product of:
      0.021377258 = sum of:
        0.0052465936 = weight(_text_:a in 6783) [ClassicSimilarity], result of:
          0.0052465936 = score(doc=6783,freq=2.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.15287387 = fieldWeight in 6783, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=6783)
        0.016130663 = product of:
          0.04839199 = sum of:
            0.04839199 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.04839199 = score(doc=6783,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  4. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.01
    0.0064404756 = product of:
      0.019321427 = sum of:
        0.008567652 = weight(_text_:a in 1270) [ClassicSimilarity], result of:
          0.008567652 = score(doc=1270,freq=12.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.24964198 = fieldWeight in 1270, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1270)
        0.010753776 = product of:
          0.032261327 = sum of:
            0.032261327 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.032261327 = score(doc=1270,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Current algorithms for finding associations among the attributes describing data in a database have a number of shortcomings. Presents a novel method for association generation that meets all of these desiderata. The method is different from all existing algorithms and is especially suited to textual databases with binary attributes. Uses subword trees for quick indexing into the required database statistics. Tests the algorithm on the Reuters-22173 database with satisfactory results
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
    Type
    a
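  The abstract leaves the algorithm itself unspecified; purely as an illustration of association generation over binary attributes, a generic support-counting sketch in Python with made-up transactions (not the authors' subword-tree method):

      from itertools import combinations
      from collections import Counter

      # Made-up binary transactions (not the Reuters-22173 collection).
      rows = [{"grain", "wheat"}, {"grain", "corn"},
              {"grain", "wheat", "export"}, {"corn"}]
      min_count = 2

      pair_counts = Counter()
      for row in rows:
          for pair in combinations(sorted(row), 2):
              pair_counts[pair] += 1

      # Report attribute pairs that co-occur often enough.
      for pair, n in pair_counts.items():
          if n >= min_count:
              print(pair, "support =", n / len(rows))   # ('grain', 'wheat') support = 0.5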
  5. Fayyad, U.; Piatetsky-Shapiro, G.; Smyth, P.: From data mining to knowledge discovery in databases (1996) 0.01
    0.0053233705 = product of:
      0.01597011 = sum of:
        0.0043721613 = weight(_text_:a in 7458) [ClassicSimilarity], result of:
          0.0043721613 = score(doc=7458,freq=2.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.12739488 = fieldWeight in 7458, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=7458)
        0.01159795 = product of:
          0.0463918 = sum of:
            0.0463918 = weight(_text_:g in 7458) [ClassicSimilarity], result of:
              0.0463918 = score(doc=7458,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.4149775 = fieldWeight in 7458, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7458)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Type
    a
  6. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.0052334378 = product of:
      0.015700312 = sum of:
        0.004946536 = weight(_text_:a in 1737) [ClassicSimilarity], result of:
          0.004946536 = score(doc=1737,freq=4.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.14413087 = fieldWeight in 1737, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1737)
        0.010753776 = product of:
          0.032261327 = sum of:
            0.032261327 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.032261327 = score(doc=1737,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Defines digital libraries and discusses the effects of new technology on librarians. Examines the different viewpoints of librarians and information technologists on digital libraries. Describes the development of a digital library at the National Drug Intelligence Center, USA, which was carried out in collaboration with information technology experts. The system is based on Web-enabled search technology to find information, data visualization and data mining to visualize it, and SGML as an information standard to store it
    Date
    22.11.1998 18:57:22
    Type
    a
  7. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.0051768604 = product of:
      0.015530581 = sum of:
        0.006121026 = weight(_text_:a in 2908) [ClassicSimilarity], result of:
          0.006121026 = score(doc=2908,freq=8.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.17835285 = fieldWeight in 2908, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2908)
        0.009409554 = product of:
          0.028228661 = sum of:
            0.028228661 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.028228661 = score(doc=2908,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Focuses on the information modelling side of conceptual modelling. Deals with the exploitation of fact verbalisations after finishing the actual information system. Verbalisations are used as input for the design of the so-called information model. Exploits these verbalisations in 4 directions: considers their use for a conceptual query language, the verbalisation of instances, the description of the contents of a database, and the verbalisation of queries in a computer-supported query environment. Provides an example session with an envisioned tool for end-user query formulation that exploits the verbalisations
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
    Type
    a
  8. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.00
    0.004792858 = product of:
      0.014378574 = sum of:
        0.007419804 = weight(_text_:a in 3884) [ClassicSimilarity], result of:
          0.007419804 = score(doc=3884,freq=16.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.2161963 = fieldWeight in 3884, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
        0.0069587696 = product of:
          0.027835079 = sum of:
            0.027835079 = weight(_text_:g in 3884) [ClassicSimilarity], result of:
              0.027835079 = score(doc=3884,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.24898648 = fieldWeight in 3884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3884)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
    Type
    a
  9. Ekbia, H.; Mattioli, M.; Kouper, I.; Arave, G.; Ghazinejad, A.; Bowman, T.; Suri, V.R.; Tsou, A.; Weingart, S.; Sugimoto, C.R.: Big data, bigger dilemmas : a critical review (2015) 0.00
    0.0046595135 = product of:
      0.01397854 = sum of:
        0.008179565 = weight(_text_:a in 2155) [ClassicSimilarity], result of:
          0.008179565 = score(doc=2155,freq=28.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.23833402 = fieldWeight in 2155, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2155)
        0.005798975 = product of:
          0.0231959 = sum of:
            0.0231959 = weight(_text_:g in 2155) [ClassicSimilarity], result of:
              0.0231959 = score(doc=2155,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.20748875 = fieldWeight in 2155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2155)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Abstract
    The recent interest in Big Data has generated a broad range of new academic, corporate, and policy practices along with an evolving debate among its proponents, detractors, and skeptics. While the practices draw on a common set of tools, techniques, and technologies, most contributions to the debate come either from a particular disciplinary perspective or with a focus on a domain-specific issue. A close examination of these contributions reveals a set of common problematics that arise in various guises and in different places. It also demonstrates the need for a critical synthesis of the conceptual and practical dilemmas surrounding Big Data. The purpose of this article is to provide such a synthesis by drawing on relevant writings in the sciences, humanities, policy, and trade literature. In bringing these diverse literatures together, we aim to shed light on the common underlying issues that concern and affect all of these areas. By contextualizing the phenomenon of Big Data within larger socioeconomic developments, we also seek to provide a broader understanding of its drivers, barriers, and challenges. This approach allows us to identify attributes of Big Data that require more attention (autonomy, opacity, generativity, disparity, and futurity), leading to questions and ideas for moving beyond dilemmas.
    Type
    a
  10. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.00
    0.0041683125 = product of:
      0.012504937 = sum of:
        0.005783826 = weight(_text_:a in 5011) [ClassicSimilarity], result of:
          0.005783826 = score(doc=5011,freq=14.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.1685276 = fieldWeight in 5011, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5011)
        0.0067211105 = product of:
          0.020163331 = sum of:
            0.020163331 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.020163331 = score(doc=5011,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The present challenge faced by scientists working with Big Data comes in the overwhelming volume and level of detail provided by current data sets. Exceeding traditional empirical approaches, Big Data opens a new perspective on scientific work in which data comes to play a role in the development of the scientific problematic to be developed. Addressing this reconfiguration of our relationship with data through readings of Wittgenstein, Macherey, and Popper, we propose a picture of science that encourages scientists to engage with the data in a direct way, using the data itself as an instrument for scientific investigation. Using GIS as a theme, we develop the concept of cyber-human systems of thought and understanding to bridge the divide between representative (theoretical) thinking and (non-theoretical) data-driven science. At the foundation of these systems, we invoke the concept of the "semantic pixel" to establish a logical and virtual space linking data and the work of scientists. It is with this discussion of the relationship between analysts in their pursuit of knowledge and the rise of Big Data that this present discussion of the philosophical foundations of Big Data addresses the central questions raised by social informatics research.
    Date
    7. 3.2019 16:32:22
    Type
    a
  11. Benoit, G.: Data mining (2002) 0.00
    0.0040684547 = product of:
      0.012205363 = sum of:
        0.0052465936 = weight(_text_:a in 4296) [ClassicSimilarity], result of:
          0.0052465936 = score(doc=4296,freq=8.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.15287387 = fieldWeight in 4296, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4296)
        0.0069587696 = product of:
          0.027835079 = sum of:
            0.027835079 = weight(_text_:g in 4296) [ClassicSimilarity], result of:
              0.027835079 = score(doc=4296,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.24898648 = fieldWeight in 4296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4296)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Abstract
    Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component, specifically pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thurasingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined."
    Type
    a
  12. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.00
    0.0039940486 = product of:
      0.011982145 = sum of:
        0.0061831702 = weight(_text_:a in 3888) [ClassicSimilarity], result of:
          0.0061831702 = score(doc=3888,freq=16.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.18016359 = fieldWeight in 3888, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3888)
        0.005798975 = product of:
          0.0231959 = sum of:
            0.0231959 = weight(_text_:g in 3888) [ClassicSimilarity], result of:
              0.0231959 = score(doc=3888,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.20748875 = fieldWeight in 3888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3888)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
    Type
    a
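  For readers who want to try the technique, a minimal sketch using scikit-learn's TSNE (an independent implementation of the paper's method; the toy data below is a random stand-in, not one of the paper's data sets):

      import numpy as np
      from sklearn.manifold import TSNE

      # Random stand-in for a high-dimensional data set.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 50))

      # perplexity sets the effective number of neighbors per point (typically 5-50).
      emb = TSNE(n_components=2, perplexity=30.0, init="pca",
                 random_state=0).fit_transform(X)
      print(emb.shape)   # (500, 2): one 2-D map location per datapoint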
  13. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.00
    0.003869779 = product of:
      0.011609336 = sum of:
        0.0048882253 = weight(_text_:a in 668) [ClassicSimilarity], result of:
          0.0048882253 = score(doc=668,freq=10.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.14243183 = fieldWeight in 668, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=668)
        0.0067211105 = product of:
          0.020163331 = sum of:
            0.020163331 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.020163331 = score(doc=668,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    20th century massification of higher education and research in academia is said to have produced structurally stratified higher education systems in many countries. Most manifestly, the research mission of universities appears to be divisive. Authors have claimed that the Swedish system, while formally unified, has developed into a binary state, and statistics seem to support this conclusion. This article makes use of a comprehensive statistical data source on Swedish higher education institutions to illustrate stratification, and uses literature on Swedish research policy history to contextualize the statistics. Highlighting the opportunities as well as constraints of the data, the article argues that there is great merit in combining statistics with a qualitative analysis when studying the structural characteristics of national higher education systems. Not least, the article shows that it is an over-simplification to describe the Swedish system as binary; the stratification is more complex. On the basis of the analysis, the article also argues that while global trends certainly influence national developments, higher education systems have country-specific features that may enrich the understanding of how systems evolve and therefore should be analyzed as part of a broader study of the increasingly globalized academic system.
    Date
    22. 3.2013 19:43:01
    Type
    a
  14. Hereth, J.; Stumme, G.; Wille, R.; Wille, U.: Conceptual knowledge discovery and data analysis (2000) 0.00
    0.0037179193 = product of:
      0.011153758 = sum of:
        0.0053547826 = weight(_text_:a in 5083) [ClassicSimilarity], result of:
          0.0053547826 = score(doc=5083,freq=12.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.15602624 = fieldWeight in 5083, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5083)
        0.005798975 = product of:
          0.0231959 = sum of:
            0.0231959 = weight(_text_:g in 5083) [ClassicSimilarity], result of:
              0.0231959 = score(doc=5083,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.20748875 = fieldWeight in 5083, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5083)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Abstract
    In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin
    Type
    a
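  As a small illustration of the Formal Concept Analysis machinery the paper builds on, a Python sketch that enumerates the formal concepts of a tiny made-up object/attribute context (a naive search, not the TOSCANA system):

      from itertools import combinations

      # Hypothetical formal context: which attribute set each object has.
      attrs = {"doc1": {"mining", "text"},
               "doc2": {"mining", "viz"},
               "doc3": {"text"}}
      all_attrs = {"mining", "text", "viz"}

      def extent(B):
          # objects possessing every attribute in B
          return {g for g in attrs if B <= attrs[g]}

      def intent(A):
          # attributes shared by every object in A
          return set.intersection(*(attrs[g] for g in A)) if A else set(all_attrs)

      # A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B.
      concepts = []
      for r in range(len(all_attrs) + 1):
          for B in map(set, combinations(sorted(all_attrs), r)):
              A = extent(B)
              if intent(A) == B:
                  concepts.append((sorted(A), sorted(B)))
      print(concepts)   # 6 concepts for this toy context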
  15. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.00
    0.0037179193 = product of:
      0.011153758 = sum of:
        0.0053547826 = weight(_text_:a in 967) [ClassicSimilarity], result of:
          0.0053547826 = score(doc=967,freq=12.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.15602624 = fieldWeight in 967, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=967)
        0.005798975 = product of:
          0.0231959 = sum of:
            0.0231959 = weight(_text_:g in 967) [ClassicSimilarity], result of:
              0.0231959 = score(doc=967,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.20748875 = fieldWeight in 967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=967)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Abstract
    Because of Twitter's popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., Naïve Bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag, and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features.
    Type
    a
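  A minimal sketch of this classification setup in Python with scikit-learn, using synthetic stand-ins for the 7 content and 11 contextual features (the real study used Twitter data; all values below are fabricated for illustration):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import f1_score
      from sklearn.model_selection import train_test_split

      # Synthetic stand-ins: 7 content + 11 contextual features per hashtag,
      # label = whether the hashtag became popular (fabricated).
      rng = np.random.default_rng(42)
      X = rng.normal(size=(1000, 18))
      # Contextual features (columns 7:) drive the label here, echoing the
      # paper's finding that they are more effective than content features.
      y = (X[:, 7:].sum(axis=1) + rng.normal(scale=2.0, size=1000) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("micro-F1:", f1_score(y_te, clf.predict(X_te), average="micro"))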
  16. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.00
    0.0036977574 = product of:
      0.011093272 = sum of:
        0.0043721613 = weight(_text_:a in 1605) [ClassicSimilarity], result of:
          0.0043721613 = score(doc=1605,freq=8.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.12739488 = fieldWeight in 1605, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.0067211105 = product of:
          0.020163331 = sum of:
            0.020163331 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.020163331 = score(doc=1605,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
    Type
    a
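  Comparing two query-volume series for correlation, as the study does, comes down to a Pearson coefficient; a minimal sketch with hypothetical weekly volumes (stand-ins, not data from either service):

      import numpy as np

      # Hypothetical weekly query volumes for one keyword from two services
      # (stand-ins for Google Trends and Baidu Index values, not real data).
      google = np.array([55, 60, 58, 70, 75, 72, 80, 78], dtype=float)
      baidu  = np.array([50, 57, 55, 66, 73, 70, 79, 74], dtype=float)

      r = np.corrcoef(google, baidu)[0, 1]   # Pearson correlation coefficient
      print(f"r = {r:.3f}")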
  17. Li, D.; Tang, J.; Ding, Y.; Shuai, X.; Chambers, T.; Sun, G.; Luo, Z.; Zhang, J.: Topic-level opinion influence model (TOIM) : an investigation using tencent microblogging (2015) 0.00
    0.0035624001 = product of:
      0.0106872 = sum of:
        0.0048882253 = weight(_text_:a in 2345) [ClassicSimilarity], result of:
          0.0048882253 = score(doc=2345,freq=10.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.14243183 = fieldWeight in 2345, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2345)
        0.005798975 = product of:
          0.0231959 = sum of:
            0.0231959 = weight(_text_:g in 2345) [ClassicSimilarity], result of:
              0.0231959 = score(doc=2345,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.20748875 = fieldWeight in 2345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2345)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Abstract
    Text mining has been widely used in multiple types of user-generated data to infer user opinion, but its application to microblogging is difficult because text messages are short and noisy, providing limited information about user opinion. Given that microblogging users communicate with each other to form a social network, we hypothesize that user opinion is influenced by its neighbors in the network. In this paper, we infer user opinion on a topic by combining two factors: the user's historical opinion about relevant topics and opinion influence from his/her neighbors. We thus build a topic-level opinion influence model (TOIM) by integrating both topic factor and opinion influence factor into a unified probabilistic model. We evaluate our model in one of the largest microblogging sites in China, Tencent Weibo, and the experiments show that TOIM outperforms baseline methods in opinion inference accuracy. Moreover, incorporating indirect influence further improves inference recall and f1-measure. Finally, we demonstrate some useful applications of TOIM in analyzing users' behaviors in Tencent Weibo.
    Type
    a
  18. Heyer, G.; Läuter, M.; Quasthoff, U.; Wolff, C.: Texttechnologische Anwendungen am Beispiel Text Mining (2000) 0.00
    0.0031940225 = product of:
      0.009582067 = sum of:
        0.0026232968 = weight(_text_:a in 5565) [ClassicSimilarity], result of:
          0.0026232968 = score(doc=5565,freq=2.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.07643694 = fieldWeight in 5565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5565)
        0.0069587696 = product of:
          0.027835079 = sum of:
            0.027835079 = weight(_text_:g in 5565) [ClassicSimilarity], result of:
              0.027835079 = score(doc=5565,freq=2.0), product of:
                0.11179353 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.029764405 = queryNorm
                0.24898648 = fieldWeight in 5565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5565)
          0.25 = coord(1/4)
      0.33333334 = coord(2/6)
    
    Type
    a
  19. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0023823972 = product of:
      0.007147191 = sum of:
        0.0044587473 = weight(_text_:a in 1789) [ClassicSimilarity], result of:
          0.0044587473 = score(doc=1789,freq=52.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.12991782 = fieldWeight in 1789, product of:
              7.2111025 = tf(freq=52.0), with freq of:
                52.0 = termFreq=52.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.002688444 = product of:
          0.008065332 = sum of:
            0.008065332 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.008065332 = score(doc=1789,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. The remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization to help users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement their goals of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  20. Hölzig, C.: Google spürt Grippewellen auf : Die neue Anwendung ist bisher auf die USA beschränkt (2008) 0.00
    0.002375251 = product of:
      0.0071257525 = sum of:
        0.0017488645 = weight(_text_:a in 2403) [ClassicSimilarity], result of:
          0.0017488645 = score(doc=2403,freq=2.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.050957955 = fieldWeight in 2403, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2403)
        0.005376888 = product of:
          0.016130663 = sum of:
            0.016130663 = weight(_text_:22 in 2403) [ClassicSimilarity], result of:
              0.016130663 = score(doc=2403,freq=2.0), product of:
                0.104229875 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029764405 = queryNorm
                0.15476047 = fieldWeight in 2403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2403)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    3. 5.1997 8:44:22
    Type
    a

Languages

  • e 127
  • d 30
  • sp 1

Types

  • a 141
  • el 15
  • m 12
  • s 11