Search (20 results, page 1 of 1)

  • Filter: theme_ss:"Data Mining"
  • Filter: year_i:[2010 TO 2020}
  1. Jäger, L.: Von Big Data zu Big Brother (2018) 0.02
    0.015446237 = product of:
      0.03861559 = sum of:
        0.03209586 = weight(_text_:den in 5234) [ClassicSimilarity], result of:
          0.03209586 = score(doc=5234,freq=12.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.31027505 = fieldWeight in 5234, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.03125 = fieldNorm(doc=5234)
        0.0065197325 = product of:
          0.019559197 = sum of:
            0.019559197 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.019559197 = score(doc=5234,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
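The explain trees in these results follow Lucene's ClassicSimilarity (TF-IDF): fieldWeight = tf × idf × fieldNorm with tf = √freq, queryWeight = idf × queryNorm, a term's score is queryWeight × fieldWeight, and coord(matched/total) scales the sum of clause scores. A minimal sketch (assuming standard ClassicSimilarity, with all constants read from the first tree above) that reproduces its numbers:

```python
import math

# Lucene ClassicSimilarity arithmetic for the term "den" in doc 5234,
# using the input values shown in the explain tree above.
freq = 12.0                   # termFreq
idf = 2.866198                # idf(docFreq=6840, maxDocs=44218)
query_norm = 0.036090754      # queryNorm
field_norm = 0.03125          # fieldNorm(doc=5234)

tf = math.sqrt(freq)                      # 3.4641016
field_weight = tf * idf * field_norm      # 0.31027505
query_weight = idf * query_norm           # 0.10344325
term_score = query_weight * field_weight  # 0.03209586

# The second clause ("22") contributes 0.0065197325 (already scaled by its
# inner coord(1/3)); the outer coord(2/5) reflects matching 2 of 5 clauses.
doc_score = (term_score + 0.0065197325) * (2 / 5)  # 0.015446237
```

The same formulas account for every tree below; only freq, idf, and the norms change per term and document.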
    
    Abstract
    In 1983 a single topic stirred the entire Federal Republic: the planned census. Every household in West Germany was to fill out a questionnaire with 36 questions on its housing situation, the persons living in the household, and their income. Massive resistance arose, and hundreds of citizens' initiatives formed across the country against the survey. People did not want to be "registered"; privacy was sacrosanct. There was a (justified) concern that the answers on the nominally anonymized questionnaires would allow the respondents' identities to be inferred. The Federal Constitutional Court ruled for the plaintiffs against the census: the planned count violated data protection and therefore the constitution, and it was stopped. Only a generation later, we casually hand over the supermarket chain's loyalty card with every purchase to collect a few points toward a gift or a discount on the next visit, knowing full well that the supermarket thereby learns our buying behavior down to the last detail. What we do not know is who else gains access to these data. Their buyers obtain not only our purchases but can also use them to infer our habits, personal preferences, and income. Just as carefree, we surf the Internet, google and shop, mail and chat, while Google, Facebook, and Microsoft not only look on but store, for all time, everything we utter, buy, and search for, and use it for their own purposes.
    They comb through our e-mails, know our personal schedules, track our current location, and know our political, religious, and sexual preferences (who does not know the "interested in men" or "interested in women" button?), the closest friends we are connected with online, our relationship status, which school we attend or attended, and much more.
    Date
    22. 1.2018 11:33:49
  2. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.01
    0.009015142 = product of:
      0.022537854 = sum of:
        0.012669483 = product of:
          0.038008448 = sum of:
            0.038008448 = weight(_text_:f in 3464) [ClassicSimilarity], result of:
              0.038008448 = score(doc=3464,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.26422277 = fieldWeight in 3464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.33333334 = coord(1/3)
        0.00986837 = product of:
          0.029605111 = sum of:
            0.029605111 = weight(_text_:29 in 3464) [ClassicSimilarity], result of:
              0.029605111 = score(doc=3464,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.23319192 = fieldWeight in 3464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    1. 6.2010 9:29:57
  3. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.01
    0.007512619 = product of:
      0.018781547 = sum of:
        0.010557904 = product of:
          0.03167371 = sum of:
            0.03167371 = weight(_text_:f in 3059) [ClassicSimilarity], result of:
              0.03167371 = score(doc=3059,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.22018565 = fieldWeight in 3059, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.33333334 = coord(1/3)
        0.008223643 = product of:
          0.024670927 = sum of:
            0.024670927 = weight(_text_:29 in 3059) [ClassicSimilarity], result of:
              0.024670927 = score(doc=3059,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19432661 = fieldWeight in 3059, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    This study proposes a new method to construct and trace the trajectory of conceptual development of a research field by combining main path analysis, citation analysis, and text-mining techniques. Main path analysis, a method used commonly to trace the most critical path in a citation network, helps describe the developmental trajectory of a research field. This study extends the main path analysis method and applies text-mining techniques in the new method, which reflects the trajectory of conceptual development in an academic research field more accurately than citation frequency, which represents only the articles examined. Articles can be merged based on similarity of concepts, and by merging concepts the history of a research field can be described more precisely. The new method was applied to the "h-index" and "text mining" fields. The precision, recall, and F-measures of the h-index were 0.738, 0.652, and 0.658 and those of text-mining were 0.501, 0.653, and 0.551, respectively. Last, this study not only establishes the conceptual trajectory map of a research field, but also recommends keywords that are more precise than those used currently by researchers. These precise keywords could enable researchers to gather related works more quickly than before.
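The evaluation figures in this abstract can be read against the standard F-measure, the harmonic mean of precision and recall. (Note that the reported 0.658 is not the plain harmonic mean of 0.738 and 0.652, so the authors presumably average per query or use a weighted variant.) A generic sketch of the standard formula:

```python
def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """Standard F-measure; beta > 1 weights recall more heavily."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)
```

For the h-index field's reported precision 0.738 and recall 0.652, this formula gives roughly 0.692 rather than 0.658, which suggests the abstract's F values come from a different averaging scheme.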
    Date
    21. 7.2016 19:29:19
  4. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.007483028 = product of:
      0.01870757 = sum of:
        0.010557904 = product of:
          0.03167371 = sum of:
            0.03167371 = weight(_text_:f in 5011) [ClassicSimilarity], result of:
              0.03167371 = score(doc=5011,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.22018565 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.33333334 = coord(1/3)
        0.008149666 = product of:
          0.024448996 = sum of:
            0.024448996 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.024448996 = score(doc=5011,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    7. 3.2019 16:32:22
  5. Nohr, H.: Big Data im Lichte der EU-Datenschutz-Grundverordnung (2017) 0.01
    0.0074122213 = product of:
      0.037061106 = sum of:
        0.037061106 = weight(_text_:den in 4076) [ClassicSimilarity], result of:
          0.037061106 = score(doc=4076,freq=4.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.35827476 = fieldWeight in 4076, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0625 = fieldNorm(doc=4076)
      0.2 = coord(1/5)
    
    Abstract
    This article examines the framework conditions that the new European data protection law, in particular the EU General Data Protection Regulation, creates for analytical applications such as Big Data. It presents the key changes and analyzes the Regulation's specific data protection provisions with regard to the use of Big Data, along with the requirements the Regulation imposes.
  6. Winterhalter, C.: Licence to mine : ein Überblick über Rahmenbedingungen von Text and Data Mining und den aktuellen Stand der Diskussion (2016) 0.01
    0.0074122213 = product of:
      0.037061106 = sum of:
        0.037061106 = weight(_text_:den in 673) [ClassicSimilarity], result of:
          0.037061106 = score(doc=673,freq=4.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.35827476 = fieldWeight in 673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0625 = fieldNorm(doc=673)
      0.2 = coord(1/5)
    
    Abstract
    The article surveys the scope for applying Text and Data Mining (TDM) and similar techniques under existing provisions in license agreements for paid electronic resources, the debate over additional licenses for TDM using Elsevier's TDM Policy as an example, and the state of the discussion on introducing copyright exceptions for TDM for non-commercial scholarly purposes.
  7. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.01
    0.005241232 = product of:
      0.02620616 = sum of:
        0.02620616 = weight(_text_:den in 2568) [ClassicSimilarity], result of:
          0.02620616 = score(doc=2568,freq=2.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.25333852 = fieldWeight in 2568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0625 = fieldNorm(doc=2568)
      0.2 = coord(1/5)
    
    Abstract
    With the ever-greater availability of data (Big Data) and rapid advances in data-driven machine learning, recent years have brought breakthroughs in artificial intelligence. This talk examines these developments with particular regard to the automatic analysis of text data. Using simple examples, it illustrates how modern text analysis works and, again by example, shows the practical applications that arise today in industries such as publishing, finance, and consulting.
  8. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.00
    0.0045860778 = product of:
      0.02293039 = sum of:
        0.02293039 = weight(_text_:den in 3886) [ClassicSimilarity], result of:
          0.02293039 = score(doc=3886,freq=2.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.2216712 = fieldWeight in 3886, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3886)
      0.2 = coord(1/5)
    
  9. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.00
    0.003930924 = product of:
      0.01965462 = sum of:
        0.01965462 = weight(_text_:den in 3884) [ClassicSimilarity], result of:
          0.01965462 = score(doc=3884,freq=2.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.19000389 = fieldWeight in 3884, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
      0.2 = coord(1/5)
    
  10. Loonus, Y.: Einsatzbereiche der KI und ihre Relevanz für Information Professionals (2017) 0.00
    0.003930924 = product of:
      0.01965462 = sum of:
        0.01965462 = weight(_text_:den in 5668) [ClassicSimilarity], result of:
          0.01965462 = score(doc=5668,freq=2.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.19000389 = fieldWeight in 5668, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.046875 = fieldNorm(doc=5668)
      0.2 = coord(1/5)
    
    Abstract
    It is human nature to want to share experiences and ideas with others in speech and writing, and so we produce gigantic amounts of text every day that are shared and stored in digital form. The Radicati Group estimates that 269 billion e-mails were sent and received daily in 2017. Added to this are largely unstructured data from social media, the press, websites, and internal company systems, for example CRM software or PDF documents. The world's stock of unstructured data is growing so rapidly that its extent can hardly be quantified; any attempt to find a reliable figure inevitably turns up articles estimating the share of unstructured text at 80% of all data. Even though the origin of that figure can no longer be traced, a critical look at our daily routine leaves little doubt that these data are of great economic relevance.
  11. Wattenberg, M.; Viégas, F.; Johnson, I.: How to use t-SNE effectively (2016) 0.00
    0.003378529 = product of:
      0.016892646 = sum of:
        0.016892646 = product of:
          0.050677933 = sum of:
            0.050677933 = weight(_text_:f in 3887) [ClassicSimilarity], result of:
              0.050677933 = score(doc=3887,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.35229704 = fieldWeight in 3887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3887)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  12. Kong, S.; Ye, F.; Feng, L.; Zhao, Z.: Towards the prediction problems of bursting hashtags on Twitter (2015) 0.00
    0.002956213 = product of:
      0.014781064 = sum of:
        0.014781064 = product of:
          0.044343192 = sum of:
            0.044343192 = weight(_text_:f in 2338) [ClassicSimilarity], result of:
              0.044343192 = score(doc=2338,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.3082599 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  13. Song, J.; Huang, Y.; Qi, X.; Li, Y.; Li, F.; Fu, K.; Huang, T.: Discovering hierarchical topic evolution in time-stamped documents (2016) 0.00
    0.0025338966 = product of:
      0.012669483 = sum of:
        0.012669483 = product of:
          0.038008448 = sum of:
            0.038008448 = weight(_text_:f in 2853) [ClassicSimilarity], result of:
              0.038008448 = score(doc=2853,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.26422277 = fieldWeight in 2853, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2853)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  14. Ebrahimi, M.; ShafieiBavani, E.; Wong, R.; Chen, F.: Twitter user geolocation by filtering of highly mentioned users (2018) 0.00
    0.0025338966 = product of:
      0.012669483 = sum of:
        0.012669483 = product of:
          0.038008448 = sum of:
            0.038008448 = weight(_text_:f in 4286) [ClassicSimilarity], result of:
              0.038008448 = score(doc=4286,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.26422277 = fieldWeight in 4286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4286)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  15. Varathan, K.D.; Giachanou, A.; Crestani, F.: Comparative opinion mining : a review (2017) 0.00
    0.002111581 = product of:
      0.010557904 = sum of:
        0.010557904 = product of:
          0.03167371 = sum of:
            0.03167371 = weight(_text_:f in 3540) [ClassicSimilarity], result of:
              0.03167371 = score(doc=3540,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.22018565 = fieldWeight in 3540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3540)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  16. Qiu, X.Y.; Srinivasan, P.; Hu, Y.: Supervised learning models to predict firm performance with annual reports : an empirical study (2014) 0.00
    0.0019736742 = product of:
      0.00986837 = sum of:
        0.00986837 = product of:
          0.029605111 = sum of:
            0.029605111 = weight(_text_:29 in 1205) [ClassicSimilarity], result of:
              0.029605111 = score(doc=1205,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.23319192 = fieldWeight in 1205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1205)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 1.2014 16:46:40
  17. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.00
    0.0016447286 = product of:
      0.008223643 = sum of:
        0.008223643 = product of:
          0.024670927 = sum of:
            0.024670927 = weight(_text_:29 in 967) [ClassicSimilarity], result of:
              0.024670927 = score(doc=967,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19432661 = fieldWeight in 967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=967)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    25. 6.2013 19:05:29
  18. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.00
    0.0016447286 = product of:
      0.008223643 = sum of:
        0.008223643 = product of:
          0.024670927 = sum of:
            0.024670927 = weight(_text_:29 in 3682) [ClassicSimilarity], result of:
              0.024670927 = score(doc=3682,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19432661 = fieldWeight in 3682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3682)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    16.11.2017 14:00:29
  19. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.00
    0.0016299331 = product of:
      0.008149666 = sum of:
        0.008149666 = product of:
          0.024448996 = sum of:
            0.024448996 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.024448996 = score(doc=668,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    22. 3.2013 19:43:01
  20. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.00
    0.0016299331 = product of:
      0.008149666 = sum of:
        0.008149666 = product of:
          0.024448996 = sum of:
            0.024448996 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.024448996 = score(doc=1605,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22