Search (131 results, page 1 of 7)

  • theme_ss:"Data Mining"
  1. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.02
    0.023111522 = product of:
      0.06933457 = sum of:
        0.020491816 = weight(_text_:information in 2908) [ClassicSimilarity], result of:
          0.020491816 = score(doc=2908,freq=10.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.3035872 = fieldWeight in 2908, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2908)
        0.04884275 = product of:
          0.07326412 = sum of:
            0.03679757 = weight(_text_:29 in 2908) [ClassicSimilarity], result of:
              0.03679757 = score(doc=2908,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.27205724 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
            0.03646655 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.03646655 = score(doc=2908,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
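
    The indented blocks beneath each hit are Lucene explain() trees for the ClassicSimilarity (TF-IDF) scorer. A minimal Python sketch, assuming Lucene's documented ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), coord = matching clauses / total clauses), reproduces the score of hit no. 1 from the numbers shown above:

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        # score(t, d) = queryWeight * fieldWeight
        #             = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
        i = idf(doc_freq, max_docs)
        return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

    QN, FN = 0.03845047, 0.0546875  # queryNorm and fieldNorm from the tree above
    w_information = term_weight(10.0, 20772, 44218, QN, FN)
    w_29 = term_weight(2.0, 3565, 44218, QN, FN)
    w_22 = term_weight(2.0, 3622, 44218, QN, FN)

    inner = (w_29 + w_22) * (2 / 3)            # coord(2/3) on the nested clause
    total = (w_information + inner) * (2 / 6)  # coord(2/6) on the outer query
    print(f"{total:.9f}")                      # -> ~0.023111522, matching hit no. 1

    The same arithmetic applies to every explain() tree below; only freq, docFreq, fieldNorm, and the coord factors change from hit to hit.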
    
    Abstract
    Focuses on the information modelling side of conceptual modelling. Deals with the exploitation of fact verbalisations after finishing the actual information system. Verbalisations are used as input for the design of the so-called information model. Exploits these verbalisations in four directions: considers their use for a conceptual query language, the verbalisation of instances, the description of the contents of a database, and the verbalisation of queries in a computer-supported query environment. Provides an example session with an envisioned tool for end-user query formulation that exploits these verbalisations.
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  2. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.02
    0.022097893 = product of:
      0.06629368 = sum of:
        0.010473392 = weight(_text_:information in 1270) [ClassicSimilarity], result of:
          0.010473392 = score(doc=1270,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.1551638 = fieldWeight in 1270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1270)
        0.055820286 = product of:
          0.08373043 = sum of:
            0.042054366 = weight(_text_:29 in 1270) [ClassicSimilarity], result of:
              0.042054366 = score(doc=1270,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.31092256 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
            0.04167606 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.04167606 = score(doc=1270,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
    
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  3. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.02
    0.015110459 = product of:
      0.045331378 = sum of:
        0.03838537 = weight(_text_:geschichte in 1507) [ClassicSimilarity], result of:
          0.03838537 = score(doc=1507,freq=2.0), product of:
            0.18274738 = queryWeight, product of:
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.03845047 = queryNorm
            0.21004607 = fieldWeight in 1507, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.03125 = fieldNorm(doc=1507)
        0.00694601 = product of:
          0.02083803 = sum of:
            0.02083803 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.02083803 = score(doc=1507,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    We will first outline the foundations of IBM's text-mining system and then present our own project in more breadth and detail, since that is the part we know best. So there are two parts: Heidelberg on the one hand, Hamburg on the other. Once more on the technology: text mining is a technology developed by IBM, which was put together for us in a special configuration and programming. For a long time the project was called DocText Miner on our side; for some time now, at IBM's suggestion, it has been called DocCat, which is meant to be short for Document Categoriser, and the name is indeed nice and descriptive. We begin with text mining as developed at IBM in Heidelberg. There, automatic indexing is understood as one instance, that is, one part, of text mining. The problems this raises are pointed out; text mining is a method for structuring and searching large document collections, for extracting information and, this being the more ambitious claim, for extracting implicit relationships. Whether the latter succeeds may be left open. IBM does this quantitatively, empirically, approximatively, and fast, that really must be said. The goal, and this was very important for our project, is not to understand the text; rather, the result of these procedures is what is nowadays called a bundle of words, a bag of words: a set of meaning-bearing terms extracted from a text on the basis of algorithms, that is, essentially on the basis of computational operations. There is a whole series of preliminary linguistic studies, and a little linguistics is involved as well, but it is not the foundation of the whole affair. What they did for us, then, was the annotation of press texts for our press database. For those who do not know it yet: Gruner + Jahr runs a text documentation department that has maintained a database since the early 1970s; it currently holds about 6.5 million documents, including somewhat over 1 million full texts from 1993 onward. For a long time the principle was that we indexed the documents stored in the database with subject headings, and when full text was introduced we continued this principle in a slimmed-down form. These 6.5 million documents are also accompanied by roughly 10 million facsimile pages, because we archive the facsimiles as standard practice as well. (A minimal bag-of-words sketch follows this record.)
    Date
    22. 4.2003 11:45:36
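
    The "bag of words" described in the abstract above is, at its simplest, a frequency count of content-bearing terms. A minimal Python sketch; the tokenizer and stop-word list are illustrative assumptions, not IBM's actual DocCat pipeline:

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "of", "and", "in", "is", "to", "for"}  # illustrative only

    def bag_of_words(text):
        # Lowercase, tokenize on letter runs, drop stop words, count the rest.
        tokens = re.findall(r"[a-zäöüß]+", text.lower())
        return Counter(t for t in tokens if t not in STOPWORDS)

    print(bag_of_words("Text mining structures and searches large document collections").most_common(3))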
  4. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.01
    0.014286717 = product of:
      0.04286015 = sum of:
        0.018328438 = weight(_text_:information in 3835) [ClassicSimilarity], result of:
          0.018328438 = score(doc=3835,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.27153665 = fieldWeight in 3835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=3835)
        0.024531713 = product of:
          0.07359514 = sum of:
            0.07359514 = weight(_text_:29 in 3835) [ClassicSimilarity], result of:
              0.07359514 = score(doc=3835,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.5441145 = fieldWeight in 3835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3835)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    29. 3.2002 17:31:17
  5. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    0.014213158 = product of:
      0.04263947 = sum of:
        0.018328438 = weight(_text_:information in 4577) [ClassicSimilarity], result of:
          0.018328438 = score(doc=4577,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.27153665 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
        0.024311034 = product of:
          0.0729331 = sum of:
            0.0729331 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.0729331 = score(doc=4577,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    2. 4.2000 18:01:22
  6. Bath, P.A.: Data mining in health and medical information (2003) 0.01
    0.012479113 = product of:
      0.03743734 = sum of:
        0.023419216 = weight(_text_:information in 4263) [ClassicSimilarity], result of:
          0.023419216 = score(doc=4263,freq=10.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.3469568 = fieldWeight in 4263, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4263)
        0.014018122 = product of:
          0.042054366 = sum of:
            0.042054366 = weight(_text_:29 in 4263) [ClassicSimilarity], result of:
              0.042054366 = score(doc=4263,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.31092256 = fieldWeight in 4263, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4263)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Data mining (DM) is part of a process by which information can be extracted from data or databases and used to inform decision making in a variety of contexts (Benoit, 2002; Michalski, Bratko & Kubat, 1997). DM includes a range of tools and methods for extracting information; their use in the commercial sector for knowledge extraction and discovery has been one of the main driving forces in their development (Adriaans & Zantinge, 1996; Benoit, 2002). DM has been developed and applied in numerous areas. This review describes its use in analyzing health and medical information.
    Date
    23.10.2005 18:29:03
    Source
    Annual review of information science and technology. 38(2004), S.331-370
  7. Cardie, C.: Empirical methods in information extraction (1997) 0.01
    0.011654968 = product of:
      0.034964904 = sum of:
        0.020946784 = weight(_text_:information in 3246) [ClassicSimilarity], result of:
          0.020946784 = score(doc=3246,freq=8.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.3103276 = fieldWeight in 3246, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3246)
        0.014018122 = product of:
          0.042054366 = sum of:
            0.042054366 = weight(_text_:29 in 3246) [ClassicSimilarity], result of:
              0.042054366 = score(doc=3246,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.31092256 = fieldWeight in 3246, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3246)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Surveys the use of empirical, machine-learning methods for information extraction. Presents a generic architecture for information extraction systems and surveys the learning algorithms that have been developed to address the problems of accuracy, portability, and knowledge acquisition for each component of the architecture.
    Date
    6. 3.1999 13:50:29
    Footnote
    Contribution to a special section reviewing recent research in empirical methods in speech recognition, syntactic parsing, semantic processing, information extraction and machine translation
  8. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.011612935 = product of:
      0.034838803 = sum of:
        0.020946784 = weight(_text_:information in 1737) [ClassicSimilarity], result of:
          0.020946784 = score(doc=1737,freq=8.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.3103276 = fieldWeight in 1737, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1737)
        0.01389202 = product of:
          0.04167606 = sum of:
            0.04167606 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.04167606 = score(doc=1737,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Defines digital libraries and discusses the effects of new technology on librarians. Examines the different viewpoints of librarians and information technologists on digital libraries. Describes the development of a digital library at the National Drug Intelligence Center, USA, which was carried out in collaboration with information technology experts. The system is based on Web-enabled search technology to find information, data visualization and data mining to visualize it, and the use of SGML as an information standard to store it.
    Date
    22.11.1998 18:57:22
  9. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    0.008121804 = product of:
      0.024365412 = sum of:
        0.010473392 = weight(_text_:information in 4261) [ClassicSimilarity], result of:
          0.010473392 = score(doc=4261,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.1551638 = fieldWeight in 4261, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4261)
        0.01389202 = product of:
          0.04167606 = sum of:
            0.04167606 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.04167606 = score(doc=4261,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    17. 7.2002 19:22:06
    Theme
    Information Resources Management
  10. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.01
    0.0072074337 = product of:
      0.0216223 = sum of:
        0.0111087095 = weight(_text_:information in 3464) [ClassicSimilarity], result of:
          0.0111087095 = score(doc=3464,freq=4.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.16457605 = fieldWeight in 3464, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
        0.010513592 = product of:
          0.031540774 = sum of:
            0.031540774 = weight(_text_:29 in 3464) [ClassicSimilarity], result of:
              0.031540774 = score(doc=3464,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.23319192 = fieldWeight in 3464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended by the proposed weighting scheme and they are employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods and cross-compared with alternative methods. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields. (A minimal sketch of the weighting idea follows this record.)
    Date
    1. 6.2010 9:29:57
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1105-1119
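
    A minimal sketch of the weighting idea described in no. 10: fuse a text-based and a citation-based similarity matrix with source weights before clustering. The matrices, weights, and cluster count below are placeholders, not the paper's actual scheme:

    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Placeholder similarity matrices over the same set of 50 journals:
    # S_text from term vectors, S_cite from bibliographic coupling.
    rng = np.random.default_rng(0)
    n = 50
    S_text = rng.random((n, n)); S_text = (S_text + S_text.T) / 2
    S_cite = rng.random((n, n)); S_cite = (S_cite + S_cite.T) / 2

    w_text, w_cite = 0.6, 0.4              # illustrative source weights
    S = w_text * S_text + w_cite * S_cite  # weighted fusion of the two sources

    labels = SpectralClustering(n_clusters=5, affinity="precomputed",
                                random_state=0).fit_predict(S)
    print(labels[:10])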
  11. Qiu, X.Y.; Srinivasan, P.; Hu, Y.: Supervised learning models to predict firm performance with annual reports : an empirical study (2014) 0.01
    0.0072074337 = product of:
      0.0216223 = sum of:
        0.0111087095 = weight(_text_:information in 1205) [ClassicSimilarity], result of:
          0.0111087095 = score(doc=1205,freq=4.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.16457605 = fieldWeight in 1205, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1205)
        0.010513592 = product of:
          0.031540774 = sum of:
            0.031540774 = weight(_text_:29 in 1205) [ClassicSimilarity], result of:
              0.031540774 = score(doc=1205,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.23319192 = fieldWeight in 1205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1205)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Text mining and machine learning methodologies have been applied toward knowledge discovery in several domains, such as biomedicine and business. Interestingly, in the business domain, the text mining and machine learning community has minimally explored company annual reports with their mandatory disclosures. In this study, we explore the question "How can annual reports be used to predict change in company performance from one year to the next?" from a text mining perspective. Our article contributes a systematic study of the potential of company mandatory disclosures using a computational viewpoint in the following aspects: (a) We characterize our research problem along distinct dimensions to gain a reasonably comprehensive understanding of the capacity of supervised learning methods in predicting change in company performance using annual reports, and (b) our findings from unbiased systematic experiments provide further evidence about the economic incentives faced by analysts in their stock recommendations and speculations on analysts having access to more information in producing earnings forecasts.
    Date
    29. 1.2014 16:46:40
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.2, S.400-413
  12. Wiegmann, S.: Hättest du die Titanic überlebt? : Eine kurze Einführung in das Data Mining mit freier Software (2023) 0.01
    0.0071433587 = product of:
      0.021430075 = sum of:
        0.009164219 = weight(_text_:information in 876) [ClassicSimilarity], result of:
          0.009164219 = score(doc=876,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.13576832 = fieldWeight in 876, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=876)
        0.012265856 = product of:
          0.03679757 = sum of:
            0.03679757 = weight(_text_:29 in 876) [ClassicSimilarity], result of:
              0.03679757 = score(doc=876,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.27205724 = fieldWeight in 876, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=876)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    On 10 April 1912, Elisabeth Walton Allen boarded the "Titanic" to bring her belongings to England. One night she was woken by her distraught aunt, whose cabin was already under water. What were Elisabeth's chances, and would you yourself have survived the disaster? The Titanic oracle is an algorithm-based app that produces such predictions; it was created as part of the "Data Science" course at the Department Information of HAW Hamburg. This article shows step by step how the app was developed using free software. Code and data are provided for reuse. (A minimal classifier sketch follows this record.)
    Date
    28. 1.2022 11:05:29
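
    A minimal sketch of the kind of model behind such an app, assuming the public Titanic passenger data (a local "titanic.csv" with the conventional Kaggle/OpenML columns) and scikit-learn; this is not the HAW Hamburg code, which the article itself provides for reuse:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    df = pd.read_csv("titanic.csv")  # hypothetical local copy of the data set
    df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
    cols = ["Pclass", "Sex", "Age", "Fare"]
    X = df[cols].fillna(df[cols].median())
    X_train, X_test, y_train, y_test = train_test_split(
        X, df["Survived"], random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_train, y_train)
    print("accuracy:", tree.score(X_test, y_test))

    # Ask the "oracle": a 25-year-old woman in 2nd class, fare 20.
    print(tree.predict(pd.DataFrame([[2, 1, 25.0, 20.0]], columns=cols)))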
  13. Srinivasan, P.: Text mining in biomedicine : challenges and opportunities (2006) 0.01
    0.0061228788 = product of:
      0.018368635 = sum of:
        0.007855045 = weight(_text_:information in 1497) [ClassicSimilarity], result of:
          0.007855045 = score(doc=1497,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.116372846 = fieldWeight in 1497, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1497)
        0.010513592 = product of:
          0.031540774 = sum of:
            0.031540774 = weight(_text_:29 in 1497) [ClassicSimilarity], result of:
              0.031540774 = score(doc=1497,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.23319192 = fieldWeight in 1497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1497)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    29. 2.2008 17:14:09
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
  14. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.01
    0.006006195 = product of:
      0.018018585 = sum of:
        0.009257258 = weight(_text_:information in 3603) [ClassicSimilarity], result of:
          0.009257258 = score(doc=3603,freq=4.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.13714671 = fieldWeight in 3603, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3603)
        0.008761327 = product of:
          0.02628398 = sum of:
            0.02628398 = weight(_text_:29 in 3603) [ClassicSimilarity], result of:
              0.02628398 = score(doc=3603,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.19432661 = fieldWeight in 3603, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak and A. Nase
  15. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.01
    0.006006195 = product of:
      0.018018585 = sum of:
        0.009257258 = weight(_text_:information in 967) [ClassicSimilarity], result of:
          0.009257258 = score(doc=967,freq=4.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.13714671 = fieldWeight in 967, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=967)
        0.008761327 = product of:
          0.02628398 = sum of:
            0.02628398 = weight(_text_:29 in 967) [ClassicSimilarity], result of:
              0.02628398 = score(doc=967,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.19432661 = fieldWeight in 967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=967)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Because of Twitter's popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., naïve Bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features. (A minimal sketch of this classification setup follows this record.)
    Date
    25. 6.2013 19:05:29
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.7, S.1399-1410
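
    A minimal sketch of the formulation in no. 15: concatenate the 7 content and 11 contextual features per hashtag and fit logistic regression, the classifier the authors found best by Micro-F1. Feature values and labels below are random placeholders, not Twitter data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 1000
    content = rng.random((n, 7))       # 7 content features per hashtag (placeholder)
    context = rng.random((n, 11))      # 11 contextual features from the social graph
    X = np.hstack([content, context])  # combined feature vector
    y = rng.integers(0, 2, n)          # 1 = hashtag became popular (placeholder)

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, scoring="f1_micro").mean())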
  16. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.01
    0.006006195 = product of:
      0.018018585 = sum of:
        0.009257258 = weight(_text_:information in 3682) [ClassicSimilarity], result of:
          0.009257258 = score(doc=3682,freq=4.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.13714671 = fieldWeight in 3682, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3682)
        0.008761327 = product of:
          0.02628398 = sum of:
            0.02628398 = weight(_text_:29 in 3682) [ClassicSimilarity], result of:
              0.02628398 = score(doc=3682,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.19432661 = fieldWeight in 3682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3682)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Analyzing large quantities of real-world textual data has the potential to provide new insights for researchers. However, such data present challenges for both human and computational methods, requiring a diverse range of specialist skills, often shared across a number of individuals. In this paper we use the analysis of a real-world data set as our case study, and use this exploration as a demonstration of our "insight workflow," which we present for use and adaptation by other researchers. The data we use are impact case study documents collected as part of the UK Research Excellence Framework (REF), consisting of 6,679 documents and 6.25 million words; the analysis was commissioned by the Higher Education Funding Council for England (published as report HEFCE 2015). In our exploration and analysis we used a variety of techniques, ranging from keyword in context and frequency information to more sophisticated methods (topic modeling), with these automated techniques providing an empirical point of entry for in-depth and intensive human analysis. We present the 60 topics to demonstrate the output of our methods, and illustrate how the variety of analysis techniques can be combined to provide insights. We note potential limitations and propose future work. (A minimal topic-modeling sketch follows this record.)
    Date
    16.11.2017 14:00:29
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.7, S.1671-1686
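
    A minimal sketch of the "empirical point of entry" step in no. 16: term frequencies plus topic modeling over a document collection. It assumes scikit-learn's LDA and three stand-in documents, not the REF corpus or the authors' tooling:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["impact of research on public health policy",
            "new materials for battery research",
            "health outcomes and clinical impact"]  # stand-ins for REF case studies

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [terms[i] for i in comp.argsort()[-4:][::-1]]
        print(f"topic {k}: {top}")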
  17. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.01
    0.0059799235 = product of:
      0.01793977 = sum of:
        0.009257258 = weight(_text_:information in 1605) [ClassicSimilarity], result of:
          0.009257258 = score(doc=1605,freq=4.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.13714671 = fieldWeight in 1605, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.008682513 = product of:
          0.026047537 = sum of:
            0.026047537 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.026047537 = score(doc=1605,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China. (A minimal correlation sketch follows this record.)
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
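
    A minimal sketch of the core comparison in no. 17: correlate weekly query-volume series from two sources and check data availability. The series are synthetic placeholders; neither Google Trends nor Baidu Index is queried here:

    import numpy as np
    import pandas as pd

    # Placeholder weekly search-volume indices for one query term.
    weeks = pd.date_range("2014-01-05", periods=52, freq="W")
    rng = np.random.default_rng(0)
    base = rng.random(52).cumsum()
    google = pd.Series(base + rng.normal(0, 0.3, 52), index=weeks)
    baidu = pd.Series(base + rng.normal(0, 0.3, 52), index=weeks)

    print("Pearson r:", google.corr(baidu))                  # high if the sources agree
    print("weeks missing in Google:", google.isna().sum())   # data-availability check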
  18. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.01
    0.0051023993 = product of:
      0.015307197 = sum of:
        0.00654587 = weight(_text_:information in 3059) [ClassicSimilarity], result of:
          0.00654587 = score(doc=3059,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.09697737 = fieldWeight in 3059, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3059)
        0.008761327 = product of:
          0.02628398 = sum of:
            0.02628398 = weight(_text_:29 in 3059) [ClassicSimilarity], result of:
              0.02628398 = score(doc=3059,freq=2.0), product of:
                0.13525672 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03845047 = queryNorm
                0.19432661 = fieldWeight in 3059, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    21. 7.2016 19:29:19
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.8, S.2016-2031
  19. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.005076128 = product of:
      0.015228383 = sum of:
        0.00654587 = weight(_text_:information in 668) [ClassicSimilarity], result of:
          0.00654587 = score(doc=668,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.09697737 = fieldWeight in 668, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=668)
        0.008682513 = product of:
          0.026047537 = sum of:
            0.026047537 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.026047537 = score(doc=668,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    22. 3.2013 19:43:01
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.3, S.574-586
  20. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.005076128 = product of:
      0.015228383 = sum of:
        0.00654587 = weight(_text_:information in 5011) [ClassicSimilarity], result of:
          0.00654587 = score(doc=5011,freq=2.0), product of:
            0.067498945 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03845047 = queryNorm
            0.09697737 = fieldWeight in 5011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5011)
        0.008682513 = product of:
          0.026047537 = sum of:
            0.026047537 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.026047537 = score(doc=5011,freq=2.0), product of:
                0.13464698 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03845047 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    7. 3.2019 16:32:22
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.4, S.402-411

Languages

  • e 108
  • d 22
  • sp 1

Types

  • a 110
  • m 16
  • s 14
  • el 4
  • x 1