Search (116 results, page 1 of 6)

  • theme_ss:"Data Mining"
  • type_ss:"a"
  1. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.02
    0.022141643 = product of:
      0.06642493 = sum of:
        0.06642493 = sum of:
          0.018938582 = weight(_text_:of in 1737) [ClassicSimilarity], result of:
            0.018938582 = score(doc=1737,freq=8.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.27643585 = fieldWeight in 1737, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0625 = fieldNorm(doc=1737)
          0.047486346 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
            0.047486346 = score(doc=1737,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.30952093 = fieldWeight in 1737, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1737)
      0.33333334 = coord(1/3)
    
    Abstract
     Defines digital libraries and discusses the effects of new technology on librarians. Examines the different viewpoints of librarians and information technologists on digital libraries. Describes the development of a digital library at the National Drug Intelligence Center, USA, which was carried out in collaboration with information technology experts. The system is based on Web-enabled search technology to find information, data visualization and data mining to visualize it, and the use of SGML as an information standard to store it.
    Date
    22.11.1998 18:57:22
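     The indented relevance breakdowns attached to each result are Lucene ClassicSimilarity (tf-idf) explain output. As a minimal sketch, the top-level score of result no. 1 can be recomputed from the factors listed there (plain Python; function and variable names are ours, not part of the output):
       import math

       # One ClassicSimilarity term weight, as in the explain tree above:
       # weight = queryWeight * fieldWeight
       #        = (idf * queryNorm) * (sqrt(termFreq) * idf * fieldNorm)
       def term_weight(freq, idf, query_norm, field_norm):
           tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
           query_weight = idf * query_norm       # queryWeight
           field_weight = tf * idf * field_norm  # fieldWeight
           return query_weight * field_weight

       QUERY_NORM = 0.043811057
       w_of = term_weight(8.0, 1.5637573, QUERY_NORM, 0.0625)  # weight(_text_:of in 1737)
       w_22 = term_weight(2.0, 3.5018296, QUERY_NORM, 0.0625)  # weight(_text_:22 in 1737)
       score = (w_of + w_22) * (1.0 / 3.0)                     # coord(1/3)
       print(w_of, w_22, score)  # ~0.018938582, ~0.047486346, ~0.022141643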
  2. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.02
    0.021157425 = product of:
      0.06347227 = sum of:
        0.06347227 = sum of:
          0.021921717 = weight(_text_:of in 2908) [ClassicSimilarity], result of:
            0.021921717 = score(doc=2908,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.31997898 = fieldWeight in 2908, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2908)
          0.041550554 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
            0.041550554 = score(doc=2908,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.2708308 = fieldWeight in 2908, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2908)
      0.33333334 = coord(1/3)
    
    Abstract
     Focuses on the information modelling side of conceptual modelling. Deals with the exploitation of fact verbalisations after finishing the actual information system. Verbalisations are used as input for the design of the so-called information model. Exploits these verbalisations in 4 directions: considers their use for a conceptual query language, the verbalisation of instances, the description of the contents of a database and for the verbalisation of queries in a computer-supported query environment. Provides an example session with an envisioned tool for end-user query formulation that exploits the verbalisations.
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  3. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.02
    0.018985212 = product of:
      0.056955636 = sum of:
        0.056955636 = sum of:
          0.009469291 = weight(_text_:of in 1270) [ClassicSimilarity], result of:
            0.009469291 = score(doc=1270,freq=2.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.13821793 = fieldWeight in 1270, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0625 = fieldNorm(doc=1270)
          0.047486346 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
            0.047486346 = score(doc=1270,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.30952093 = fieldWeight in 1270, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1270)
      0.33333334 = coord(1/3)
    
    Abstract
     Current algorithms for finding associations among the attributes describing data in a database have a number of shortcomings. Presents a novel method for association generation that answers all desiderata. The method is different from all existing algorithms and especially suitable for textual databases with binary attributes. Uses subword trees for quick indexing into the required database statistics. Tests the algorithm on the Reuters-22173 database with satisfactory results.
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  4. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.02
    0.01826274 = product of:
      0.054788217 = sum of:
        0.054788217 = sum of:
          0.02510925 = weight(_text_:of in 5011) [ClassicSimilarity], result of:
            0.02510925 = score(doc=5011,freq=36.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.36650562 = fieldWeight in 5011, product of:
                6.0 = tf(freq=36.0), with freq of:
                  36.0 = termFreq=36.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5011)
          0.029678967 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
            0.029678967 = score(doc=5011,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 5011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5011)
      0.33333334 = coord(1/3)
    
    Abstract
    The present challenge faced by scientists working with Big Data comes in the overwhelming volume and level of detail provided by current data sets. Exceeding traditional empirical approaches, Big Data opens a new perspective on scientific work in which data comes to play a role in the development of the scientific problematic to be developed. Addressing this reconfiguration of our relationship with data through readings of Wittgenstein, Macherey, and Popper, we propose a picture of science that encourages scientists to engage with the data in a direct way, using the data itself as an instrument for scientific investigation. Using GIS as a theme, we develop the concept of cyber-human systems of thought and understanding to bridge the divide between representative (theoretical) thinking and (non-theoretical) data-driven science. At the foundation of these systems, we invoke the concept of the "semantic pixel" to establish a logical and virtual space linking data and the work of scientists. It is with this discussion of the relationship between analysts in their pursuit of knowledge and the rise of Big Data that this present discussion of the philosophical foundations of Big Data addresses the central questions raised by social informatics research.
    Date
    7. 3.2019 16:32:22
    Footnote
     Contribution to a special issue on social informatics of knowledge
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.4, S.402-411
  5. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.02
    0.016726863 = product of:
      0.050180584 = sum of:
        0.050180584 = sum of:
          0.020501617 = weight(_text_:of in 1605) [ClassicSimilarity], result of:
            0.020501617 = score(doc=1605,freq=24.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2992506 = fieldWeight in 1605, product of:
                4.8989797 = tf(freq=24.0), with freq of:
                  24.0 = termFreq=24.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
          0.029678967 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
            0.029678967 = score(doc=1605,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 1605, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
      0.33333334 = coord(1/3)
    
    Abstract
     Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than its user base in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
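     As a rough sketch of the kind of comparison described in the abstract above (no. 5), one can correlate query-volume figures from the two sources with each other and with an external quality indicator. The numbers below are invented placeholders, not the study's data (pandas/SciPy):
       import pandas as pd
       from scipy.stats import pearsonr

       # Invented query-volume and quality figures for five entities (illustration only).
       df = pd.DataFrame({
           "google_trends": [120, 80, 45, 200, 60],
           "baidu_index":   [300, 210, 90, 520, 150],
           "quality_score": [0.8, 0.6, 0.3, 0.9, 0.5],
       })

       # How strongly do the two query-volume sources agree?
       r, p = pearsonr(df["google_trends"], df["baidu_index"])
       print(f"Google Trends vs Baidu Index: r={r:.2f} (p={p:.3g})")

       # Does either source track the external quality indicator?
       print(df[["google_trends", "baidu_index"]].corrwith(df["quality_score"]))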
  6. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.02
    0.016131433 = product of:
      0.048394296 = sum of:
        0.048394296 = sum of:
          0.01871533 = weight(_text_:of in 668) [ClassicSimilarity], result of:
            0.01871533 = score(doc=668,freq=20.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.27317715 = fieldWeight in 668, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0390625 = fieldNorm(doc=668)
          0.029678967 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
            0.029678967 = score(doc=668,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.19345059 = fieldWeight in 668, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=668)
      0.33333334 = coord(1/3)
    
    Abstract
     20th century massification of higher education and research in academia is said to have produced structurally stratified higher education systems in many countries. Most manifestly, the research mission of universities appears to be divisive. Authors have claimed that the Swedish system, while formally unified, has developed into a binary state, and statistics seem to support this conclusion. This article makes use of a comprehensive statistical data source on Swedish higher education institutions to illustrate stratification, and uses literature on Swedish research policy history to contextualize the statistics. Highlighting the opportunities as well as constraints of the data, the article argues that there is great merit in combining statistics with a qualitative analysis when studying the structural characteristics of national higher education systems. Not least, the article shows that it is an over-simplification to describe the Swedish system as binary; the stratification is more complex. On the basis of the analysis, the article also argues that while global trends certainly influence national developments, higher education systems have country-specific features that may enrich the understanding of how systems evolve and therefore should be analyzed as part of a broader study of the increasingly globalized academic system.
    Date
    22. 3.2013 19:43:01
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.3, S.574-586
  7. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    0.013850185 = product of:
      0.041550554 = sum of:
        0.041550554 = product of:
          0.08310111 = sum of:
            0.08310111 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.08310111 = score(doc=4577,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    2. 4.2000 18:01:22
  8. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.01
    0.0101463245 = product of:
      0.030438973 = sum of:
        0.030438973 = sum of:
          0.0066958 = weight(_text_:of in 1507) [ClassicSimilarity], result of:
            0.0066958 = score(doc=1507,freq=4.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.09773483 = fieldWeight in 1507, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
          0.023743173 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
            0.023743173 = score(doc=1507,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.15476047 = fieldWeight in 1507, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
      0.33333334 = coord(1/3)
    
    Abstract
     We will first describe the foundations of the text mining system at IBM and then present the project itself in greater breadth and detail, since that is the part we know best. So there are two parts: Heidelberg and Hamburg. Once more on the technology: text mining is a technology developed by IBM that was put together for us in a particular form and programming. For a long time the project was called DocText Miner; for some time now, at IBM's suggestion, it has been called DocCat, which is meant to be short for Document Categoriser and is, after all, a nice and descriptive name. We start with text mining as developed by IBM in Heidelberg. There, automatic indexing is understood as one instance, that is, one part, of text mining. The problems involved are pointed out, and text mining is essentially a method for structuring and searching large collections of documents, for extracting information and, this is the ambitious claim, implicit relationships. Whether the latter succeeds may be left open. IBM does this quantitatively, empirically, approximately and fast, that really has to be said. The goal, and this was very important for our project, is not to understand the text; rather, the result of these procedures is what is nowadays called a bundle of words, a bag of words, i.e. a set of meaning-bearing terms extracted from a text on the basis of algorithms, essentially on the basis of computational operations. There are quite a number of preliminary linguistic studies, and a little linguistics is involved, but it is not the foundation of the whole approach. What they did for us is the annotation of press texts for our press database. For those not yet familiar with it: Gruner + Jahr maintains a text documentation unit that has run a database since the early 1970s; it currently holds about 6.5 million documents, of which slightly more than 1 million are full texts from 1993 onwards. For a long time the principle was that we assigned descriptors to the documents stored in the database, and we continued this principle, in a slimmed-down form, when full text was introduced. These 6.5 million documents are also accompanied by roughly 10 million facsimile pages, because we still archive the facsimiles as a matter of course.
    Date
    22. 4.2003 11:45:36
  9. Kong, S.; Ye, F.; Feng, L.; Zhao, Z.: Towards the prediction problems of bursting hashtags on Twitter (2015) 0.00
    0.0047837105 = product of:
      0.014351131 = sum of:
        0.014351131 = product of:
          0.028702263 = sum of:
            0.028702263 = weight(_text_:of in 2338) [ClassicSimilarity], result of:
              0.028702263 = score(doc=2338,freq=24.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.41895083 = fieldWeight in 2338, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Hundreds of thousands of hashtags are generated every day on Twitter. Only a few will burst and become trending topics. In this article, we provide the definition of a bursting hashtag and conduct a systematic study of a series of challenging prediction problems that span the entire life cycles of bursting hashtags. Around the problem of "how to build a system to predict bursting hashtags," we explore different types of features and present machine learning solutions. On real data sets from Twitter, experiments are conducted to evaluate the effectiveness of the proposed solutions and the contributions of features.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2566-2579
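     A minimal sketch of the supervised set-up described in the abstract above (no. 9): train a classifier on per-hashtag features to predict whether a hashtag will burst. Features and labels here are synthetic placeholders, not the paper's (scikit-learn):
       import numpy as np
       from sklearn.ensemble import RandomForestClassifier
       from sklearn.model_selection import train_test_split
       from sklearn.metrics import f1_score

       # Placeholder features per hashtag (e.g. early tweet counts, user counts, growth rate).
       rng = np.random.default_rng(0)
       X = rng.random((1000, 5))                  # 1000 hashtags, 5 features
       y = (X[:, 0] + X[:, 1] > 1.2).astype(int)  # toy label: 1 = "will burst"

       X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
       clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
       print("F1 on held-out hashtags:", f1_score(y_te, clf.predict(X_te)))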
  10. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.00
    0.0045201816 = product of:
      0.013560544 = sum of:
        0.013560544 = product of:
          0.027121088 = sum of:
            0.027121088 = weight(_text_:of in 3603) [ClassicSimilarity], result of:
              0.027121088 = score(doc=3603,freq=42.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.39587128 = fieldWeight in 3603, product of:
                  6.4807405 = tf(freq=42.0), with freq of:
                    42.0 = termFreq=42.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     This paper addresses a bibliometric methodology to discover the structure of the scientific 'landscape' in order to gain detailed insight into the development of MD fields, their interaction, and the transfer of knowledge between them. This methodology is appropriate to visualize the position of MD activities in relation to interdisciplinary MD developments, and particularly in relation to socio-economic problems. Furthermore, it allows the identification of the major actors. It even provides the possibility of foresight. We describe a first approach to apply bibliometric mapping as an instrument to investigate characteristics of knowledge transfer. In this paper we discuss the creation of 'maps of science' with the help of advanced bibliometric methods. This 'bibliometric cartography' can be seen as a specific type of data-mining, applied to large amounts of scientific publications. As an example we describe the mapping of the field neuroscience, one of the largest and fastest growing fields in the life sciences. The number of publications covered by this database is about 80,000 per year; the period covered is 1995-1998. Current research is going on to update the mapping for the years 1999-2002. This paper addresses the main lines of the methodology and its application in the study of knowledge transfer.
    Source
     Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
  11. Biskri, I.; Rompré, L.: Using association rules for query reformulation (2012) 0.00
    0.004428855 = product of:
      0.013286565 = sum of:
        0.013286565 = product of:
          0.02657313 = sum of:
            0.02657313 = weight(_text_:of in 92) [ClassicSimilarity], result of:
              0.02657313 = score(doc=92,freq=28.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38787308 = fieldWeight in 92, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=92)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper the authors will present research on the combination of two methods of data mining: text classification and maximal association rules. Text classification has been the focus of interest of many researchers for a long time. However, the results take the form of lists of words (classes) that people often do not know what to do with. The use of maximal association rules induced a number of advantages: (1) the detection of dependencies and correlations between the relevant units of information (words) of different classes, (2) the extraction of hidden knowledge, often relevant, from a large volume of data. The authors will show how this combination can improve the process of information retrieval.
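     As a rough illustration of the association-rule component described in the abstract above (no. 11), a tiny pure-Python example derives ordinary support/confidence rules from term co-occurrence; these are plain association rules, not the maximal association rules of the paper, and the documents are invented:
       from itertools import combinations
       from collections import Counter

       # Toy "documents" represented as sets of index terms (invented example data).
       docs = [
           {"data", "mining", "text"},
           {"data", "mining", "rules"},
           {"query", "reformulation", "rules"},
           {"data", "text", "classification"},
       ]

       min_support, min_conf = 0.5, 0.7
       n = len(docs)
       item_counts = Counter(t for d in docs for t in d)
       pair_counts = Counter(p for d in docs for p in combinations(sorted(d), 2))

       # Keep a rule A -> B if the pair is frequent and the confidence is high enough.
       for (a, b), c in pair_counts.items():
           if c / n >= min_support:
               for x, y in ((a, b), (b, a)):
                   conf = c / item_counts[x]
                   if conf >= min_conf:
                       print(f"{x} -> {y}  support={c / n:.2f}  confidence={conf:.2f}")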
  12. Chen, S.Y.; Liu, X.: The contribution of data mining to information science : making sense of it all (2005) 0.00
    0.0041003237 = product of:
      0.01230097 = sum of:
        0.01230097 = product of:
          0.02460194 = sum of:
            0.02460194 = weight(_text_:of in 4655) [ClassicSimilarity], result of:
              0.02460194 = score(doc=4655,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.3591007 = fieldWeight in 4655, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4655)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of information science. 30(2005) no.6, S.550-
  13. Hölzig, C.: Google spürt Grippewellen auf : Die neue Anwendung ist bisher auf die USA beschränkt (2008) 0.00
    0.0039571957 = product of:
      0.011871587 = sum of:
        0.011871587 = product of:
          0.023743173 = sum of:
            0.023743173 = weight(_text_:22 in 2403) [ClassicSimilarity], result of:
              0.023743173 = score(doc=2403,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.15476047 = fieldWeight in 2403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2403)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    3. 5.1997 8:44:22
  14. Jäger, L.: Von Big Data zu Big Brother (2018) 0.00
    0.0039571957 = product of:
      0.011871587 = sum of:
        0.011871587 = product of:
          0.023743173 = sum of:
            0.023743173 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.023743173 = score(doc=5234,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2018 11:33:49
  15. O'Brien, H.L.; Lebow, M.: Mixed-methods approach to measuring user experience in online news interactions (2013) 0.00
    0.003945538 = product of:
      0.0118366135 = sum of:
        0.0118366135 = product of:
          0.023673227 = sum of:
            0.023673227 = weight(_text_:of in 1001) [ClassicSimilarity], result of:
              0.023673227 = score(doc=1001,freq=32.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34554482 = fieldWeight in 1001, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1001)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     When it comes to evaluating online information experiences, what metrics matter? We conducted a study in which 30 people browsed and selected content within an online news website. Data collected included psychometric scales (User Engagement, Cognitive Absorption, System Usability Scales), self-reported interest in news content, and performance metrics (i.e., reading time, browsing time, total time, number of pages visited, and use of recommended links); a subset of the participants had their physiological responses recorded during the interaction (i.e., heart rate, electrodermal activity, electromyogram). Findings demonstrated the concurrent validity of the psychometric scales and interest ratings and revealed that increased time on tasks, number of pages visited, and use of recommended links were not necessarily indicative of greater self-reported engagement, cognitive absorption, or perceived usability. Positive ratings of news content were associated with lower physiological activity. The implications of this research are twofold. First, we propose that user experience is a useful framework for studying online information interactions and will result in a broader conceptualization of information interaction and its evaluation. Second, we advocate a mixed-methods approach to measurement that employs a suite of metrics capable of capturing the pragmatic (e.g., usability) and hedonic (e.g., fun, engagement) aspects of information interactions. We underscore the importance of using multiple measures in information research, because our results emphasize that performance and physiological data must be interpreted in the context of users' subjective experiences.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.8, S.1543-1556
  16. Deogun, J.S.: Feature selection and effective classifiers (1998) 0.00
    0.003925761 = product of:
      0.011777283 = sum of:
        0.011777283 = product of:
          0.023554565 = sum of:
            0.023554565 = weight(_text_:of in 2911) [ClassicSimilarity], result of:
              0.023554565 = score(doc=2911,freq=22.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34381276 = fieldWeight in 2911, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2911)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     Develops and analyzes 4 algorithms for feature selection in the context of rough set methodology. Develops the notion of accuracy of classification that can be used for upper or lower classification methods and defines the feature selection problem. Presents a discussion of upper classifiers, develops 4 feature selection heuristics and discusses the family of stepwise backward selection algorithms. Analyzes the worst-case time complexity of all algorithms presented. Discusses details of the experiments and the results of applying the family of stepwise backward selection algorithms to learning data sets and a duodenal ulcer data set. Includes the experimental setup and the results of comparing lower and upper classifiers on the duodenal ulcer data set. Discusses extended decision tables.
    Source
    Journal of the American Society for Information Science. 49(1998) no.5, S.423-434
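     A sketch of stepwise backward feature selection in the spirit of the abstract above (no. 16), using scikit-learn's sequential selector with a generic estimator rather than the paper's rough-set classifiers; the data set is synthetic:
       from sklearn.datasets import make_classification
       from sklearn.feature_selection import SequentialFeatureSelector
       from sklearn.linear_model import LogisticRegression

       # Synthetic stand-in for a labelled data set such as the duodenal ulcer data.
       X, y = make_classification(n_samples=300, n_features=12, n_informative=5, random_state=0)

       # Start from all features and drop them one at a time (backward selection).
       selector = SequentialFeatureSelector(
           LogisticRegression(max_iter=1000),
           n_features_to_select=5,
           direction="backward",
       )
       selector.fit(X, y)
       print("Selected feature indices:", selector.get_support(indices=True))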
  17. Leydesdorff, L.; Persson, O.: Mapping the geography of science : distribution patterns and networks of relations among cities and institutes (2010) 0.00
    0.003925761 = product of:
      0.011777283 = sum of:
        0.011777283 = product of:
          0.023554565 = sum of:
            0.023554565 = weight(_text_:of in 3704) [ClassicSimilarity], result of:
              0.023554565 = score(doc=3704,freq=22.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34381276 = fieldWeight in 3704, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3704)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Using Google Earth, Google Maps, and/or network visualization programs such as Pajek, one can overlay the network of relations among addresses in scientific publications onto the geographic map. The authors discuss the pros and cons of various options, and provide software (freeware) for bridging existing gaps between the Science Citation Indices (Thomson Reuters) and Scopus (Elsevier), on the one hand, and these various visualization tools on the other. At the level of city names, the global map can be drawn reliably on the basis of the available address information. At the level of the names of organizations and institutes, there are problems of unification both in the ISI databases and with Scopus. Pajek enables a combination of visualization and statistical analysis, whereas the Google Maps and its derivatives provide superior tools on the Internet.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1622-1634
  18. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.00
    0.0039058835 = product of:
      0.01171765 = sum of:
        0.01171765 = product of:
          0.0234353 = sum of:
            0.0234353 = weight(_text_:of in 3886) [ClassicSimilarity], result of:
              0.0234353 = score(doc=3886,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34207192 = fieldWeight in 3886, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3886)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     The paper investigates the acceleration of t-SNE, an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
    Source
    Journal of machine learning research. 15(2014), S.3221-3245
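     The Barnes-Hut variant discussed in the abstract above (no. 18) is the approximation exposed by common t-SNE implementations. A minimal usage sketch with scikit-learn on synthetic data; parameter values are illustrative only:
       import numpy as np
       from sklearn.manifold import TSNE

       # Synthetic high-dimensional data standing in for a large real data set.
       rng = np.random.default_rng(0)
       X = rng.normal(size=(2000, 50))

       # method="barnes_hut" gives the O(N log N) gradient approximation discussed above.
       tsne = TSNE(n_components=2, method="barnes_hut", perplexity=30.0, random_state=0)
       embedding = tsne.fit_transform(X)
       print(embedding.shape)  # (2000, 2)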
  19. Fayyad, U.M.; Djorgovski, S.G.; Weir, N.: From digitized images to online catalogs : data mining a sky survey (1996) 0.00
    0.003865822 = product of:
      0.011597466 = sum of:
        0.011597466 = product of:
          0.023194931 = sum of:
            0.023194931 = weight(_text_:of in 6625) [ClassicSimilarity], result of:
              0.023194931 = score(doc=6625,freq=12.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.33856338 = fieldWeight in 6625, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6625)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Offers a data mining approach based on machine learning classification methods to the problem of automated cataloguing of online databases of digital images resulting from sky surveys. The SKICAT system automates the reduction and analysis of 3 terabytes of images expected to contain about 2 billion sky objects. It offers a solution to problems associated with the analysis of large data sets in science
  20. Ohly, H.P.: Bibliometric mining : added value from document analysis and retrieval (2008) 0.00
    0.003743066 = product of:
      0.0112291975 = sum of:
        0.0112291975 = product of:
          0.022458395 = sum of:
            0.022458395 = weight(_text_:of in 2386) [ClassicSimilarity], result of:
              0.022458395 = score(doc=2386,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.32781258 = fieldWeight in 2386, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2386)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Bibliometrics is understood as statistical analysis of scientific structures and processes. The analyzed data result from information and administrative actions. The demand for quality judgments or the discovering of new structures and information means that Bibliometrics takes on the role of being exploratory and decision supporting. To the extent that it has acquired important features of Data Mining, the analysis of text and internet material can be viewed as an additional challenge. In the sense of an evaluative approach Bibliometrics can also be seen to apply inference procedures as well as navigation tools.
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch

Languages

  • e 110
  • d 5
  • sp 1
