Search (2 results, page 1 of 1)

  • × theme_ss:"Internet"
  • × theme_ss:"Data Mining"
  1. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.02
    0.015284747 = product of:
      0.04585424 = sum of:
        0.04585424 = product of:
          0.09170848 = sum of:
            0.09170848 = weight(_text_:2002 in 3603) [ClassicSimilarity], result of:
              0.09170848 = score(doc=3603,freq=7.0), product of:
                0.20701107 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.048293278 = queryNorm
                0.44301245 = fieldWeight in 3603, product of:
                  2.6457512 = tf(freq=7.0), with freq of:
                    7.0 = termFreq=7.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
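    The indented breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring of the query term 2002. As a rough cross-check, the short Python sketch below recomputes the first entry's score from the quantities shown; it is only an illustration of the formula, not code from this catalogue, and queryNorm is simply copied from the output rather than derived. The same formula with freq=2.0 and fieldNorm=0.03125 reproduces the second entry's score of about 0.0065.

      import math

      # Reconstruct Lucene ClassicSimilarity: score = queryWeight * fieldWeight * coord factors.
      freq = 7.0                     # occurrences of "2002" in document 3603
      doc_freq, max_docs = 1652, 44218
      query_norm = 0.048293278       # copied from the explain output (not derived here)
      field_norm = 0.0390625         # field length normalization, as shown above

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~4.28654
      tf = math.sqrt(freq)                              # ~2.6457512
      query_weight = idf * query_norm                   # ~0.20701107
      field_weight = tf * idf * field_norm              # ~0.44301245
      raw_weight = query_weight * field_weight          # ~0.09170848
      score = raw_weight * 0.5 * (1.0 / 3.0)            # coord(1/2) * coord(1/3)
      print(f"{score:.9f}")                             # ~0.015284747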
    
    Abstract
    This paper addresses a bibliometric methodology to discover the structure of the scientific 'landscape' in order to gain detailed insight into the development of R&D fields, their interaction, and the transfer of knowledge between them. This methodology is appropriate to visualize the position of R&D activities in relation to interdisciplinary R&D developments, and particularly in relation to socio-economic problems. Furthermore, it allows the identification of the major actors. It even provides the possibility of foresight. We describe a first approach to applying bibliometric mapping as an instrument to investigate characteristics of knowledge transfer. In this paper we discuss the creation of 'maps of science' with the help of advanced bibliometric methods. This 'bibliometric cartography' can be seen as a specific type of data mining applied to large amounts of scientific publications. As an example we describe the mapping of the field of neuroscience, one of the largest and fastest-growing fields in the life sciences. The number of publications covered by this database is about 80,000 per year; the period covered is 1995-1998. Research is currently under way to update the mapping for the years 1999-2002. The paper presents the main lines of the methodology and its application to the study of knowledge transfer.
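    The abstract describes bibliometric cartography only at a high level. As an illustration of the kind of data mining involved, here is a minimal, hypothetical co-word clustering sketch: keywords that tend to appear in the same publications are grouped together, which is one common ingredient of such science maps. The toy keyword sets are invented and the method is a simplification, not the authors' actual ISI-based mapping pipeline.

      from collections import defaultdict
      from itertools import combinations
      import math

      # Toy records: each publication reduced to its indexing keywords (invented data).
      papers = [
          {"synapse", "plasticity", "memory"},
          {"synapse", "plasticity", "LTP"},
          {"memory", "hippocampus", "LTP"},
          {"fMRI", "cortex", "imaging"},
          {"fMRI", "imaging", "hippocampus"},
      ]

      # 1. Count how often two keywords co-occur in the same publication.
      cooc = defaultdict(int)
      for kws in papers:
          for a, b in combinations(sorted(kws), 2):
              cooc[(a, b)] += 1

      keywords = sorted({k for kws in papers for k in kws})

      # 2. Represent each keyword by its co-occurrence profile over all keywords.
      def profile(k):
          return [0 if o == k else cooc.get(tuple(sorted((k, o))), 0) for o in keywords]

      def cosine(u, v):
          dot = sum(x * y for x, y in zip(u, v))
          norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
          return dot / norm if norm else 0.0

      # 3. Single-link grouping: keywords joined by a chain of similar profiles end up
      #    in the same cluster (the "islands" a science map would draw close together).
      parent = {k: k for k in keywords}
      def find(k):
          while parent[k] != k:
              parent[k] = parent[parent[k]]
              k = parent[k]
          return k

      for a, b in combinations(keywords, 2):
          if cosine(profile(a), profile(b)) >= 0.6:
              parent[find(a)] = find(b)

      clusters = defaultdict(list)
      for k in keywords:
          clusters[find(k)].append(k)
      print(list(clusters.values()))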
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29-31, 2002. Eds.: W. Adamczak and A. Nase
    Year
    2002
  2. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.01
    0.0065360325 = product of:
      0.019608097 = sum of:
        0.019608097 = product of:
          0.039216194 = sum of:
            0.039216194 = weight(_text_:2002 in 2222) [ClassicSimilarity], result of:
              0.039216194 = score(doc=2222,freq=2.0), product of:
                0.20701107 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.048293278 = queryNorm
                0.18944009 = fieldWeight in 2222, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2222)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: JASIST 55(2004) no.3, pp.275-276 (C. Chen): "This is a book about finding significant statistical patterns on the Web, in particular patterns that are associated with hypertext documents, topics, hyperlinks, and queries. The term pattern in this book refers to dependencies among such items. On the one hand, the Web contains useful information on just about every topic under the sun. On the other hand, just like searching for a needle in a haystack, one would need powerful tools to locate useful information on the vast land of the Web. Soumen Chakrabarti's book focuses on a wide range of techniques for machine learning and data mining on the Web. The goal of the book is to provide both the technical background and the tools and tricks of the trade of Web content mining. Much of the technical content reflects the state of the art between 1995 and 2002. The targeted audience is researchers and innovative developers in this area, as well as newcomers who intend to enter it. The book begins with an introductory chapter that explains fundamental concepts such as crawling and indexing as well as clustering and classification. The remaining eight chapters are organized into three parts: i) infrastructure, ii) learning, and iii) applications.
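    Since the review only names crawling and indexing as fundamental concepts, here is a minimal, hypothetical sketch of the indexing side: building an inverted index over a toy set of pages and answering a boolean AND query against it. The URLs and page texts are invented, and a real crawler would additionally fetch pages over HTTP, follow hyperlinks, and respect robots.txt; none of this is code from the book.

      import re
      from collections import defaultdict

      # Toy pages standing in for crawled documents (invented URLs and texts).
      pages = {
          "http://example.org/a": "Mining the Web discovers knowledge from hypertext data.",
          "http://example.org/b": "Hypertext links connect Web pages into a graph.",
          "http://example.org/c": "Clustering and classification are core learning tasks.",
      }

      # Indexing: map each term to the set of pages containing it (an inverted index).
      index = defaultdict(set)
      for url, text in pages.items():
          for term in re.findall(r"[a-z]+", text.lower()):
              index[term].add(url)

      # Boolean AND retrieval: pages that contain every query term.
      def search(query):
          terms = re.findall(r"[a-z]+", query.lower())
          if not terms:
              return set()
          hits = set(index.get(terms[0], set()))
          for t in terms[1:]:
              hits &= index.get(t, set())
          return hits

      print(sorted(search("hypertext web")))   # the two pages mentioning both terms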