Search (140 results, page 1 of 7)

  • Filter: theme_ss:"Informetrie"
  1. Coulter, N.; Monarch, I.; Konda, S.: Software engineering as seen through its research literature : a study in co-word analysis (1998) 0.03
    0.031167427 = product of:
      0.062334854 = sum of:
        0.062334854 = product of:
          0.12466971 = sum of:
            0.12466971 = weight(_text_:software in 2161) [ClassicSimilarity], result of:
              0.12466971 = score(doc=2161,freq=6.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.6073436 = fieldWeight in 2161, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This empirical research demonstrates the effectiveness of content analysis in mapping the research literature of the software engineering discipline. The results suggest that certain research themes in software engineering have remained constant, but with changing thrusts.
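     The nested score breakdowns shown for each hit are Lucene "explain" output for the ClassicSimilarity (TF-IDF) model. As a sketch of how the listed numbers fit together, the snippet below recomputes the figures of this first entry from the values in its tree (the two trailing 0.5 factors are the coord(1/2) terms); the idf formula used is the standard ClassicSimilarity one, assumed here because the tree reports only its result.

```python
import math

# Values copied from the explain tree of result 1 (doc 2161, term "software")
freq = 6.0              # termFreq: occurrences of "software" in the field
doc_freq = 2274         # docFreq: documents containing "software"
max_docs = 44218        # maxDocs: documents in the index
query_norm = 0.051742528
field_norm = 0.0625

# ClassicSimilarity components (idf assumed as 1 + ln(maxDocs / (docFreq + 1)))
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~3.9671519
tf = math.sqrt(freq)                              # ~2.4494898
query_weight = idf * query_norm                   # ~0.20527047
field_weight = tf * idf * field_norm              # ~0.6073436
term_score = query_weight * field_weight          # ~0.12466971

# Two coord(1/2) factors reduce the term score to the listed document score
doc_score = term_score * 0.5 * 0.5                # ~0.031167427
print(idf, term_score, doc_score)
```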
  2. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: Science mapping software tools : review, analysis, and cooperative study among tools (2011) 0.03
    0.031167427 = product of:
      0.062334854 = sum of:
        0.062334854 = product of:
          0.12466971 = sum of:
            0.12466971 = weight(_text_:software in 4486) [ClassicSimilarity], result of:
              0.12466971 = score(doc=4486,freq=6.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.6073436 = fieldWeight in 4486, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4486)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Science mapping aims to build bibliometric maps that describe how specific disciplines, scientific domains, or research fields are conceptually, intellectually, and socially structured. Different techniques and software tools have been proposed to carry out science mapping analysis. The aim of this article is to review, analyze, and compare some of these software tools, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
  3. Ravichandra Rao, I.K.; Sahoo, B.B.: Studies and research in informetrics at the Documentation Research and Training Centre (DRTC), ISI Bangalore (2006) 0.03
    0.030177733 = product of:
      0.060355466 = sum of:
        0.060355466 = product of:
          0.12071093 = sum of:
            0.12071093 = weight(_text_:software in 1512) [ClassicSimilarity], result of:
              0.12071093 = score(doc=1512,freq=10.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.58805794 = fieldWeight in 1512, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1512)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Contributions of DRTC to informetric studies and research are discussed. A report on recent work, a quantitative country-wise analysis of the software literature based on data from two bibliographic databases (COMPENDEX and INSPEC), is presented. The number of countries in the most productive group involved in software R&D activities is increasing. The research contribution on software is decreasing in developed countries as compared to that in developing and less developed countries. India's contribution is only 1.1% and has remained constant over the 12-year period 1989-2001. The number of countries involved in software R&D activities has been increasing in the 1990s. It is also noted that the higher the budget for higher education, the higher the number of publications; and the higher the number of publications, the higher both the export and the domestic consumption of software.
  4. Marion, L.S.; McCain, K.W.: Contrasting views of software engineering journals : author cocitation choices and indexer vocabulary assignments (2001) 0.03
    0.029755643 = product of:
      0.059511285 = sum of:
        0.059511285 = product of:
          0.11902257 = sum of:
            0.11902257 = weight(_text_:software in 5767) [ClassicSimilarity], result of:
              0.11902257 = score(doc=5767,freq=14.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.5798329 = fieldWeight in 5767, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5767)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     We explore the intellectual subject structure and research themes in software engineering through the identification and analysis of a core journal literature. We examine this literature via two expert perspectives: that of the author, who identifies significant work by citing it (journal cocitation analysis), and that of the professional indexer, who tags published work with subject terms to facilitate retrieval from a bibliographic database (subject profile analysis). The data sources are SCISEARCH (the online version of Science Citation Index) and INSPEC (a database covering software engineering, computer science, and information systems). We use data visualization tools (cluster analysis, multidimensional scaling, and PFNets) to show the "intellectual maps" of software engineering. Cocitation and subject profile analyses demonstrate that software engineering is a distinct interdisciplinary field, valuing practical and applied aspects, and spanning a subject continuum from "programming-in-the-small" to "programming-in-the-large." This continuum mirrors the software development life cycle by taking the operating system or major application from initial programming through project management, implementation, and maintenance. Object orientation is an integral but distinct subject area in software engineering. Key differences concern the importance of management and programming: (1) cocitation analysis emphasizes project management and systems development; (2) programming techniques/languages are more influential in subject profiles; (3) cocitation profiles place object-oriented journals separately and centrally, while the subject profile analysis locates these journals with the programming/languages group.
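     As a rough illustration of the journal cocitation step described above (not the authors' actual data or procedure; the journal names and counts below are invented), one can build a cocitation matrix from the reference lists of citing papers and project it into two dimensions with multidimensional scaling:

```python
import numpy as np
from sklearn.manifold import MDS

# Invented reference lists of citing papers; each set holds the journals cited together
papers = [
    {"IEEE TSE", "ACM TOSEM", "JSS"},
    {"IEEE TSE", "JSS", "IST"},
    {"ACM TOSEM", "JSS", "IST"},
    {"IEEE TSE", "ACM TOSEM", "IST"},
]
journals = sorted(set().union(*papers))
idx = {j: k for k, j in enumerate(journals)}

# Cocitation matrix: how often two journals appear in the same reference list
n = len(journals)
cocit = np.zeros((n, n))
for refs in papers:
    for a in refs:
        for b in refs:
            if a != b:
                cocit[idx[a], idx[b]] += 1

# Turn cocitation counts into dissimilarities and map the journals into 2D
dist = cocit.max() - cocit
np.fill_diagonal(dist, 0)
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
for j, (x, y) in zip(journals, coords):
    print(f"{j:9s} {x:7.2f} {y:7.2f}")
```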
  5. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.03
    0.028041592 = product of:
      0.056083184 = sum of:
        0.056083184 = product of:
          0.11216637 = sum of:
            0.11216637 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
              0.11216637 = score(doc=5509,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.61904186 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5509)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986), S.417-419
  6. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.028041592 = product of:
      0.056083184 = sum of:
        0.056083184 = product of:
          0.11216637 = sum of:
            0.11216637 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.11216637 = score(doc=6091,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  7. Fiala, J.: Information flood : fiction and reality (1987) 0.03
    0.028041592 = product of:
      0.056083184 = sum of:
        0.056083184 = product of:
          0.11216637 = sum of:
            0.11216637 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.11216637 = score(doc=1080,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.61904186 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Thermochimica acta. 110(1987), S.11-22
  8. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: SciMAT: A new science mapping analysis software tool (2012) 0.03
    0.027271498 = product of:
      0.054542996 = sum of:
        0.054542996 = product of:
          0.10908599 = sum of:
            0.10908599 = weight(_text_:software in 373) [ClassicSimilarity], result of:
              0.10908599 = score(doc=373,freq=6.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.53142565 = fieldWeight in 373, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=373)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This article presents a new open-source software tool, SciMAT, which performs science mapping analysis within a longitudinal framework. It provides different modules that help the analyst carry out all the steps of the science mapping workflow. In addition, SciMAT has three key features that set it apart from other science mapping software tools: (a) a powerful preprocessing module to clean the raw bibliographical data, (b) the use of bibliometric measures to study the impact of each studied element, and (c) a wizard to configure the analysis.
  9. Su, Y.; Han, L.-F.: ¬A new literature growth model : variable exponential growth law of literature (1998) 0.02
    0.0247855 = product of:
      0.049571 = sum of:
        0.049571 = product of:
          0.099142 = sum of:
            0.099142 = weight(_text_:22 in 3690) [ClassicSimilarity], result of:
              0.099142 = score(doc=3690,freq=4.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.54716086 = fieldWeight in 3690, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3690)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1999 19:22:35
  10. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.0247855 = product of:
      0.049571 = sum of:
        0.049571 = product of:
          0.099142 = sum of:
            0.099142 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.099142 = score(doc=3925,freq=4.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  11. Diodato, V.: Dictionary of bibliometrics (1994) 0.02
    0.024536394 = product of:
      0.049072787 = sum of:
        0.049072787 = product of:
          0.098145574 = sum of:
            0.098145574 = weight(_text_:22 in 5666) [ClassicSimilarity], result of:
              0.098145574 = score(doc=5666,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.5416616 = fieldWeight in 5666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5666)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: Journal of library and information science 22(1996) no.2, S.116-117 (L.C. Smith)
  12. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.02
    0.024536394 = product of:
      0.049072787 = sum of:
        0.049072787 = product of:
          0.098145574 = sum of:
            0.098145574 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
              0.098145574 = score(doc=6902,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.5416616 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:55:29
  13. Bookstein, A.: Informetric distributions : II. Resilience to ambiguity (1990) 0.02
    0.024536394 = product of:
      0.049072787 = sum of:
        0.049072787 = product of:
          0.098145574 = sum of:
            0.098145574 = weight(_text_:22 in 4689) [ClassicSimilarity], result of:
              0.098145574 = score(doc=4689,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.5416616 = fieldWeight in 4689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4689)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:55:55
  14. Newby, G.B.; Greenberg, J.; Jones, P.: Open source software development and Lotka's law : bibliometric patterns in programming (2003) 0.02
    0.023375569 = product of:
      0.046751138 = sum of:
        0.046751138 = product of:
          0.093502276 = sum of:
            0.093502276 = weight(_text_:software in 5140) [ClassicSimilarity], result of:
              0.093502276 = score(doc=5140,freq=6.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.4555077 = fieldWeight in 5140, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5140)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Newby, Greenberg, and Jones analyze the programming productivity of open source software by counting registered developers' contributions found in the Linux Software Map (LSM) and in SourceForge. Using seven years of data from a subset of the Linux directory tree, the LSM data provided 4503 files with 3341 unique author names. The distribution follows Lotka's law with an exponent of 2.82, as verified by the Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test. The SourceForge data are broken down into developers and administrators, but when both are counted as authors a Lotka distribution exponent of 2.55 produces the lowest error. This would not be significant by the K-S test, but the 3.54% maximum error would indicate a fit and calls into question the appropriateness of K-S for large populations of authors.
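     A minimal sketch of the kind of Lotka's-law check described above, using invented contribution counts in place of the Linux Software Map data (the log-log least-squares fit and the hand-rolled K-S statistic are illustrative choices, not the authors' exact procedure):

```python
import numpy as np

# Invented productivity data: number of contributions credited to each author
contributions = np.array([1] * 2500 + [2] * 420 + [3] * 150 + [4] * 70
                         + [5] * 40 + [6] * 20 + [8] * 10 + [12] * 3)

# Observed distribution: fraction of authors with exactly n contributions
n, counts = np.unique(contributions, return_counts=True)
observed = counts / counts.sum()

# Estimate the Lotka exponent a (authors with n papers ~ C / n**a) on log-log axes
slope, intercept = np.polyfit(np.log(n), np.log(observed), 1)  # intercept ~ log C
a = -slope

# Theoretical Lotka probabilities, normalised over the observed n values
theory = n.astype(float) ** -a
theory /= theory.sum()

# Kolmogorov-Smirnov statistic: largest gap between the two cumulative distributions
ks = np.max(np.abs(np.cumsum(observed) - np.cumsum(theory)))
print(f"estimated exponent a = {a:.2f}, K-S statistic = {ks:.4f}")
```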
  15. Garfield, E.; Paris, S.W.; Stock, W.G.: HistCite(TM) : a software tool for informetric analysis of citation linkage (2006) 0.02
    0.022267086 = product of:
      0.044534173 = sum of:
        0.044534173 = product of:
          0.089068346 = sum of:
            0.089068346 = weight(_text_:software in 79) [ClassicSimilarity], result of:
              0.089068346 = score(doc=79,freq=4.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.43390724 = fieldWeight in 79, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     HistCite(TM) is a software tool for analyzing and visualizing direct citation linkages between scientific papers. Its inputs are bibliographic records (with cited references) from "Web of Knowledge" or other sources. Its outputs are various tables and graphs with informetric indicators about the knowledge domain under study. As an example, we informetrically analyze the literature about Alexius Meinong, an Austrian philosopher and psychologist. The article briefly discusses the informetric functionality of "Web of Knowledge" and outlines the possibilities that HistCite offers its users (e.g. scientists, scientometricians, and science journalists).
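     A toy sketch of the kind of input and output such a citation-linkage tool works with (the records and the "local citation score" indicator below are invented for illustration; HistCite's own formats and indicators are richer):

```python
# Invented bibliographic records: each has an id and the ids it cites
records = [
    {"id": "P1", "year": 1990, "cites": []},
    {"id": "P2", "year": 1995, "cites": ["P1"]},
    {"id": "P3", "year": 1999, "cites": ["P1", "P2"]},
    {"id": "P4", "year": 2003, "cites": ["P2", "P3"]},
]
ids = {r["id"] for r in records}

# Direct-citation links restricted to the collection itself
links = [(r["id"], c) for r in records for c in r["cites"] if c in ids]

# A simple informetric indicator: citations received from inside the collection
local_citations = {r["id"]: 0 for r in records}
for _, cited in links:
    local_citations[cited] += 1

# Chronological listing, the usual ordering for a citation-history table
for r in sorted(records, key=lambda rec: rec["year"]):
    print(r["year"], r["id"], "local citations =", local_citations[r["id"]])
```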
  16. Lewison, G.: ¬The work of the Bibliometrics Research Group (City University) and associates (2005) 0.02
    0.021031193 = product of:
      0.042062387 = sum of:
        0.042062387 = product of:
          0.084124774 = sum of:
            0.084124774 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
              0.084124774 = score(doc=4890,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.46428138 = fieldWeight in 4890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4890)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 17:02:22
  17. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.021031193 = product of:
      0.042062387 = sum of:
        0.042062387 = product of:
          0.084124774 = sum of:
            0.084124774 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.084124774 = score(doc=1239,freq=2.0), product of:
                0.18119352 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051742528 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 3.2014 19:13:22
  18. Kopcsa, A.; Schiebel, E.: Science and technology mapping : a new iteration model for representing multidimensional relationships (1998) 0.02
    0.019086074 = product of:
      0.03817215 = sum of:
        0.03817215 = product of:
          0.0763443 = sum of:
            0.0763443 = weight(_text_:software in 326) [ClassicSimilarity], result of:
              0.0763443 = score(doc=326,freq=4.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.3719205 = fieldWeight in 326, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=326)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Much effort has been made to develop more objective quantitative methods to analyze and integrate survey information for understanding research trends and research structures. Co-word analysis is one class of techniques that exploits the co-occurrence of items in written information. However, there are some bottlenecks in using statistical methods to produce mappings of reduced information in a comfortable manner. On the one hand, commonly used statistical software for PCs restricts the amount of data that can be processed; on the other hand, the results of the multidimensional scaling routines are not quite satisfactory. Therefore, this article introduces a new iteration model for the calculation of co-word maps that eases the problem. The iteration model positions the words in the two-dimensional plane according to their connections to each other, and it consists of a quick and stable algorithm that has been implemented in software for personal computers. A graphic module represents the data in the well-known 'technology maps'.
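     The article's own iteration model is not reproduced here; as a generic sketch of the idea (pull strongly co-occurring words together in the plane, keep weakly connected ones apart), under the assumption of a simple attraction/repulsion update:

```python
import numpy as np

def coword_layout(cooc, iters=200, step=0.05, seed=0):
    """Illustrative iterative layout: words with high co-occurrence drift
    together, a weak repulsion term keeps the map from collapsing.
    Not the specific iteration model of the article."""
    rng = np.random.default_rng(seed)
    n = cooc.shape[0]
    w = cooc / cooc.max()                 # normalised co-occurrence strengths
    pos = rng.uniform(-1.0, 1.0, (n, 2))  # random starting positions
    for _ in range(iters):
        for i in range(n):
            diff = pos - pos[i]                                      # vectors to every other word
            dist = np.linalg.norm(diff, axis=1) + 1e-9
            attract = (w[i][:, None] * diff).sum(axis=0)             # pull toward connected words
            repel = -0.01 * (diff / dist[:, None] ** 2).sum(axis=0)  # gentle push apart
            pos[i] = pos[i] + step * (attract + repel)
    return pos

# Toy 4-term co-occurrence matrix
cooc = np.array([[0, 5, 1, 0],
                 [5, 0, 2, 0],
                 [1, 2, 0, 4],
                 [0, 0, 4, 0]], dtype=float)
print(coword_layout(cooc).round(2))
```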
  19. Petersen, A.; Münch, V.: STN® AnaVist(TM) holt verborgenes Wissen aus Recherche-Ergebnissen : Neue Software analysiert und visualisiert Marktaufteilung, Forschung und Patentaktivitäten (2005) 0.02
    0.017994521 = product of:
      0.035989042 = sum of:
        0.035989042 = product of:
          0.071978085 = sum of:
            0.071978085 = weight(_text_:software in 3984) [ClassicSimilarity], result of:
              0.071978085 = score(doc=3984,freq=8.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.35064998 = fieldWeight in 3984, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3984)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    "Im 21. Jahrhundert ist die entscheidende Herausforderung an Informationsdienstleister nicht, Informationen zugänglich, sondern sie optimal nutzbar zu machen", sagt Sabine Brünger-Weilandt, Geschäftsführerin von FIZ Karlsruhe, das den Online-Dienst STN International in internationaler Kooperation betreibt. Informationsprofis, so Brünger-Weilandt weiter, bräuchten hockentwickelte Software für strategisches Informationsmanagement. Als "Antwort auf diesen Bedarf" hat STN International eine neue Software zur Analyse und Visualisierung (A&V) von Rechercheergebnissen aus STN-Datenbanken entwickelt. STN® AnaVistT(TM) wurde auf der DGI Online-Tagung Ende Mai in Frankfurt am Main und auf Benutzertreffen in Frankfurt am Main, München und Essen vorgestellt. Seit 18. Juli 2005 ist das neue A&V-Werkzeug für die öffentliche Nutzung freigegeben (www.stn-international.de).
    Die wichtigsten Funktionen von STN AnaVist sind: - Inhalte aus mehreren Datenbanken sind gleichzeitig auswertbar - Nutzer können Daten aus unterschiedlichen Ouellen suchen, analysieren und visualisieren, u.a. aus der Chemiedatenbank CAplusSM, der Patentdatenbank PCTFULL, und US-amerikanischen Volltextdatenbanken. - Einzigartige Beziehungen zwischen Datenelementen-nur STN AnaVist bietet die Möglichkeit, Beziehungen zwischen sieben unterschiedlichen Feldern aus Datenbankdokumenten - z.B., Firmen, Erfindern, Veröffentlichungsjahren und Konzepten-darzustellen. - Gruppierung und Bereinigung von Daten - vor der Analyse werden Firmen und ihre unterschiedlichen Namensvarianten von einem "Company Name Thesaurus" zusammengefasst. - Konzept-Standardisierung - Durch das CAS-Vokabular werden Fachbegriffe datenbankübergreifend standardisiert, so dass weniger Streuung auftritt. - Interaktive Präsentation der Beziehungen zwischen Daten und Diagrammenwährend der Auswertung können Daten zum besseren Erkennen der Beziehungen farblich hervorgehoben werden. - Flexible Erstellung der auszuwertenden Rechercheergebnisse - Rechercheergebnisse, die als Ausgangsdatensatz für die Analyse verwendet werden sollen, können auf zwei Arten gewonnen werden: zum einen über die in STN® AnaVist(TM) integrierte Konzept-Suchfunktion, zum anderen durch problemlose Übernahme von Suchergebnissen aus der bewährten Software STN Express® with Discover! TM Analysis Edition, Version 8.0
  20. Thelwall, M.: Web indicators for research evaluation : a practical guide (2016) 0.02
    0.015905062 = product of:
      0.031810123 = sum of:
        0.031810123 = product of:
          0.06362025 = sum of:
            0.06362025 = weight(_text_:software in 3384) [ClassicSimilarity], result of:
              0.06362025 = score(doc=3384,freq=4.0), product of:
                0.20527047 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.051742528 = queryNorm
                0.30993375 = fieldWeight in 3384, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3384)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment, and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics, as well as for Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.

Languages

  • e 130
  • d 9
  • ro 1

Types

  • a 136
  • m 4
  • el 2
  • s 1