Search (126 results, page 1 of 7)

  • language_ss:"e"
  • theme_ss:"Informetrie"
  • type_ss:"a"
  1. Coulter, N.; Monarch, I.; Konda, S.: Software engineering as seen through its research literature : a study in co-word analysis (1998) 0.01
    0.013708099 = product of:
      0.054832395 = sum of:
        0.054832395 = product of:
          0.10966479 = sum of:
            0.10966479 = weight(_text_:software in 2161) [ClassicSimilarity], result of:
              0.10966479 = score(doc=2161,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.6073436 = fieldWeight in 2161, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2161)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     This empirical research demonstrates the effectiveness of content analysis for mapping the research literature of the software engineering discipline. The results suggest that certain research themes in software engineering have remained constant, but with changing thrusts.
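The relevance figure attached to each hit above is a Lucene ClassicSimilarity explain tree (queryWeight × fieldWeight, scaled by coordination factors). A minimal sketch of that arithmetic, using the numbers from the first entry; the function name is ours, not Lucene's:

```python
from math import sqrt

def classic_similarity_term_score(freq, idf, query_norm, field_norm):
    """Recompute one term's ClassicSimilarity score as shown in the explain trees."""
    tf = sqrt(freq)                       # 2.4494898 for freq=6
    query_weight = idf * query_norm       # ~0.18056466
    field_weight = tf * idf * field_norm  # ~0.6073436
    return query_weight * field_weight

# Values from the first entry (doc 2161): freq=6, idf=3.9671519,
# queryNorm=0.045514934, fieldNorm=0.0625.
term_score = classic_similarity_term_score(6.0, 3.9671519, 0.045514934, 0.0625)
print(term_score)               # ~0.10966479
# The outer coord(1/2) and coord(1/4) factors yield the listed document score:
print(term_score * 0.5 * 0.25)  # ~0.013708099
```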
  2. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: Science mapping software tools : review, analysis, and cooperative study among tools (2011) 0.01
    0.013708099 = product of:
      0.054832395 = sum of:
        0.054832395 = product of:
          0.10966479 = sum of:
            0.10966479 = weight(_text_:software in 4486) [ClassicSimilarity], result of:
              0.10966479 = score(doc=4486,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.6073436 = fieldWeight in 4486, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4486)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Science mapping aims to build bibliometric maps that describe how specific disciplines, scientific domains, or research fields are conceptually, intellectually, and socially structured. Different techniques and software tools have been proposed to carry out science mapping analysis. The aim of this article is to review, analyze, and compare some of these software tools, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
  3. Ravichandra Rao, I.K.; Sahoo, B.B.: Studies and research in informetrics at the Documentation Research and Training Centre (DRTC), ISI Bangalore (2006) 0.01
    0.01327281 = product of:
      0.05309124 = sum of:
        0.05309124 = product of:
          0.10618248 = sum of:
            0.10618248 = weight(_text_:software in 1512) [ClassicSimilarity], result of:
              0.10618248 = score(doc=1512,freq=10.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.58805794 = fieldWeight in 1512, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1512)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Contributions of DRTC to informetric studies and research are discussed. A report on recent work, a quantitative country-wise analysis of the software literature based on data from two bibliographic databases (COMPENDEX and INSPEC), is presented. The number of countries in the most productive group involved in software R&D activities is increasing. The research contribution on software is decreasing in developed countries compared with developing and less developed countries. India's contribution is only 1.1% and has remained constant over the 12-year period 1989-2001. The number of countries involved in software R&D activities increased during the 1990s. It is also noted that the higher the budget for higher education, the higher the number of publications; and the higher the number of publications, the higher both the export and the domestic consumption of software.
  4. Marion, L.S.; McCain, K.W.: Contrasting views of software engineering journals : author cocitation choices and indexer vocabulary assignments (2001) 0.01
    0.0130871665 = product of:
      0.052348666 = sum of:
        0.052348666 = product of:
          0.10469733 = sum of:
            0.10469733 = weight(_text_:software in 5767) [ClassicSimilarity], result of:
              0.10469733 = score(doc=5767,freq=14.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.5798329 = fieldWeight in 5767, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5767)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     We explore the intellectual subject structure and research themes in software engineering through the identification and analysis of a core journal literature. We examine this literature via two expert perspectives: that of the author, who identified significant work by citing it (journal cocitation analysis), and that of the professional indexer, who tags published work with subject terms to facilitate retrieval from a bibliographic database (subject profile analysis). The data sources are SCISEARCH (the online version of Science Citation Index) and INSPEC (a database covering software engineering, computer science, and information systems). We use data visualization tools (cluster analysis, multidimensional scaling, and PFNets) to show the "intellectual maps" of software engineering. Cocitation and subject profile analyses demonstrate that software engineering is a distinct interdisciplinary field, valuing practical and applied aspects, and spanning a subject continuum from "programming-in-the-small" to "programming-in-the-large." This continuum mirrors the software development life cycle by taking the operating system or major application from initial programming through project management, implementation, and maintenance. Object orientation is an integral but distinct subject area in software engineering. Key differences concern the importance of management and programming: (1) cocitation analysis emphasizes project management and systems development; (2) programming techniques/languages are more influential in subject profiles; (3) cocitation profiles place object-oriented journals separately and centrally, while the subject profile analysis locates these journals with the programming/languages group.
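A rough sketch of the journal-cocitation mapping step described above: build a symmetric cocitation matrix, convert it to dissimilarities, and project it into two dimensions with multidimensional scaling. The journal names and counts below are invented for illustration, not data from the study.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical journal cocitation counts (how often pairs of journals are
# cited together); real studies derive this from citation databases.
journals = ["IEEE TSE", "CACM", "JSS", "TOPLAS"]
cocitation = np.array([
    [ 0, 40, 35, 10],
    [40,  0, 25, 20],
    [35, 25,  0,  5],
    [10, 20,  5,  0],
], dtype=float)

# Higher cocitation = more similar, so invert counts into dissimilarities.
dissimilarity = cocitation.max() - cocitation
np.fill_diagonal(dissimilarity, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for name, (x, y) in zip(journals, coords):
    print(f"{name:8s} {x:7.2f} {y:7.2f}")
```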
  5. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.01
    0.012333291 = product of:
      0.049333163 = sum of:
        0.049333163 = product of:
          0.098666325 = sum of:
            0.098666325 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
              0.098666325 = score(doc=5509,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.61904186 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5509)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986), S.417-419
  6. Nicolaisen, J.: Citation analysis (2007) 0.01
    0.012333291 = product of:
      0.049333163 = sum of:
        0.049333163 = product of:
          0.098666325 = sum of:
            0.098666325 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.098666325 = score(doc=6091,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:53:22
  7. Fiala, J.: Information flood : fiction and reality (1987) 0.01
    0.012333291 = product of:
      0.049333163 = sum of:
        0.049333163 = product of:
          0.098666325 = sum of:
            0.098666325 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.098666325 = score(doc=1080,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.61904186 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Thermochimica acta. 110(1987), S.11-22
  8. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: SciMAT: A new science mapping analysis software tool (2012) 0.01
    0.011994586 = product of:
      0.047978345 = sum of:
        0.047978345 = product of:
          0.09595669 = sum of:
            0.09595669 = weight(_text_:software in 373) [ClassicSimilarity], result of:
              0.09595669 = score(doc=373,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.53142565 = fieldWeight in 373, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=373)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     This article presents a new open-source software tool, SciMAT, which performs science mapping analysis within a longitudinal framework. It provides different modules that help the analyst carry out all the steps of the science mapping workflow. In addition, SciMAT presents three key features that are remarkable with respect to other science mapping software tools: (a) a powerful preprocessing module to clean the raw bibliographical data, (b) the use of bibliometric measures to study the impact of each studied element, and (c) a wizard to configure the analysis.
  9. Su, Y.; Han, L.-F.: A new literature growth model : variable exponential growth law of literature (1998) 0.01
    0.010901191 = product of:
      0.043604765 = sum of:
        0.043604765 = product of:
          0.08720953 = sum of:
            0.08720953 = weight(_text_:22 in 3690) [ClassicSimilarity], result of:
              0.08720953 = score(doc=3690,freq=4.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.54716086 = fieldWeight in 3690, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3690)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 5.1999 19:22:35
  10. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.01
    0.010901191 = product of:
      0.043604765 = sum of:
        0.043604765 = product of:
          0.08720953 = sum of:
            0.08720953 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.08720953 = score(doc=3925,freq=4.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 15:22:28
  11. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.01
    0.01079163 = product of:
      0.04316652 = sum of:
        0.04316652 = product of:
          0.08633304 = sum of:
            0.08633304 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
              0.08633304 = score(doc=6902,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.5416616 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:55:29
  12. Bookstein, A.: Informetric distributions : II. Resilience to ambiguity (1990) 0.01
    0.01079163 = product of:
      0.04316652 = sum of:
        0.04316652 = product of:
          0.08633304 = sum of:
            0.08633304 = weight(_text_:22 in 4689) [ClassicSimilarity], result of:
              0.08633304 = score(doc=4689,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.5416616 = fieldWeight in 4689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4689)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:55:55
  13. Newby, G.B.; Greenberg, J.; Jones, P.: Open source software development and Lotka's law : bibliometric patterns in programming (2003) 0.01
    0.010281074 = product of:
      0.041124295 = sum of:
        0.041124295 = product of:
          0.08224859 = sum of:
            0.08224859 = weight(_text_:software in 5140) [ClassicSimilarity], result of:
              0.08224859 = score(doc=5140,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.4555077 = fieldWeight in 5140, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5140)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Newby, Greenberg, and Jones analyze the programming productivity of open source software by counting registered developers' contributions found in the Linux Software Map (LSM) and in SourceForge. Using seven years of data from a subset of the Linux directory tree, the LSM data provided 4503 files with 3341 unique author names. The distribution follows Lotka's law with an exponent of 2.82, as verified by the Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test. The SourceForge data are broken into developers and administrators, but when both are counted as authors, a Lotka distribution exponent of 2.55 produces the lowest error. This would not be significant by the K-S test, but the 3.54% maximum error indicates a fit and calls into question the appropriateness of the K-S test for large populations of authors.
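A minimal sketch of the kind of fit described above, under invented data: estimate a Lotka exponent from per-author contribution counts with a log-log least-squares fit, then compare the observed and predicted distributions with a Kolmogorov-Smirnov statistic. The study's own data came from the Linux Software Map and SourceForge; the counts below are simulated.

```python
import numpy as np

# Hypothetical per-author contribution counts (Zipf-like, as in Lotka's law).
rng = np.random.default_rng(0)
contributions = rng.zipf(a=2.5, size=3000)

# Observed proportion of authors with exactly x contributions.
values, counts = np.unique(contributions, return_counts=True)
observed = counts / counts.sum()

# Lotka's law: f(x) = C / x**n.  Estimate n on the log-log scale, then choose
# C so the predicted proportions sum to one over the observed support.
n = -np.polyfit(np.log(values), np.log(observed), 1)[0]
x = values.astype(float)
C = 1.0 / np.sum(1.0 / x**n)
predicted = C / x**n

# Kolmogorov-Smirnov statistic: largest gap between the cumulative distributions.
ks = np.max(np.abs(np.cumsum(observed) - np.cumsum(predicted)))
print(f"estimated exponent n = {n:.2f}, K-S statistic = {ks:.4f}")
```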
  14. Garfield, E.; Paris, S.W.; Stock, W.G.: HistCite(TM) : a software tool for informetric analysis of citation linkage (2006) 0.01
    0.009793539 = product of:
      0.039174154 = sum of:
        0.039174154 = product of:
          0.07834831 = sum of:
            0.07834831 = weight(_text_:software in 79) [ClassicSimilarity], result of:
              0.07834831 = score(doc=79,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.43390724 = fieldWeight in 79, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     HistCite(TM) is a software tool for analyzing and visualizing direct citation linkages between scientific papers. Its inputs are bibliographic records (with cited references) from "Web of Knowledge" or other sources. Its outputs are various tables and graphs with informetric indicators about the knowledge domain under study. As an example, we informetrically analyze the literature about Alexius Meinong, an Austrian philosopher and psychologist. The article briefly discusses the informetric functionality of "Web of Knowledge" and broadly shows the possibilities that HistCite offers to its users (e.g. scientists, scientometricians, and science journalists).
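The central operation such a tool performs, linking each record to the cited references that are themselves part of the analyzed record set, can be illustrated in a few lines. The record keys and reference lists below are hypothetical, and this is not HistCite's actual data format.

```python
# Hypothetical bibliographic records: identifier -> list of cited references.
records = {
    "smith1995": ["jones1990"],
    "jones1990": [],
    "lee2001":   ["smith1995", "jones1990", "external2000"],
}

# Direct citation linkage: keep only links whose target is inside the set.
edges = [
    (citing, cited)
    for citing, refs in records.items()
    for cited in refs
    if cited in records
]

# A simple informetric indicator: how often each record is cited within the set.
local_citations = {doc: 0 for doc in records}
for _, cited in edges:
    local_citations[cited] += 1

print(edges)            # [('smith1995', 'jones1990'), ('lee2001', 'smith1995'), ...]
print(local_citations)  # {'smith1995': 1, 'jones1990': 2, 'lee2001': 0}
```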
  15. Lewison, G.: The work of the Bibliometrics Research Group (City University) and associates (2005) 0.01
    0.0092499675 = product of:
      0.03699987 = sum of:
        0.03699987 = product of:
          0.07399974 = sum of:
            0.07399974 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
              0.07399974 = score(doc=4890,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.46428138 = fieldWeight in 4890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4890)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 1.2007 17:02:22
  16. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.01
    0.0092499675 = product of:
      0.03699987 = sum of:
        0.03699987 = product of:
          0.07399974 = sum of:
            0.07399974 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.07399974 = score(doc=1239,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    18. 3.2014 19:13:22
  17. Kopcsa, A.; Schiebel, E.: Science and technology mapping : a new iteration model for representing multidimensional relationships (1998) 0.01
    0.008394462 = product of:
      0.03357785 = sum of:
        0.03357785 = product of:
          0.0671557 = sum of:
            0.0671557 = weight(_text_:software in 326) [ClassicSimilarity], result of:
              0.0671557 = score(doc=326,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.3719205 = fieldWeight in 326, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=326)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Much effort has been devoted to developing more objective quantitative methods for analyzing and integrating survey information in order to understand research trends and research structures. Co-word analysis is one class of techniques that exploits the co-occurrence of items in written information. However, there are some bottlenecks in using statistical methods to produce mappings of reduced information in a convenient manner. On the one hand, commonly used statistical software for PCs restricts the amount of data that can be processed; on the other hand, the results of the multidimensional scaling routines are not quite satisfying. Therefore, this article introduces a new iteration model for the calculation of co-word maps that eases the problem. The iteration model positions the words in the two-dimensional plane according to their connections to each other, and it consists of a quick and stable algorithm that has been implemented as software for personal computers. A graphic module represents the data in well-known 'technology maps'.
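As a rough illustration of positioning words in the plane according to their connections, the sketch below runs a generic stress-minimising iteration over an invented co-occurrence matrix; it is not the authors' specific model.

```python
import numpy as np

# Hypothetical co-word co-occurrence counts for five terms.
words = ["mapping", "co-word", "software", "citation", "index"]
co = np.array([
    [0, 5, 2, 1, 0],
    [5, 0, 3, 0, 1],
    [2, 3, 0, 1, 0],
    [1, 0, 1, 0, 4],
    [0, 1, 0, 4, 0],
], dtype=float)

# Target distances: frequently co-occurring words should end up close together.
target = 1.0 / (1.0 + co)
np.fill_diagonal(target, 0.0)

rng = np.random.default_rng(1)
pos = rng.standard_normal((len(words), 2))

# Iteratively nudge every pair toward its target distance (crude stress descent).
for _ in range(500):
    diff = pos[:, None, :] - pos[None, :, :]           # pairwise difference vectors
    dist = np.linalg.norm(diff, axis=-1)
    factor = (dist - target) / np.maximum(dist, 1e-2)  # positive = too far apart
    np.fill_diagonal(factor, 0.0)
    pos -= 0.05 * (factor[:, :, None] * diff).sum(axis=1)

for w, (x, y) in zip(words, pos):
    print(f"{w:10s} {x:6.2f} {y:6.2f}")
```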
  18. Small, H.: Update on science mapping : creating large document spaces (1997) 0.01
    0.006925077 = product of:
      0.027700309 = sum of:
        0.027700309 = product of:
          0.055400617 = sum of:
            0.055400617 = weight(_text_:software in 410) [ClassicSimilarity], result of:
              0.055400617 = score(doc=410,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30681872 = fieldWeight in 410, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=410)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Science mapping projects have been revived by the advent of virtual reality (VR) software capable of navigating large synthetic three-dimensional spaces. Unlike earlier mapping efforts aimed at creating simple maps at either a global or local level, the focus is now on creating large-scale maps displaying many thousands of documents which can be input into the new VR systems. Presents a general framework for creating large-scale document spaces as well as some new methods which perform some of the individual processing steps. The methods are designed primarily for citation data but could be applied to other types of data, including hypertext links.
  19. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.01
    0.006925077 = product of:
      0.027700309 = sum of:
        0.027700309 = product of:
          0.055400617 = sum of:
            0.055400617 = weight(_text_:software in 5719) [ClassicSimilarity], result of:
              0.055400617 = score(doc=5719,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.30681872 = fieldWeight in 5719, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5719)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus, and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of the citation networks embedded in ordinary complex Boolean searches.
  20. Raan, A.F.J. van: Statistical properties of bibliometric indicators : research group indicator distributions and correlations (2006) 0.01
    0.0065407157 = product of:
      0.026162863 = sum of:
        0.026162863 = product of:
          0.052325726 = sum of:
            0.052325726 = weight(_text_:22 in 5275) [ClassicSimilarity], result of:
              0.052325726 = score(doc=5275,freq=4.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.32829654 = fieldWeight in 5275, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5275)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 16:20:22
