Search (59 results, page 1 of 3)

  • language_ss:"e"
  • theme_ss:"Informetrie"
  • year_i:[1990 TO 2000}
  1. Kreider, J.: ¬The correlation of local citation data with citation data from Journal Citation Reports (1999) 0.05
    0.053411096 = product of:
      0.10682219 = sum of:
        0.08778879 = weight(_text_:data in 102) [ClassicSimilarity], result of:
          0.08778879 = score(doc=102,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5928845 = fieldWeight in 102, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=102)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 102) [ClassicSimilarity], result of:
              0.038066804 = score(doc=102,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=102)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
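The explain tree above follows Lucene's ClassicSimilarity scoring. As a cross-check, a minimal sketch (assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); all constants are taken from the tree itself) reproduces the displayed numbers:

```python
import math

# Reproduce the ClassicSimilarity arithmetic for hit 1
# (clause "weight(_text_:data in 102)"), using the constants
# shown in the explain tree above.

def tf(freq):
    # term-frequency factor: sqrt of the raw in-document frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.046827413
field_norm = 0.046875                      # fieldNorm(doc=102)

idf_data = idf(5088, 44218)                # ≈ 3.1620505
query_weight = idf_data * query_norm       # ≈ 0.14807065
field_weight = tf(16.0) * idf_data * field_norm  # ≈ 0.5928845
data_clause = query_weight * field_weight  # ≈ 0.08778879

# The "_text_:22" clause contributes 0.038066804, halved by its
# inner coord(1/2); the outer sum is then halved by coord(2/4).
score = (data_clause + 0.038066804 * 0.5) * 0.5
# score ≈ 0.053411096, i.e. the displayed 0.05 after rounding
```

The same decomposition applies to every explain tree on this page; only the frequencies, field norms, and coord factors differ per hit.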
    
    Abstract
    University librarians continue to face the difficult task of determining which journals remain crucial for their collections during these times of static financial resources and escalating journal costs. One evaluative tool, Journal Citation Reports (JCR), recently has become available on CD-ROM, making it simpler for librarians to use its citation data as input for ranking journals. But many librarians remain unconvinced that the global citation data from the JCR bears enough correspondence to their local situation to be useful. In this project, I explore the correlation between global citation data available from JCR and local citation data generated specifically for the University of British Columbia, for 20 subject fields in the sciences and social sciences. The significant correlations obtained in this study suggest that large research-oriented university libraries could consider substituting global citation data for local citation data when evaluating their journals, with certain cautions.
    Date
    10. 9.2000 17:38:22
  2. Small, H.: Update on science mapping : creating large document spaces (1997) 0.04
    0.040442396 = product of:
      0.08088479 = sum of:
        0.051210128 = weight(_text_:data in 410) [ClassicSimilarity], result of:
          0.051210128 = score(doc=410,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 410, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=410)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 410) [ClassicSimilarity], result of:
              0.05934933 = score(doc=410,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 410, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=410)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Science mapping projects have been revived by the advent of virtual reality (VR) software capable of navigating large synthetic 3-dimensional spaces. Unlike the earlier mapping efforts aimed at creating simple maps at either a global or local level, the focus is now on creating large scale maps displaying many thousands of documents which can be input into the new VR systems. Presents a general framework for creating large scale document spaces as well as some new methods which perform some of the individual processing steps. The methods are designed primarily for citation data but could be applied to other types of data, including hypertext links
  3. Falkingham, L.T.; Reeves, R.: Context analysis : a technique for analysing research in a field, applied to literature on the management of R&D at the section level (1998) 0.03
    0.029208332 = product of:
      0.058416665 = sum of:
        0.036211025 = weight(_text_:data in 3689) [ClassicSimilarity], result of:
          0.036211025 = score(doc=3689,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 3689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3689)
        0.022205638 = product of:
          0.044411276 = sum of:
            0.044411276 = weight(_text_:22 in 3689) [ClassicSimilarity], result of:
              0.044411276 = score(doc=3689,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.2708308 = fieldWeight in 3689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3689)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Context analysis is a new method for appraising a body of publications. The process consists of creating a database of attributes assigned to each paper by the reviewer and then looking for interesting relationships in the data. Assigning the attributes requires an understanding of the subject matter of the papers. Presents findings about one particular research field, Management of R&D at the Section Level. The findings support the view that this body of academic publications does not meet the needs of practitioner R&D managers. Discusses practical aspects of how to apply the method in other fields
    Date
    22. 5.1999 19:18:46
  4. Tonta, Y.: Scholarly communication and the use of networked information sources (1996) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 6389) [ClassicSimilarity], result of:
          0.031038022 = score(doc=6389,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 6389, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=6389)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 6389) [ClassicSimilarity], result of:
              0.038066804 = score(doc=6389,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 6389, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6389)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Examines the use of networked information sources in scholarly communication. Networked information sources are defined broadly to cover: documents and images stored on electronic network hosts; data files; newsgroups; listservs; online information services and electronic periodicals. Reports results of a survey to determine how heavily, if at all, networked information sources are cited in scholarly printed periodicals published in 1993 and 1994. 27 printed periodicals, representing a wide range of subjects and the most influential periodicals in their fields, were identified through the Science Citation Index and Social Science Citation Index Journal Citation Reports. 97 articles were selected for further review and references, footnotes and bibliographies were checked for references to networked information sources. Only 2 articles were found to contain such references. Concludes that, although networked information sources facilitate scholars' work to a great extent during the research process, scholars have yet to incorporate such sources in the bibliographies of their published articles
    Source
    IFLA journal. 22(1996) no.3, S.240-245
  5. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 7659) [ClassicSimilarity], result of:
          0.031038022 = score(doc=7659,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 7659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7659)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
              0.038066804 = score(doc=7659,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 7659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7659)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  6. Schwens, U.: Feasibility of exploiting bibliometric data in European national bibliographic databases (1999) 0.02
    0.018105512 = product of:
      0.07242205 = sum of:
        0.07242205 = weight(_text_:data in 3792) [ClassicSimilarity], result of:
          0.07242205 = score(doc=3792,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 3792, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=3792)
      0.25 = coord(1/4)
    
  7. Leydesdorff, L.: ¬The generation of aggregated journal-journal citation maps on the basis of the CD-ROM version of the Science Citation Index (1994) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 8281) [ClassicSimilarity], result of:
          0.06271934 = score(doc=8281,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 8281, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8281)
      0.25 = coord(1/4)
    
    Abstract
    Describes a method for the generation of journal-journal citation maps on the basis of the CD-ROM version of the Science Citation Index. Discusses sources of potential error from this data. Offers strategies to counteract such errors. Analyzes a number of scientometric periodical mappings in relation to mappings from previous studies which have used tape data and/or data from ISI's Journal Citation Reports. Compares the quality of these mappings with the quality of those for previous years in order to demonstrate the use of such mappings as indicators for dynamic developments in the sciences
  8. Lardy, J.P.; Herzhaft, L.: Bibliometric treatments according to bibliographic errors and data heterogenity : the end-user point of view (1992) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 5064) [ClassicSimilarity], result of:
          0.06271934 = score(doc=5064,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 5064, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5064)
      0.25 = coord(1/4)
    
    Abstract
    The quality of online and CD-ROM databases is far from satisfactory. Errors are frequently found in listings from online searches. Spelling mistakes are the most common, but there are also more misleading errors such as variations of an author's name or absence of homogeneity in the content of certain fields. Briefly describes a bibliometric study of large amounts of data downloaded from databases to investigate bibliographic errors and data heterogeneity. Recommends that database producers should consider either the implementation of a common format or the recommendations of the Société Française de Bibliométrie
  9. Heine, M.M.: Bradford ranking conventions and their application to a growing literature (1998) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 1069) [ClassicSimilarity], result of:
          0.06271934 = score(doc=1069,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 1069, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1069)
      0.25 = coord(1/4)
    
    Abstract
    Bradford distributions describe the relationship between 'journal productivities' and 'journal rankings by productivity'. However, different ranking conventions exist, implying some ambiguity as to what the Bradford distribution 'is'. A need accordingly arises for a standard ranking convention to assist comparisons between empirical data, and also comparisons between empirical data and theoretical models. Five ranking conventions are described including the one used originally by Bradford, along with suggested distinctions between 'Bradford data set', 'Bradford distribution', 'Bradford graph', 'Bradford model', and 'Bradford's law'. Constructions such as the Lotka distribution, Groos droop (generalised to accommodate growth as well as fall-off in the Bradford log-graph), Brookes hooks, and the slope and intercept of the Bradford log graph are clarified on this basis
  10. Suraud, M.G.; Quoniam, L.; Rostaing, H.; Dou, H.: On the significance of data bases keywords for a large scale bibliometric investigation in fundamental physics (1995) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 6094) [ClassicSimilarity], result of:
          0.062076043 = score(doc=6094,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 6094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=6094)
      0.25 = coord(1/4)
    
  11. Schwartz, C.A.: ¬The rise and fall of uncitedness (1997) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 7658) [ClassicSimilarity], result of:
          0.062076043 = score(doc=7658,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 7658, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7658)
      0.25 = coord(1/4)
    
    Abstract
    Large scale uncitedness refers to the significant proportion of articles that do not receive a single citation within 5 years of publication. Notes the brief and troubled history of this area of inquiry, which was prone to miscalculation, misinterpretation, and politicization. Reassesses large scale uncitedness as both a general phenomenon in the scholarly communication system (with data for the physical sciences, social sciences and humanities) and a case study of library and information science, where its rate was reported to be 72%. The study was in 4 parts: examination of the problem of disaggregation in the study of uncitedness; review of the reaction of the popular press and scholars to uncitedness; a case study of uncitedness in C&RL; and a brief summary with suggestions for further research. Data disaggregation was found to be essential in interpreting citation data from tools such as Science Citation Index, Arts and Humanities Citation Index and Social Sciences Citation Index; which do not differentiate between articles and marginal materials (book reviews, letters, obituaries). Stresses the dangers of conclusions from uncitedness data
  12. Moed, H.F.: Differences in the construction of SCI based bibliometric indicators among various producers : a first overview (1996) 0.01
    0.014631464 = product of:
      0.058525857 = sum of:
        0.058525857 = weight(_text_:data in 5073) [ClassicSimilarity], result of:
          0.058525857 = score(doc=5073,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3952563 = fieldWeight in 5073, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=5073)
      0.25 = coord(1/4)
    
    Abstract
    Discusses basic technical and methodological issues with respect to data collection and the construction of bibliometric indicators, particularly at the macro or meso level. Focuses on the use of the Science Citation Index. Aims to highlight important decisions that have to be made in the process of data collection and the construction of bibliometric indicators. Illustrates differences in the methodologies applied by several important producers of bibliometric indicators, thus illustrating the complexity of the process of 'standardization'
  13. Gupta, B.M.; Sharma, P.; Karisiddappa, C.R.: Growth of research literature in scientific specialities : a modelling perspective (1997) 0.01
    0.014631464 = product of:
      0.058525857 = sum of:
        0.058525857 = weight(_text_:data in 1040) [ClassicSimilarity], result of:
          0.058525857 = score(doc=1040,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3952563 = fieldWeight in 1040, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=1040)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the application of 3 well known diffusion models and their modified versions to the growth of publication data in 4 selected fields of science and technology. It is observed that all 3 models in their modified versions generally improve their performance in terms of parameter values, fit statistics, and graphical fit to the data. The most appropriate model is generally seen to be the modified exponential-logistic model
  14. Debackere, K.; Clarysse, B.: Advanced bibliometric methods to model the relationship between entry behavior and networking in emerging technological communities (1998) 0.01
    0.012802532 = product of:
      0.051210128 = sum of:
        0.051210128 = weight(_text_:data in 330) [ClassicSimilarity], result of:
          0.051210128 = score(doc=330,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 330, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=330)
      0.25 = coord(1/4)
    
    Abstract
    Organizational ecology and social network theory are used to explain entries in technological communities. Using bibliometric data on 411 organizations in the field of plant biotechnology, we test several hypotheses that entry is not only influenced by the density of the field, but also by the structure of the R&D network within the community. The empirical findings point to the usefulness of bibliometric data in mapping change and evolution in technological communities, as well as to the effects of networking on entry behavior
  15. Su, Y.; Han, L.-F.: ¬A new literature growth model : variable exponential growth law of literature (1998) 0.01
    0.011215541 = product of:
      0.044862162 = sum of:
        0.044862162 = product of:
          0.089724325 = sum of:
            0.089724325 = weight(_text_:22 in 3690) [ClassicSimilarity], result of:
              0.089724325 = score(doc=3690,freq=4.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.54716086 = fieldWeight in 3690, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3690)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 5.1999 19:22:35
  16. Diodato, V.: Dictionary of bibliometrics (1994) 0.01
    0.011102819 = product of:
      0.044411276 = sum of:
        0.044411276 = product of:
          0.08882255 = sum of:
            0.08882255 = weight(_text_:22 in 5666) [ClassicSimilarity], result of:
              0.08882255 = score(doc=5666,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.5416616 = fieldWeight in 5666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5666)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Journal of library and information science 22(1996) no.2, S.116-117 (L.C. Smith)
  17. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.01
    0.011102819 = product of:
      0.044411276 = sum of:
        0.044411276 = product of:
          0.08882255 = sum of:
            0.08882255 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
              0.08882255 = score(doc=6902,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.5416616 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:55:29
  18. Bookstein, A.: Informetric distributions : II. Resilience to ambiguity (1990) 0.01
    0.011102819 = product of:
      0.044411276 = sum of:
        0.044411276 = product of:
          0.08882255 = sum of:
            0.08882255 = weight(_text_:22 in 4689) [ClassicSimilarity], result of:
              0.08882255 = score(doc=4689,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.5416616 = fieldWeight in 4689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4689)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:55:55
  19. Moed, H.F.; Bruin, R.E.D.; Leeuwen, T.N.V.: New bibliometric tools for the assessment of national research performance : database description, overview of indicators and first applications (1995) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 3376) [ClassicSimilarity], result of:
          0.043894395 = score(doc=3376,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 3376, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3376)
      0.25 = coord(1/4)
    
    Abstract
    Gives an outline of a new bibliometric database based upon all articles published by authors from the Netherlands and processed during 1980-1993 by ISI for the SCI, SSCI and AHCI. Describes various types of information added to the database: data on articles citing the Dutch publications; detailed citation data on ISI journals and subfields; and a classification system of the main publishing organizations. Also gives an overview of the types of bibliometric indicators constructed, and discusses their relationship to indicators developed by other researchers in the field. Gives 2 applications to illustrate the potentials of the database and of the bibliometric indicators derived from it: one that represents a synthesis of 'classical' macro indicator studies on the one hand and bibliometric analyses of research groups on the other; and a second that gives for the first time a detailed analysis of a country's publications per institutional sector
  20. Kopcsa, A.; Schiebel, E.: Science and technology mapping : a new iteration model for representing multidimensional relationships (1998) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 326) [ClassicSimilarity], result of:
          0.043894395 = score(doc=326,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 326, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=326)
      0.25 = coord(1/4)
    
    Abstract
    Much effort has been made to develop more objective quantitative methods to analyze and integrate survey information for understanding research trends and research structures. Co-word analysis is one class of techniques that exploits the use of co-occurrences of items in written information. However, there are some bottlenecks in using statistical methods to produce mappings of reduced information in a comfortable manner. On the one hand, commonly used statistical software for PCs restricts the amount of calculable data; on the other hand, the results of the multidimensional scaling routines are not quite satisfying. Therefore, this article introduces a new iteration model for the calculation of co-word maps that eases the problem. The iteration model positions the words in the two-dimensional plane according to their connections to each other, and it consists of a quick and stable algorithm that has been implemented in software for personal computers. A graphic module represents the data in well-known 'technology maps'