Search (184 results, page 1 of 10)

  • × theme_ss:"Informetrie"
  • × year_i:[1990 TO 2000}
  1. Small, H.: Update on science mapping : creating large document spaces (1997) 0.03
    0.030956313 = product of:
      0.09286894 = sum of:
        0.016567415 = weight(_text_:of in 410) [ClassicSimilarity], result of:
          0.016567415 = score(doc=410,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2704316 = fieldWeight in 410, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=410)
        0.028615767 = weight(_text_:systems in 410) [ClassicSimilarity], result of:
          0.028615767 = score(doc=410,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=410)
        0.047685754 = weight(_text_:software in 410) [ClassicSimilarity], result of:
          0.047685754 = score(doc=410,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=410)
      0.33333334 = coord(3/9)
    
    Abstract
    Science mapping projects have been revived by the advent of virtual reality (VR) software capable of navigating large synthetic three-dimensional spaces. Unlike the earlier mapping efforts aimed at creating simple maps at either a global or local level, the focus is now on creating large-scale maps displaying many thousands of documents which can be input into the new VR systems. Presents a general framework for creating large-scale document spaces as well as some new methods which perform some of the individual processing steps. The methods are designed primarily for citation data but could be applied to other types of data, including hypertext links
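    The indented breakdown above each result is Lucene's score explanation for ClassicSimilarity (TF-IDF). As a hedged illustration, the sketch below recomputes one clause of document 410 from the factors listed in that tree; the helper name and layout are ours, not Lucene's API.
```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one term clause as Lucene's ClassicSimilarity does:
    queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm,
    clause score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
    return (idf * query_norm) * (tf * idf * field_norm)

# The "_text_:of" clause of document 410 above:
print(classic_term_score(freq=10.0, doc_freq=25162, max_docs=44218,
                         query_norm=0.03917671, field_norm=0.0546875))
# ~0.01657, matching the 0.016567415 weight in the tree; the final score then
# sums the three clause weights and multiplies by coord(3/9).
```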
  2. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.03
    0.030285552 = product of:
      0.09085666 = sum of:
        0.050336715 = weight(_text_:applications in 7659) [ClassicSimilarity], result of:
          0.050336715 = score(doc=7659,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 7659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=7659)
        0.024596233 = weight(_text_:of in 7659) [ClassicSimilarity], result of:
          0.024596233 = score(doc=7659,freq=30.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.4014868 = fieldWeight in 7659, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=7659)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
              0.031847417 = score(doc=7659,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 7659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7659)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    It is possible, using ISI's Journal Citation Reports (JCR), to calculate average impact factors (AIF) for JCR's subject categories but it can be more useful to know the global impact factor (GIF) of a subject category and compare the 2 values. Reports results of a study to compare the relationships between AIFs and GIFs of subjects, based on the particular case of the average impact factor of a subfield versus the impact factor of this subfield as a whole, the difference being studied between an average of quotients, denoted as AQ, and a global average, obtained as a quotient of averages, and denoted as GQ. In the case of impact factors, AQ becomes the average impact factor of a field, and GQ becomes its global impact factor. Discusses a number of applications of this technique in the context of informetrics and scientometrics
    Source
    Journal of information science. 22(1996) no.3, S.165-170
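    The contrast drawn in the abstract between an average of quotients (AQ) and a quotient of averages (GQ) can be made concrete with a small sketch; the citation and publication counts below are invented, not taken from the paper.
```python
# Hypothetical journals in one subject category: (citations, citable items)
journals = [(120, 40), (30, 30), (10, 20)]

# AQ: average of the individual impact factors (an average of quotients)
aq = sum(c / p for c, p in journals) / len(journals)

# GQ: global impact factor of the whole category (a quotient of averages)
gq = sum(c for c, _ in journals) / sum(p for _, p in journals)

print(f"AQ = {aq:.2f}, GQ = {gq:.2f}")  # AQ = 1.50, GQ = 1.78 -- the two need not agree
```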
  3. Harter, S.P.; Cheng, Y.-R.: Colinked descriptors : improving vocabulary selection for end-user searching (1996) 0.03
    0.025467023 = product of:
      0.07640107 = sum of:
        0.010999769 = weight(_text_:of in 4216) [ClassicSimilarity], result of:
          0.010999769 = score(doc=4216,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17955035 = fieldWeight in 4216, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4216)
        0.0245278 = weight(_text_:systems in 4216) [ClassicSimilarity], result of:
          0.0245278 = score(doc=4216,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 4216, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4216)
        0.040873505 = weight(_text_:software in 4216) [ClassicSimilarity], result of:
          0.040873505 = score(doc=4216,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 4216, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=4216)
      0.33333334 = coord(3/9)
    
    Abstract
    This article introduces a new concept and technique for information retrieval called 'colinked descriptors'. Borrowed from an analogous idea in bibliometrics - cocited references - colinked descriptors provide a theory and method for identifying search terms that, by hypothesis, will be superior to those entered initially by a searcher. The theory suggests a means of moving automatically from 2 or more initial search terms, to other terms that should be superior in retrieval performance to the 2 original terms. A research project designed to test this colinked descriptor hypothesis is reported. The results suggest that the approach is effective, although methodological problems in testing the idea are reported. Algorithms to generate colinked descriptors can be incorporated easily into system interfaces, front-end or pre-search systems, or help software, in any database that employs a thesaurus. The potential use of colinked descriptors is a strong argument for building richer and more complex thesauri that reflect as many legitimate links among descriptors as possible
    Source
    Journal of the American Society for Information Science. 47(1996) no.4, S.311-325
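    As a hedged sketch of the colinked-descriptor idea described above: starting from two initial descriptors, suggest further descriptors that are co-assigned with both. The record data and the ranking rule below are illustrative assumptions, not the authors' algorithm.
```python
from collections import Counter

# Hypothetical descriptor assignments of database records
records = [
    {"bibliometrics", "citation analysis", "impact factor"},
    {"bibliometrics", "citation analysis", "co-citation"},
    {"citation analysis", "impact factor", "journals"},
    {"bibliometrics", "impact factor", "journals"},
]

def colinked(term_a, term_b, records):
    """Rank descriptors that are co-assigned with BOTH initial search terms."""
    counts = Counter()
    for descriptors in records:
        if term_a in descriptors and term_b in descriptors:
            counts.update(descriptors - {term_a, term_b})
    return counts.most_common()

print(colinked("bibliometrics", "citation analysis", records))
# e.g. [('impact factor', 1), ('co-citation', 1)]
```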
  4. Coulter, N.; Monarch, I.; Konda, S.: Software engineering as seen through its research literature : a study in co-word analysis (1998) 0.02
    0.024235483 = product of:
      0.10905968 = sum of:
        0.014666359 = weight(_text_:of in 2161) [ClassicSimilarity], result of:
          0.014666359 = score(doc=2161,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23940048 = fieldWeight in 2161, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2161)
        0.09439332 = weight(_text_:software in 2161) [ClassicSimilarity], result of:
          0.09439332 = score(doc=2161,freq=6.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.6073436 = fieldWeight in 2161, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=2161)
      0.22222222 = coord(2/9)
    
    Abstract
    This empirical research demonstrates the effectiveness of content analysis to map the research literature of the software engineering discipline. The results suggest that certain research themes in software engineering have remained constant, but with changing thrusts
    Source
    Journal of the American Society for Information Science. 49(1998) no.13, S.1206-1223
  5. Tsay, M.-Y.: From Science Citation Index to Journal Citation Reports, and criteria for journals evaluation (1997) 0.02
    0.023112813 = product of:
      0.104007654 = sum of:
        0.083051346 = weight(_text_:applications in 657) [ClassicSimilarity], result of:
          0.083051346 = score(doc=657,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4815245 = fieldWeight in 657, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=657)
        0.020956306 = weight(_text_:of in 657) [ClassicSimilarity], result of:
          0.020956306 = score(doc=657,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34207192 = fieldWeight in 657, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=657)
      0.22222222 = coord(2/9)
    
    Abstract
    Investigates the characteristics of Journal Citation Reports (JCR) through the study of the Science Citation Index (SCI). Other criteria for evaluating a journal are also discussed. The compilation process of SCI data, and the characteristics, applications and limitations of SCI are studied. A detailed description of JCR is provided including: journal ranking listing, citing journal listing, cited journal listing, subject category listing, source data, impact factor, immediacy index, cited half-life and citing half-life. The applications and limitations of JCR are also explored. In addition to the criteria listed in JCR, the size, circulation and influence of journals are also considered significant criteria for evaluation purposes
    Source
    Journal of information, communication and library science. 4(1997) no.2, S.27-41
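    The JCR indicators named in the abstract are simple ratios. The sketch below follows the standard JCR definitions of the impact factor and immediacy index; the counts are invented for illustration.
```python
def impact_factor(cites_to_prev_two_years, items_prev_two_years):
    """Impact factor for year Y: citations received in Y by items published
    in Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
    return cites_to_prev_two_years / items_prev_two_years

def immediacy_index(cites_to_current_year, items_current_year):
    """Citations in year Y to items published in Y, per item published in Y."""
    return cites_to_current_year / items_current_year

# Hypothetical journal in 1997
print(impact_factor(450, 300))   # 1.5
print(immediacy_index(60, 150))  # 0.4
```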
  6. Moed, H.F.; Bruin, R.E.D.; Leeuwen, T.N.V.: New bibliometric tools for the assessment of national research performance : database description, overview of indicators and first applications (1995) 0.02
    0.02070809 = product of:
      0.0931864 = sum of:
        0.07118686 = weight(_text_:applications in 3376) [ClassicSimilarity], result of:
          0.07118686 = score(doc=3376,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 3376, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=3376)
        0.021999538 = weight(_text_:of in 3376) [ClassicSimilarity], result of:
          0.021999538 = score(doc=3376,freq=24.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3591007 = fieldWeight in 3376, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3376)
      0.22222222 = coord(2/9)
    
    Abstract
    Gives an outline of a new bibliometric database based upon all articles published by authors from the Netherlands and processed during 1980-1993 by ISI for the SCI, SSCI and AHCI. Describes various types of information added to the database: data on articles citing the Dutch publications; detailed citation data on ISI journals and subfields; and a classification system of the main publishing organizations. Also gives an overview of the types of bibliometric indicators constructed, and discusses their relationship to indicators developed by other researchers in the field. Gives 2 applications to illustrate the potential of the database and of the bibliometric indicators derived from it: one that represents a synthesis of 'classical' macro indicator studies on the one hand and bibliometric analyses of research groups on the other; and a second that gives for the first time a detailed analysis of a country's publications per institutional sector
  7. Karki, M.M.S.: Patent citation analysis : a policy analysis tool (1997) 0.02
    0.020236818 = product of:
      0.09106568 = sum of:
        0.06711562 = weight(_text_:applications in 2076) [ClassicSimilarity], result of:
          0.06711562 = score(doc=2076,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 2076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=2076)
        0.023950063 = weight(_text_:of in 2076) [ClassicSimilarity], result of:
          0.023950063 = score(doc=2076,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.39093933 = fieldWeight in 2076, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2076)
      0.22222222 = coord(2/9)
    
    Abstract
    Citation analysis of patents uses bibliometric techniques to analyze the wealth of information contained in patents. Describes the various facets of patent citations and patent citation studies and their important applications. Describes the construction of technology indicators based on patent citation analysis, including: identification of leading-edge technological activity; measurement of national patent citation performance; competitive intelligence; linkages to science; measurement of foreign dependence; highly cited patents; and number of non-patent links
  8. Pillai, C.V.R.; Girijakumari, S.: Widening horizons of informetrics (1996) 0.02
    0.019523773 = product of:
      0.08785698 = sum of:
        0.06711562 = weight(_text_:applications in 7172) [ClassicSimilarity], result of:
          0.06711562 = score(doc=7172,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 7172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=7172)
        0.020741362 = weight(_text_:of in 7172) [ClassicSimilarity], result of:
          0.020741362 = score(doc=7172,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33856338 = fieldWeight in 7172, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7172)
      0.22222222 = coord(2/9)
    
    Abstract
    Traces the origin and development of informetrics in the field of library and information science. 'Informetrics' is seen as a generic term to denote studies in which quantitative methods are applied. Discusses various applications of informetrics including citation analysis; impact factor; obsolescence and ageing studies; bibliographic coupling; co-citation; and measurement of information such as retrieval performance assessment. Outlines recent developments in informetrics and calls for attention to be paid to the quality of future research in the field to ensure its reliability
  9. Chen, Y.-S.; Chong, P.P.; Tong, M.Y.: Dynamic behavior of Bradford's law (1995) 0.02
    0.01912218 = product of:
      0.08604981 = sum of:
        0.06711562 = weight(_text_:applications in 2150) [ClassicSimilarity], result of:
          0.06711562 = score(doc=2150,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 2150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=2150)
        0.018934188 = weight(_text_:of in 2150) [ClassicSimilarity], result of:
          0.018934188 = score(doc=2150,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3090647 = fieldWeight in 2150, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2150)
      0.22222222 = coord(2/9)
    
    Abstract
    Examines 2 problems associated with Bradford's law: since empirical data deviate from the law in many applications, what are the significant factors influencing the Bradford graphs, and how will the Bradford graphs evolve over time? A computational analysis of the 2 problems is made, based on Herbert Simon's model. Several significant findings about the dynamic behaviour of Bradford's law are identified
    Source
    Journal of the American Society for Information Science. 46(1995) no.5, S.370-383
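    Bradford's law, which this and several of the following studies apply, divides journals ranked by productivity into zones of roughly equal article yield, the zone sizes growing approximately as 1 : n : n². A rough sketch of such a partition on invented counts:
```python
def bradford_zones(article_counts, zones=3):
    """Split journals (ranked by descending productivity) into zones that
    each contribute roughly the same number of articles."""
    target = sum(article_counts) / zones
    partition, current, acc = [], [], 0
    for count in article_counts:
        current.append(count)
        acc += count
        if acc >= target and len(partition) < zones - 1:
            partition.append(current)
            current, acc = [], 0
    partition.append(current)
    return partition

# Invented ranked journal productivities
counts = [120, 60, 40, 25, 20, 15, 12, 10, 8, 7, 6, 5, 4, 4, 3, 3, 2, 2, 2, 1, 1]
for i, zone in enumerate(bradford_zones(counts), 1):
    print(f"zone {i}: {len(zone)} journals, {sum(zone)} articles")
# zones of 1, 3 and 17 journals each yield a comparable share of the articles
```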
  10. Osareh, F.: Bibliometrics, citation analysis and co-citation analysis : a review of literature I (1996) 0.02
    0.01912218 = product of:
      0.08604981 = sum of:
        0.06711562 = weight(_text_:applications in 7170) [ClassicSimilarity], result of:
          0.06711562 = score(doc=7170,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 7170, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=7170)
        0.018934188 = weight(_text_:of in 7170) [ClassicSimilarity], result of:
          0.018934188 = score(doc=7170,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3090647 = fieldWeight in 7170, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7170)
      0.22222222 = coord(2/9)
    
    Abstract
    Part 1 of a 2 part article reviewing the technique of bibliometrics and one of its most widely used methods, citation analysis. Traces the history and development of bibliometrics, including its definition, scope, role in scholarly communication and applications. Treats citation analysis similarly with particular reference to bibliographic coupling and cocitation coupling
  11. Kopcsa, A.; Schiebel, E.: Science and technology mapping : a new iteration model for representing multidimensional relationships (1998) 0.02
    0.01683698 = product of:
      0.075766414 = sum of:
        0.017962547 = weight(_text_:of in 326) [ClassicSimilarity], result of:
          0.017962547 = score(doc=326,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 326, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=326)
        0.05780387 = weight(_text_:software in 326) [ClassicSimilarity], result of:
          0.05780387 = score(doc=326,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.3719205 = fieldWeight in 326, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=326)
      0.22222222 = coord(2/9)
    
    Abstract
    Much effort has been made to develop more objective quantitative methods for analyzing and integrating survey information in order to understand research trends and research structures. Co-word analysis is one class of techniques that exploits co-occurrences of items in written information. However, there are some bottlenecks in using statistical methods to produce mappings of reduced information in a convenient manner. On the one hand, commonly used statistical software for PCs restricts the amount of data that can be processed; on the other hand, the results of the multidimensional scaling routines are not entirely satisfactory. Therefore, this article introduces a new iteration model for the calculation of co-word maps that eases the problem. The iteration model positions the words in the two-dimensional plane according to their connections to each other, and it consists of a quick and stable algorithm that has been implemented as software for personal computers. A graphic module represents the data in well-known 'technology maps'
    Source
    Journal of the American Society for Information Science. 49(1998) no.1, S.7-17
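    The abstract does not spell out the iteration model, so the following is only a loose sketch of the general idea: words are moved step by step toward the words they are strongly connected to and away from the rest. The update rule, parameters, and data are assumptions, not the authors' algorithm.
```python
import random

def coword_layout(weights, steps=50, attract=0.05, repel=0.01):
    """Tiny iterative layout: pull each word toward words it co-occurs with
    (proportionally to the link weight) and push it away from all others.
    weights[(a, b)] is the co-occurrence strength of the pair (a, b)."""
    words = sorted({w for pair in weights for w in pair})
    pos = {w: [random.random(), random.random()] for w in words}
    for _ in range(steps):
        for a in words:
            dx = dy = 0.0
            for b in words:
                if a == b:
                    continue
                vx = pos[b][0] - pos[a][0]
                vy = pos[b][1] - pos[a][1]
                w = weights.get((a, b), 0) + weights.get((b, a), 0)
                dx += (attract * w - repel) * vx
                dy += (attract * w - repel) * vy
            pos[a][0] += dx
            pos[a][1] += dy
    return pos

links = {("mapping", "co-word"): 3, ("co-word", "scaling"): 2, ("mapping", "scaling"): 1}
print(coword_layout(links))
```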
  12. Hudnut, S.K.: Finding answers by the numbers : statistical analysis of online search results (1993) 0.02
    0.01541975 = product of:
      0.069388874 = sum of:
        0.050336715 = weight(_text_:applications in 555) [ClassicSimilarity], result of:
          0.050336715 = score(doc=555,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=555)
        0.019052157 = weight(_text_:of in 555) [ClassicSimilarity], result of:
          0.019052157 = score(doc=555,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 555, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=555)
      0.22222222 = coord(2/9)
    
    Abstract
    Online searchers today no longer limit themselves to locating references to articles. More and more, they are called upon to locate specific answers to questions such as: Who is my chief competitor for this technology? Who is publishing the most on this subject? What is the geographic distribution of this product? These questions demand answers, not necessarily from record content, but from statistical analysis of the terms in a set of records. Most online services now provide tools for statistical analysis, such as GET on Orbit, ZOOM on ESA/IRS and RANK/RANK FILES on Dialog. With these commands, users can analyze term frequency to extrapolate very precise answers to a wide range of questions. This paper discusses the many uses of term frequency analysis and how it can be applied to areas of competitive intelligence, market analysis, bibliometric analysis and improvement of search results. The applications are illustrated by examples from Dialog
    Source
    Proceedings of the 14th National Online Meeting 1993, New York, 4-6 May 1993. Ed.: M.E. Williams
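    A minimal sketch of the kind of term-frequency ranking that GET, ZOOM, and RANK provide: count how often each value of a chosen field occurs across a retrieved record set. The records and the field name are invented; the real commands operate on the hosts' own record formats.
```python
from collections import Counter

# Invented retrieved records with a company field
records = [
    {"title": "Polymer coating patent", "company": "Acme Corp"},
    {"title": "Coating process", "company": "Beta Ltd"},
    {"title": "New polymer blend", "company": "Acme Corp"},
    {"title": "Surface treatment", "company": "Acme Corp"},
]

def rank_field(records, field, top=10):
    """Rank the most frequent values of one field across a result set."""
    return Counter(r[field] for r in records if field in r).most_common(top)

print(rank_field(records, "company"))
# [('Acme Corp', 3), ('Beta Ltd', 1)] -- "who is publishing the most on this subject?"
```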
  13. Harter, S.P.: Colinked descriptors (1993) 0.01
    0.014485389 = product of:
      0.06518425 = sum of:
        0.018934188 = weight(_text_:of in 7963) [ClassicSimilarity], result of:
          0.018934188 = score(doc=7963,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3090647 = fieldWeight in 7963, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7963)
        0.046250064 = weight(_text_:systems in 7963) [ClassicSimilarity], result of:
          0.046250064 = score(doc=7963,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.38414678 = fieldWeight in 7963, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=7963)
      0.22222222 = coord(2/9)
    
    Abstract
    Reports the preliminary results of an investigation into the effectiveness of colinked descriptors, a new concept and technique suitable for incorporation into the design of interfaces for information retrieval. The idea is borrowed from an analogous one in bibliometrics - cocited references. Preliminary results suggest that the technique is extremely effective. As a retrieval technique, colinked descriptors can easily be incorporated into information retrieval interfaces, front-end systems, or standalone pre-search systems
    Source
    Integrating technologies - converging professions: proceedings of the 56th Annual Meeting of the American Society for Information Science, Columbus, OH, 24-28 October 1993. Ed.: S. Bonzi
  14. Chung, Y.-K.: Core international journals of classification systems : an application of Bradford's law (1994) 0.01
    0.013349254 = product of:
      0.06007164 = sum of:
        0.01960283 = weight(_text_:of in 5070) [ClassicSimilarity], result of:
          0.01960283 = score(doc=5070,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31997898 = fieldWeight in 5070, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5070)
        0.04046881 = weight(_text_:systems in 5070) [ClassicSimilarity], result of:
          0.04046881 = score(doc=5070,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.33612844 = fieldWeight in 5070, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5070)
      0.22222222 = coord(2/9)
    
    Abstract
    By analyzing the source documents written by classification systems researchers worldwide, together with their references, this paper presents the core journals of the field during the period 1981-1990. The findings show that the journal literature in this study conforms to Bradford's law and identifies 'Cataloging and classification quarterly (CCQ)' as the most productive journal, 'Library resources and technical services (LRTS)' as the most frequently cited journal of the field, and 'Knowledge organization (KO)', formerly 'International classification (IC)', as the second most productive and frequently cited journal of the field. The principal journals publishing source items differ from those used as reference sources in the field. The high-ranked international journals over the years are clearly those to be acquired to obtain the greatest coverage of the field for the least cost
  15. Diodato, V.: Dictionary of bibliometrics (1994) 0.01
    0.012913696 = product of:
      0.05811163 = sum of:
        0.020956306 = weight(_text_:of in 5666) [ClassicSimilarity], result of:
          0.020956306 = score(doc=5666,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34207192 = fieldWeight in 5666, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=5666)
        0.037155323 = product of:
          0.074310645 = sum of:
            0.074310645 = weight(_text_:22 in 5666) [ClassicSimilarity], result of:
              0.074310645 = score(doc=5666,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.5416616 = fieldWeight in 5666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5666)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Footnote
    Rez. in: Journal of library and information science 22(1996) no.2, S.116-117 (L.C. Smith)
  16. Chung, Y.-K.: Bradford distribution and core authors in classification systems literature (1994) 0.01
    0.012589732 = product of:
      0.056653794 = sum of:
        0.023950063 = weight(_text_:of in 5066) [ClassicSimilarity], result of:
          0.023950063 = score(doc=5066,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.39093933 = fieldWeight in 5066, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5066)
        0.03270373 = weight(_text_:systems in 5066) [ClassicSimilarity], result of:
          0.03270373 = score(doc=5066,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 5066, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=5066)
      0.22222222 = coord(2/9)
    
    Abstract
    Bradford's law of scatter was applied to the analysis of the authors of source documents on the subject of classification schemes, published in core periodicals over the period 1981-1990. Results indicated that: core authors of the international classification system literature are Library of Congress, M. Dewey, S. Ranganathan, J. Comaroni, A. Neelameghan, L. Chan and K. Markey; the highly cited authors are linked either to the developers of the classification schemes or to a research centre, or else they authored the most frequently cited books; and the data conforms to Bradford's Law of Scatter
  17. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.01
    0.011549704 = product of:
      0.051973667 = sum of:
        0.014818345 = weight(_text_:of in 6902) [ClassicSimilarity], result of:
          0.014818345 = score(doc=6902,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 6902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=6902)
        0.037155323 = product of:
          0.074310645 = sum of:
            0.074310645 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
              0.074310645 = score(doc=6902,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.5416616 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 7.2006 18:55:29
    Source
    Journal of the American Society for Information Science. 41(1990) no.5, S.368-375
  18. Bookstein, A.: Informetric distributions : II. Resilience to ambiguity (1990) 0.01
    0.011549704 = product of:
      0.051973667 = sum of:
        0.014818345 = weight(_text_:of in 4689) [ClassicSimilarity], result of:
          0.014818345 = score(doc=4689,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 4689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=4689)
        0.037155323 = product of:
          0.074310645 = sum of:
            0.074310645 = weight(_text_:22 in 4689) [ClassicSimilarity], result of:
              0.074310645 = score(doc=4689,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.5416616 = fieldWeight in 4689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4689)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 7.2006 18:55:55
    Source
    Journal of the American Society for Information Science. 41(1990) no.5, S.376-386
  19. Kostoff, R.N.: Citation analysis cross field normalization : a new paradigm (1997) 0.01
    0.011475093 = product of:
      0.051637918 = sum of:
        0.018934188 = weight(_text_:of in 464) [ClassicSimilarity], result of:
          0.018934188 = score(doc=464,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3090647 = fieldWeight in 464, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=464)
        0.03270373 = weight(_text_:systems in 464) [ClassicSimilarity], result of:
          0.03270373 = score(doc=464,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=464)
      0.22222222 = coord(2/9)
    
    Abstract
    Proposes a new paradigm for comparing quality of published papers across different disciplines. This method uses a figure of merit of the ratio of actual citations received to the potential maximum number of citations that could have been received. It is analogous to approaches used to compare performance in physical systems, and appears intrinsically more useful than present approaches
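    A small sketch of the figure of merit described above, the ratio of citations actually received to the potential maximum for the paper's field; the field maxima used below are invented placeholders.
```python
def citation_figure_of_merit(actual_citations, max_attainable_citations):
    """Ratio of citations actually received to the potential maximum
    for the paper's field and age (the normalization proposed above)."""
    if max_attainable_citations <= 0:
        raise ValueError("maximum attainable citations must be positive")
    return actual_citations / max_attainable_citations

# Invented: a mathematics paper vs. a biomedical paper of the same age
print(citation_figure_of_merit(40, 120))   # ~0.33
print(citation_figure_of_merit(300, 900))  # ~0.33 -- comparable across fields
```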
  20. Su, Y.; Han, L.-F.: A new literature growth model : variable exponential growth law of literature (1998) 0.01
    0.010692684 = product of:
      0.04811708 = sum of:
        0.010584532 = weight(_text_:of in 3690) [ClassicSimilarity], result of:
          0.010584532 = score(doc=3690,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17277241 = fieldWeight in 3690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3690)
        0.037532546 = product of:
          0.07506509 = sum of:
            0.07506509 = weight(_text_:22 in 3690) [ClassicSimilarity], result of:
              0.07506509 = score(doc=3690,freq=4.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.54716086 = fieldWeight in 3690, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3690)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 5.1999 19:22:35

Types

  • a 168
  • s 13
  • m 3
  • b 1