Search (74 results, page 4 of 4)

  • year_i:[2000 TO 2010}
  • theme_ss:"Informetrie"
  1. Li, J.; Willett, P.: ArticleRank : a PageRank-based alternative to numbers of citations for analysing citation networks (2009) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 751) [ClassicSimilarity], result of:
              0.02545288 = score(doc=751,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=751)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
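The indented breakdown under each hit is Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As an illustrative sketch (variable names are ours; queryNorm and fieldNorm are taken from the tree as reported rather than recomputed), the first hit's top-level 0.00636322 can be reproduced from the listed factors:

```python
import math

# Factors copied from the first explain tree (term "j", doc 751).
# Formulas follow Lucene ClassicSimilarity (TF-IDF).
freq = 2.0                          # termFreq within the field
doc_freq, max_docs = 5010, 44218    # from the idf line
query_norm = 0.045634337            # taken as reported
field_norm = 0.0390625              # length norm, quantized to one byte

tf = math.sqrt(freq)                              # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~3.1775
query_weight = idf * query_norm                   # ~0.14500
field_weight = tf * idf * field_norm              # ~0.17553
raw_score = query_weight * field_weight           # ~0.0254529
final_score = raw_score * 0.5 * 0.5               # two coord(1/2) factors, ~0.0063632
```

The two coord(1/2) factors indicate that only one of two query clauses matched the document, halving the score at each level of the query tree.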
    
  2. Bollen, J.; Van de Sompel, H.: Usage impact factor : the effects of sample characteristics on usage-based impact metrics (2008) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 1346) [ClassicSimilarity], result of:
              0.02545288 = score(doc=1346,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 1346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1346)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Antonakis, J.; Lalive, R.: Quantifying scholarly impact : IQp versus the Hirsch h (2008) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 1722) [ClassicSimilarity], result of:
              0.02545288 = score(doc=1722,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 1722, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1722)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Sarabia, J.M.; Sarabia, M.: Explicit expressions for the Leimkuhler curve in parametric families (2008) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 2120) [ClassicSimilarity], result of:
              0.02545288 = score(doc=2120,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 2120, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2120)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper we obtain the Leimkuhler curve for some important statistical distributions proposed in the informetrics and econometrics literature. In this way, we complete the previous work of Burrell [Burrell, Q. L. (2005). Symmetry and other transformation features of Lorenz/Leimkuhler representations of informetric data. Information Processing and Management, 41, 1317-1329], where several open problems were stated. To do this, we use a recent and general definition of the Leimkuhler curve proposed by Sarabia [Sarabia, J. M. (2008a). A general definition of the Leimkuhler curve. Journal of Informetrics, 2, 156-163], and a new representation of the Leimkuhler curve in terms of the first-moment distribution of the population. Specifically, we obtain the Leimkuhler curve of the following distributions: the classical and exponentiated Pareto distributions; the three-parameter lognormal distribution; the generalized gamma distribution, which includes the exponential and classical gamma distributions among others; and the generalized beta distributions of the first and second kind, which include as particular or limiting cases such important families as the beta distribution of the second kind and the Singh-Maddala, Dagum, Fisk, and Lomax distributions. All the obtained Leimkuhler curves can be computed easily.
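    For orientation, the standard relation between the Lorenz and Leimkuhler curves that this line of work builds on can be sketched as follows (a generic sketch with assumed notation, not the paper's exact definitions): for a positive distribution with cdf $F$ and mean $\mu$,

```latex
L(p) = \frac{1}{\mu}\int_0^{p} F^{-1}(u)\,\mathrm{d}u
       \quad \text{(Lorenz curve)}, \qquad
K(p) = 1 - L(1-p) \quad \text{(Leimkuhler curve)}.
```

    Writing the first-moment distribution as $F_1(x) = \frac{1}{\mu}\int_0^{x} t\,\mathrm{d}F(t)$ gives $L(p) = F_1\big(F^{-1}(p)\big)$, which is the kind of representation in terms of the first-moment distribution that the abstract refers to.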
  5. Wallace, M.L.; Gingras, Y.; Duhon, R.: A new approach for detecting scientific specialties from raw cocitation networks (2009) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 2709) [ClassicSimilarity], result of:
              0.02545288 = score(doc=2709,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 2709, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2709)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We use a technique recently developed by V. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre (2008) to detect scientific specialties from author cocitation networks. This algorithm has distinct advantages over most previous methods used to obtain cocitation clusters since it avoids the use of similarity measures, relies entirely on the topology of the weighted network, and can be applied to relatively large networks. Most importantly, it requires no subjective interpretation of the cocitation data or of the communities found. Using two examples, we show that the resulting specialties are the smallest coherent groups of researchers (within a hierarchy of cluster sizes) and can thus be identified unambiguously. Furthermore, we confirm that these communities are indeed representative of what we know about the structure of a given scientific discipline and that as specialties, they can be accurately characterized by a few keywords (from the publication titles). We argue that this robust and efficient algorithm is particularly well-suited to cocitation networks and that the results generated can be of great use to researchers studying various facets of the structure and evolution of science.
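    The Blondel et al. (2008) technique is the Louvain method, which greedily maximizes the modularity of a weighted network. A minimal, self-contained sketch of the objective it optimizes (toy data and names are illustrative, not from the paper):

```python
from collections import defaultdict

def modularity(edges, partition):
    """Weighted Newman-Girvan modularity Q of a node->community partition.

    edges: {(u, v): weight} for an undirected graph, each edge listed
    once, no self-loops. partition: {node: community label}.
    """
    m = sum(edges.values())  # total edge weight
    degree = defaultdict(float)
    for (u, v), w in edges.items():
        degree[u] += w
        degree[v] += w
    # weight of edges falling entirely inside one community
    intra = sum(w for (u, v), w in edges.items()
                if partition[u] == partition[v])
    # sum of node degrees per community
    comm_degree = defaultdict(float)
    for node, d in degree.items():
        comm_degree[partition[node]] += d
    return intra / m - sum((d / (2 * m)) ** 2
                           for d in comm_degree.values())

# Two disconnected triangles, each its own community.
edges = {(1, 2): 1.0, (2, 3): 1.0, (1, 3): 1.0,
         (4, 5): 1.0, (5, 6): 1.0, (4, 6): 1.0}
part = {1: "a", 2: "a", 3: "a", 4: "b", 5: "b", 6: "b"}
print(modularity(edges, part))  # -> 0.5
```

    In a cocitation network the nodes would be authors and the edge weights cocitation counts; Louvain repeatedly moves nodes between communities while this quantity increases, which is what lets it rely only on the weighted topology, with no external similarity measure.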
  6. Bar-Ilan, J.; Peritz, B.C.: A method for measuring the evolution of a topic on the Web : the case of "informetrics" (2009) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 3089) [ClassicSimilarity], result of:
              0.02545288 = score(doc=3089,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 3089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3089)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Kim, P.J.; Lee, J.Y.; Park, J.-H.: Developing a new collection-evaluation method : mapping and the user-side h-index (2009) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 3171) [ClassicSimilarity], result of:
              0.02545288 = score(doc=3171,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 3171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3171)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Tonta, Y.; Ünal, Y.: Scatter of journals and literature obsolescence reflected in document delivery requests (2005) 0.01
    0.006182823 = product of:
      0.012365646 = sum of:
        0.012365646 = product of:
          0.024731291 = sum of:
            0.024731291 = weight(_text_:22 in 3271) [ClassicSimilarity], result of:
              0.024731291 = score(doc=3271,freq=2.0), product of:
                0.15980367 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045634337 = queryNorm
                0.15476047 = fieldWeight in 3271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3271)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 3.2005 10:54:22
  9. Ahlgren, P.; Jarneving, B.; Rousseau, R.: Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient (2003) 0.01
    0.006182823 = product of:
      0.012365646 = sum of:
        0.012365646 = product of:
          0.024731291 = sum of:
            0.024731291 = weight(_text_:22 in 5171) [ClassicSimilarity], result of:
              0.024731291 = score(doc=5171,freq=2.0), product of:
                0.15980367 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045634337 = queryNorm
                0.15476047 = fieldWeight in 5171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5171)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    9. 7.2006 10:22:35
  10. Bensman, S.J.: Urquhart's and Garfield's laws : the British controversy over their validity (2001) 0.01
    0.005090576 = product of:
      0.010181152 = sum of:
        0.010181152 = product of:
          0.020362305 = sum of:
            0.020362305 = weight(_text_:j in 6026) [ClassicSimilarity], result of:
              0.020362305 = score(doc=6026,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.14042683 = fieldWeight in 6026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6026)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The British controversy over the validity of Urquhart's and Garfield's Laws during the 1970s constitutes an important episode in the formulation of the probability structure of human knowledge. This controversy took place within the historical context of the convergence of two scientific revolutions-the bibliometric and the biometric-that had been launched in Britain. The preceding decades had witnessed major breakthroughs in understanding the probability distributions underlying the use of human knowledge. Two of the most important of these breakthroughs were the laws posited by Donald J. Urquhart and Eugene Garfield, who played major roles in establishing the institutional bases of the bibliometric revolution. For his part, Urquhart began his realization of S. C. Bradford's concept of a national science library by analyzing the borrowing of journals on interlibrary loan from the Science Museum Library in 1956. He found that 10% of the journals accounted for 80% of the loans and formulated Urquhart's Law, by which the interlibrary use of a journal is a measure of its total use. This law underlay the operations of the National Lending Library for Science and Technology (NLLST), which Urquhart founded. The NLLST became the British Library Lending Division (BLLD) and ultimately the British Library Document Supply Centre (BLDSC). In contrast, Garfield did a study of 1969 journal citations as part of the process of creating the Science Citation Index (SCI), formulating his Law of Concentration, by which the bulk of the information needs in science can be satisfied by a relatively small, multidisciplinary core of journals. This law became the operational principle of the Institute for Scientific Information created by Garfield. A study at the BLLD under Urquhart's successor, Maurice B. Line, found low correlations of NLLST use with SCI citations, and publication of this study started a major controversy, during which both laws were called into question. The study was based on the faulty use of the Spearman rank correlation coefficient, and the controversy over it was instrumental in causing B. C. Brookes to investigate bibliometric laws as probabilistic phenomena and begin to link the bibliometric with the biometric revolution. This paper concludes with a resolution of the controversy by means of a statistical technique that incorporates Brookes' criticism of the Spearman rank-correlation method and demonstrates the mutual supportiveness of the two laws.
  11. Qin, J.: Semantic patterns in bibliographically coupled documents (2002) 0.01
    0.005090576 = product of:
      0.010181152 = sum of:
        0.010181152 = product of:
          0.020362305 = sum of:
            0.020362305 = weight(_text_:j in 4266) [ClassicSimilarity], result of:
              0.020362305 = score(doc=4266,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.14042683 = fieldWeight in 4266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4266)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Reedijk, J.; Moed, H.F.: Is the impact of journal impact factors decreasing? (2008) 0.01
    0.005090576 = product of:
      0.010181152 = sum of:
        0.010181152 = product of:
          0.020362305 = sum of:
            0.020362305 = weight(_text_:j in 1734) [ClassicSimilarity], result of:
              0.020362305 = score(doc=1734,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.14042683 = fieldWeight in 1734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1734)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. De Bellis, N.: Bibliometrics and citation analysis : from the Science citation index to cybermetrics (2008) 0.01
    0.005090576 = product of:
      0.010181152 = sum of:
        0.010181152 = product of:
          0.020362305 = sum of:
            0.020362305 = weight(_text_:j in 3585) [ClassicSimilarity], result of:
              0.020362305 = score(doc=3585,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.14042683 = fieldWeight in 3585, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3585)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Review in: JASIS 61(2010) no.1, p.205-207 (Jeppe Nicolaisen). Further review in: Mitt. VÖB 63(2010) no.1/2, p.134-135 (J. Gorraiz and M. Wieland): "The book grew out of a multi-year research project whose aim was to convey the hard-to-grasp quantitative core of bibliometrics to a primarily Italian audience of librarians within a more accessible historical and philosophical context, as the author explains in the preface. Thanks to a recommendation by Eugene Garfield, this work is now also available in English translation to an international readership. De Bellis's monograph of more than 400 pages gives, in eight chapters, a detailed and very precise overview of bibliometrics and citation analysis, their nature and development, their controversies and prospects. . . . De Bellis's book is highly recommended to anyone intending to engage with this new science. It ends with the following statement: "Scientometricians have to learn to live in a multidimensional world". And precisely therein lie the challenge and the beauty of this métier."
  14. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.00
    0.0038179322 = product of:
      0.0076358644 = sum of:
        0.0076358644 = product of:
          0.015271729 = sum of:
            0.015271729 = weight(_text_:j in 2417) [ClassicSimilarity], result of:
              0.015271729 = score(doc=2417,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.105320126 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • e 66
  • d 8

Types

  • a 72
  • el 2
  • m 1
  • r 1

Classifications