Search (3 results, page 1 of 1)

  • × author_ss:"Huber, J.C."
  • × theme_ss:"Informetrie"
  • × year_i:[2000 TO 2010}
  1. Huber, J.C.; Wagner-Döbler, R.: Using the Mann-Whitney test on informetric data (2003) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 1686) [ClassicSimilarity], result of:
              0.0108246 = score(doc=1686,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 1686, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1686)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
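
    The nested "product of / sum of" block above is Lucene's ClassicSimilarity explain output for the single matching clause, the term "a". As a rough reconstruction (all numbers are copied from the tree rather than recomputed from the index), the arithmetic reduces to a field weight of tf × idf × fieldNorm, multiplied by a query weight of idf × queryNorm, then scaled by the two coord(1/2) factors:

      # Sketch of the ClassicSimilarity arithmetic shown above (Python).
      # Values are taken verbatim from the explain tree for doc 1686.
      freq = 8.0                    # termFreq of "a" in the document
      idf = 1.153047                # idf(docFreq=37942, maxDocs=44218)
      query_norm = 0.046056706
      field_norm = 0.0625

      tf = freq ** 0.5              # ClassicSimilarity tf = sqrt(freq) = 2.828427
      query_weight = idf * query_norm            # 0.053105544
      field_weight = tf * idf * field_norm       # 0.20383182
      term_score = query_weight * field_weight   # 0.0108246
      score = term_score * 0.5 * 0.5             # two coord(1/2) factors -> 0.00270615
      print(score)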
    
    Abstract
    The fields of informetrics and scientometrics have suffered from the lack of a powerful test to detect the differences between two samples. We show that the Mann-Whitney test is a good test on the publication productivity of journals and of authors. Its main limitation is a lack of power on small samples that have small differences. This is not the fault of the test, but rather reflects the fact that small, similar samples have little to distinguish between them.
    Type
    a
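
    The abstract above describes comparing two samples of publication counts with the Mann-Whitney test. A minimal sketch of that kind of comparison, using scipy and two invented lists of per-author paper counts (the data and sample sizes are illustrative only, not taken from the paper):

      from scipy.stats import mannwhitneyu

      # Invented per-author publication counts for two samples.
      sample_a = [1, 1, 2, 2, 3, 4, 6, 9, 14]
      sample_b = [1, 1, 1, 2, 2, 3, 3, 5, 8]

      # Two-sided test of whether the two productivity distributions differ.
      stat, p_value = mannwhitneyu(sample_a, sample_b, alternative="two-sided")
      print(f"U = {stat:.1f}, p = {p_value:.3f}")

    As the abstract notes, with small and similar samples the test has little power, so a non-significant result here would not show that the samples are equivalent.
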
  2. Huber, J.C.: A new model that generates Lotka's law (2002) 0.00
    0.0026473717 = product of:
      0.0052947435 = sum of:
        0.0052947435 = product of:
          0.010589487 = sum of:
            0.010589487 = weight(_text_:a in 248) [ClassicSimilarity], result of:
              0.010589487 = score(doc=248,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19940455 = fieldWeight in 248, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=248)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we develop a new model for a process that generates Lotka's Law. We show that four relatively mild assumptions create a process that fits five different informetric distributions: rate of production, career duration, randomness, and Poisson distribution over time, as well as Lotka's Law. By simulation, we obtain good fits to three empirical samples that exhibit the extreme range of the observed parameters. The overall error is 7% or less. An advantage of this model is that the parameters can be linked to observable human factors. That is, the model is not merely descriptive, but also provides insight into the causes of differences between samples. Furthermore, the differences can be tested with powerful statistical tools.
    Type
    a
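
    The abstract does not spell out the four assumptions, but a toy rate-and-career process (all parameters invented here, not the paper's model) illustrates the kind of comparison involved: simulate paper counts per author and set the resulting author-frequency distribution against Lotka's prediction that the number of authors with n papers falls roughly as 1/n²:

      import numpy as np

      rng = np.random.default_rng(42)

      # Invented toy process: exponential production rates, exponential career
      # lengths, Poisson-distributed output over the career. Not the paper's model.
      n_authors = 50_000
      rates = rng.exponential(scale=0.8, size=n_authors)             # papers per year
      careers = np.ceil(rng.exponential(scale=5.0, size=n_authors))  # career years
      papers = rng.poisson(lam=rates * careers)

      # Compare the simulated counts with Lotka's 1/n**2 shape.
      ones = np.sum(papers == 1)
      for n in range(1, 6):
          print(f"n={n}: simulated {np.sum(papers == n):6d}, Lotka ~ {ones / n**2:8.0f}")
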
  3. Huber, J.C.: A new method for analyzing scientific productivity (2001) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 6845) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=6845,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 6845, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6845)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Previously, a new method for measuring scientific productivity was demonstrated for authors in mathematical logic and some subareas of 19th-century physics. The purpose of this article is to apply this new method to other fields to support its general applicability. We show that the method yields the same results for modern physicists, biologists, psychologists, inventors, and composers. That is, each individual's production is constant over time, and the time-period fluctuations follow the Poisson distribution. However, the productivity (e.g., papers per year) varies widely across individuals. We show that the distribution of productivity does not follow the normal (i.e., bell curve) distribution, but rather follows the exponential distribution. Thus, most authors produce at the lowest rate and very few authors produce at the higher rates. We also show that the career duration of individuals follows the exponential distribution. Thus, most authors have a very short career and very few have a long career. The principal advantage of the new method is that the detailed structure of author productivity, such as trends, can be examined. Another advantage is that information science studies gain guidance on the length of the time interval to examine and on estimating when an author's entire body of work has been recorded.
    Type
    a
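
    A compact check of the two distributional claims in this abstract, with invented numbers standing in for real publication data: the dispersion ratio probes the Poisson claim for one author's yearly counts, and the exponential fit uses the 1/mean maximum-likelihood estimate for the productivity rates.

      import numpy as np

      # Invented data: one author's yearly paper counts, and average papers per
      # year for a small set of authors.
      yearly_counts = np.array([2, 0, 1, 3, 1, 2, 0, 1, 2, 1])
      productivity = np.array([0.3, 0.5, 0.6, 0.9, 1.1, 1.4, 2.2, 3.5, 5.1])

      # Poisson claim: for Poisson counts the variance-to-mean ratio is about 1.
      dispersion = yearly_counts.var(ddof=1) / yearly_counts.mean()
      print(f"index of dispersion = {dispersion:.2f} (about 1 for Poisson)")

      # Exponential claim: the maximum-likelihood rate is 1 / mean, and most
      # authors then sit below the mean, i.e. most produce at the lowest rates.
      lam = 1.0 / productivity.mean()
      below = (productivity < productivity.mean()).mean()
      print(f"fitted exponential rate = {lam:.2f}, share of authors below mean = {below:.0%}")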