Search (14 results, page 1 of 1)

  • author_ss:"Schreiber, M."
  1. Schreiber, M.: Das Web ist eine Wolke (2009) 0.04
    0.044493016 = product of:
      0.13347904 = sum of:
        0.017205253 = weight(_text_:und in 2620) [ClassicSimilarity], result of:
          0.017205253 = score(doc=2620,freq=48.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.35989314 = fieldWeight in 2620, product of:
              6.928203 = tf(freq=48.0), with freq of:
                48.0 = termFreq=48.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.0022912533 = weight(_text_:in in 2620) [ClassicSimilarity], result of:
          0.0022912533 = score(doc=2620,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.0780921 = fieldWeight in 2620, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.017205253 = weight(_text_:und in 2620) [ClassicSimilarity], result of:
          0.017205253 = score(doc=2620,freq=48.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.35989314 = fieldWeight in 2620, product of:
              6.928203 = tf(freq=48.0), with freq of:
                48.0 = termFreq=48.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.024767097 = weight(_text_:einzelne in 2620) [ClassicSimilarity], result of:
          0.024767097 = score(doc=2620,freq=2.0), product of:
            0.12695427 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.021569785 = queryNorm
            0.19508676 = fieldWeight in 2620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.0101513015 = weight(_text_:bibliotheken in 2620) [ClassicSimilarity], result of:
          0.0101513015 = score(doc=2620,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.124896735 = fieldWeight in 2620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.024767097 = weight(_text_:einzelne in 2620) [ClassicSimilarity], result of:
          0.024767097 = score(doc=2620,freq=2.0), product of:
            0.12695427 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.021569785 = queryNorm
            0.19508676 = fieldWeight in 2620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.01594407 = weight(_text_:deutsche in 2620) [ClassicSimilarity], result of:
          0.01594407 = score(doc=2620,freq=2.0), product of:
            0.10186133 = queryWeight, product of:
              4.7224083 = idf(docFreq=1068, maxDocs=44218)
              0.021569785 = queryNorm
            0.1565272 = fieldWeight in 2620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7224083 = idf(docFreq=1068, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.0101513015 = weight(_text_:bibliotheken in 2620) [ClassicSimilarity], result of:
          0.0101513015 = score(doc=2620,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.124896735 = fieldWeight in 2620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        0.0101513015 = weight(_text_:bibliotheken in 2620) [ClassicSimilarity], result of:
          0.0101513015 = score(doc=2620,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.124896735 = fieldWeight in 2620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
        8.4512506E-4 = weight(_text_:s in 2620) [ClassicSimilarity], result of:
          8.4512506E-4 = score(doc=2620,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.036037173 = fieldWeight in 2620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2620)
      0.33333334 = coord(10/30)
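    The tree above is standard Lucene ClassicSimilarity explain output: each term clause multiplies queryWeight (idf * queryNorm) by fieldWeight (tf * idf * fieldNorm), the matching clauses are summed, and the sum is scaled by the coordination factor coord(10/30). A minimal Python sketch, assuming Lucene's documented ClassicSimilarity formulas, reproduces the first "und" clause:

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # tf(freq) = sqrt(termFreq); idf = 1 + ln(maxDocs / (docFreq + 1))
          tf = math.sqrt(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          query_weight = idf * query_norm        # "queryWeight" in the tree
          field_weight = tf * idf * field_norm   # "fieldWeight" in the tree
          return query_weight * field_weight

      # First clause: weight(_text_:und in 2620) with freq=48
      s = classic_term_score(48.0, 13101, 44218, 0.021569785, 0.0234375)
      print(s)  # ~0.017205253, matching the clause above; the document score
                # is the sum of all ten matching clauses times coord(10/30).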
    
    Content
    "Auf einem Uralt-PC HD-Videos schneiden, Handys mit 500 GByte Speicherplatz ausstatten, Software und Daten auf jedem beliebigen Computer abrufen - das soll Cloud Computing ermöglichen. Und das alles nur über einen Browser sowie einen schnellen Internetanschluss. Was genau verbirgt sich aber hinter diesem Begriff, der seit Monaten durch die Medien wandert?. CHIP erklärt das Prinzip und verrät, ob Cloud Computing nur ein Hype oder die Zukunft ist. Hardware ade: Die Software läuft in jedem Browser Der Grundgedanke beim Cloud Computing ist, dass alle Anwendungen im Web laufen - von einfacher Software bis hin zu kompletten Betriebssystemen. Der User muss sich keine teure Hardware anschaffen, sich keine Gedanken um die Aktualisierung des Systems machen und auch keine Software mehr kaufen. Das klingt nach Zukunftsmusik, aber die Ansätze sind bereits vorhanden. Google zeigt, wie's geht: Office-Tools, E-Mail-Konten, RSS-Reader, ein Kalender und weitere Programme laufen plattformunabhängig im Webbrowser. Alle Programme und Daten lagern auf den Google-Servern und werden je nach Bedarf geladen. Möglich wird das durch riesige Serverparks von Unternehmen wie Microsoft, Google, Amazon oder IBM: Die Anlagen stellen viel mehr Leistung bereit, als sie verbrauchen können.
    The result is idle capacity that costs money without producing any benefit. To make better use of their machines, the companies offer their computing power to private customers and businesses - a clever business model that pays off for both sides. The individual customer pays not for program licenses or servers but only for the capacity actually consumed, and at peak times can flexibly book additional computing power. The user thus consumes scalable IT services. Within this network, different providers can also be combined with one another, for example Amazon's virtual storage "Simple Storage Service" (S3) with Google's development platform "App Engine" (GAE). The services thus consist of a bundle of different offerings that work like a construction kit - a cloud of servers and services emerges. Users pick exactly the services they need and combine them according to their personal requirements. The basics: more power through cooperation. As fashionable as cloud computing may be, it is not a new invention but rather a convergence of long-established techniques. Its prerequisites include computer clusters, grid computing, and utility computing. A cluster consists of a number of computers that are networked with one another and thereby increase the available computing power (high-performance computing). Clusters can also minimize the risk of a data crash, because a failed server can hand its tasks over to another one (high-availability cluster). Clusters are often also referred to as server parks or server farms.
    A grid serves mainly to handle computationally intensive tasks. The difference from clusters: grids consist of a loose federation of servers scattered around the world, which various institutions can join. Standardized libraries and middleware facilitate the cooperation. The third prerequisite for cloud computing is utility computing: here, companies offer services such as online storage, virtual servers, and software as a bundled service and bill by the capacity actually used. New possibilities: software as a construction kit. Cloud computing builds on these foundations. It connects the components and thereby opens up a range of possibilities, such as "Infrastructure as a Service" (IaaS): the operators provide the complete infrastructure, for instance virtualized hardware, which - as with the "Amazon Elastic Compute Cloud" (EC2) - scales with demand. "Platform as a Service" (PaaS) is aimed mainly at developers: here the operator provides not an end-user program but a complete working environment, so that software vendors can write and distribute their own web applications. The best-known example is probably the "Google App Engine", which uses Python as its programming language together with the Python web framework "Django". The finished software resides on the operator's servers and requires neither a local installation nor dedicated hardware. PaaS is therefore also referred to as "cloudware".
    The third approach of cloud computing is "Software as a Service" (SaaS): in contrast to the classic model, in which the customer buys software and installs it on a PC, with SaaS the user can only "rent" the programs. The tools run in the browser and are, as a rule, platform-independent. While end users merely consume these offerings, the combination of all three layers offers substantial advantages above all to young start-ups: no longer dependent on servers of their own, they face far less cost pressure. It becomes possible to build a web site without any hardware of one's own and to rent additional computing power as needed. Zehn.de (www.zehn.de), for instance, is the first German portal that fully exploits the networked structures of cloud computing: while the developers rely on the Google App Engine for the site's entire communication - front end and back end, data sets and filters - the software for the semantic analysis of the content runs on Amazon EC2, and images and videos are stored on Amazon S3."
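    The PaaS layer described above is easiest to picture with a concrete handler. Below is a minimal sketch in the style of the 2009-era Google App Engine Python runtime (the webapp framework the platform shipped with at the time); it illustrates the historical API rather than current App Engine code:

      # Minimal request handler for the 2009-era Google App Engine Python runtime.
      # The operator supplies servers and runtime; the developer only uploads code.
      from google.appengine.ext import webapp
      from google.appengine.ext.webapp.util import run_wsgi_app

      class MainPage(webapp.RequestHandler):
          def get(self):
              # Served entirely from Google's infrastructure; no local install.
              self.response.headers['Content-Type'] = 'text/plain'
              self.response.out.write('Hello from the cloud')

      application = webapp.WSGIApplication([('/', MainPage)], debug=True)

      def main():
          run_wsgi_app(application)

      if __name__ == '__main__':
          main()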
    Source
    Chip. 2009, H.2, S.24-25
  2. Schreiber, M.: Uncertainties and ambiguities in percentiles and how to avoid them (2013) 0.00
    0.002005331 = product of:
      0.020053308 = sum of:
        0.006110009 = weight(_text_:in in 675) [ClassicSimilarity], result of:
          0.006110009 = score(doc=675,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 675, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=675)
        0.002253667 = weight(_text_:s in 675) [ClassicSimilarity], result of:
          0.002253667 = score(doc=675,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=675)
        0.011689632 = product of:
          0.023379264 = sum of:
            0.023379264 = weight(_text_:22 in 675) [ClassicSimilarity], result of:
              0.023379264 = score(doc=675,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.30952093 = fieldWeight in 675, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=675)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Abstract
    The recently proposed fractional scoring scheme is used to attribute publications to percentile rank classes. It is shown that in this way uncertainties and ambiguities in the evaluation of specific quantile values and percentile ranks do not occur. Using fractional scoring, the total score of all papers exactly reproduces the theoretical value.
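    To make the scheme concrete, here is a sketch of the fractional-scoring idea, assuming percentile rank classes bounded at the 50th, 75th, 90th, 95th, and 99th percentiles (a common six-class scheme; the abstract does not spell out the boundaries, so they are an assumption). Each group of papers with tied citation counts is spread uniformly over the rank interval the group occupies:

      def fractional_scores(citations, boundaries=(0.50, 0.75, 0.90, 0.95, 0.99)):
          # For every paper, return its fractional membership in each percentile
          # rank class (class 0 = bottom class). A sketch of the idea; the
          # paper's exact conventions may differ.
          n = len(citations)
          order = sorted(range(n), key=lambda i: citations[i])  # ascending
          scores = [None] * n
          start = 0
          while start < n:
              end = start   # extend the group of papers tied with paper `start`
              while end + 1 < n and citations[order[end + 1]] == citations[order[start]]:
                  end += 1
              lo, hi = start / n, (end + 1) / n   # rank interval of the tie group
              cuts = [lo] + [b for b in boundaries if lo < b < hi] + [hi]
              frac = {}
              for a, b in zip(cuts, cuts[1:]):
                  cls = sum(1 for t in boundaries if t <= a)  # class of this slice
                  frac[cls] = frac.get(cls, 0.0) + (b - a) / (hi - lo)
              for i in range(start, end + 1):
                  scores[order[i]] = dict(frac)
              start = end + 1
          return scores

    Each paper's fractions sum to 1, and the per-class totals add up to exactly n times the class widths - the "theoretical value" the abstract refers to - no matter how the ties fall.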
    Date
    22. 3.2013 19:52:05
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.3, S.640-643
  3. Waltman, L.; Schreiber, M.: On the calculation of percentile-based bibliometric indicators (2013) 0.00
    0.0015135966 = product of:
      0.015135966 = sum of:
        0.005915991 = weight(_text_:in in 616) [ClassicSimilarity], result of:
          0.005915991 = score(doc=616,freq=10.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.20163295 = fieldWeight in 616, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=616)
        0.007529726 = product of:
          0.022589177 = sum of:
            0.022589177 = weight(_text_:l in 616) [ClassicSimilarity], result of:
              0.022589177 = score(doc=616,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.26348472 = fieldWeight in 616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.046875 = fieldNorm(doc=616)
          0.33333334 = coord(1/3)
        0.0016902501 = weight(_text_:s in 616) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=616,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=616)
      0.1 = coord(3/30)
    
    Abstract
    A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance, the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators have been proposed. A difficulty in the calculation of percentile-based indicators is caused by the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty in a more satisfactory way than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that do not suffer from biases in favor of or against particular fields of science.
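    The tie problem and a fractional way out can be illustrated with a short sketch (in the spirit of the approach, not necessarily its exact prescription): papers strictly above the top-10% threshold of the reference set count fully, and papers tied exactly at the threshold share the remaining top-10% mass equally:

      def top_share(citations, reference, p=0.10):
          # Fractionally counted share of `citations` in the top-p of `reference`.
          ref = sorted(reference, reverse=True)
          k = p * len(ref)                           # top-p mass, possibly fractional
          threshold = ref[min(int(k), len(ref) - 1)]
          above = sum(1 for c in ref if c > threshold)
          tied = sum(1 for c in ref if c == threshold)
          tie_credit = (k - above) / tied if tied else 0.0
          def credit(c):
              if c > threshold:
                  return 1.0
              return tie_credit if c == threshold else 0.0
          return sum(credit(c) for c in citations) / len(citations)

    By construction the reference set as a whole receives a share of exactly p, so the top class can neither be over- nor under-populated, however many publications are tied at the threshold.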
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, S.372-379
  4. Schreiber, M.: Fractionalized counting of publications for the g-Index (2009) 0.00
    0.0013802482 = product of:
      0.013802482 = sum of:
        0.0045825066 = weight(_text_:in in 3125) [ClassicSimilarity], result of:
          0.0045825066 = score(doc=3125,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1561842 = fieldWeight in 3125, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3125)
        0.007529726 = product of:
          0.022589177 = sum of:
            0.022589177 = weight(_text_:l in 3125) [ClassicSimilarity], result of:
              0.022589177 = score(doc=3125,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.26348472 = fieldWeight in 3125, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3125)
          0.33333334 = coord(1/3)
        0.0016902501 = weight(_text_:s in 3125) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=3125,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 3125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3125)
      0.1 = coord(3/30)
    
    Abstract
    L. Egghe (2008) studied the h-index (Hirsch index) and the g-index, counting the authorship of cited articles in a fractional way. But his definition of the gF-index for the case that the article count is fractionalized yielded values that were close to or even larger than the original g-index. Here I propose an alternative definition by which the g-index is modified in such a way that the resulting gm-index is always smaller than the original g-index. Based on the interpretation of the g-index as the highest number of articles of a scientist that received on average g or more citations, in the specification of the new gm-index the articles are counted fractionally not only for the rank but also for the average.
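    The interpretation of the g-index quoted above translates directly into code. The fractional gm variant below is one reading of "counted fractionally not only for the rank but also for the average" - each article enters with weight 1/authors both in the effective rank and in the citation sum - so treat it as a hedged sketch, not the paper's exact prescription:

      def g_index(citations):
          # Highest g such that the top g articles received on average at least
          # g citations, i.e. the sum of the top g counts is at least g*g.
          cs = sorted(citations, reverse=True)
          total, g = 0, 0
          for i, c in enumerate(cs, start=1):
              total += c
              if total >= i * i:
                  g = i
          return g

      def gm_index(citations, n_authors):
          # Sketch (assumed reading): weights w = 1/authors enter both the
          # effective rank and the weighted citation sum.
          papers = sorted(zip(citations, n_authors), reverse=True)
          rank_eff = cite_eff = gm = 0.0
          for c, a in papers:
              w = 1.0 / a
              rank_eff += w
              cite_eff += w * c
              if cite_eff >= rank_eff ** 2:   # average >= effective rank
                  gm = rank_eff
          return gm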
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2145-2150
  5. Schreiber, M.: An empirical investigation of the g-index for 26 physicists in comparison with the h-index, the A-index, and the R-index (2008) 0.00
    0.0012092832 = product of:
      0.0120928325 = sum of:
        0.004409519 = weight(_text_:in in 1968) [ClassicSimilarity], result of:
          0.004409519 = score(doc=1968,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.15028831 = fieldWeight in 1968, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1968)
        0.006274772 = product of:
          0.018824315 = sum of:
            0.018824315 = weight(_text_:l in 1968) [ClassicSimilarity], result of:
              0.018824315 = score(doc=1968,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.2195706 = fieldWeight in 1968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1968)
          0.33333334 = coord(1/3)
        0.0014085418 = weight(_text_:s in 1968) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=1968,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 1968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1968)
      0.1 = coord(3/30)
    
    Abstract
    J.E. Hirsch (2005) introduced the h-index to quantify an individual's scientific research output by the largest number h of a scientist's papers that received at least h citations. To take into account the highly skewed frequency distribution of citations, L. Egghe (2006a) proposed the g-index as an improvement of the h-index. I have worked out 26 practical cases of physicists from the Institute of Physics at Chemnitz University of Technology, and compare the h and g values in this study. It is demonstrated that the g-index discriminates better between different citation patterns. This also can be achieved by evaluating B.H. Jin's (2006) A-index, which reflects the average number of citations in the h-core, and interpreting it in conjunction with the h-index. h and A can be combined into the R-index to measure the h-core's citation intensity. I also have determined the A and R values for the 26 datasets. For a better comparison, I utilize interpolated indices. The correlations between the various indices as well as with the total number of papers and the highest citation counts are discussed. The largest Pearson correlation coefficient is found between g and R. Although the correlation between g and h is relatively strong, the arrangement of the datasets is significantly different depending on whether they are put into order according to the values of either h or g.
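    For reference, the three simpler indices the study compares can be stated in a few lines (non-interpolated textbook forms; the paper itself works with interpolated indices):

      def h_index(citations):
          # Largest h such that h papers have at least h citations each.
          cs = sorted(citations, reverse=True)
          return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

      def a_index(citations):
          # Average number of citations of the papers in the h-core.
          cs = sorted(citations, reverse=True)
          h = h_index(citations)
          return sum(cs[:h]) / h if h else 0.0

      def r_index(citations):
          # Citation intensity of the h-core: R = sqrt(h * A).
          return (h_index(citations) * a_index(citations)) ** 0.5

      cites = [24, 18, 12, 9, 7, 7, 4, 2, 1, 0]  # made-up counts for illustration
      print(h_index(cites), a_index(cites), r_index(cites))  # 6, ~12.83, ~8.77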
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1513-1522
  6. Pörzgen, R.; Schreiber, M.: Die Informationsvermittlungsstelle : Planung, Einrichtung, Betrieb (1993) 0.00
    7.8339124E-4 = product of:
      0.011750868 = sum of:
        0.0061733257 = weight(_text_:in in 5524) [ClassicSimilarity], result of:
          0.0061733257 = score(doc=5524,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21040362 = fieldWeight in 5524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=5524)
        0.0055775414 = weight(_text_:s in 5524) [ClassicSimilarity], result of:
          0.0055775414 = score(doc=5524,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.23783323 = fieldWeight in 5524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.109375 = fieldNorm(doc=5524)
      0.06666667 = coord(2/30)
    
    Footnote
    Reviewed in: Mitteilungsblatt VdB NW N.F. 43(1993) H.3, S.331-333 (A. Weber)
    Pages
    IV,124 S
  7. Schreiber, M.: Revisiting the g-index : the average number of citations in the g-core (2009) 0.00
    6.7611027E-4 = product of:
      0.010141654 = sum of:
        0.007887987 = weight(_text_:in in 3313) [ClassicSimilarity], result of:
          0.007887987 = score(doc=3313,freq=10.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.26884392 = fieldWeight in 3313, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3313)
        0.002253667 = weight(_text_:s in 3313) [ClassicSimilarity], result of:
          0.002253667 = score(doc=3313,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 3313, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=3313)
      0.06666667 = coord(2/30)
    
    Abstract
    The g-index is discussed in terms of the average number of citations of the publications in the g-core, showing that it combines features of the h-index and the A-index in one number. For a visualization, data of 8 famous physicists are presented and analyzed. In comparison with the h-index, the g-index increases between 67% and 144%, on average by a factor of 2.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.1, S.169-174
  8. Schreiber, M.: Inconsistencies in the highly cited publications indicator (2013) 0.00
    5.575784E-4 = product of:
      0.008363675 = sum of:
        0.006110009 = weight(_text_:in in 815) [ClassicSimilarity], result of:
          0.006110009 = score(doc=815,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 815, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=815)
        0.002253667 = weight(_text_:s in 815) [ClassicSimilarity], result of:
          0.002253667 = score(doc=815,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 815, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=815)
      0.06666667 = coord(2/30)
    
    Abstract
    One way of evaluating individual scientists is the determination of the number of highly cited publications, where the threshold is given by a large reference set. It is shown that this indicator behaves in a counterintuitive way, leading to inconsistencies in the ranking of different scientists.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1298-1302
  9. Schreiber, M.: Empirical evidence for the relevance of fractional scoring in the calculation of percentile rank scores (2013) 0.00
    5.43019E-4 = product of:
      0.008145284 = sum of:
        0.0061733257 = weight(_text_:in in 640) [ClassicSimilarity], result of:
          0.0061733257 = score(doc=640,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21040362 = fieldWeight in 640, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=640)
        0.0019719584 = weight(_text_:s in 640) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=640,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=640)
      0.06666667 = coord(2/30)
    
    Abstract
    Fractional scoring has been proposed to avoid inconsistencies in the attribution of publications to percentile rank classes. Uncertainties and ambiguities in the evaluation of percentile ranks can be demonstrated most easily with small data sets. But for larger data sets, an often large number of papers with the same citation count leads to the same uncertainties and ambiguities, which can be avoided by fractional scoring. This is demonstrated by four different empirical data sets with several thousand publications each, which are assigned to six percentile rank classes. Only by utilizing fractional scoring does the total score of all papers exactly reproduce the theoretical value in each case.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.861-867
  10. Schreiber, M.: Inconsistencies of recently proposed citation impact indicators and how to avoid them (2012) 0.00
    5.096362E-4 = product of:
      0.0076445425 = sum of:
        0.006236001 = weight(_text_:in in 459) [ClassicSimilarity], result of:
          0.006236001 = score(doc=459,freq=16.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21253976 = fieldWeight in 459, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=459)
        0.0014085418 = weight(_text_:s in 459) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=459,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 459, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=459)
      0.06666667 = coord(2/30)
    
    Abstract
    It is shown that under certain circumstances, in particular for small data sets, the recently proposed citation impact indicators I3(6PR) and R(6,k) behave inconsistently when additional papers or citations are taken into consideration. Three simple examples are presented, in which the indicators fluctuate strongly and the ranking of scientists in the evaluated group is sometimes completely mixed up by minor changes in the database. The erratic behavior is traced to the specific way in which weights are attributed to the six percentile rank classes, specifically for the tied papers. For 100 percentile rank classes, the effects will be less serious. For the six classes, it is demonstrated that a different way of assigning weights avoids these problems, although the nonlinearity of the weights for the different percentile rank classes can still lead to (much less frequent) changes in the ranking. This behavior is not undesired because it can be used to correct for differences in citation behavior in different fields. Remaining deviations from the theoretical value R(6,k) = 1.91 can be avoided by a new scoring rule: the fractional scoring. Previously proposed consistency criteria are amended by another property of strict independence at which a performance indicator should aim.
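    The theoretical value R(6,k) = 1.91 quoted above can be reproduced from the six-class percentile rank scheme; the class boundaries assumed below (50th, 75th, 90th, 95th, and 99th percentiles with weights 1 to 6) are the usual ones in this literature, stated here as an assumption since the abstract does not list them:

      widths  = [0.50, 0.25, 0.15, 0.05, 0.04, 0.01]  # expected share per class
      weights = [1, 2, 3, 4, 5, 6]                    # weight of each class
      print(round(sum(w * p for w, p in zip(weights, widths)), 2))  # 1.91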
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.10, S.2062-2073
  11. Schreiber, M.: A variant of the h-index to measure recent performance (2015) 0.00
    4.2247731E-4 = product of:
      0.0063371593 = sum of:
        0.004365201 = weight(_text_:in in 2262) [ClassicSimilarity], result of:
          0.004365201 = score(doc=2262,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.14877784 = fieldWeight in 2262, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2262)
        0.0019719584 = weight(_text_:s in 2262) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=2262,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 2262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2262)
      0.06666667 = coord(2/30)
    
    Abstract
    The predictive power of the h-index has been shown to depend on citations to rather old publications. This has raised doubts about its usefulness for predicting future scientific achievements. Here, I investigate a variant that considers only recent publications and is therefore more useful in academic hiring processes and for the allocation of research resources. It is simply defined in analogy to the usual h-index, but takes into account only publications from recent years, and it can easily be determined from the ISI Web of Knowledge.
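    A sketch of such a variant (the five-year window is an assumption for illustration; the paper's choice may differ):

      def recent_h_index(papers, current_year, window=5):
          # h-index computed only over publications of the last `window` years;
          # `papers` is a list of (publication_year, citation_count) pairs.
          recent = sorted((c for y, c in papers if y > current_year - window),
                          reverse=True)
          return sum(1 for i, c in enumerate(recent, start=1) if c >= i)

      papers = [(2014, 9), (2013, 12), (2011, 30), (2008, 45), (2006, 7)]
      print(recent_h_index(papers, current_year=2015))  # 3: only 2011-2014 count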
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.11, S.2373-2380
  12. Schreiber, M.: Do we need the g-index? (2013) 0.00
    3.854188E-4 = product of:
      0.0057812817 = sum of:
        0.003527615 = weight(_text_:in in 1113) [ClassicSimilarity], result of:
          0.003527615 = score(doc=1113,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.120230645 = fieldWeight in 1113, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1113)
        0.002253667 = weight(_text_:s in 1113) [ClassicSimilarity], result of:
          0.002253667 = score(doc=1113,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 1113, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=1113)
      0.06666667 = coord(2/30)
    
    Abstract
    Using a very small sample of 8 data sets, it was recently shown by De Visscher (2011) that the g-index is very close to the square root of the total number of citations. It was argued that there is no bibliometrically meaningful difference. Using another, somewhat larger empirical sample of 26 data sets, I show that the difference may be larger, and I argue in favor of the g-index.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2396-2399
  13. Schreiber, M.: Restricting the h-index to a citation time window : a case study of a timed Hirsch index (2014) 0.00
    1.6629338E-4 = product of:
      0.004988801 = sum of:
        0.004988801 = weight(_text_:in in 1563) [ClassicSimilarity], result of:
          0.004988801 = score(doc=1563,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.17003182 = fieldWeight in 1563, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1563)
      0.033333335 = coord(1/30)
    
    Abstract
    The h-index has been shown to increase in many cases mostly because of citations to rather old publications. This inertia can be circumvented by restricting the evaluation to a citation time window. Here I report results of an empirical study analyzing the evolution of the thus defined timed h-index as a function of the length of the citation time window.
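    Unlike the variant sketched under result 11 above, which filters the publications by age, the timed index filters the citations themselves. A sketch, under the assumption that "citation time window" means only citations received during the last `window` years are counted:

      def timed_h_index(citation_years, current_year, window):
          # `citation_years` holds one list per paper with the years in which it
          # was cited; only citations of the last `window` years are counted.
          counts = sorted((sum(1 for y in years if y > current_year - window)
                           for years in citation_years), reverse=True)
          return sum(1 for i, c in enumerate(counts, start=1) if c >= i)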
  14. Schreiber, M.: A case study of the modified Hirsch index hm accounting for multiple coauthors (2009) 0.00
    6.5731954E-5 = product of:
      0.0019719584 = sum of:
        0.0019719584 = weight(_text_:s in 2858) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=2858,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 2858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2858)
      0.033333335 = coord(1/30)
    
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.6, S.1274-1282

Languages

  • e (English) 12
  • d (German) 2

Types