Search (305 results, page 1 of 16)

  • Filter: theme_ss:"Informetrie"
  1. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.16
    0.15873328 = product of:
      0.47619984 = sum of:
        0.23809992 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.23809992 = score(doc=2188,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.23809992 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.23809992 = score(doc=2188,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.33333334 = coord(2/6)
    
    Content
    See: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
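    The indented score trees in these records are Lucene ClassicSimilarity explanations. A minimal Python sketch, assuming ClassicSimilarity's textbook formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf x idf x fieldNorm, queryWeight = idf x queryNorm, and coord(2/6) because two of six query clauses matched), reproduces the numbers for result 1 up to float precision:

      import math

      def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # ClassicSimilarity building blocks, named as in the explanation tree.
          tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011 for docFreq=24
          query_weight = idf * query_norm                    # 0.3177388
          field_weight = tf * idf * field_norm               # 0.7493574
          return query_weight * field_weight                 # ~0.23809992

      clause = clause_score(freq=2.0, doc_freq=24, max_docs=44218,
                            query_norm=0.03747799, field_norm=0.0625)
      score = (clause + clause) * (2 / 6)   # sum of both clauses, times coord(2/6)
      print(clause, score)                  # ~0.23809992 and ~0.15873328

    The same recipe reproduces every weight(...) clause in the remaining records; only freq, docFreq, and fieldNorm vary.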
  2. Dobrota, M.; Dobrota, M.: ARWU ranking uncertainty and sensitivity : what if the award factor was excluded? (2016) 0.08
    0.080006525 = product of:
      0.24001957 = sum of:
        0.22986408 = weight(_text_:ranking in 2652) [ClassicSimilarity], result of:
          0.22986408 = score(doc=2652,freq=20.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            1.1339021 = fieldWeight in 2652, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2652)
        0.0101555 = product of:
          0.030466499 = sum of:
            0.030466499 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
              0.030466499 = score(doc=2652,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 2652, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2652)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The Academic Ranking of World Universities (ARWU) uses six university performance indicators, including "Alumni" and "Awards"-the number of alumni and staff winning Nobel Prizes and Fields Medals. These two indicators raised doubts about the reliability of this ranking method because they are difficult to cope with. Recently, a newsletter was published featuring a reduced ARWU ranking list, leaving out Nobel Prize and Fields Medal indicators: the Alternative Ranking (Excluding Award Factor). We used uncertainty and sensitivity analyses to examine and compare the stability and confidence of the official ARWU ranking and the Alternative Ranking. The results indicate that if the ARWU ranking is reduced to the 4-indicator Alternative Ranking, it shows greater certainty and stability in ranking universities.
    Date
    22. 1.2016 14:40:53
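    The abstract does not spell out the computational details of the uncertainty and sensitivity analyses. The sketch below, with entirely hypothetical indicator scores and weights, illustrates the generic weight-perturbation approach to assessing the stability of a composite ranking:

      import numpy as np

      rng = np.random.default_rng(42)
      scores = rng.random((500, 4))   # hypothetical normalized indicator scores

      def ranks(weights):
          composite = scores @ weights
          return composite.argsort()[::-1].argsort() + 1   # rank 1 = best

      baseline = ranks(np.full(4, 0.25))
      # Re-rank under 1,000 random weight vectors drawn from a Dirichlet prior
      # and measure how far each university's rank drifts from the baseline.
      samples = np.stack([ranks(rng.dirichlet(np.ones(4))) for _ in range(1000)])
      drift = np.abs(samples - baseline).mean(axis=0)
      print("median rank drift:", np.median(drift), "max:", drift.max())

    A smaller drift under perturbation is what the authors mean by greater certainty and stability of the 4-indicator Alternative Ranking.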
  3. Chan, H.C.; Kim, H.-W.; Tan, W.C.: Information systems citation patterns from International Conference on Information Systems articles (2006) 0.05
    0.045352414 = product of:
      0.13605724 = sum of:
        0.12590174 = weight(_text_:ranking in 201) [ClassicSimilarity], result of:
          0.12590174 = score(doc=201,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.62106377 = fieldWeight in 201, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=201)
        0.0101555 = product of:
          0.030466499 = sum of:
            0.030466499 = weight(_text_:22 in 201) [ClassicSimilarity], result of:
              0.030466499 = score(doc=201,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 201, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=201)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Research patterns could enhance understanding of the Information Systems (IS) field. Citation analysis is the methodology commonly used to determine such research patterns. In this study, the citation methodology is applied to one of the top-ranked Information Systems conferences - International Conference on Information Systems (ICIS). Information is extracted from papers in the proceedings of ICIS 2000 to 2002. A total of 145 base articles and 4,226 citations are used. Research patterns are obtained using total citations, citations per journal or conference, and overlapping citations. We then provide the citation ranking of journals and conferences. We also examine the difference between the citation ranking in this study and the ranking of IS journals and IS conferences in other studies. Based on the comparison, we confirm that IS research is a multidisciplinary research area. We also identify the most cited papers and authors in the IS research area, and the organizations most active in producing papers in the top-rated IS conference. We discuss the findings and implications of the study.
    Date
    3. 1.2007 17:22:03
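    The core mechanics described in this abstract, counting citations per venue and ranking venues by their totals, reduce to simple tallying. A minimal sketch with hypothetical (citing paper, cited venue) pairs:

      from collections import Counter

      # Hypothetical (citing_paper, cited_venue) pairs pulled from reference lists.
      citations = [("p1", "MIS Quarterly"), ("p1", "ISR"), ("p2", "MIS Quarterly"),
                   ("p3", "ICIS"), ("p3", "MIS Quarterly"), ("p4", "ISR")]

      venue_totals = Counter(venue for _, venue in citations)
      for rank, (venue, n) in enumerate(venue_totals.most_common(), start=1):
          print(rank, venue, n)   # 1 MIS Quarterly 3 / 2 ISR 2 / 3 ICIS 1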
  4. Perianes-Rodriguez, A.; Ruiz-Castillo, J.: The impact of classification systems in the evaluation of the research performance of the Leiden Ranking universities (2018) 0.04
    0.04322958 = product of:
      0.12968874 = sum of:
        0.121149 = weight(_text_:ranking in 4374) [ClassicSimilarity], result of:
          0.121149 = score(doc=4374,freq=8.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5976189 = fieldWeight in 4374, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4374)
        0.008539738 = product of:
          0.025619213 = sum of:
            0.025619213 = weight(_text_:29 in 4374) [ClassicSimilarity], result of:
              0.025619213 = score(doc=4374,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19432661 = fieldWeight in 4374, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4374)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    In this article, we investigate the consequences of choosing different classification systems (namely, the way publications or journals are assigned to scientific fields) for the ranking of research units. We study the impact of this choice on the ranking of 500 universities in the 2013 edition of the Leiden Ranking in two cases. First, we compare a Web of Science (WoS) journal-level classification system, consisting of 236 subject categories, and a publication-level algorithmically constructed system, denoted G8, consisting of 5,119 clusters. The result is that the consequences of the move from the WoS to the G8 system using the Top 1% citation impact indicator are much greater than the consequences of this move using the Top 10% indicator. Second, we compare the G8 classification system and a publication-level alternative of the same family, the G6 system, consisting of 1,363 clusters. The result is that, although less important than in the previous case, the consequences of the move from the G6 to the G8 system under the Top 1% indicator are still of a large order of magnitude.
    Date
    29. 7.2018 14:41:34
  5. Meho, L.I.; Rogers, Y.: Citation counting, citation ranking, and h-index of human-computer interaction researchers : a comparison of Scopus and Web of Science (2008) 0.04
    0.03779368 = product of:
      0.11338104 = sum of:
        0.10491812 = weight(_text_:ranking in 2352) [ClassicSimilarity], result of:
          0.10491812 = score(doc=2352,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.51755315 = fieldWeight in 2352, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2352)
        0.008462917 = product of:
          0.025388751 = sum of:
            0.025388751 = weight(_text_:22 in 2352) [ClassicSimilarity], result of:
              0.025388751 = score(doc=2352,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19345059 = fieldWeight in 2352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2352)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR - a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if citations in journals only are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Google Scholar with those based on the union of Scopus and Web of Science. The study concludes that Scopus can be used as a sole data source for citation-based research and evaluation in HCI, especially when citations in conference proceedings are sought, and that researchers should manually calculate h scores instead of relying on system calculations.
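    The h-index compared across Scopus, Web of Science, and Google Scholar in this study has a simple operational definition: the largest h such that at least h of a researcher's papers have h or more citations each. A minimal sketch with hypothetical citation counts:

      def h_index(citation_counts):
          # Largest h such that at least h papers have >= h citations each.
          counts = sorted(citation_counts, reverse=True)
          h = 0
          for i, c in enumerate(counts, start=1):
              if c >= i:
                  h = i
              else:
                  break
          return h

      print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with >= 4 citations
      print(h_index([25, 8, 5, 3, 3]))   # 3: the heavily cited outlier adds nothing

    This also shows why the databases disagree: each supplies different citation counts as input, so the same researcher gets a different h.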
  6. Mayr, P.: Bradfordizing als Re-Ranking-Ansatz in Literaturinformationssystemen (2011) 0.04
    0.037682008 = product of:
      0.11304602 = sum of:
        0.102798335 = weight(_text_:ranking in 4292) [ClassicSimilarity], result of:
          0.102798335 = score(doc=4292,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 4292, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4292)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 4292) [ClassicSimilarity], result of:
              0.030743055 = score(doc=4292,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 4292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4292)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This article presents a re-ranking approach for search systems that can measurably improve retrieval of scientific literature. The non-text-oriented ranking method Bradfordizing is introduced and then evaluated, in the empirical part of the article, for its effectiveness on typical subject-specific search topics. Bradford's Law of Scattering (BLS), on which Bradfordizing is based, holds that the literature on any given subject area or topic is spread over zones of differing document concentration: a core zone with a high concentration of the literature is followed by zones of medium and low concentration. Bradfordizing accordingly sorts, or ranks, a document set by the so-called core journals. A retrieval test with 164 intellectually assessed queries against subject databases from the social and political sciences, economics, psychology, and medicine shows that documents from the core journals are judged relevant significantly more often than documents from the second zone or the peripheral journals. Implementing Bradfordizing and other re-ranking methods delivers immediate added value to the user.
    Date
    9. 2.2011 17:47:29
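    Bradfordizing, as described above, re-sorts a result set by the productivity of the publishing journal rather than by textual relevance. A minimal sketch of the core idea, with hypothetical data (the evaluated systems add refinements not shown here):

      from collections import Counter

      def bradfordize(hits):
          # hits: (doc_id, journal) tuples in the original retrieval order.
          productivity = Counter(journal for _, journal in hits)
          # Stable sort: core (high-frequency) journals first, original
          # relevance order as the tie-breaker within each journal.
          return sorted(hits, key=lambda hit: productivity[hit[1]], reverse=True)

      hits = [("d1", "J.Rare"), ("d2", "J.Core"), ("d3", "J.Core"),
              ("d4", "J.Mid"), ("d5", "J.Core"), ("d6", "J.Mid")]
      print(bradfordize(hits))   # d2, d3, d5 (core), then d4, d6, then d1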
  7. Rötzer, F.: Bindestriche in Titeln von Artikeln schaden der wissenschaftlichen Reputation (2019) 0.04
    0.037682008 = product of:
      0.11304602 = sum of:
        0.102798335 = weight(_text_:ranking in 5697) [ClassicSimilarity], result of:
          0.102798335 = score(doc=5697,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 5697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=5697)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 5697) [ClassicSimilarity], result of:
              0.030743055 = score(doc=5697,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5697)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Researchers claim to have found that the influential ranking by citation frequency and journal impact factor is flawed. One would expect that programs, AI-supported or not, could produce a ranking impartially according to fixed criteria. Yet unintended influences keep coming into play, and they can go unnoticed for a long time. With AI programs it has recently become clear that the selection of data can play a distorting role and lead to strange results.
    Date
    29. 6.2019 17:46:17
  8. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.04
    0.037651278 = product of:
      0.112953834 = sum of:
        0.102798335 = weight(_text_:ranking in 2742) [ClassicSimilarity], result of:
          0.102798335 = score(doc=2742,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.0101555 = product of:
          0.030466499 = sum of:
            0.030466499 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.030466499 = score(doc=2742,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings for improving the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
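    The abstract names the session features that feed the metric but not its functional form. The toy function below is entirely hypothetical; it merely illustrates how such features could be combined into a single session-level score, keeping the reported directions of correlation:

      def predicted_clickthrough(links_visited, reformulations, clicked_ranks):
          # Toy combination (hypothetical): per the reported findings, more
          # reformulation and higher-ranked prior clicks (smaller rank numbers)
          # both push the predicted clickthrough up.
          if not clicked_ranks:
              return 0.0
          mean_rank = sum(clicked_ranks) / len(clicked_ranks)
          return links_visited * (1 + reformulations) / mean_rank

      print(predicted_clickthrough(5, 2, [1, 3, 2]))   # 5 * 3 / 2 = 7.5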
  9. Waltman, L.; Calero-Medina, C.; Kosten, J.; Noyons, E.C.M.; Tijssen, R.J.W.; Eck, N.J. van; Leeuwen, T.N. van; Raan, A.F.J. van; Visser, M.S.; Wouters, P.: The Leiden ranking 2011/2012 : data collection, indicators, and interpretation (2012) 0.04
    0.03640075 = product of:
      0.21840449 = sum of:
        0.21840449 = weight(_text_:ranking in 514) [ClassicSimilarity], result of:
          0.21840449 = score(doc=514,freq=26.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            1.0773728 = fieldWeight in 514, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=514)
      0.16666667 = coord(1/6)
    
    Abstract
    The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking and a number of limitations of the ranking are pointed out.
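    Innovation (2) above, fractional counting, is easy to state concretely: a publication with n contributing universities adds 1/n (rather than 1) to each one's counted output. A small sketch with hypothetical data:

      # Each publication lists its contributing universities (hypothetical data).
      pubs = [["Leiden"],
              ["Leiden", "Delft"],
              ["Leiden", "Delft", "Utrecht"],
              ["Delft"]]

      full = sum(1 for p in pubs if "Leiden" in p)                 # 3
      fractional = sum(1 / len(p) for p in pubs if "Leiden" in p)  # 1 + 1/2 + 1/3
      print(full, round(fractional, 2))                            # 3 1.83

    Fractional counting prevents heavily collaborative universities from being credited with a full publication for every co-authored paper.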
  10. Folly, G.; Hajtman, B.; Nagy, J.I.; Ruff, I.: Some methodological problems in ranking scientists by citation analysis (1981) 0.03
    0.032306403 = product of:
      0.1938384 = sum of:
        0.1938384 = weight(_text_:ranking in 3275) [ClassicSimilarity], result of:
          0.1938384 = score(doc=3275,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.95619017 = fieldWeight in 3275, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.125 = fieldNorm(doc=3275)
      0.16666667 = coord(1/6)
    
  11. Haiqi, Z.: ¬The literature of Qigong : publication patterns and subject headings (1997) 0.03
    0.032217465 = product of:
      0.09665239 = sum of:
        0.084804304 = weight(_text_:ranking in 862) [ClassicSimilarity], result of:
          0.084804304 = score(doc=862,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=862)
        0.011848084 = product of:
          0.03554425 = sum of:
            0.03554425 = weight(_text_:22 in 862) [ClassicSimilarity], result of:
              0.03554425 = score(doc=862,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.2708308 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Reports results of a bibliometric study of the literature of Qigong: a relaxation technique used to teach patients to control their heart rate, blood pressure, temperature and other involuntary functions through controlled breathing. All articles indexed in the MEDLINE CD-ROM database between 1965 and 1995 were identified using the 'breathing exercises' MeSH term. The articles were analyzed for geographical and language distribution, and a ranking exercise enabled a core list of periodicals to be identified. In addition, the study shed light on the changing frequency of the MeSH terms and evaluated the research areas represented by these MeSH headings.
    Source
    International forum on information and documentation. 22(1997) no.3, S.38-44
  12. Vieira, E.S.; Cabral, J.A.S.; Gomes, J.A.N.F.: Definition of a model based on bibliometric indicators for assessing applicants to academic positions (2014) 0.03
    0.032217465 = product of:
      0.09665239 = sum of:
        0.084804304 = weight(_text_:ranking in 1221) [ClassicSimilarity], result of:
          0.084804304 = score(doc=1221,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 1221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1221)
        0.011848084 = product of:
          0.03554425 = sum of:
            0.03554425 = weight(_text_:22 in 1221) [ClassicSimilarity], result of:
              0.03554425 = score(doc=1221,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.2708308 = fieldWeight in 1221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1221)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    A model based on a set of bibliometric indicators is proposed for the prediction of the ranking of applicants to an academic position as produced by a committee of peers. The results show that a very small number of indicators may lead to a robust prediction in about 75% of the cases. We start with 12 indicators and build a few composite indicators by factor analysis. Following a discrete choice model, we arrive at three comparatively good predictive models. We conclude that these models have surprisingly good predictive power and may help peers in their selection process.
    Date
    18. 3.2014 18:22:21
  13. Meho, L.I.; Sonnenwald, D.H.: Citation ranking versus peer evaluation of senior faculty research performance : a case study of Kurdish scholarship (2000) 0.03
    0.032053016 = product of:
      0.19231808 = sum of:
        0.19231808 = weight(_text_:ranking in 4382) [ClassicSimilarity], result of:
          0.19231808 = score(doc=4382,freq=14.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.94869053 = fieldWeight in 4382, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4382)
      0.16666667 = coord(1/6)
    
    Abstract
    The purpose of this study is to analyze the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Other studies typically derive their peer evaluation data directly from referees, often in the form of ranking. This study uses two additional sources of peer evaluation data: citation content analysis and book review content analysis. Two main questions are investigated: (a) To what degree does citation ranking correlate with data from citation content analysis, book reviews, and peer ranking? (b) Is citation ranking a valid evaluative indicator of research performance of senior faculty members? This study shows that citation ranking can provide a valid indicator for comparative evaluation of senior faculty research performance.
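    Agreement between a citation ranking and a peer ranking of the same individuals is conventionally quantified with a rank correlation. A minimal sketch using SciPy's spearmanr on hypothetical ranks of six faculty members:

      from scipy.stats import spearmanr

      # Hypothetical ranks of six faculty members under the two methods.
      citation_rank = [1, 2, 3, 4, 5, 6]
      peer_rank = [2, 1, 3, 5, 4, 6]

      rho, p = spearmanr(citation_rank, peer_rank)
      print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")   # rho = 0.89 here

    A rho near 1 indicates the two evaluation methods order the faculty members in nearly the same way.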
  14. Jiang, Z.; Liu, X.; Chen, Y.: Recovering uncaptured citations in a scholarly network : a two-step citation analysis to estimate publication importance (2016) 0.03
    0.031401675 = product of:
      0.09420502 = sum of:
        0.085665286 = weight(_text_:ranking in 3018) [ClassicSimilarity], result of:
          0.085665286 = score(doc=3018,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.42258036 = fieldWeight in 3018, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3018)
        0.008539738 = product of:
          0.025619213 = sum of:
            0.025619213 = weight(_text_:29 in 3018) [ClassicSimilarity], result of:
              0.025619213 = score(doc=3018,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19432661 = fieldWeight in 3018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3018)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The citation relationships between publications, which are significant for assessing the importance of scholarly components within a network, have been used for various scientific applications. Missing citation metadata in scholarly databases, however, create problems for classical citation-based ranking algorithms and challenge the performance of citation-based retrieval systems. In this research, we utilize a two-step citation analysis method to investigate the importance of publications for which citation information is partially missing. First, we calculate the importance of the author and then use this importance to estimate the publication importance for some selected articles. To evaluate this method, we designed a simulation experiment, "random citation-missing," to test the two-step citation analysis that we carried out with the Association for Computing Machinery (ACM) Digital Library (DL). In this experiment, we simulated different scenarios in a large-scale scientific digital library, from high-quality citation data to very poor quality data. The results show that a two-step citation analysis can effectively uncover the importance of publications in different situations. More importantly, we found that the optimized impact from the importance of an author (first step) increases exponentially as citation quality decreases. The findings from this study can further enhance citation-based publication-ranking algorithms for real-world applications.
    Date
    12. 6.2016 20:31:29
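    The abstract sketches the two steps but not the estimator itself. The toy below, with hypothetical data and a deliberately simple estimator, shows the shape of the idea: derive an author score from the papers whose citation counts survived, then impute a citation-less paper's importance from its authors:

      from statistics import mean

      # Hypothetical corpus: paper -> (authors, citations or None if not captured).
      papers = {
          "p1": (["alice"], 40),
          "p2": (["alice", "bob"], 10),
          "p3": (["bob"], 2),
          "p4": (["alice", "bob"], None),   # citation data missing
      }

      def author_importance(author):
          # Step 1: score an author from the papers with surviving citation data.
          cited = [c for authors, c in papers.values()
                   if author in authors and c is not None]
          return mean(cited) if cited else 0.0

      # Step 2: impute the importance of the citation-less paper from its authors.
      authors, _ = papers["p4"]
      print(mean(author_importance(a) for a in authors))   # (25 + 6) / 2 = 15.5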
  15. Mayr, P.: Bradfordizing mit Katalogdaten : Alternative Sicht auf Suchergebnisse und Publikationsquellen durch Re-Ranking (2010) 0.03
    0.029675325 = product of:
      0.17805195 = sum of:
        0.17805195 = weight(_text_:ranking in 4301) [ClassicSimilarity], result of:
          0.17805195 = score(doc=4301,freq=12.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.87831676 = fieldWeight in 4301, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4301)
      0.16666667 = coord(1/6)
    
    Abstract
    For literature searches in scientific search systems, users expect a result list with as high a share of relevant, high-quality documents as possible. Alongside direct full-text access to the documents, the order and structure of the listed results (the ranking) has become a decisive factor for many users. Ranking, or relevance ranking, is distinguished from mere sortings, for example by year of publication, even though the conceptual line between such sortings and lists ranked 'by content relevance' cannot be drawn cleanly. Ranking ultimately leads users to concentrate on the top portion of a result set; the middle and lower portions of a result list are often no longer considered. Given the multitude of relevant and available information sources, it is therefore necessary to identify core areas within the search spaces and to present these to the user in highlighted form. Philipp Mayr summarizes here the results of his dissertation on 'Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in Digitalen Bibliotheken' (re-ranking based on Bradfordizing for federated search in digital libraries).
  16. Heine, M.M.: Bradford ranking conventions and their application to a growing literature (1998) 0.03
    0.028268103 = product of:
      0.16960861 = sum of:
        0.16960861 = weight(_text_:ranking in 1069) [ClassicSimilarity], result of:
          0.16960861 = score(doc=1069,freq=8.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.8366664 = fieldWeight in 1069, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1069)
      0.16666667 = coord(1/6)
    
    Abstract
    Bradford distributions describe the relationship between 'journal productivities' and 'journal rankings by productivity'. However, different ranking conventions exist, implying some ambiguity as to what the Bradford distribution 'is'. A need accordingly arises for a standard ranking convention to assist comparisons between empirical data, and also comparisons between empirical data and theoretical models. Five ranking conventions are described, including the one used originally by Bradford, along with suggested distinctions between 'Bradford data set', 'Bradford distribution', 'Bradford graph', 'Bradford model', and 'Bradford's law'. Constructions such as the Lotka distribution, Groos droop (generalised to accommodate growth as well as fall-off in the Bradford log-graph), Brookes hooks, and the slope and intercept of the Bradford log-graph are clarified on this basis.
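    One common convention of the kind discussed here partitions the productivity-ranked journal list into three zones of roughly equal article yield. The sketch below implements just that one convention, on hypothetical counts (the paper itself compares five conventions):

      from collections import Counter

      # Hypothetical journal productivities (articles per journal) for one topic.
      productivity = Counter({"J1": 90, "J2": 40, "J3": 30, "J4": 20,
                              "J5": 10, "J6": 5, "J7": 3, "J8": 2})

      target = sum(productivity.values()) / 3   # equal article yield per zone
      zones, running = [[]], 0
      for journal, n in productivity.most_common():   # rank by productivity
          if running >= target * len(zones) and len(zones) < 3:
              zones.append([])
          zones[-1].append(journal)
          running += n
      print(zones)   # [['J1'], ['J2', 'J3'], ['J4', 'J5', 'J6', 'J7', 'J8']]

    The widening zones (1, 2, then 5 journals for comparable yield) are exactly the scattering pattern Bradford's law describes.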
  17. So, C.Y.K.: Citation ranking versus expert judgement in evaluating communication scholars : effects of research specialty size and individual prominence (1998) 0.03
    0.027978165 = product of:
      0.16786899 = sum of:
        0.16786899 = weight(_text_:ranking in 327) [ClassicSimilarity], result of:
          0.16786899 = score(doc=327,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.828085 = fieldWeight in 327, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=327)
      0.16666667 = coord(1/6)
    
    Abstract
    Numerous attempts have been made to validate the use of citations as an evaluation method by comparing it with peer review. Unlike past studies using journals, research articles or universities as the subject matter, the present study extends the comparison to the ranking of individual scholars. Results show that citation ranking and expert judgement of communication scholars are highly correlated. The citation methods and the expert judgement method are found to work better in smaller research areas and yield more valid evaluation results for more prominent scholars
  18. Sen, B.K.; Pandalai, T.A.; Karanjai, A.: Ranking of scientists - a new approach (1998) 0.03
    0.027978165 = product of:
      0.16786899 = sum of:
        0.16786899 = weight(_text_:ranking in 5113) [ClassicSimilarity], result of:
          0.16786899 = score(doc=5113,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.828085 = fieldWeight in 5113, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=5113)
      0.16666667 = coord(1/6)
    
    Abstract
    A formula for the ranking of scientists based on diachronous citation counts is proposed. The paper proceeds from the fact that the citation generation potential (CGP) is not the same for all papers: it differs from paper to paper and, to a certain extent, also depends on the subject domain of the papers. The method of ranking proposed in no way replaces peer review. It merely acts as an aid for peers, helping them arrive at a better judgement.
  19. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.03
    0.027645696 = product of:
      0.082937084 = sum of:
        0.0726894 = weight(_text_:ranking in 3090) [ClassicSimilarity], result of:
          0.0726894 = score(doc=3090,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 3090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 3090) [ClassicSimilarity], result of:
              0.030743055 = score(doc=3090,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    Date
    9. 1.2005 19:20:29
  20. Sanderson, M.: Revisiting h measured on UK LIS and IR academics (2008) 0.03
    0.027645696 = product of:
      0.082937084 = sum of:
        0.0726894 = weight(_text_:ranking in 1867) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1867,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1867)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 1867) [ClassicSimilarity], result of:
              0.030743055 = score(doc=1867,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 1867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1867)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    A brief communication appearing in this journal ranked UK-based LIS and (some) IR academics by their h-index using data derived from the Thomson ISI Web of Science(TM) (WoS). In this brief communication, the same academics were re-ranked, using other popular citation databases. It was found that for academics who publish more in computer science forums, their h was significantly different due to highly cited papers missed by WoS; consequently, their rank changed substantially. The study was widened to a broader set of UK-based LIS and IR academics in which results showed similar statistically significant differences. A variant of h, hmx, was introduced that allowed a ranking of the academics using all citation databases together.
    Date
    1. 6.2008 12:29:25

Types

  • a 296
  • m 5
  • el 4
  • r 3
  • b 1
  • s 1
  • x 1