Search (100 results, page 1 of 5)

  • theme_ss:"Informetrie"
  • year_i:[2000 TO 2010}
  1. Chan, H.C.; Kim, H.-W.; Tan, W.C.: Information systems citation patterns from International Conference on Information Systems articles (2006) 0.05
    0.045352414 = product of:
      0.13605724 = sum of:
        0.12590174 = weight(_text_:ranking in 201) [ClassicSimilarity], result of:
          0.12590174 = score(doc=201,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.62106377 = fieldWeight in 201, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=201)
        0.0101555 = product of:
          0.030466499 = sum of:
            0.030466499 = weight(_text_:22 in 201) [ClassicSimilarity], result of:
              0.030466499 = score(doc=201,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 201, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=201)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Research patterns could enhance understanding of the Information Systems (IS) field. Citation analysis is the methodology commonly used to determine such research patterns. In this study, the citation methodology is applied to one of the top-ranked Information Systems conferences - International Conference on Information Systems (ICIS). Information is extracted from papers in the proceedings of ICIS 2000 to 2002. A total of 145 base articles and 4,226 citations are used. Research patterns are obtained using total citations, citations per journal or conference, and overlapping citations. We then provide the citation ranking of journals and conferences. We also examine the difference between the citation ranking in this study and the ranking of IS journals and IS conferences in other studies. Based on the comparison, we confirm that IS research is a multidisciplinary research area. We also identify the most cited papers and authors in the IS research area, and the organizations most active in producing papers in the top-rated IS conference. We discuss the findings and implications of the study.
    Date
    3. 1.2007 17:22:03
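    The indented breakdowns above each result are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) model. Below is a minimal Python sketch reconstructing the first result's weight for the term "ranking"; the constants are copied directly from the trace above, and the formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) are the standard ClassicSimilarity definitions, which reproduce the listed numbers.

        import math

        # Constants copied from the explain trace of result 1 (term "ranking").
        freq = 6.0                  # occurrences of "ranking" in doc 201
        doc_freq, max_docs = 537, 44218
        query_norm = 0.03747799
        field_norm = 0.046875

        idf = 1 + math.log(max_docs / (doc_freq + 1))   # 5.4090285
        tf = math.sqrt(freq)                            # 2.4494898
        query_weight = idf * query_norm                 # 0.20271951
        field_weight = tf * idf * field_norm            # 0.62106377
        term_score = query_weight * field_weight        # 0.12590174

        # The listed document score adds the "22" clause (already multiplied
        # by its inner coord(1/3)) and applies the outer coord(2/6).
        doc_score = (term_score + 0.0101555) * (2 / 6)
        print(round(doc_score, 9))                      # ~0.045352414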
  2. Meho, L.I.; Rogers, Y.: Citation counting, citation ranking, and h-index of human-computer interaction researchers : a comparison of Scopus and Web of Science (2008) 0.04
    0.03779368 = product of:
      0.11338104 = sum of:
        0.10491812 = weight(_text_:ranking in 2352) [ClassicSimilarity], result of:
          0.10491812 = score(doc=2352,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.51755315 = fieldWeight in 2352, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2352)
        0.008462917 = product of:
          0.025388751 = sum of:
            0.025388751 = weight(_text_:22 in 2352) [ClassicSimilarity], result of:
              0.025388751 = score(doc=2352,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19345059 = fieldWeight in 2352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2352)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR - a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if citations in journals only are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Google Scholar with those based on the union of Scopus and Web of Science. The study concludes that Scopus can be used as a sole data source for citation-based research and evaluation in HCI, especially when citations in conference proceedings are sought, and that researchers should manually calculate h scores instead of relying on system calculations.
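    The closing recommendation, computing h manually rather than relying on system calculations, is a short exercise once citation counts are in hand. A minimal sketch, with invented counts standing in for a merged Scopus/Web of Science citation list: h is the largest rank at which the rank-th most cited paper still has at least that many citations.

        def h_index(citations):
            """Largest h such that h papers have at least h citations each."""
            h = 0
            for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        # Invented counts, e.g. merged from Scopus and Web of Science.
        print(h_index([45, 32, 19, 12, 8, 5, 4, 2, 1]))  # -> 5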
  3. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.04
    0.037651278 = product of:
      0.112953834 = sum of:
        0.102798335 = weight(_text_:ranking in 2742) [ClassicSimilarity], result of:
          0.102798335 = score(doc=2742,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.0101555 = product of:
          0.030466499 = sum of:
            0.030466499 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.030466499 = score(doc=2742,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction with search engine results using the number of links visited, the number of queries a user submits, and the rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  4. Meho, L.I.; Sonnenwald, D.H.: Citation ranking versus peer evaluation of senior faculty research performance : a case study of Kurdish scholarship (2000) 0.03
    0.032053016 = product of:
      0.19231808 = sum of:
        0.19231808 = weight(_text_:ranking in 4382) [ClassicSimilarity], result of:
          0.19231808 = score(doc=4382,freq=14.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.94869053 = fieldWeight in 4382, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4382)
      0.16666667 = coord(1/6)
    
    Abstract
    The purpose of this study is to analyze the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Other studies typically derive their peer evaluation data directly from referees, often in the form of ranking. This study uses two additional sources of peer evaluation data: citation content analysis and book review content analysis. Two main questions are investigated: (a) To what degree does citation ranking correlate with data from citation content analysis, book reviews, and peer ranking? (b) Is citation ranking a valid evaluative indicator of research performance of senior faculty members? This study shows that citation ranking can provide a valid indicator for comparative evaluation of senior faculty research performance.
  5. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.03
    0.027645696 = product of:
      0.082937084 = sum of:
        0.0726894 = weight(_text_:ranking in 3090) [ClassicSimilarity], result of:
          0.0726894 = score(doc=3090,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 3090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 3090) [ClassicSimilarity], result of:
              0.030743055 = score(doc=3090,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    Date
    9. 1.2005 19:20:29
  6. Sanderson, M.: Revisiting h measured on UK LIS and IR academics (2008) 0.03
    0.027645696 = product of:
      0.082937084 = sum of:
        0.0726894 = weight(_text_:ranking in 1867) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1867,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1867)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 1867) [ClassicSimilarity], result of:
              0.030743055 = score(doc=1867,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 1867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1867)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    A brief communication appearing in this journal ranked UK-based LIS and (some) IR academics by their h-index using data derived from the Thomson ISI Web of Science(TM) (WoS). In this brief communication, the same academics were re-ranked, using other popular citation databases. It was found that for academics who publish more in computer science forums, their h was significantly different due to highly cited papers missed by WoS; consequently, their rank changed substantially. The study was widened to a broader set of UK-based LIS and IR academics in which results showed similar statistically significant differences. A variant of h, hmx, was introduced that allowed a ranking of the academics using all citation databases together.
    Date
    1. 6.2008 12:29:25
  7. Cheng, S.; YunTao, P.; JunPeng, Y.; Hong, G.; ZhengLu, Y.; ZhiYu, H.: PageRank, HITS and impact factor for journal ranking (2009) 0.02
    0.022574786 = product of:
      0.13544871 = sum of:
        0.13544871 = weight(_text_:ranking in 2513) [ClassicSimilarity], result of:
          0.13544871 = score(doc=2513,freq=10.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.66815823 = fieldWeight in 2513, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2513)
      0.16666667 = coord(1/6)
    
    Abstract
    Journal citation measures are among the most widely used bibliometric tools. The best-known measure is the ISI Impact Factor: under the standard definition, the impact factor of journal j in a given year is the average number of citations received by papers published in journal j in the previous two years. However, the impact factor has intrinsic limitations: it is a ranking measure based fundamentally on a pure count of the in-degrees of nodes in the network, and its calculation does not take into account the "impact" or "prestige" of the journals in which the citations appear. Google's PageRank algorithm and Kleinberg's HITS method are webpage ranking algorithms; they compute the scores of webpages based on a combination of the number of hyperlinks that point to a page and the status of the pages the hyperlinks originate from, so that a page is important if it is pointed to by other important pages. We demonstrate how the popular webpage algorithms PageRank and HITS can be used to rank journals, compare the ISI Impact Factor, PageRank, and HITS for journal ranking, compute PageRank and HITS both with and without self-citations, and discuss the merits, shortcomings, and scope of application of the various algorithms for ranking journals.
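    A minimal power-iteration sketch of the idea the abstract describes, applied to an invented 3-journal citation matrix; zeroing the diagonal gives the "without self-citations" variant discussed above. The journals and counts are illustrative only.

        import numpy as np

        journals = ["J1", "J2", "J3"]
        # cites[i, j] = citations from journal i to journal j (invented data;
        # zero diagonal = self-citations excluded).
        cites = np.array([[0.0, 3.0, 1.0],
                          [2.0, 0.0, 4.0],
                          [1.0, 1.0, 0.0]])

        def pagerank(C, d=0.85, iters=100):
            n = C.shape[0]
            row_sums = C.sum(axis=1, keepdims=True)
            # Row-normalize to transition probabilities; dangling rows -> uniform.
            P = np.where(row_sums > 0, C / np.where(row_sums == 0, 1, row_sums), 1 / n)
            r = np.full(n, 1 / n)
            for _ in range(iters):
                r = (1 - d) / n + d * (P.T @ r)
            return r

        for name, score in sorted(zip(journals, pagerank(cites)), key=lambda t: -t[1]):
            print(f"{name}: {score:.4f}")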
  8. Sidiropoulos, A.; Manolopoulos, Y.: A new perspective to automatically rank scientific conferences using digital libraries (2005) 0.02
    0.020983625 = product of:
      0.12590174 = sum of:
        0.12590174 = weight(_text_:ranking in 1011) [ClassicSimilarity], result of:
          0.12590174 = score(doc=1011,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.62106377 = fieldWeight in 1011, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1011)
      0.16666667 = coord(1/6)
    
    Abstract
    Citation analysis is performed in order to evaluate authors and scientific collections, such as journals and conference proceedings. Currently, two major systems exist that perform citation analysis: the Science Citation Index (SCI) by the Institute for Scientific Information (ISI) and CiteSeer by the NEC Research Institute. The SCI, mostly a manual system until recently, is based on the notion of the ISI Impact Factor, which has been used extensively for citation analysis purposes. On the other hand, the CiteSeer system is an automatically built digital library using agent technology, also based on the notion of the ISI Impact Factor. In this paper, we investigate new alternative notions besides the ISI Impact Factor, in order to provide a novel approach aiming at ranking scientific collections. Furthermore, we present a web-based system that has been built by extracting data from the Databases and Logic Programming (DBLP) website of the University of Trier. Our system, by using the new citation metrics, emerges as a useful tool for ranking scientific collections. In this respect, some first remarks are presented, e.g. on ranking conferences related to databases.
  9. Ding, Y.; Yan, E.; Frazho, A.; Caverlee, J.: PageRank for ranking authors in co-citation networks (2009) 0.02
    0.020983625 = product of:
      0.12590174 = sum of:
        0.12590174 = weight(_text_:ranking in 3161) [ClassicSimilarity], result of:
          0.12590174 = score(doc=3161,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.62106377 = fieldWeight in 3161, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=3161)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper studies how varied damping factors in the PageRank algorithm influence the ranking of authors and proposes weighted PageRank algorithms. We selected the 108 most highly cited authors in the information retrieval (IR) area from the 1970s to 2008 to form the author co-citation network. We calculated the ranks of these 108 authors based on PageRank with the damping factor ranging from 0.05 to 0.95. In order to test the relationship between different measures, we compared PageRank and weighted PageRank results with the citation ranking, h-index, and centrality measures. We found that in our author co-citation network, citation rank is highly correlated with PageRank with different damping factors and also with different weighted PageRank algorithms; citation rank and PageRank are not significantly correlated with centrality measures; and h-index rank does not significantly correlate with centrality measures but does significantly correlate with the other measures. The key factor that has an impact on the PageRank of authors in the author co-citation network is being co-cited with important authors.
  10. Feitelson, D.G.; Yovel, U.: Predictive ranking of computer scientists using CiteSeer data (2004) 0.02
    0.019988567 = product of:
      0.1199314 = sum of:
        0.1199314 = weight(_text_:ranking in 1259) [ClassicSimilarity], result of:
          0.1199314 = score(doc=1259,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5916125 = fieldWeight in 1259, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1259)
      0.16666667 = coord(1/6)
    
    Abstract
    The increasing availability of digital libraries with cross-citation data on the Internet enables new studies in bibliometrics. The paper focuses on the list of 10,000 top-cited authors in computer science available as part of CiteSeer. Using data from several consecutive lists, a model of how authors accrue citations over time is constructed. By comparing the rate at which individual authors accrue citations with the average rate, predictions are made of how their ranking in the list will change in the future.
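    A minimal sketch of the prediction step described above, assuming two snapshots of the top-cited list are available; all names and numbers are invented. Authors whose citations accrue faster than the average rate are predicted to move up.

        # Invented citation counts from two consecutive list snapshots.
        earlier = {"Smith": 900, "Jones": 850, "Lee": 800}
        later   = {"Smith": 950, "Jones": 940, "Lee": 830}

        rates = {a: later[a] - earlier[a] for a in earlier}   # accrual per period
        avg_rate = sum(rates.values()) / len(rates)           # ~56.7 here

        # Extrapolate one period ahead and re-rank.
        predicted = {a: later[a] + rates[a] for a in later}
        print(sorted(predicted, key=predicted.get, reverse=True))
        # -> ['Jones', 'Smith', 'Lee']: Jones (rate 90 > average) overtakes Smith.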
  11. Ahlgren, P.; Jarneving, B.; Rousseau, R.: Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient (2003) 0.02
    0.018409979 = product of:
      0.055229932 = sum of:
        0.0484596 = weight(_text_:ranking in 5171) [ClassicSimilarity], result of:
          0.0484596 = score(doc=5171,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.23904754 = fieldWeight in 5171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=5171)
        0.0067703333 = product of:
          0.020311 = sum of:
            0.020311 = weight(_text_:22 in 5171) [ClassicSimilarity], result of:
              0.020311 = score(doc=5171,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.15476047 = fieldWeight in 5171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5171)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Ahlgren, Jarneving, and Rousseau review accepted procedures for author co-citation analysis, first pointing out that since in the raw data matrix the row and column values are identical, i.e. the co-citation count of two authors, there is no clear choice for the diagonal values. They suggest the number of times an author has been co-cited with himself, excluding self-citation, rather than the common treatment as zeros or as missing values. When the matrix is converted to a similarity matrix, the normal procedure is to create a matrix of Pearson's r coefficients between data vectors. Ranking by r, by co-citation frequency, and by intuition can easily yield three different orders. It would seem necessary that adding zeros to the matrix should not affect the value or the relative order of similarity measures, but it is shown that this is not the case with Pearson's r. Using 913 bibliographic descriptions from the Web of Science of articles from JASIS and Scientometrics, author names were extracted and edited, and 12 information retrieval authors and 12 bibliometric authors, each from the top 100 most cited, were selected. Co-citation and r-value (diagonal elements treated as missing) matrices were constructed, and then reconstructed in expanded form. Adding zeros can change both the r value and the ordering of the authors based upon that value. A chi-squared distance measure would not violate these requirements, nor would the cosine coefficient. It is also argued that co-citation data are ordinal data, since there is no assurance of an absolute zero number of co-citations, and thus Pearson is not appropriate. The number of ties in co-citation data makes the use of the Spearman rank-order coefficient problematic.
    Date
    9. 7.2006 10:22:35
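    The paper's central objection, that padding two co-citation vectors with matched zeros changes Pearson's r while a measure such as the cosine is unaffected, is easy to verify numerically. A minimal sketch with invented co-citation counts:

        import math

        def pearson(x, y):
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sx = math.sqrt(sum((a - mx) ** 2 for a in x))
            sy = math.sqrt(sum((b - my) ** 2 for b in y))
            return cov / (sx * sy)

        def cosine(x, y):
            dot = sum(a * b for a, b in zip(x, y))
            nx = math.sqrt(sum(a * a for a in x))
            ny = math.sqrt(sum(b * b for b in y))
            return dot / (nx * ny)

        x, y = [5, 3, 1, 4], [2, 6, 3, 1]        # invented counts
        xz, yz = x + [0, 0, 0], y + [0, 0, 0]    # same vectors, zeros added

        print(pearson(x, y), pearson(xz, yz))    # the two values differ
        print(cosine(x, y), cosine(xz, yz))      # identical: zeros add nothing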
  12. Mayr, P.: Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in Digitalen Bibliotheken (2009) 0.02
    0.018059827 = product of:
      0.108358964 = sum of:
        0.108358964 = weight(_text_:ranking in 4302) [ClassicSimilarity], result of:
          0.108358964 = score(doc=4302,freq=10.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5345266 = fieldWeight in 4302, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=4302)
      0.16666667 = coord(1/6)
    
    Abstract
    Despite the large document sets involved in cross-database literature searches, academic users expect the highest possible proportion of relevant, high-quality documents in their result sets. Alongside direct full-text access to documents, the order and structure of the listed results (ranking) now plays a decisive role in the design of search systems. Users furthermore expect flexible information systems that, among other things, allow them to influence the ranking of documents or to apply alternative ranking methods. This thesis presents two value-added techniques for search systems that address typical problems in searching for scholarly literature and can thereby measurably improve the search situation. The two value-added services, treatment of semantic heterogeneity using cross-concordances and re-ranking based on Bradfordizing, which come into play at different stages of the search, are described in detail and evaluated in the empirical part of the thesis with respect to their effectiveness for typical subject-specific searches. The primary goal of the dissertation is to examine whether the alternative re-ranking method Bradfordizing presented here is, first, operable in the domain of bibliographic databases and, second, likely to be profitably deployed in information systems and offered to users. Topics and data from two evaluation projects (CLEF and KoMoHe) were used for the tests. The intellectually assessed documents come from a total of seven scholarly databases covering the social sciences, political science, economics, psychology, and medicine. The evaluation of the cross-concordances (82 topics in total) shows that retrieval results improve significantly for all cross-concordances; it also shows that interdisciplinary cross-concordances have the strongest (positive) effect on search results. The evaluation of re-ranking by Bradfordizing (164 topics in total) shows that, for most test series, documents from the core zone (core journals) yield significantly higher precision than documents from zone 2 and zone 3 (peripheral journals). For both journals and monographs, this relevance advantage after Bradfordizing can be demonstrated empirically across a very broad range of topics and queries on two independent document corpora.
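    A minimal sketch of the Bradfordizing re-ranking described above: count how many hits each journal contributes to the result set, then list documents from the high-frequency core journals ahead of those from peripheral ones. Document IDs and journal names are invented.

        from collections import Counter

        # Invented result set: (document id, source journal).
        results = [("d1", "Scientometrics"), ("d2", "JASIST"),
                   ("d3", "Scientometrics"), ("d4", "Online Review"),
                   ("d5", "Scientometrics"), ("d6", "JASIST")]

        def bradfordize(hits):
            """Core journals (most hits in this result set) first."""
            freq = Counter(journal for _, journal in hits)
            return sorted(hits, key=lambda hit: -freq[hit[1]])

        for doc, journal in bradfordize(results):
            print(doc, journal)   # Scientometrics documents surface first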
  13. Aguillo, I.F.; Granadino, B.; Ortega, J.L.; Prieto, J.A.: Scientific research activity and communication measured with cybermetrics indicators (2006) 0.02
    0.017133057 = product of:
      0.102798335 = sum of:
        0.102798335 = weight(_text_:ranking in 5898) [ClassicSimilarity], result of:
          0.102798335 = score(doc=5898,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 5898, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=5898)
      0.16666667 = coord(1/6)
    
    Abstract
    To test the feasibility of cybermetric indicators for describing and ranking university activities as shown in their Web sites, a large set of 9,330 institutions worldwide was compiled and analyzed. Using search engines' advanced features, size (number of pages), visibility (number of external inlinks), and number of rich files (pdf, ps, doc, and ppt formats) were obtained for each of the institutional domains of the universities. We found a statistically significant correlation between a Web ranking built on a combination of Webometric data and other university rankings based on bibliometric and other indicators. Results show that cybermetric measures could be useful for reflecting the contribution of technologically oriented institutions, increasing the visibility of developing countries, and improving the rankings based on Science Citation Index (SCI) data with known biases.
  14. Burrell, Q.L.: Egghe's construction of Lorenz curves resolved (2007) 0.02
    0.017133057 = product of:
      0.102798335 = sum of:
        0.102798335 = weight(_text_:ranking in 624) [ClassicSimilarity], result of:
          0.102798335 = score(doc=624,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 624, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=624)
      0.16666667 = coord(1/6)
    
    Abstract
    In a recent article (Burrell, 2006), the author pointed out that the version of Lorenz concentration theory presented by Egghe (2005a, 2005b) does not conform to the classical statistical/econometric approach. Rousseau (2007) asserts confusion on our part and a failure to grasp Egghe's construction, even though we simply reported what Egghe stated. Here the author shows that Egghe's construction, rather than including the standard case as claimed by Rousseau, actually leads to the Leimkuhler curve of the dual function, in the sense of Egghe. (Note that here we distinguish between the Lorenz curve, a convex form arising from ranking from smallest to largest, and the Leimkuhler curve, a concave form arising from ranking from largest to smallest. The two presentations are equivalent. See Burrell, 1991, 2005; Rousseau, 2007.)
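    The distinction drawn here is easy to see numerically: both curves plot the cumulative share of production against the cumulative share of sources and differ only in the sort order. A minimal sketch with invented counts:

        counts = [1, 2, 3, 10, 24]   # invented source productivities
        total = sum(counts)

        def cumulative_shares(values):
            """Points (k/n, cumulative share of total) for k = 1..n."""
            running, points = 0.0, []
            for k, v in enumerate(values, start=1):
                running += v
                points.append((k / len(values), running / total))
            return points

        # Lorenz: smallest to largest -> convex, below the diagonal.
        print(cumulative_shares(sorted(counts)))
        # Leimkuhler: largest to smallest -> concave, above the diagonal.
        print(cumulative_shares(sorted(counts, reverse=True)))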
  15. Bletsas, A.; Sahalos, J.N.: Hirsch index rankings require scaling and higher moment (2009) 0.02
    0.017133057 = product of:
      0.102798335 = sum of:
        0.102798335 = weight(_text_:ranking in 3141) [ClassicSimilarity], result of:
          0.102798335 = score(doc=3141,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 3141, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=3141)
      0.16666667 = coord(1/6)
    
    Abstract
    Simple bibliometric indicators, such as the average number of citations per publication per researcher, or the recently proposed Hirsch index (h-index), are nowadays tracked by online repositories, including the Web of Science (WOS), and often affect critical decision making. This work proposes an appropriate scaling of the h-index based on its probability distribution, calculated for any underlying citation distribution. The proposed approach outperforms existing index estimation models that have focused on the expected value only (i.e., the first moment). Furthermore, it is shown that the average number of citations per publication per scientific field, the total number of publications per researcher, and the researcher's measured h-index value, expected value, and standard deviation constitute the minimum information required for meaningful h-index ranking campaigns; otherwise, contradictory ranking results emerge. This work may potentially shed light on (current or future) large-scale, h-index-based bibliometric evaluations.
  16. Woeginger, G.J.: Generalizations of Egghe's g-index (2009) 0.02
    0.016153201 = product of:
      0.0969192 = sum of:
        0.0969192 = weight(_text_:ranking in 2857) [ClassicSimilarity], result of:
          0.0969192 = score(doc=2857,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.47809508 = fieldWeight in 2857, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=2857)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper introduces the generalized Egghe-indices as a new family of scientific impact measures for ranking the output of scientific researchers. The definition of this family is strongly inspired by Egghe's well-known g-index. The main contribution of the paper is a family of axiomatic characterizations that characterize every generalized Egghe-index in terms of four axioms.
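    For orientation, the g-index being generalized here is the largest g such that the g most cited papers together have received at least g^2 citations. A minimal sketch with invented citation counts:

        def g_index(citations):
            """Largest g with the top g papers totalling at least g*g citations."""
            total, g = 0, 0
            for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
                total += cites
                if total >= rank * rank:
                    g = rank
            return g

        print(g_index([45, 32, 19, 12, 8, 5, 4, 2, 1]))  # -> 9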
  17. Egghe, L.; Rousseau, R.; Rousseau, S.: TOP-curves (2007) 0.01
    0.014134051 = product of:
      0.084804304 = sum of:
        0.084804304 = weight(_text_:ranking in 50) [ClassicSimilarity], result of:
          0.084804304 = score(doc=50,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 50, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=50)
      0.16666667 = coord(1/6)
    
    Abstract
    Several characteristics of classical Lorenz curves make them unsuitable for the study of a group of top performers. TOP-curves, defined as a kind of mirror image of TIP-curves used in poverty studies, are shown to possess the properties necessary for adequate empirical ranking of various data arrays, based on the properties of the highest performers (i.e., the core). TOP-curves and essential TOP-curves, also introduced in this article, simultaneously represent the incidence, intensity, and inequality among the top. It is shown that the TOP-dominance partial order, introduced in this article, is stronger than the Lorenz dominance order. In this way, this article contributes to the study of cores, a central issue in applied informetrics.
  18. Yan, E.; Ding, Y.: Applying centrality measures to impact analysis : a coauthorship network analysis (2009) 0.01
    0.014134051 = product of:
      0.084804304 = sum of:
        0.084804304 = weight(_text_:ranking in 3083) [ClassicSimilarity], result of:
          0.084804304 = score(doc=3083,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 3083, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3083)
      0.16666667 = coord(1/6)
    
    Abstract
    Many studies on coauthorship networks focus on network topology and network statistical mechanics. This article takes a different approach by studying micro-level network properties with the aim of applying centrality measures to impact analysis. Using coauthorship data from 16 journals in the field of library and information science (LIS) with a time span of 20 years (1988-2007), we construct an evolving coauthorship network and calculate four centrality measures (closeness centrality, betweenness centrality, degree centrality, and PageRank) for authors in this network. We find that the four centrality measures are significantly correlated with citation counts. We also discuss the usability of centrality measures in author ranking and suggest that centrality measures can be useful indicators for impact analysis.
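    A minimal sketch of the same four measures on an invented coauthorship graph, using the networkx library (an edge means two authors co-wrote at least one paper):

        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"),
                          ("C", "D"), ("D", "E")])   # invented coauthorships

        measures = {
            "degree": nx.degree_centrality(G),
            "closeness": nx.closeness_centrality(G),
            "betweenness": nx.betweenness_centrality(G),
            "pagerank": nx.pagerank(G),
        }
        for name, scores in measures.items():
            order = sorted(scores, key=scores.get, reverse=True)
            print(name, order)   # the bridging authors C and D rank highly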
  19. Egghe, L.; Rousseau, R.; Hooydonk, G. van: Methods for accrediting publications to authors or countries : consequences for evaluation studies (2000) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 4384) [ClassicSimilarity], result of:
          0.0726894 = score(doc=4384,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 4384, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4384)
      0.16666667 = coord(1/6)
    
    Abstract
    One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are brought together to obtain country scores, or department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. Consequently, a ranking of countries, universities, research groups, or authors based on one particular accrediting method does not contain an absolute truth about their relative importance.
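    The point that the scoring method alone can reorder a ranking is easy to illustrate: total counting gives each coauthor full credit for a paper, while fractional counting gives each of k coauthors credit 1/k. A minimal sketch with invented author lists:

        from collections import defaultdict

        # Invented papers: each entry is one article's author list.
        papers = [["A", "X", "Y"], ["A", "X", "Z"], ["A", "Y", "Z"], ["B"], ["B"]]

        total, fractional = defaultdict(int), defaultdict(float)
        for authors in papers:
            for a in authors:
                total[a] += 1                      # full credit to everyone
                fractional[a] += 1 / len(authors)  # credit split 1/k

        rank = lambda scores: sorted(scores, key=scores.get, reverse=True)
        print(rank(total))       # A leads with 3 (co-authored) papers ...
        print(rank(fractional))  # ... but B overtakes A with 2 solo papers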
  20. Oppenheim, C.: Using the h-Index to rank influential British researchers in information science and librarianship (2007) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 780) [ClassicSimilarity], result of:
          0.0726894 = score(doc=780,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
      0.16666667 = coord(1/6)
    
    Abstract
    The recently developed h-index has been applied to the literature produced by senior British-based academics in librarianship and information science. The majority of those evaluated currently hold senior positions in UK information science and librarianship departments; however, a small number of staff in other departments and retired "founding fathers" were analyzed as well. The analysis was carried out using the Web of Science (Thomson Scientific, Philadelphia, PA) for the years from 1992 to October 2005, and included both second-authored papers and self-citations. The top-ranking British information scientist, Peter Willett, has an h-index of 31. However, it was found that Eugene Garfield, the founder of modern citation studies, has an even higher h-index of 36. These results support other studies suggesting that the h-index is a useful tool in the armory of bibliometrics.

Languages

  • e 89
  • d 11

Types

  • a 96
  • r 3
  • el 2
  • x 1