Search (68 results, page 4 of 4)

  • theme_ss:"Citation indexing"
  1. Sidiropoulos, A.; Manolopoulos, Y.: A new perspective to automatically rank scientific conferences using digital libraries (2005) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 1011) [ClassicSimilarity], result of:
              0.03678342 = score(doc=1011,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 1011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1011)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
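The score tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch of how the reported numbers combine (queryNorm and fieldNorm are taken as reported in the tree rather than derived; the two coord(1/2) factors account for the one-of-two matched query clauses):

```python
import math

# Values taken from the explain tree above (doc 1011, term "web")
freq, doc_freq, max_docs = 2.0, 4597, 44218
query_norm = 0.052098576   # queryNorm, as reported
field_norm = 0.046875      # length normalization, as reported

tf = math.sqrt(freq)                               # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.2635105
query_weight = idf * query_norm                    # 0.17002425
field_weight = tf * idf * field_norm               # 0.21634221
score = query_weight * field_weight                # 0.03678342
final = score * 0.5 * 0.5                          # two coord(1/2) factors -> 0.009195855
```

Each intermediate value reproduces the corresponding line of the explain tree to the printed precision.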
    
    Abstract
    Citation analysis is performed in order to evaluate authors and scientific collections, such as journals and conference proceedings. Currently, two major systems perform citation analysis: the Science Citation Index (SCI) by the Institute for Scientific Information (ISI) and CiteSeer by the NEC Research Institute. The SCI, mostly a manual system until recently, is based on the notion of the ISI Impact Factor, which has been used extensively for citation analysis purposes. On the other hand, the CiteSeer system is an automatically built digital library using agent technology, also based on the notion of the ISI Impact Factor. In this paper, we investigate new alternative notions besides the ISI Impact Factor in order to provide a novel approach to ranking scientific collections. Furthermore, we present a web-based system built by extracting data from the Databases and Logic Programming (DBLP) website of the University of Trier. By using the new citation metrics, our system emerges as a useful tool for ranking scientific collections. In this respect, some first remarks are presented, e.g. on ranking conferences related to databases.
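The ISI Impact Factor that the abstract above builds on is a simple ratio: citations received in the evaluation year to items a journal published in the two preceding years, divided by the number of citable items published in those two years. A minimal sketch with purely illustrative counts (the numbers are hypothetical, not from the paper):

```python
# Hypothetical counts for a journal in evaluation year Y (illustrative only)
citations_to_prev_two_years = 150  # citations received in Y to items published in Y-1 and Y-2
citable_items_prev_two_years = 60  # articles and reviews published in Y-1 and Y-2

impact_factor = citations_to_prev_two_years / citable_items_prev_two_years  # 150 / 60 = 2.5
```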
  2. Leydesdorff, L.: On the normalization and visualization of author co-citation data : Salton's Cosine versus the Jaccard index (2008) 0.01
    
    Abstract
    The debate about which similarity measure one should use for normalization in Author Co-citation Analysis (ACA) is further complicated when one distinguishes between the symmetrical co-citation (or, more generally, co-occurrence) matrix and the underlying asymmetrical citation (occurrence) matrix. In the Web environment, retrieving the original citation data is often not feasible. In that case, one should use the Jaccard index, preferably after adding the total number of citations (i.e., occurrences) on the main diagonal. Unlike Salton's cosine and the Pearson correlation, the Jaccard index abstracts from the shape of the distributions and focuses only on the intersection and the sum of the two sets. Since the correlations in the co-occurrence matrix may be spurious, this property of the Jaccard index can be considered an advantage in this case.
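The contrast drawn above between the Jaccard index and Salton's cosine can be sketched on toy data. The citing-document sets below are invented for illustration; the count-based formulation shows why the totals (the main-diagonal values) are all the Jaccard index needs once the co-citation count is known:

```python
import math

# Hypothetical sets of citing documents for two authors (asymmetrical occurrence data)
cited_by_a = {1, 2, 3, 5, 8, 9}
cited_by_b = {2, 3, 5, 7}

inter = len(cited_by_a & cited_by_b)                       # co-citations: 3
jaccard = inter / (len(cited_by_a) + len(cited_by_b) - inter)

# Equivalent count-based form, using only the co-citation count c_ab and the
# total citation counts c_a, c_b that would sit on the main diagonal:
c_a, c_b, c_ab = len(cited_by_a), len(cited_by_b), inter
jaccard_from_counts = c_ab / (c_a + c_b - c_ab)            # 3 / (6 + 4 - 3)

# Salton's cosine on the same binary occurrence data, for contrast
cosine = c_ab / math.sqrt(c_a * c_b)                       # 3 / sqrt(24)
```

The Jaccard value depends only on the intersection and the set sizes, whereas the cosine normalizes by the geometric mean of the two totals and is therefore sensitive to the shape of the distributions.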
  3. Leydesdorff, L.; Salah, A.A.A.: Maps on the basis of the Arts & Humanities Citation Index : the journals Leonardo and Art Journal versus "digital humanities" as a topic (2010) 0.01
    
    Abstract
    The possibilities of using the Arts & Humanities Citation Index (A&HCI) for journal mapping have not been sufficiently recognized because of the absence of a Journal Citation Reports (JCR) for this database. A quasi-JCR for the A&HCI (2008) was constructed from the data contained in the Web of Science and is used for the evaluation of two journals as examples: Leonardo and Art Journal. The maps based on the aggregated journal-journal citations within this domain can be compared with maps that include references to journals in the Science Citation Index and Social Science Citation Index. Art journals are cited by (social) science journals more than by other art journals, but these journals draw upon one another in terms of their own references. This cultural impact in terms of being cited is not found when documents on a topic such as digital humanities are analyzed. This community of practice functions more as an intellectual organizer than as a journal.
  4. Noruzi, A.: Google Scholar : the new generation of citation indexes (2005) 0.01
    
    Abstract
    Google Scholar (http://scholar.google.com) provides a new method of locating potentially relevant articles on a given subject by identifying subsequent articles that cite a previously published article. An important feature of Google Scholar is that researchers can use it to trace interconnections among authors citing articles on the same topic and to determine the frequency with which others cite a specific article, as it has a "cited by" feature. This study begins with an overview of how to use Google Scholar for citation analysis and identifies advanced search techniques not well documented by Google Scholar. This study also compares the citation counts provided by Web of Science and Google Scholar for articles in the field of "Webometrics." It makes several suggestions for improving Google Scholar. Finally, it concludes that Google Scholar provides a free alternative or complement to other citation indexes.
  5. Wilson, C.S.; Tenopir, C.: Local citation analysis, publishing and reading patterns : using multiple methods to evaluate faculty use of an academic library's research collection (2008) 0.01
    
    Abstract
    This study assessed the combination of local citation analysis and a survey of journal use and reading patterns for evaluating an academic library's research collection. Journal articles and their cited references from faculties at the University of New South Wales (UNSW) were downloaded from the Web of Science (WoS), and journal impact factors from the Journal Citation Reports. The survey of UNSW academic staff asked both reader-related and reading-related questions. Both methods showed that academics in medicine published more and had more coauthors per paper than academics in the other faculties; however, when correlated with the number of students and academic staff, science published more and engineering published in higher-impact journals. When recalled numbers of articles published were compared to actual numbers, all faculties overestimated their productivity by nearly two-fold. The distribution of cited serial references was highly skewed, with over half of the titles cited only once. The survey results corresponded with U.S. university surveys with one exception: engineering academics reported the highest number of article readings and read mostly for research-related activities. Citation analysis data showed that the UNSW library provided the majority of journals in which researchers published and which they cited, mostly in electronic formats. However, the availability of non-journal cited sources was low. The joint methods provided both confirmatory and contradictory results and proved useful in evaluating library research collections.
  6. Hammond, C.C.; Brown, S.W.: Citation searching : search smarter & find more (2008) 0.01
    
    Abstract
    At the University of Connecticut, we have been enticing graduate students to join graduate student trainers to learn how to answer the following questions and improve the breadth of their research: Do you need to find articles published outside your primary discipline? What are some seminal articles in your field? Have you ever wanted to know who cited an article you wrote? We are participating in Elsevier's Student Ambassador Program (SAmP) in which graduate students train their peers on "citation searching" research using Scopus and Web of Science, two tremendous citation databases. We are in the fourth semester of these training programs, and they are wildly successful: We have offered more than 30 classes and taught more than 350 students from March 2007 through March 2008. Chelsea is a Ph.D. candidate in the department of communication science at the University of Connecticut (UConn) and was trained as a librarian; she was one of the first peer trainers in the citation searching program. Stephanie is an electronic resource librarian at the University of Connecticut and is the librarian coordinating the program. Together, we would like to explain what we teach in the classes in the hopes of helping even more researchers perform better searches.
  7. Gorraiz, J.; Purnell, P.J.; Glänzel, W.: Opportunities for and limitations of the Book Citation Index (2013) 0.01
    
    Abstract
    This article offers important background information about a new product, the Book Citation Index (BKCI), launched in 2011 by Thomson Reuters. This information is illustrated by new facts concerning the BKCI's use in bibliometrics, a coverage analysis, and a series of idiosyncrasies worthy of further discussion. The BKCI was launched primarily to help researchers identify useful and relevant research that was previously invisible to them, owing to the lack of significant book content in citation indexes such as the Web of Science. So far, the content of 33,000 books has been added to the desktops of the global research community, the majority in the arts, humanities, and social sciences. Initial analyses of the data indicate that the BKCI, in its current version, should not be used for bibliometric or evaluative purposes. The most significant limitations to this potential application are the high share of publications without address information, the inflation of publication counts, the lack of cumulative citation counts from different hierarchical levels, and inconsistency in citation counts between the cited-reference search and the Book Citation Index. However, the BKCI is a first step toward creating a reliable and necessary citation data source for monographs, a very challenging issue because, unlike journals and conference proceedings, books have specific requirements, and several problems emerge not only in the context of subject classification but also in their roles as cited and citing publications.
  8. Robinson-García, N.; Jiménez-Contreras, E.; Torres-Salinas, D.: Analyzing data citation practices using the data citation index : a study of backup strategies of end users (2016) 0.01
    
    Abstract
    We present an analysis of data citation practices based on the Data Citation Index (DCI) (Thomson Reuters). This database, launched in 2012, links data sets and data studies with citations received from the other citation indexes. The DCI harvests citations to research data from papers indexed in the Web of Science, relying on the information provided by the data repositories. The findings of this study show that data citation practices are far from common in most research fields. Some differences were found in the way researchers cite data: although in science and engineering & technology data sets were the most cited, in the social sciences and arts & humanities data studies play a greater role. A total of 88.1% of the records have received no citations, but some repositories show very low uncitedness rates. Although data citation practices are rare in most fields, they have expanded in disciplines such as crystallography and genomics. We conclude by emphasizing the role that the DCI could play in encouraging the consistent, standardized citation of research data, a role that would enhance their value as a means of following the research process from data collection to publication.

Languages

  • e 56
  • d 12

Types

  • a 66
  • el 6
  • m 1
  • s 1