Search (69 results, page 2 of 4)

  • language_ss:"e"
  • theme_ss:"Citation indexing"
  1. Araújo, P.C. de; Gutierres Castanha, R.C.; Hjoerland, B.: Citation indexing and indexes (2021) 0.03
    0.026081776 = product of:
      0.13040888 = sum of:
        0.13040888 = weight(_text_:index in 444) [ClassicSimilarity], result of:
          0.13040888 = score(doc=444,freq=8.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.5793543 = fieldWeight in 444, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=444)
      0.2 = coord(1/5)
    
    Abstract
    A citation index is a bibliographic database that provides citation links between documents. The first modern citation index was suggested by the researcher Eugene Garfield in 1955 and created by him in 1964, and it represents an important innovation in knowledge organization and information retrieval. This article describes citation indexes in general, considering the modern citation indexes, including Web of Science, Scopus, Google Scholar, Microsoft Academic, Crossref, Dimensions, and some special citation indexes and predecessors to the modern citation index such as Shepard's Citations. We present comparative studies of the major ones and survey theoretical problems related to the role of citation indexes as subject access points (SAP), recognizing the implications for knowledge organization and information retrieval. Finally, studies on citation behavior are presented, and the influence of citation indexes on knowledge organization, information retrieval, and the scientific information ecosystem is recognized.
    Object
    Science Citation Index
  2. Sombatsompop, N.; Markpin, T.: Making an equality of ISI impact factors for different subject fields (2005) 0.02
    0.022587484 = product of:
      0.11293741 = sum of:
        0.11293741 = weight(_text_:index in 3467) [ClassicSimilarity], result of:
          0.11293741 = score(doc=3467,freq=6.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.50173557 = fieldWeight in 3467, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=3467)
      0.2 = coord(1/5)
    
    Abstract
    The journal impact factors, published by the Institute for Scientific Information (ISI; Philadelphia, PA), are widely known and are used to evaluate overall journal quality and the quality of the papers published therein. However, when making comparisons between subject fields, the work of individual scientists and their research institutions as reflected in their articles' ISI impact factors can become meaningless. This inequality will remain as long as ISI impact factors are employed as an instrument to assess the quality of international research. Here we propose a new mathematical index entitled Impact Factor Point Average (IFPA) for assessment of the quality of individual research work in different subject fields. The index is established based on a normalization of differences in impact factors, rankings, and numbers of journal titles in different subject fields. The proposed index is simple and enables the ISI impact factors to be used with equality, especially when evaluating the quality of research work in different subject fields.
  3. González, L.; Campanario, J.M.: Structure of the impact factor of journals included in the Social Sciences Citation Index : citations from documents labeled "Editorial Material" (2007) 0.02
    0.022587484 = product of:
      0.11293741 = sum of:
        0.11293741 = weight(_text_:index in 75) [ClassicSimilarity], result of:
          0.11293741 = score(doc=75,freq=6.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.50173557 = fieldWeight in 75, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=75)
      0.2 = coord(1/5)
    
    Abstract
    We investigated how citations from documents labeled by the Institute for Scientific Information (ISI) as "editorial material" contribute to the impact factor of academic journals in which they were published. Our analysis is based on records corresponding to the documents classified by the ISI as editorial material published in journals covered by the Social Sciences Citation Index between 1999 and 2003 (50,273 records corresponding to editorial material published in 2,374 journals). The results appear to rule out widespread manipulation of the impact factor by academic journals publishing large amounts of editorial material with many citations to the journal itself as a strategy to increase the impact factor.
    Object
    Social Sciences Citation Index
  4. Nicolaisen, J.: Citation analysis (2007) 0.02
    0.022333153 = product of:
      0.11166576 = sum of:
        0.11166576 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
          0.11166576 = score(doc=6091,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.61904186 = fieldWeight in 6091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=6091)
      0.2 = coord(1/5)
    
    Date
    13. 7.2008 19:53:22
  5. Došen, K.: One more reference on self-reference (1992) 0.02
    0.022333153 = product of:
      0.11166576 = sum of:
        0.11166576 = weight(_text_:22 in 4604) [ClassicSimilarity], result of:
          0.11166576 = score(doc=4604,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.61904186 = fieldWeight in 4604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4604)
      0.2 = coord(1/5)
    
    Date
    7. 2.2005 14:10:22
  6. Gorraiz, J.; Purnell, P.J.; Glänzel, W.: Opportunities for and limitations of the Book Citation Index (2013) 0.02
    0.021734815 = product of:
      0.10867407 = sum of:
        0.10867407 = weight(_text_:index in 966) [ClassicSimilarity], result of:
          0.10867407 = score(doc=966,freq=8.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.48279524 = fieldWeight in 966, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=966)
      0.2 = coord(1/5)
    
    Abstract
    This article offers important background information about a new product, the Book Citation Index (BKCI), launched in 2011 by Thomson Reuters. The information is illustrated by some new facts concerning the BKCI's use in bibliometrics, its coverage, and a series of idiosyncrasies worthy of further discussion. The BKCI was launched primarily to assist researchers in identifying useful and relevant research that was previously invisible to them, owing to the lack of significant book content in citation indexes such as the Web of Science. So far, the content of 33,000 books has been added to the desktops of the global research community, the majority in the arts, humanities, and social sciences fields. Initial analyses of the data from the BKCI have indicated that the BKCI, in its current version, should not be used for bibliometric or evaluative purposes. The most significant limitations to this potential application are the high share of publications without address information, the inflation of publication counts, the lack of cumulative citation counts from different hierarchical levels, and inconsistency in citation counts between the cited reference search and the Book Citation Index. However, the BKCI is a first step toward creating a reliable and necessary citation data source for monographs - a very challenging issue because, unlike journals and conference proceedings, books have specific requirements, and several problems emerge not only in the context of subject classification but also in their role as cited and as citing publications.
    Object
    Book Citation Index
  7. Leydesdorff, L.: Dynamic and evolutionary updates of classificatory schemes in scientific journal structures (2002) 0.02
    0.02151637 = product of:
      0.10758185 = sum of:
        0.10758185 = weight(_text_:index in 1249) [ClassicSimilarity], result of:
          0.10758185 = score(doc=1249,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.4779429 = fieldWeight in 1249, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1249)
      0.2 = coord(1/5)
    
    Abstract
    Can the inclusion of new journals in the Science Citation Index be used for the indication of structural change in the database, and how can this change be compared with reorganizations of relations among previously included journals? Change in the number of journals (n) is distinguished from change in the number of journal categories (m). Although the number of journals can be considered as a given at each moment in time, the number of journal categories is based on a reconstruction that is time-stamped ex post. The reflexive reconstruction is in need of an update when new information becomes available in the next year. Implications of this shift towards an evolutionary perspective are specified.
    Object
    Science Citation Index
  8. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.019739904 = product of:
      0.09869952 = sum of:
        0.09869952 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
          0.09869952 = score(doc=3925,freq=4.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.54716086 = fieldWeight in 3925, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3925)
      0.2 = coord(1/5)
    
    Date
    22. 7.2006 15:22:28
  9. Leydesdorff, L.: Visualization of the citation impact environments of scientific journals : an online mapping exercise (2007) 0.02
    0.018822905 = product of:
      0.09411452 = sum of:
        0.09411452 = weight(_text_:index in 82) [ClassicSimilarity], result of:
          0.09411452 = score(doc=82,freq=6.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.418113 = fieldWeight in 82, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=82)
      0.2 = coord(1/5)
    
    Abstract
    Aggregated journal-journal citation networks based on the Journal Citation Reports 2004 of the Science Citation Index (5,968 journals) and the Social Sciences Citation Index (1,712 journals) are made accessible from the perspective of any of these journals. A vector-space model is used for normalization, and the results are brought online at http://www.leydesdorff.net/jcr04 as input files for the visualization program Pajek. The user is thus able to analyze the citation environment in terms of links and graphs. Furthermore, the local impact of a journal is defined as its share of the total citations in the specific journal's citation environment; the vertical size of the nodes is varied proportionally to this citation impact. The horizontal size of each node can be used to provide the same information after correction for within-journal (self-)citations. In the "citing" environment, the equivalents of this measure can be considered as a citation activity index which maps how the relevant journal environment is perceived by the collective of authors of a given journal. As a policy application, the mechanism of interdisciplinary developments among the sciences is elaborated for the case of nanotechnology journals.
  10. Baird, L.M.; Oppenheim, C.: Do citations matter? (1994) 0.02
    0.018442601 = product of:
      0.092213005 = sum of:
        0.092213005 = weight(_text_:index in 6896) [ClassicSimilarity], result of:
          0.092213005 = score(doc=6896,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.40966535 = fieldWeight in 6896, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=6896)
      0.2 = coord(1/5)
    
    Abstract
    Citation indexes are based on the principle of authors citing previous articles of relevance. The paper demonstrates the long history of citing for precedent and notes how ISI's citation indexes differ from Shepard's Citations. The paper analyses some of the criticisms of citation counting, and some of the uses for which citation analysis has been employed. The paper also examines the idea of the development of an Acknowledgement Index, and concludes such an index is unlikely to be commercially viable. The paper describes a citation study of Eugene Garfield and concludes that he may be the most heavily cited information scientist, that he is a heavy self-citer, and that the reasons why other authors cite Garfield are different from the reasons why he cites himself. The paper concludes that citation studies remain a valid method of analysis of individuals', institutions', or journals' impact, but need to be used with caution and in conjunction with other measures.
  11. Lawrence, S.; Giles, C.L.; Bollacker, K.: Digital libraries and Autonomous Citation Indexing (1999) 0.02
    0.018442601 = product of:
      0.092213005 = sum of:
        0.092213005 = weight(_text_:index in 4951) [ClassicSimilarity], result of:
          0.092213005 = score(doc=4951,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.40966535 = fieldWeight in 4951, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=4951)
      0.2 = coord(1/5)
    
    Abstract
    Autonomous Citation Indexing (ACI) automates the construction of citation indexes. Lower cost, wider availability: ACI is completely autonomous - no manual effort is required, which should result in lower cost and wider availability. Broader coverage: because no manual effort is required, there are few barriers to indexing a broader range of literature, compared to indexes like the Science Citation Index that require manual effort. More timely feedback: conference papers, technical reports, and preprints can be indexed, providing far more timely feedback in many cases (often such publications appear far in advance of corresponding journal publications). Citation context: ACI groups together the context of citations to a given article, allowing researchers to easily see what is being said and why the article was cited, with benefits for both literature search and evaluation. Freely available: our implementation of ACI is available at no cost for non-commercial use. Several organizations have requested the software and expressed interest in providing an index within their domain, or in using ACI within their own digital libraries.
  12. Aguillo, I.F.; Granadino, B.; Ortega, J.L.; Prieto, J.A.: Scientific research activity and communication measured with cybermetrics indicators (2006) 0.02
    0.018442601 = product of:
      0.092213005 = sum of:
        0.092213005 = weight(_text_:index in 5898) [ClassicSimilarity], result of:
          0.092213005 = score(doc=5898,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.40966535 = fieldWeight in 5898, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=5898)
      0.2 = coord(1/5)
    
    Abstract
    To test the feasibility of cybermetric indicators for describing and ranking university activities as shown in their Web sites, a large set of 9,330 institutions worldwide was compiled and analyzed. Using search engines' advanced features, size (number of pages), visibility (number of external inlinks), and number of rich files (pdf, ps, doc, ppt, and xls formats) were obtained for each of the institutional domains of the universities. We found a statistically significant correlation between a Web ranking built on a combination of Webometric data and other university rankings based on bibliometric and other indicators. Results show that cybermetric measures could be useful for reflecting the contribution of technologically oriented institutions, increasing the visibility of developing countries, and improving the rankings based on Science Citation Index (SCI) data with known biases.
    Object
    Science Citation Index
  13. Persson, O.; Beckmann, M.: Locating the network of interacting authors in scientific specialities (1995) 0.02
    0.01738785 = product of:
      0.08693925 = sum of:
        0.08693925 = weight(_text_:index in 3300) [ClassicSimilarity], result of:
          0.08693925 = score(doc=3300,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3862362 = fieldWeight in 3300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=3300)
      0.2 = coord(1/5)
    
    Abstract
    Seeks to describe the social networks, or invisible colleges, that make up a scientific speciality, in terms of mathematically precise sets generated by document citations and accessible through the Social Sciences Citation Index. The document and author sets that encompass a scientific speciality are the basis for some interdependent citation matrices. The method of construction of these sets and matrices is illustrated through an application to the literature on invisible colleges.
  14. Moed, H.F.: Differences in the construction of SCI based bibliometric indicators among various producers : a first overview (1996) 0.02
    0.01738785 = product of:
      0.08693925 = sum of:
        0.08693925 = weight(_text_:index in 5073) [ClassicSimilarity], result of:
          0.08693925 = score(doc=5073,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3862362 = fieldWeight in 5073, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=5073)
      0.2 = coord(1/5)
    
    Abstract
    Discusses basic technical and methodological issues with respect to data collection and the construction of bibliometric indicators, particularly at the macro or meso level. Focuses on the use of the Science Citation Index. Aims to highlight important decisions that have to be made in the process of data collection and the construction of bibliometric indicators. Illustrates differences in the methodologies applied by several important producers of bibliometric indicators, thus illustrating the complexity of the process of 'standardization'.
  15. Wouters, P.: The signs of science (1998) 0.02
    0.01738785 = product of:
      0.08693925 = sum of:
        0.08693925 = weight(_text_:index in 1023) [ClassicSimilarity], result of:
          0.08693925 = score(doc=1023,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3862362 = fieldWeight in 1023, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=1023)
      0.2 = coord(1/5)
    
    Abstract
    Since the 'Science Citation Index' emerged within the system of scientific communication in 1964, an intense controversy about its character has been raging: in what sense can citation analysis be trusted? This debate can be characterized as the confrontation of different perspectives on science. Discusses the citation representation of science: the way the citation creates a new reality of, as well as in, the world of science; the main features of this reality; and some implications for science and science policy.
  16. Zhao, D.; Strotmann, A.: Can citation analysis of Web publications better detect research fronts? (2007) 0.02
    0.015368836 = product of:
      0.07684418 = sum of:
        0.07684418 = weight(_text_:index in 471) [ClassicSimilarity], result of:
          0.07684418 = score(doc=471,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3413878 = fieldWeight in 471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
      0.2 = coord(1/5)
    
    Abstract
    We present evidence that in some research fields, research published in journals and reported on the Web may collectively represent different evolutionary stages of the field, with journals lagging a few years behind the Web on average, and that a "two-tier" scholarly communication system may therefore be evolving. We conclude that in such fields, (a) for detecting current research fronts, author co-citation analyses (ACA) using articles published on the Web as a data source can outperform traditional ACAs using articles published in journals as data, and that (b) as a result, it is important to use multiple data sources in citation analysis studies of scholarly communication for a complete picture of communication patterns. Our evidence stems from comparing the respective intellectual structures of the XML research field, a subfield of computer science, as revealed by three sets of ACA covering two time periods: (a) from the field's beginnings in 1996 to 2001, and (b) from 2001 to 2006. For the first time period, we analyze research articles both from journals as indexed by the Science Citation Index (SCI) and from the Web as indexed by CiteSeer. We follow up with an ACA of SCI data for the second time period. Most of the trends in the evolution of this field from the first to the second time period that emerge when comparing SCI-based ACA results between the two periods were already apparent in the CiteSeer-based ACA results for the first time period.
    Object
    Science Citation Index
  17. Robinson-García, N.; Jiménez-Contreras, E.; Torres-Salinas, D.: Analyzing data citation practices using the data citation index : a study of backup strategies of end users (2016) 0.02
    0.015368836 = product of:
      0.07684418 = sum of:
        0.07684418 = weight(_text_:index in 3225) [ClassicSimilarity], result of:
          0.07684418 = score(doc=3225,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3413878 = fieldWeight in 3225, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3225)
      0.2 = coord(1/5)
    
    Abstract
    We present an analysis of data citation practices based on the Data Citation Index (DCI) (Thomson Reuters). This database, launched in 2012, links data sets and data studies with citations received from the other citation indexes. The DCI harvests citations to research data from papers indexed in the Web of Science. It relies on the information provided by the data repository. The findings of this study show that data citation practices are far from common in most research fields. Some differences have been reported in the way researchers cite data: although in the areas of science and engineering & technology data sets are the most cited, in the social sciences and arts & humanities data studies play a greater role. A total of 88.1% of the records have received no citation, but some repositories show very low uncitedness rates. Although data citation practices are rare in most fields, they have expanded in disciplines such as crystallography and genomics. We conclude by emphasizing the role that the DCI could play in encouraging the consistent, standardized citation of research data - a role that would enhance their value as a means of following the research process from data collection to publication.
  18. Safder, I.; Ali, M.; Aljohani, N.R.; Nawaz, R.; Hassan, S.-U.: Neural machine translation for in-text citation classification (2023) 0.02
    0.015368836 = product of:
      0.07684418 = sum of:
        0.07684418 = weight(_text_:index in 1053) [ClassicSimilarity], result of:
          0.07684418 = score(doc=1053,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3413878 = fieldWeight in 1053, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1053)
      0.2 = coord(1/5)
    
    Abstract
    The quality of scientific publications can be measured by quantitative indices such as the h-index, Source Normalized Impact per Paper, or g-index. However, these measures fail to explain the function of or reasons for citations and the context of citations from citing publication to cited publication. We argue that citation context should be considered when calculating the impact of research work. However, mining citation context from unstructured full-text publications is a challenging task. In this paper, we compiled a data set comprising 9,518 citation contexts. We developed a deep learning-based architecture for citation context classification. Unlike feature-based state-of-the-art models, our proposed focal-loss and class-weight-aware BiLSTM model with pretrained GloVe embedding vectors uses citation context as input and outperforms them in multiclass citation context classification tasks. Our model improves on the baseline state of the art by achieving an F1 score of 0.80 with an accuracy of 0.81 for citation context classification. Moreover, we delve into the effects of using different word embeddings on the performance of the classification model and draw a comparison between fastText, GloVe, and spaCy pretrained word embeddings.
  19. Cronin, B.; Weaver-Wozniak, S.: Online access to acknowledgements (1993) 0.02
    0.015214371 = product of:
      0.07607185 = sum of:
        0.07607185 = weight(_text_:index in 7827) [ClassicSimilarity], result of:
          0.07607185 = score(doc=7827,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.33795667 = fieldWeight in 7827, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7827)
      0.2 = coord(1/5)
    
    Abstract
    Reviews the scale, range, and consistency of acknowledgement behaviour in citations across a number of academic disciplines. The qualitative and quantitative evidence suggests a pervasive and consistent practice in which acknowledgements define a variety of social, cognitive, and instrumental relationships between scholars, within and across disciplines. As such they may be used alongside other bibliometric indicators, such as citations, to map networks of influence. Considers the case for using acknowledgements data in the assessment of academic performance and proposes an online acknowledgement index to facilitate this process, perhaps as a logical extension of ISI's citation indexing products.
  20. East, J.W.: Citations to conference papers and the implications for cataloging (1985) 0.02
    0.015214371 = product of:
      0.07607185 = sum of:
        0.07607185 = weight(_text_:index in 7928) [ClassicSimilarity], result of:
          0.07607185 = score(doc=7928,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.33795667 = fieldWeight in 7928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7928)
      0.2 = coord(1/5)
    
    Abstract
    Problems in the cataloging of conference proceedings, and their treatment by some of the major cataloging codes, are briefly reviewed. To determine how conference papers are cited in the literature, and thus how researchers are likely to be seeking them in the catalog, fifty conference papers in the field of chemistry, delivered in 1970 and subsequently published, were searched for in the Science Citation Index over a ten-year period. The citations to the papers were examined to ascertain the implications of current citation practices for the cataloging of conference proceedings. The results suggest that conference proceedings are customarily cited like any other work of collective authorship and that the conference name is of little value as an access point.
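
Note on the relevance scores: the per-result breakdowns above are Lucene "explain" traces for ClassicSimilarity, i.e. tf-idf scoring with a coordination factor. Each matching term contributes queryWeight (idf × queryNorm) multiplied by fieldWeight (sqrt(termFreq) × idf × fieldNorm), and the result is scaled by coord (here 1/5, because one of five query clauses matched). The following minimal Python sketch reproduces the score of result 1 from the factors displayed there; the function name and the rounding in the comments are illustrative only and not part of the catalog software.

from math import sqrt

def classic_similarity_term_score(freq, idf, query_norm, field_norm, coord=1.0):
    # One term's contribution as shown in a ClassicSimilarity explain trace:
    # coord * (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    tf = sqrt(freq)                       # 2.828427 for freq=8.0
    query_weight = idf * query_norm       # 0.2250935 in result 1
    field_weight = tf * idf * field_norm  # 0.5793543 in result 1
    return coord * query_weight * field_weight

# Factors taken from result 1 above (term "index" in doc 444):
score = classic_similarity_term_score(
    freq=8.0,
    idf=4.369764,            # idf(docFreq=1520, maxDocs=44218)
    query_norm=0.051511593,
    field_norm=0.046875,
    coord=1 / 5,             # only 1 of 5 query clauses matched
)
print(score)  # ~0.0260818, matching the displayed 0.026081776 up to rounding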

Types

  • a 68
  • el 3
  • m 1
