Search (998 results, page 1 of 50)

  • year_i:[2010 TO 2020}
  1. Gnoli, C.; Merli, G.; Pavan, G.; Bernuzzi, E.; Priano, M.: Freely faceted classification for a Web-based bibliographic archive : the BioAcoustic Reference Database (2010) 0.12
    0.1192029 = product of:
      0.17880434 = sum of:
        0.06542256 = weight(_text_:reference in 3739) [ClassicSimilarity], result of:
          0.06542256 = score(doc=3739,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 3739, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3739)
        0.11338177 = sum of:
          0.07910801 = weight(_text_:database in 3739) [ClassicSimilarity], result of:
            0.07910801 = score(doc=3739,freq=6.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.38679397 = fieldWeight in 3739, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3739)
          0.034273762 = weight(_text_:22 in 3739) [ClassicSimilarity], result of:
            0.034273762 = score(doc=3739,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.19345059 = fieldWeight in 3739, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3739)
      0.6666667 = coord(2/3)
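
    The breakdown above is Lucene's ClassicSimilarity "explain" output: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(2/3) down-weights the document for matching only two of the three top-level query clauses. A minimal Python sketch reproducing the arithmetic for the "reference" term in document 3739, using the values printed in the tree:

        import math

        freq, idf = 4.0, 4.0683694              # term statistics from the explain tree
        query_norm, field_norm = 0.050593734, 0.0390625

        tf = math.sqrt(freq)                    # ClassicSimilarity: tf = sqrt(freq) = 2.0
        query_weight = idf * query_norm         # 0.205834
        field_weight = tf * idf * field_norm    # 0.31784135
        print(query_weight * field_weight)      # ~0.06542256, as reported above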
    
    Abstract
    The Integrative Level Classification (ILC) research project is experimenting with a knowledge organization system based on phenomena rather than disciplines. Each phenomenon has a constant notation, which can be combined with that of any other phenomenon in a freely faceted structure. Citation order can express the differential focality of the facets. Very specific subjects can have long classmarks, although their complexity is reduced by various devices. Freely faceted classification is being tested by indexing a corpus of about 3300 papers in the interdisciplinary domain of bioacoustics. The subjects of these papers often include phenomena from a wide variety of integrative levels (mechanical waves, animals, behaviour, vessels, fishing, law, ...) as well as information about the methods of study, as predicted in the León Manifesto. The archive is recorded in a MySQL database and can be fed and searched through PHP Web interfaces. The indexer's work is made easier by mechanisms that suggest possible classes by matching title words with terms in the ILC schedules and that automatically synthesize the verbal caption corresponding to the classmark being edited. Users can search the archive by selecting and combining values in each facet. Search refinement should be improved, especially for cases where no record, or too many records, match the faceted query. However, experience is being gained progressively, showing that freely faceted classification by phenomena, theories, and methods is feasible and works successfully.
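    A hypothetical sketch of the kind of class-suggestion mechanism described above, matching title words against schedule captions (the schedule entries and matching rule are illustrative assumptions, not the project's actual PHP implementation):

        # Toy ILC-style schedule: classmark -> verbal caption (invented entries).
        schedule = {
            "mq": "mechanical waves",
            "oz": "animals",
            "wv": "fishing",
            "sy": "law",
        }

        def suggest_classes(title):
            """Suggest classmarks whose caption words occur in the title."""
            words = set(title.lower().split())
            return {mark: caption for mark, caption in schedule.items()
                    if words & set(caption.split())}

        print(suggest_classes("Effects of fishing vessels on animals"))
        # {'oz': 'animals', 'wv': 'fishing'}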
    Object
    BioAcoustic Reference Database
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
  2. Colliander, C.: ¬A novel approach to citation normalization : a similarity-based method for creating reference sets (2015) 0.10
    0.102454424 = product of:
      0.15368164 = sum of:
        0.13084511 = weight(_text_:reference in 1663) [ClassicSimilarity], result of:
          0.13084511 = score(doc=1663,freq=16.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.6356827 = fieldWeight in 1663, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1663)
        0.022836514 = product of:
          0.045673028 = sum of:
            0.045673028 = weight(_text_:database in 1663) [ClassicSimilarity], result of:
              0.045673028 = score(doc=1663,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.2233156 = fieldWeight in 1663, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1663)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A similarity-oriented approach for deriving reference values used in citation normalization is explored and contrasted with the dominant approach of utilizing database-defined journal sets as a basis for deriving such values. In the similarity-oriented approach, an assessed article's raw citation count is compared with a reference value that is derived from a reference set, which is constructed in such a way that articles in this set are estimated to address a subject matter similar to that of the assessed article. This estimation is based on second-order similarity and utilizes a combination of two feature sets: bibliographic references and technical terminology. The contribution of an article in a given reference set to the reference value depends on its degree of similarity to the assessed article. Reference values calculated by the similarity-oriented approach are shown to be considerably better at predicting the assessed articles' citation counts than the reference values given by the journal-set approach, thus significantly reducing the variability in the observed citation distribution that stems from the variability in the articles' addressed subject matter.
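    A minimal sketch of the normalization idea, assuming a similarity-weighted mean as the reference value (the similarity scores and citation counts are invented; the paper derives similarities from second-order similarity over references and terminology):

        def reference_value(neighbors):
            """Similarity-weighted mean citation count of a reference set."""
            total_sim = sum(sim for sim, _ in neighbors)
            return sum(sim * cites for sim, cites in neighbors) / total_sim

        # (similarity to the assessed article, citation count) per set member
        reference_set = [(0.9, 12), (0.7, 30), (0.4, 5)]
        expected = reference_value(reference_set)   # ~16.9
        assessed_citations = 25
        print(assessed_citations / expected)        # normalized citation score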
  3. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.10
    0.095362306 = product of:
      0.14304346 = sum of:
        0.052338045 = weight(_text_:reference in 168) [ClassicSimilarity], result of:
          0.052338045 = score(doc=168,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2542731 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.09070542 = sum of:
          0.06328641 = weight(_text_:database in 168) [ClassicSimilarity], result of:
            0.06328641 = score(doc=168,freq=6.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.3094352 = fieldWeight in 168, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.02741901 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.02741901 = score(doc=168,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
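    A toy illustration of the simplest family of techniques the book covers, string-based label matching between two ontologies (the entity labels and the 0.6 threshold are invented for the example):

        from difflib import SequenceMatcher

        onto1 = ["Book", "Author", "PublishingHouse"]
        onto2 = ["Volume", "Writer", "Publisher"]

        def label_similarity(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        # Keep candidate correspondences above the (arbitrary) threshold.
        matches = [(a, b, round(label_similarity(a, b), 2))
                   for a in onto1 for b in onto2
                   if label_similarity(a, b) > 0.6]
        print(matches)  # [('PublishingHouse', 'Publisher', 0.67)]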
    Date
    20. 6.2012 19:08:22
  4. Freitas, J.L.; Gabriel Jr., R.F.; Bufrem, L.S.: Theoretical approximations between Brazilian and Spanish authors' production in the field of knowledge organization in the production of journals on information science in Brazil (2012) 0.07
    0.06731068 = product of:
      0.10096602 = sum of:
        0.037008587 = weight(_text_:reference in 144) [ClassicSimilarity], result of:
          0.037008587 = score(doc=144,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.17979822 = fieldWeight in 144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=144)
        0.06395743 = sum of:
          0.036538422 = weight(_text_:database in 144) [ClassicSimilarity], result of:
            0.036538422 = score(doc=144,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.17865248 = fieldWeight in 144, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.03125 = fieldNorm(doc=144)
          0.02741901 = weight(_text_:22 in 144) [ClassicSimilarity], result of:
            0.02741901 = score(doc=144,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.15476047 = fieldWeight in 144, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=144)
      0.6666667 = coord(2/3)
    
    Abstract
    This work identifies and analyzes the literature on knowledge organization (KO) as expressed in the scientific journal communication of information science (IS). It performs an exploratory study of the Base de Dados Referencial de Artigos de Periódicos em Ciência da Informação (BRAPCI, Reference Database of Journal Articles on Information Science) between the years 2000 and 2010. Descriptors relating to "knowledge organization" are used to retrieve and analyze the corresponding articles and to identify the descriptors and concepts that make up the semantic universe of KO. Through content analysis based on metrical studies, the article gathers and interprets data relating to documents and authors, demonstrating the development of the field and its research fronts and noting indications of transformation in the production of knowledge. The work describes the influence of Spanish researchers on Brazilian literature in the fields of knowledge and information organization. As a result, it presents the most cited and productive authors, the theoretical currents which support them, and the most significant relationships in the Spanish-Brazilian author network. Based on analysis of the recurrent keywords in the cited articles, the co-existence of the French conceptual current and an incipient Spanish influence in Brazil is observed. In this way, it contributes to the comprehension of the thematic range of KO, stimulating criticism and self-criticism, debate and knowledge creation, based on studies developed and institutionalized in academic contexts in Spain and Brazil.
    Content
    Paper in a section "Selected Papers from the 1st Brazilian Conference on Knowledge Organization and Representation, Faculdade de Ciência da Informação, Campus Universitário Darcy Ribeiro Brasília, DF Brasil, October 20-22, 2011". Cf.: http://www.ergon-verlag.de/isko_ko/downloads/ko_39_2012_3_g.pdf.
  5. Wissensspeicher in digitalen Räumen : Nachhaltigkeit, Verfügbarkeit, semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008 (2010) 0.07
    0.06731068 = product of:
      0.10096602 = sum of:
        0.037008587 = weight(_text_:reference in 774) [ClassicSimilarity], result of:
          0.037008587 = score(doc=774,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.17979822 = fieldWeight in 774, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=774)
        0.06395743 = sum of:
          0.036538422 = weight(_text_:database in 774) [ClassicSimilarity], result of:
            0.036538422 = score(doc=774,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.17865248 = fieldWeight in 774, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.03125 = fieldNorm(doc=774)
          0.02741901 = weight(_text_:22 in 774) [ClassicSimilarity], result of:
            0.02741901 = score(doc=774,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.15476047 = fieldWeight in 774, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=774)
      0.6666667 = coord(2/3)
    
    Content
    C. Begriffsarbeit in der Wissensorganisation
    Ingetraut Dahlberg: Begriffsarbeit in der Wissensorganisation
    Claudio Gnoli, Gabriele Merli, Gianni Pavan, Elisabetta Bernuzzi, and Marco Priano: Freely faceted classification for a Web-based bibliographic archive: the BioAcoustic Reference Database
    Stefan Hauser: Terminologiearbeit im Bereich Wissensorganisation - Vergleich dreier Publikationen anhand der Darstellung des Themenkomplexes Thesaurus
    Daniel Kless: Erstellung eines allgemeinen Standards zur Wissensorganisation: Nutzen, Möglichkeiten, Herausforderungen, Wege
    D. Kommunikation und Lernen
    Gerald Beck und Simon Meissner: Strukturierung und Vermittlung von heterogenen (Nicht-)Wissensbeständen in der Risikokommunikation
    Angelo Chianese, Francesca Cantone, Mario Caropreso, and Vincenzo Moscato: ARCHAEOLOGY 2.0: Cultural E-Learning tools and distributed repositories supported by SEMANTICA, a System for Learning Object Retrieval and Adaptive Courseware Generation for e-learning environments
    Sonja Hierl, Lydia Bauer, Nadja Böller und Josef Herget: Kollaborative Konzeption von Ontologien in der Hochschullehre: Theorie, Chancen und mögliche Umsetzung
    Marc Wilhelm Küster, Christoph Ludwig, Yahya Al-Haff und Andreas Aschenbrenner: TextGrid: eScholarship und der Fortschritt der Wissenschaft durch vernetzte Angebote
  6. Liu, S.; Chen, C.: ¬The differences between latent topics in abstracts and citation contexts of citing papers (2013) 0.06
    0.06484189 = product of:
      0.09726283 = sum of:
        0.08012595 = weight(_text_:reference in 671) [ClassicSimilarity], result of:
          0.08012595 = score(doc=671,freq=6.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.3892746 = fieldWeight in 671, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=671)
        0.017136881 = product of:
          0.034273762 = sum of:
            0.034273762 = weight(_text_:22 in 671) [ClassicSimilarity], result of:
              0.034273762 = score(doc=671,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.19345059 = fieldWeight in 671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=671)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Although it is commonly expected that the citation context of a reference is likely to provide more detailed and direct information about the nature of a citation, few studies in the literature have specifically addressed the extent to which the information in different parts of a scientific publication differs. Do abstracts tend to use conceptually broader terms than sentences in a citation context in the body of a publication? In this article, we propose a method to analyze and compare latent topics in scientific publications, in particular, from abstracts of papers that cited a target reference and from sentences that cited the target reference. We conducted an experiment and applied topic modeling techniques to full-text papers in eight biomedicine journals. Topics derived from the two sources are compared in terms of their similarities and broad-narrow relationships defined on the basis of information entropy. The results show that abstracts and citation contexts are characterized by distinct sets of topics with moderate overlaps. Furthermore, the results confirm that topics from abstracts of citing papers have broader terms than topics from citation contexts formed by citing sentences. The method and the findings could be used to enhance and extend the current methodologies for research evaluation and citation evaluation.
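    A small sketch of the entropy criterion behind the broad-narrow comparison: a topic whose term distribution has higher Shannon entropy spreads its probability over more terms and is, in that sense, broader (both distributions are invented):

        import math

        def entropy(dist):
            """Shannon entropy (bits) of a term probability distribution."""
            return -sum(p * math.log2(p) for p in dist.values() if p > 0)

        abstract_topic = {"citation": 0.25, "analysis": 0.25,
                          "science": 0.25, "impact": 0.25}       # broad: 2.0 bits
        context_topic = {"h-index": 0.7, "ranking": 0.3}         # narrow: ~0.88 bits

        print(entropy(abstract_topic) > entropy(context_topic))  # True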
    Date
    22. 3.2013 19:50:00
  7. Tomaszewski, R.: Citations to chemical databases in scholarly articles : to cite or not to cite? (2019) 0.06
    0.05883938 = product of:
      0.08825907 = sum of:
        0.06542256 = weight(_text_:reference in 5471) [ClassicSimilarity], result of:
          0.06542256 = score(doc=5471,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 5471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5471)
        0.022836514 = product of:
          0.045673028 = sum of:
            0.045673028 = weight(_text_:database in 5471) [ClassicSimilarity], result of:
              0.045673028 = score(doc=5471,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.2233156 = fieldWeight in 5471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5471)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: Chemical databases have had a significant impact on the way scientists search for and use information. The purpose of this paper is to spark informed discussion and fuel debate on the issue of citations to chemical databases.
    Design/methodology/approach: A citation analysis of four major chemical databases was undertaken to examine resource coverage and impact in the scientific literature. Two commercial databases (SciFinder and Reaxys) and two public databases (PubChem and ChemSpider) were analyzed using the "Cited Reference Search" in the Science Citation Index Expanded from the Web of Science (WoS) database. Citations to these databases between 2000 and 2016 (inclusive) were evaluated by document type and publication growth curves. A review of the distribution trends of chemical databases in peer-reviewed articles was conducted through a citation count analysis by country, organization, journal and WoS category.
    Findings: In total, 862 scholarly articles containing a citation to one or more of the four databases were identified, a number that has been steadily increasing since 2000. The study determined that authors at academic institutions worldwide reference chemical databases in high-impact journals from notable publishers, mainly in the field of chemistry.
    Originality/value: The research is a first attempt to evaluate the practice of citing major chemical databases in the scientific literature. This paper proposes that citing chemical databases gives merit and recognition to the resources as well as credibility and validity to the scholarly communication process, and it further discusses recommendations for citing and referencing databases.
  8. 80 years of Zentralblatt MATH : 80 footprints of distinguished mathematicians in Zentralblatt (2011) 0.06
    0.055277795 = product of:
      0.08291669 = sum of:
        0.055512875 = weight(_text_:reference in 4737) [ClassicSimilarity], result of:
          0.055512875 = score(doc=4737,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2696973 = fieldWeight in 4737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=4737)
        0.027403818 = product of:
          0.054807637 = sum of:
            0.054807637 = weight(_text_:database in 4737) [ClassicSimilarity], result of:
              0.054807637 = score(doc=4737,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.26797873 = fieldWeight in 4737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4737)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Founded in 1931 by Otto Neugebauer as the printed documentation service "Zentralblatt für Mathematik und ihre Grenzgebiete", Zentralblatt MATH (ZBMATH) celebrates its 80th anniversary in 2011. Today it is the most comprehensive and active reference database in pure and applied mathematics worldwide. Many prominent mathematicians have been involved in this service as reviewers or editors and have, like all mathematicians, left their footprints in ZBMATH, in a long list of entries describing all of their research publications in mathematics. This book provides one review from each of the 80 years of ZBMATH. Names like Courant, Kolmogorov, Hardy, Hirzebruch, Faltings and many others can be found here. In addition to the original reviews, the book offers author profiles indicating their co-authors, their favorite journals and the time span of their publication activities. The book also includes a generously illustrated essay by Silke Göbel describing the history of ZBMATH.
  9. Peroni, S.; Dutton, A.; Gray, T.; Shotton, D.: Setting our bibliographic references free : towards open citation data (2015) 0.06
    0.055277795 = product of:
      0.08291669 = sum of:
        0.055512875 = weight(_text_:reference in 1790) [ClassicSimilarity], result of:
          0.055512875 = score(doc=1790,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2696973 = fieldWeight in 1790, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=1790)
        0.027403818 = product of:
          0.054807637 = sum of:
            0.054807637 = weight(_text_:database in 1790) [ClassicSimilarity], result of:
              0.054807637 = score(doc=1790,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.26797873 = fieldWeight in 1790, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1790)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: Citation data needs to be recognised as a part of the Commons - those works that are freely and legally available for sharing - and placed in an open repository. The paper aims to discuss this issue.
    Design/methodology/approach: The Open Citation Corpus is a new open repository of scholarly citation data, made available under a Creative Commons CC0 1.0 public domain dedication and encoded as Open Linked Data using the SPAR Ontologies.
    Findings: The Open Citation Corpus presently provides open access (OA) to reference lists from 204,637 articles from the OA Subset of PubMed Central, containing 6,325,178 individual references to 3,373,961 unique papers.
    Originality/value: Scholars, publishers and institutions may freely build upon, enhance and reuse the open citation data for any purpose, without restriction under copyright or database law.
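    A minimal sketch of how one open citation might be encoded as Linked Data with the SPAR CiTO ontology mentioned above (the two DOIs are placeholders; assumes the rdflib package):

        from rdflib import Graph, Namespace, URIRef

        CITO = Namespace("http://purl.org/spar/cito/")

        g = Graph()
        citing = URIRef("http://dx.doi.org/10.1000/example.1")  # placeholder DOI
        cited = URIRef("http://dx.doi.org/10.1000/example.2")   # placeholder DOI
        g.add((citing, CITO.cites, cited))
        print(g.serialize(format="turtle"))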
  10. Rotolo, D.; Leydesdorff, L.: Matching Medline/PubMed data with Web of Science: A routine in R language (2015) 0.06
    0.055277795 = product of:
      0.08291669 = sum of:
        0.055512875 = weight(_text_:reference in 2224) [ClassicSimilarity], result of:
          0.055512875 = score(doc=2224,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2696973 = fieldWeight in 2224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=2224)
        0.027403818 = product of:
          0.054807637 = sum of:
            0.054807637 = weight(_text_:database in 2224) [ClassicSimilarity], result of:
              0.054807637 = score(doc=2224,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.26797873 = fieldWeight in 2224, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2224)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a novel routine, namely medlineR, based on the R language, that allows the user to match data from Medline/PubMed with records indexed in the ISI Web of Science (WoS) database. The matching allows exploiting the rich and controlled vocabulary of medical subject headings (MeSH) of Medline/PubMed with additional fields of WoS. The integration provides data (e.g., citation data, lists of cited references, lists of the addresses of authors' host organizations, WoS subject categories) to perform a variety of scientometric analyses. This brief communication describes medlineR, the method on which it relies, and the steps the user should follow to perform the matching across the two databases. To demonstrate the differences from Leydesdorff and Opthof (Journal of the American Society for Information Science and Technology, 64(5), 1076-1080), we conclude this article by testing the routine on the MeSH category "Brugada syndrome."
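    The routine itself is written in R; the core matching step amounts to joining records on a normalized title plus publication year. A rough Python equivalent (the record layouts are invented for the sketch):

        import re

        def key(record):
            """Normalize title + year into a crude join key."""
            t = record["title"].lower()
            t = re.sub(r"[^a-z0-9 ]", "", t)    # drop punctuation
            t = re.sub(r"\s+", " ", t).strip()  # collapse whitespace
            return (t, record["year"])

        medline = [{"title": "Brugada syndrome: a review.", "year": 2014}]
        wos = [{"title": "Brugada syndrome - A review", "year": 2014, "times_cited": 42}]

        wos_index = {key(r): r for r in wos}
        matched = [(m, wos_index[key(m)]) for m in medline if key(m) in wos_index]
        print(len(matched))  # 1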
  11. Leydesdorff, L.; Bornmann, L.: ¬The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.06
    0.055277795 = product of:
      0.08291669 = sum of:
        0.055512875 = weight(_text_:reference in 2779) [ClassicSimilarity], result of:
          0.055512875 = score(doc=2779,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2696973 = fieldWeight in 2779, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=2779)
        0.027403818 = product of:
          0.054807637 = sum of:
            0.054807637 = weight(_text_:database in 2779) [ClassicSimilarity], result of:
              0.054807637 = score(doc=2779,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.26797873 = fieldWeight in 2779, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2779)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to support bibliometric normalization in evaluation practices because of "indexer effects." Can compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
  12. Kumbhar, R.: Library classification trends in the 21st century (2012) 0.06
    0.055039626 = product of:
      0.08255944 = sum of:
        0.06542256 = weight(_text_:reference in 736) [ClassicSimilarity], result of:
          0.06542256 = score(doc=736,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 736, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=736)
        0.017136881 = product of:
          0.034273762 = sum of:
            0.034273762 = weight(_text_:22 in 736) [ClassicSimilarity], result of:
              0.034273762 = score(doc=736,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.19345059 = fieldWeight in 736, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=736)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    "This book would serve as a good introductory textbook for a library science student or as a reference work on the types of classification currently in use." (College and Research Libraries)
    • covers all aspects of library classification
    • the only book that reviews literature published over a decade's span (1999-2009)
    • well-thought-out chapterization, in tune with the LIS and classification curriculum
    • a useful reference tool for researchers in classification
    • a valuable contribution to the bibliographic control of classification literature
    Library Classification Trends in the 21st Century traces developments in and around library classification as reported in the literature published in the first decade of the 21st century. It reviews literature published on various aspects of library classification, including modern applications of classification such as internet resource discovery, automatic book classification and text categorization; modern manifestations of classification such as taxonomies, folksonomies and ontologies; and interoperable systems enabling crosswalks. The book also features classification education and an exploration of relevant topics.
    Date
    22. 2.2013 12:23:55
  13. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.06
    0.055039626 = product of:
      0.08255944 = sum of:
        0.06542256 = weight(_text_:reference in 2589) [ClassicSimilarity], result of:
          0.06542256 = score(doc=2589,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 2589, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
        0.017136881 = product of:
          0.034273762 = sum of:
            0.034273762 = weight(_text_:22 in 2589) [ClassicSimilarity], result of:
              0.034273762 = score(doc=2589,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.19345059 = fieldWeight in 2589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities, which makes the ontology matching process very complex in terms of search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched.
    Design/methodology/approach: Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure.
    Findings: A positive correlation between the computed degree of similarity and the degree of similarity of the reference alignment, together with the computed values of precision, recall and F-measure, showed that if only key concepts of ontologies are compared, a time- and space-efficient ontology matching system can be developed.
    Originality/value: On the basis of the present novel approach to ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.
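    A short sketch of the evaluation described above, scoring a computed alignment against a reference alignment with precision, recall and F-measure (the correspondences are invented):

        reference = {("Car", "Automobile"), ("Person", "Human"), ("City", "Town")}
        computed = {("Car", "Automobile"), ("Person", "Human"), ("Road", "Street")}

        tp = len(reference & computed)            # correct correspondences found
        precision = tp / len(computed)            # 2/3
        recall = tp / len(reference)              # 2/3
        f_measure = 2 * precision * recall / (precision + recall)
        print(round(precision, 2), round(recall, 2), round(f_measure, 2))
        # 0.67 0.67 0.67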
    Date
    20. 1.2015 18:30:22
  14. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.05
    0.053570844 = product of:
      0.16071253 = sum of:
        0.16071253 = product of:
          0.48213756 = sum of:
            0.48213756 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.48213756 = score(doc=973,freq=2.0), product of:
                0.42893425 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050593734 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  15. McCain, K.W.: Assessing obliteration by incorporation : issues and caveats (2012) 0.05
    0.05237096 = product of:
      0.07855644 = sum of:
        0.046260733 = weight(_text_:reference in 485) [ClassicSimilarity], result of:
          0.046260733 = score(doc=485,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.22474778 = fieldWeight in 485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=485)
        0.032295708 = product of:
          0.064591415 = sum of:
            0.064591415 = weight(_text_:database in 485) [ClassicSimilarity], result of:
              0.064591415 = score(doc=485,freq=4.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.31581596 = fieldWeight in 485, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=485)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Empirical studies of obliteration by incorporation (OBI) may be conducted at the level of the database record or the full-text citation-in-context. To assess the difference between the two approaches, 1,040 articles with a variant of the phrase "evolutionarily stable strategies" (ESS) were identified by searching the Web of Science (Thomson Reuters, Philadelphia, PA) and discipline-level databases. The majority (72%) of all articles were published in life sciences journals. The ESS concept is associated with a small set of canonical publications by John Maynard Smith; OBI represents a decoupling of the use of the phrase from a citation to a John Maynard Smith publication. Across all articles at the record level, OBI is measured by the number of articles with the phrase in the database record but which lack a reference to a source article (implicit citations). At the citation-in-context level, articles that coupled a non-Maynard Smith citation with the ESS phrase (indirect citations) were counted along with those that cited relevant Maynard Smith publications (explicit citations), and OBI was counted based only on those articles that lacked any citation coupled with the ESS text phrase. The degree of OBI observed depended on the level of analysis. Record-level OBI trended upward, peaking in 2002 (62%), with a secondary drop and rebound to 53% (2008). Citation-in-context OBI percentages were lower, with no clear pattern. Several issues relating to the design of empirical OBI studies are discussed.
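    At the record level, the OBI measure described above reduces to a simple proportion: of the records containing the phrase, the share that lack an explicit reference to the source author. A sketch with invented record flags:

        # Each record: does it contain the ESS phrase, and does it cite the source?
        records = [
            {"has_phrase": True, "cites_source": True},    # explicit citation
            {"has_phrase": True, "cites_source": False},   # implicit citation
            {"has_phrase": True, "cites_source": False},   # implicit citation
        ]

        phrase_records = [r for r in records if r["has_phrase"]]
        implicit = sum(not r["cites_source"] for r in phrase_records)
        print(f"record-level OBI: {implicit / len(phrase_records):.0%}")  # 67%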
  16. Liu, J.S.; Chen, H.-H.; Ho, M.H.-C.; Li, Y.-C.: Citations with different levels of relevancy : tracing the main paths of legal opinions (2014) 0.05
    0.05237096 = product of:
      0.07855644 = sum of:
        0.046260733 = weight(_text_:reference in 1546) [ClassicSimilarity], result of:
          0.046260733 = score(doc=1546,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.22474778 = fieldWeight in 1546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1546)
        0.032295708 = product of:
          0.064591415 = sum of:
            0.064591415 = weight(_text_:database in 1546) [ClassicSimilarity], result of:
              0.064591415 = score(doc=1546,freq=4.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.31581596 = fieldWeight in 1546, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1546)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study explores the effect of considering citation relevancy in main path analysis. Traditional citation-based analyses treat all citations equally even though there can be various reasons and different levels of relevancy when one document references another. Taking the relevancy level into consideration is intuitively advantageous because it adopts more accurate information and will thus make the results of a citation-based analysis more trustworthy. This is nevertheless a challenging task. We are aware of no citation-based analysis that has taken the relevancy level into consideration. The difficulty lies in the fact that existing patent or patent citation databases provide no readily available relevancy level information. We overcome this issue by obtaining citation relevancy information from a legal database in which relevancy levels are ranked by legal experts. This paper selects trademark dilution, a legal concept that has been the subject of many lawsuits, as the target for exploration. We apply main path analysis, taking citation relevancy into consideration, and verify the results against a set of test cases that are mentioned in an authoritative trademark book. The findings show that relevancy information helps main path analysis uncover legal cases of higher importance. Nevertheless, in terms of the number of significant cases retrieved, relevancy information does not seem to make a noticeable difference.
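    A compact sketch of the search path count (SPC) weighting that main path analysis typically builds on, here multiplied by an expert-assigned relevancy weight per citation link (the tiny citation network and relevancy scores are invented; assumes the networkx package):

        import networkx as nx

        g = nx.DiGraph()  # edge u -> v: opinion u is cited by opinion v
        g.add_edges_from([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
        relevancy = {("A", "B"): 1.0, ("A", "C"): 0.4,
                     ("B", "D"): 1.0, ("C", "D"): 0.4}

        sources = [n for n in g if g.in_degree(n) == 0]
        sinks = [n for n in g if g.out_degree(n) == 0]

        def n_paths(a, b):
            """Count simple paths from a to b (one empty path if a == b)."""
            return 1 if a == b else len(list(nx.all_simple_paths(g, a, b)))

        def spc(u, v):
            """Search path count: source-to-sink paths traversing edge (u, v)."""
            return (sum(n_paths(s, u) for s in sources)
                    * sum(n_paths(v, t) for t in sinks))

        weights = {e: spc(*e) * relevancy[e] for e in g.edges}
        print(weights)
        # {('A', 'B'): 1.0, ('A', 'C'): 0.4, ('B', 'D'): 1.0, ('C', 'D'): 0.4}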
  17. Signoles, A.; Bitoun, C.; Valderrama, A.: Implementing FRBR to improve retrieval of in-house information in a medium-sized international institute (2012) 0.05
    0.051906526 = product of:
      0.07785979 = sum of:
        0.037008587 = weight(_text_:reference in 1911) [ClassicSimilarity], result of:
          0.037008587 = score(doc=1911,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.17979822 = fieldWeight in 1911, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=1911)
        0.040851198 = product of:
          0.081702396 = sum of:
            0.081702396 = weight(_text_:database in 1911) [ClassicSimilarity], result of:
              0.081702396 = score(doc=1911,freq=10.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.3994791 = fieldWeight in 1911, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1911)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The International Institute for Educational Planning (IIEP) is a specialized institute of UNESCO which undertakes training and research in the field of educational planning and management. IIEP disseminates publications which are the outputs of its research findings. The Documentation Centre is responsible for the maintenance and upkeep of several databases. In-house databases include a projects database, consisting of activity records (updated by administrative and research staff), and a grey literature document database and reference archive (mission reports, lessons, masters' papers). The latter contains heterogeneous, multilingual documents which are the outputs of activities. The external database is a publicly accessible bibliographic database which follows AACR. The databases are separate, which results in a loss of information. The process was undertaken within the wider context of reorganizing internal cataloguing rules to comply with changing international standards. The objective is to make IIEP's various databases interoperable by factorizing the fragmented elements and reconciling heterogeneous data from multiple sources (different contributors, indexed and non-indexed content). The choice of FRBR is explained by the appropriateness of a work-level access point. On an information level, it allows the user to optimally retrieve resources through connections between the works. On an institutional level, it enables the history and evolution of activities and their outputs to be traced. The FRBRized catalogue would be enriched through inter-database relationships and would offer fuller records. The first step was to establish the users' different needs and to develop a typology of the data to be processed. The methodology used was based on the FRBRer model. Identifying the entities then made it possible to determine the work and its levels, the attributes of each group, and the relationships. To account for processes over time and the complexity of the levels of work, the FRBRoo and CIDOC-CRM models were envisaged. Finally, an FRBRoo model was developed.
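    A schematic sketch of the FRBR Group 1 chain such a catalogue is built around, with a work as the common access point linked to its expressions and manifestations (entity names and attributes are illustrative, not IIEP's actual schema):

        from dataclasses import dataclass, field

        @dataclass
        class Manifestation:            # a concrete publication format
            form: str                   # e.g. "PDF", "print"

        @dataclass
        class Expression:               # a realization of the work
            language: str
            manifestations: list[Manifestation] = field(default_factory=list)

        @dataclass
        class Work:                     # the common access point
            title: str
            expressions: list[Expression] = field(default_factory=list)

        report = Work("Education sector analysis", [
            Expression("en", [Manifestation("PDF"), Manifestation("print")]),
            Expression("fr", [Manifestation("PDF")]),
        ])
        print(sum(len(e.manifestations) for e in report.expressions))  # 3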
  18. Hohmann, G.: ¬Die Anwendung des CIDOC-CRM für die semantische Wissensrepräsentation in den Kulturwissenschaften (2010) 0.05
    0.05071809 = product of:
      0.07607713 = sum of:
        0.055512875 = weight(_text_:reference in 4011) [ClassicSimilarity], result of:
          0.055512875 = score(doc=4011,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2696973 = fieldWeight in 4011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=4011)
        0.020564256 = product of:
          0.041128512 = sum of:
            0.041128512 = weight(_text_:22 in 4011) [ClassicSimilarity], result of:
              0.041128512 = score(doc=4011,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.23214069 = fieldWeight in 4011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4011)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The CIDOC Conceptual Reference Model (CRM) is an ontology for the cultural heritage domain, standardized as ISO 21127. OWL-DL implementations of the CRM are now also available, enabling its use in the Semantic Web. OWL-DL is a decidable subset of the Web Ontology Language specified by the W3C. Local application ontologies, likewise modelled in OWL-DL, can be connected to the CRM as a reference ontology via subclass relationships. This enables automated processes to autonomously validate heterogeneous data semantically, relate them to one another, and process and answer queries across different data holdings within the knowledge domain.
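    A minimal sketch of the subclass linking described above, attaching a class from a local application ontology to the CRM as reference ontology (the local namespace is invented; the class identifier E22_Man-Made_Object follows earlier published CRM releases; assumes the rdflib package):

        from rdflib import Graph, Namespace, RDFS

        CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
        APP = Namespace("http://example.org/app-ontology/")  # invented local ontology

        g = Graph()
        g.add((APP.AltarPiece, RDFS.subClassOf, CRM["E22_Man-Made_Object"]))
        print(g.serialize(format="turtle"))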
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
  19. Doorn, M. van; Polman, K.: From classification to thesaurus ... and back? : subject indexing tools at the library of the Afrika-Studiecentrum Leiden (2010) 0.05
    0.05071809 = product of:
      0.07607713 = sum of:
        0.055512875 = weight(_text_:reference in 4062) [ClassicSimilarity], result of:
          0.055512875 = score(doc=4062,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2696973 = fieldWeight in 4062, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=4062)
        0.020564256 = product of:
          0.041128512 = sum of:
            0.041128512 = weight(_text_:22 in 4062) [ClassicSimilarity], result of:
              0.041128512 = score(doc=4062,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.23214069 = fieldWeight in 4062, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4062)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    An African Studies Thesaurus was constructed for the purpose of subject indexing and retrieval in the Library of the African Studies Centre (ASC) in Leiden in 2001-2006. A word-based system was considered a more user-friendly alternative to the Universal Decimal Classification (UDC) codes which were used for subject access in the ASC catalogue at the time. In the process of thesaurus construction, UDC codes were used as a starting point. In addition, when constructing the thesaurus, each descriptor was also assigned a UDC code from the recent edition of the UDC Master Reference File (MRF), thus replacing many of the old UDC codes in use until then, some of which dated from the 1952 French edition. The presence of the UDC codes in the thesaurus leaves open the possibility of linking the thesaurus to different language versions of the UDC MRF in the future. In a parallel but separate operation, each UDC code which had been assigned to an item in the library's catalogue was subsequently converted into one or more thesaurus descriptors.
    Date
    22. 7.2010 19:48:33
  20. Albarrán, P.; Ruiz-Castillo, J.: References made and citations received by scientific articles (2011) 0.05
    0.05071809 = product of:
      0.07607713 = sum of:
        0.055512875 = weight(_text_:reference in 4185) [ClassicSimilarity], result of:
          0.055512875 = score(doc=4185,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2696973 = fieldWeight in 4185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=4185)
        0.020564256 = product of:
          0.041128512 = sum of:
            0.041128512 = weight(_text_:22 in 4185) [ClassicSimilarity], result of:
              0.041128512 = score(doc=4185,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.23214069 = fieldWeight in 4185, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4185)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article studies massive evidence about references made and citations received after a 5-year citation window by 3.7 million articles published from 1998 to 2002 in 22 scientific fields. We find that the distributions of references made and citations received share a number of basic features across sciences. Reference distributions are rather skewed to the right, while citation distributions are even more highly skewed: the mean is about 20 percentage points to the right of the median, and articles with a remarkable or an outstanding number of citations represent about 9% of the total. Moreover, the existence of a power law representing the upper tail of citation distributions cannot be rejected in 17 fields whose articles represent 74.7% of the total. Contrary to the evidence in other contexts, the value of the scale parameter is above 3.5 in 13 of the 17 cases. Finally, power laws are typically small, but capture a considerable proportion of the total citations received.
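    The power-law claim in the final sentences is usually checked with the maximum-likelihood (Hill) estimator for the tail exponent, alpha = 1 + n / sum(ln(x_i / x_min)); a sketch on invented citation counts:

        import math

        citations = [120, 85, 300, 95, 150, 220, 500]  # invented tail observations
        x_min = 80                                      # assumed tail cutoff

        tail = [c for c in citations if c >= x_min]
        alpha = 1 + len(tail) / sum(math.log(c / x_min) for c in tail)
        print(round(alpha, 2))  # ~2.29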

Languages

  • e 788
  • d 201
  • a 1
  • f 1
  • hu 1
  • i 1

Types

  • a 851
  • el 98
  • m 78
  • s 29
  • x 17
  • r 8
  • b 5
  • n 4
  • i 2
  • z 1
