Search (393 results, page 1 of 20)

  • year_i:[2020 TO 2030}
  1. Safder, I.; Ali, M.; Aljohani, N.R.; Nawaz, R.; Hassan, S.-U.: Neural machine translation for in-text citation classification (2023) 0.07
    0.07102022 = product of:
      0.17755055 = sum of:
        0.11412249 = weight(_text_:context in 1053) [ClassicSimilarity], result of:
          0.11412249 = score(doc=1053,freq=16.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.64760154 = fieldWeight in 1053, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1053)
        0.06342807 = weight(_text_:index in 1053) [ClassicSimilarity], result of:
          0.06342807 = score(doc=1053,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3413878 = fieldWeight in 1053, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1053)
      0.4 = coord(2/5)
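    The score breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a sanity check, the arithmetic of this first entry can be reproduced from the values shown in the tree; the short Python sketch below recomputes it (an illustration of the formula only, not part of the search system):

```python
import math

# Constants copied from the explain tree for doc 1053 (entry 1).
MAX_DOCS = 44218
QUERY_NORM = 0.04251826

def term_score(freq, doc_freq, field_norm):
    """ClassicSimilarity: queryWeight * fieldWeight for one term.

    fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm,
    with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)).
    """
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))
    return (idf * QUERY_NORM) * (tf * idf * field_norm)

context = term_score(freq=16.0, doc_freq=1904, field_norm=0.0390625)
index = term_score(freq=4.0, doc_freq=1520, field_norm=0.0390625)

# coord(2/5): only two of the five query terms matched this document.
print(context, index, (context + index) * 0.4)
# expected, to display precision: 0.11412249, 0.06342807, 0.07102022
```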
    
    Abstract
    The quality of scientific publications can be measured by quantitative indices such as the h-index, Source Normalized Impact per Paper, or g-index. However, these measures fail to explain the function of or reasons for citations, or the context of citations from citing publication to cited publication. We argue that citation context should be considered when calculating the impact of research work. However, mining citation context from unstructured full-text publications is a challenging task. In this paper, we compiled a data set comprising 9,518 citation contexts. We developed a deep learning-based architecture for citation context classification. Unlike feature-based state-of-the-art models, our proposed focal-loss and class-weight-aware BiLSTM model with pretrained GloVe embedding vectors uses citation context as input and outperforms them in multiclass citation context classification tasks. Our model improves on the state-of-the-art baseline by achieving an F1 score of 0.80 with an accuracy of 0.81 for citation context classification. Moreover, we delve into the effects of using different word embeddings on the performance of the classification model and draw a comparison between fastText, GloVe, and spaCy pretrained word embeddings.
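    The architecture described, a BiLSTM over pretrained word vectors with class-aware weighting, can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed settings, not the authors' implementation: the layer sizes, the four-class setup, and the randomly initialised embedding (standing in for pretrained GloVe vectors) are placeholders, and plain class-weighted cross-entropy stands in here for the focal loss used in the paper.

```python
import torch
import torch.nn as nn

class CitationContextBiLSTM(nn.Module):
    """Minimal BiLSTM classifier over a citation-context token sequence."""
    def __init__(self, vocab_size, num_classes, embed_dim=100, hidden=128):
        super().__init__()
        # The paper initialises this layer with pretrained GloVe vectors;
        # random initialisation is a placeholder here.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        out, _ = self.bilstm(x)          # (batch, seq_len, 2 * hidden)
        return self.fc(out.mean(dim=1))  # mean-pool, then class logits

model = CitationContextBiLSTM(vocab_size=20000, num_classes=4)

# Class weighting: rarer citation classes get larger weights (invented values).
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.5, 1.0, 2.0, 4.0]))

tokens = torch.randint(1, 20000, (8, 50))   # 8 contexts of 50 tokens each
loss = loss_fn(model(tokens), torch.randint(0, 4, (8,)))
```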
  2. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.07
    0.06951503 = product of:
      0.11585837 = sum of:
        0.04841807 = weight(_text_:context in 5996) [ClassicSimilarity], result of:
          0.04841807 = score(doc=5996,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 5996, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=5996)
        0.055919025 = weight(_text_:system in 5996) [ClassicSimilarity], result of:
          0.055919025 = score(doc=5996,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.41757566 = fieldWeight in 5996, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=5996)
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
              0.03456382 = score(doc=5996,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 5996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5996)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  3. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.05
    0.05273932 = product of:
      0.08789886 = sum of:
        0.04841807 = weight(_text_:context in 1181) [ClassicSimilarity], result of:
          0.04841807 = score(doc=1181,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 1181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=1181)
        0.027959513 = weight(_text_:system in 1181) [ClassicSimilarity], result of:
          0.027959513 = score(doc=1181,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 1181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=1181)
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 1181) [ClassicSimilarity], result of:
              0.03456382 = score(doc=1181,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 1181, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1181)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
    
    Abstract
    The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and some rules of a conventional syntax for subject string construction. The Nuovo soggettario Thesaurus complies with ISO 25964 (2011-2013), IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogeneous concepts. This paper addresses the main features and peculiarities which have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
  4. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.05
    0.04634335 = product of:
      0.07723891 = sum of:
        0.032278713 = weight(_text_:context in 566) [ClassicSimilarity], result of:
          0.032278713 = score(doc=566,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.18316938 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.03727935 = weight(_text_:system in 566) [ClassicSimilarity], result of:
          0.03727935 = score(doc=566,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.27838376 = fieldWeight in 566, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.0076808496 = product of:
          0.023042548 = sum of:
            0.023042548 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
              0.023042548 = score(doc=566,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.15476047 = fieldWeight in 566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  5. Lee, D.: Hornbostel-Sachs Classification of Musical Instruments (2020) 0.04
    0.04400172 = product of:
      0.0733362 = sum of:
        0.040348392 = weight(_text_:context in 5755) [ClassicSimilarity], result of:
          0.040348392 = score(doc=5755,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 5755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5755)
        0.023299592 = weight(_text_:system in 5755) [ClassicSimilarity], result of:
          0.023299592 = score(doc=5755,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 5755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5755)
        0.009688215 = product of:
          0.029064644 = sum of:
            0.029064644 = weight(_text_:29 in 5755) [ClassicSimilarity], result of:
              0.029064644 = score(doc=5755,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 5755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5755)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
    
    Abstract
    This paper discusses the Hornbostel-Sachs Classification of Musical Instruments. This classification system was originally designed for musical instruments and books about instruments, and was first published in German in 1914. Hornbostel-Sachs has dominated organological discourse and practice since its creation, and this article analyses the scheme's context, background, versions and impact. The position of Hornbostel-Sachs in the history and development of instrument classification is explored. This is followed by a detailed analysis of the mechanics of the scheme, including its decimal notation, the influential broad categories of the scheme, its warrant and its typographical layout. The version history of the scheme is outlined and the relationships between versions are visualised, including its translations, the introduction of the electrophones category and the Musical Instruments Museums Online (MIMO) version designed for a digital environment. The reception of Hornbostel-Sachs is analysed, and its usage, criticism and impact are all considered. As well as dominating organological research and practice for over a century, it is shown that Hornbostel-Sachs also had a significant influence on the bibliographic classification of music.
    Footnote
    Derived from the article of similar title in the ISKO Encyclopedia of Knowledge Organization Version 1.1 (= 1.0 plus details on electrophones and Wikipedia); version 1.0 published 2019-01-17, this version 2019-05-29. Article category: KOS, specific (domain specific). The author would like to thank the anonymous reviewers for their useful comments, as well as the editor, Professor Birger Hjørland, for all of his insightful comments and ideas.
  6. Golub, K.; Tyrkkö, J.; Hansson, J.; Ahlström, I.: Subject indexing in humanities : a comparison between a local university repository and an international bibliographic service (2020) 0.04
    0.041510582 = product of:
      0.103776455 = sum of:
        0.040348392 = weight(_text_:context in 5982) [ClassicSimilarity], result of:
          0.040348392 = score(doc=5982,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 5982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5982)
        0.06342807 = weight(_text_:index in 5982) [ClassicSimilarity], result of:
          0.06342807 = score(doc=5982,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3413878 = fieldWeight in 5982, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5982)
      0.4 = coord(2/5)
    
    Abstract
    As the humanities develop in the realm of increasingly more pronounced digital scholarship, it is important to provide quality subject access to a vast range of heterogeneous information objects in digital services. The study aims to paint a representative picture of the current state of the use of subject index terms in humanities journal articles, with particular reference to the well-established subject access needs of humanities researchers, in order to identify which improvements are needed in this context.
    Design/methodology/approach
    The comparison of subject metadata on a sample of 649 peer-reviewed journal articles from across the humanities is conducted between a university repository and Scopus, the former reflecting local and national policies and the latter being the most comprehensive international abstract and citation database of research output.
    Findings
    The study shows that established bibliographic objectives to ensure subject access for humanities journal articles are supported neither in Scopus, the world's largest commercial abstract and citation database, nor in the local repository of a public university in Sweden. The indexing policies in the two services do not seem to address the needs of humanities scholars for highly granular subject index terms with appropriate facets; no controlled vocabularies for any humanities discipline are used whatsoever.
    Originality/value
    In all, not much has changed since the 1990s, when indexing for the humanities was shown to lag behind the sciences. The community of researchers and information professionals, today working together on digital humanities projects, as well as interdisciplinary research teams, should demand that their subject access needs be fulfilled, especially in commercial services like Scopus and discovery services.
  7. Furner, J.: Definitions of "metadata" : a brief survey of international standards (2020) 0.04
    0.04089543 = product of:
      0.102238566 = sum of:
        0.04841807 = weight(_text_:context in 5912) [ClassicSimilarity], result of:
          0.04841807 = score(doc=5912,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 5912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=5912)
        0.0538205 = weight(_text_:index in 5912) [ClassicSimilarity], result of:
          0.0538205 = score(doc=5912,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.28967714 = fieldWeight in 5912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=5912)
      0.4 = coord(2/5)
    
    Abstract
    A search on the term "metadata" in the International Organization for Standardization's Online Browsing Platform (ISO OBP) reveals that there are 96 separate ISO standards that provide definitions of the term. Between them, these standards supply 46 different definitions-a lack of standardization that we might not have expected, given the context. In fact, if we make creative use of Simpson's index of concentration (originally devised as a measure of ecological diversity) to measure the degree of standardization of definition in this case, we arrive at a value of 0.05, on a scale of zero to one. It is suggested, however, that the situation is not as problematic as it might seem: that low cross-domain levels of standardization of definition should not be cause for concern.
  8. Krattenthaler, C.: Was der h-Index wirklich aussagt (2021) 0.04
    0.03797218 = product of:
      0.1898609 = sum of:
        0.1898609 = weight(_text_:index in 407) [ClassicSimilarity], result of:
          0.1898609 = score(doc=407,freq=14.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            1.021885 = fieldWeight in 407, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=407)
      0.2 = coord(1/5)
    
    Abstract
    This note shows that the so-called h-index (Hirsch's bibliometric index) essentially conveys the same information as the total number of citations of an author's publications, and is therefore a useless bibliometric index. This rests on a fascinating theorem of probability theory, which is also explained here.
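    The claim can be illustrated with Hirsch's own empirical observation that total citations C ≈ a·h² with a typically between 3 and 5, so h is roughly a function of C alone. A minimal sketch with an invented citation record (not the note's actual theorem, which concerns the probabilistic limit behaviour):

```python
import math

def h_index(citations):
    """Largest h such that at least h papers have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

citations = [45, 30, 22, 18, 12, 9, 7, 5, 3, 2, 1, 0]  # hypothetical author
c_total = sum(citations)

print(h_index(citations), round(math.sqrt(c_total / 4), 1))
# -> h = 7 vs. sqrt(C/a) ~ 6.2 for a = 4: h adds little beyond C itself
```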
    Content
    Cf. DOI: 10.1515/dmvm-2021-0050. Also reprinted under the title 'Der h-Index - "ein nutzloser bibliometrischer Index"' in Open Password no. 1007 of 06.12.2021 at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzM3NCwiZDI3MzMzOTEwMzUzIiwwLDAsMzQ4LDFd.
    Object
    h-index
  9. Tausch, A.: Zitierungen sind nicht alles : Classroom Citation, Libcitation und die Zukunft bibliometrischer und szientometrischer Leistungsvergleiche (2022) 0.03
    0.03469106 = product of:
      0.08672766 = sum of:
        0.06342807 = weight(_text_:index in 827) [ClassicSimilarity], result of:
          0.06342807 = score(doc=827,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3413878 = fieldWeight in 827, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=827)
        0.023299592 = weight(_text_:system in 827) [ClassicSimilarity], result of:
          0.023299592 = score(doc=827,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 827, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=827)
      0.4 = coord(2/5)
    
    Abstract
    This article shows, using advanced bibliometric and scientometric data for an established sample of 104 Austrian political scientists and 51 transnational publishing houses, that close statistical relationships exist between indicators of the presence of researchers and transnational publishers in academic courses worldwide (classroom citation, measured with Open Syllabus) and other, more conventional bibliometric and scientometric indicators (libcitation, measured with the OCLC Worldcat, as well as the h-index of citations in the journals covered by Scopus and in the Book Citation Index). The statistical calculations, based on factor analyses, show the close relationships between these dimensions. These results can be derived in particular from Tables 5 and 9 of this paper (component correlations).
  10. Ghosh, S.S.; Das, S.; Chatterjee, S.K.: Human-centric faceted approach for ontology construction (2020) 0.03
    0.032144334 = product of:
      0.08036084 = sum of:
        0.057061244 = weight(_text_:context in 5731) [ClassicSimilarity], result of:
          0.057061244 = score(doc=5731,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32380077 = fieldWeight in 5731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5731)
        0.023299592 = weight(_text_:system in 5731) [ClassicSimilarity], result of:
          0.023299592 = score(doc=5731,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 5731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5731)
      0.4 = coord(2/5)
    
    Abstract
    In this paper, we propose an ontology building method, called human-centric faceted approach for ontology construction (HCFOC). HCFOC uses the human-centric approach, improvised with the idea of selective dissemination of information (SDI), to deal with context. Further, this ontology construction process makes use of facet analysis and an analytico-synthetic classification approach. This novel fusion contributes to the originality of HCFOC and distinguishes it from other existing ontology construction methodologies. Based on HCFOC, an ontology of the tourism domain has been designed using the Protégé-5.5.0 ontology editor. The HCFOC methodology has provided the necessary flexibility, extensibility, and robustness, and has facilitated the capturing of background knowledge. It models the tourism ontology in such a way that it is able to deal with the context of a tourist's information need with precision. This is evident from the result that more than 90% of users' queries were successfully met. The use of domain knowledge and techniques from both library and information science and computer science has helped in the realization of the desired purpose of this ontology construction process. It is envisaged that HCFOC will have implications for ontology developers. The demonstrated tourism ontology can support any tourism information retrieval system.
  11. Villaespesa, E.; Crider, S.: A critical comparison analysis between human and machine-generated tags for the Metropolitan Museum of Art's collection (2021) 0.03
    0.032144334 = product of:
      0.08036084 = sum of:
        0.057061244 = weight(_text_:context in 341) [ClassicSimilarity], result of:
          0.057061244 = score(doc=341,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32380077 = fieldWeight in 341, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=341)
        0.023299592 = weight(_text_:system in 341) [ClassicSimilarity], result of:
          0.023299592 = score(doc=341,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 341, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=341)
      0.4 = coord(2/5)
    
    Abstract
    Purpose
    Based on the highlights of The Metropolitan Museum of Art's collection, the purpose of this paper is to examine the similarities and differences between the subject keyword tags assigned by the museum and those produced by three computer vision systems.
    Design/methodology/approach
    This paper uses computer vision tools to generate the data and the Getty Research Institute's Art and Architecture Thesaurus (AAT) to compare the subject keyword tags.
    Findings
    This paper finds that there are clear opportunities to use computer vision technologies to automatically generate tags that expand the terms used by the museum. This brings a new perspective to the collection that is different from the traditional art historical one. However, the study also surfaces challenges about the accuracy and lack of context within the computer vision results.
    Practical implications
    This finding has important implications on how these machine-generated tags complement the current taxonomies and vocabularies inputted in the collection database. In consequence, the museum needs to consider the selection process for choosing which computer vision system to apply to their collection. Furthermore, they also need to think critically about the kind of tags they wish to use, such as colors, materials or objects.
    Originality/value
    The study results add to the rapidly evolving field of computer vision within the art information context and provide recommendations of aspects to consider before selecting and implementing these technologies.
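    To make the kind of comparison described concrete: once both tag sets are reconciled against a shared vocabulary (the paper uses the Getty AAT, which is not reproduced here), the overlap can be measured with a simple set statistic. The tags below are invented for illustration:

```python
def jaccard(a, b):
    """Set overlap between two tag sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

# Invented tags for a single artwork; the study instead reconciles both
# vocabularies through the Getty AAT before comparing them.
museum_tags = {"portrait", "woman", "dress"}
vision_tags = {"portrait", "woman", "painting", "indoor"}

print(round(jaccard(museum_tags, vision_tags), 2))  # -> 0.4
```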
  12. Chen, L.; Ding, J.; Larivière, V.: Measuring the citation context of national self-references : how a web journal club is used (2022) 0.03
    0.032144334 = product of:
      0.08036084 = sum of:
        0.057061244 = weight(_text_:context in 545) [ClassicSimilarity], result of:
          0.057061244 = score(doc=545,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32380077 = fieldWeight in 545, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=545)
        0.023299592 = weight(_text_:system in 545) [ClassicSimilarity], result of:
          0.023299592 = score(doc=545,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 545, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=545)
      0.4 = coord(2/5)
    
    Abstract
    The emphasis on research evaluation has brought scrutiny to the role of self-citations in the scholarly communication process. While author self-citations have been studied at length, little is known on national-level self-references (SRs). This paper analyses the citation context of national SRs, using the full-text of 184,859 papers published in PLOS journals. It investigates the differences between national SRs and nonself-references (NSRs) in terms of their in-text mention, presence in enumerations, and location features. For all countries, national SRs exhibit a higher level of engagement than NSRs. NSRs are more often found in enumerative citances than SRs, which suggests that researchers pay more attention to domestic than foreign studies. There are more mentions of national research in the methods section, which provides evidence that methodologies developed in a nation are more likely to be used by other researchers from the same nation. Publications from the United States are cited at a higher rate in each of the sections, indicating that the country still maintains a dominant position in science. On the whole, this paper contributes to a better understanding of the role of national SRs in the scholarly communication system, and how it varies across countries and over time.
  13. Marcondes, C.H.: The role of vocabularies in the age of data : the question of research data (2022) 0.03
    0.032144334 = product of:
      0.08036084 = sum of:
        0.057061244 = weight(_text_:context in 1113) [ClassicSimilarity], result of:
          0.057061244 = score(doc=1113,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32380077 = fieldWeight in 1113, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1113)
        0.023299592 = weight(_text_:system in 1113) [ClassicSimilarity], result of:
          0.023299592 = score(doc=1113,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 1113, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1113)
      0.4 = coord(2/5)
    
    Abstract
    The objective of this work is to discuss how vocabularies can contribute to assigning computational semantics to digital research data within the context of Big Data, so that computers can process them and allow their reuse on a large scale. A conceptualization of data is developed in an attempt to clarify what data are, as an essential element of the Big Data phenomenon, and in particular, what digital research data are. It then proceeds to analyse digital research data uses and cases and their relation to semantics and vocabularies. Data is conceptualized as an artificial, intentional construction that represents a property of an entity within a specific domain and serves as the essential component of Big Data. The concept of semantic expressivity is discussed and used to classify the different vocabularies; within such a classification, ontologies are shown to be a type of knowledge organization system with a higher degree of semantic expressivity. Features of vocabularies that may be used within the context of the Semantic Web and Linked Open Data to assign machine-processable semantics to Big Data are suggested. It is shown that semantics may be assigned at different data aggregation levels.
  14. Das, S.; Paik, J.H.: Gender tagging of named entities using retrieval-assisted multi-context aggregation : an unsupervised approach (2023) 0.03
    0.031997908 = product of:
      0.07999477 = sum of:
        0.068473496 = weight(_text_:context in 941) [ClassicSimilarity], result of:
          0.068473496 = score(doc=941,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.38856095 = fieldWeight in 941, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=941)
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 941) [ClassicSimilarity], result of:
              0.03456382 = score(doc=941,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 941, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=941)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Inferring the gender of named entities present in a text has several practical applications in information sciences. Existing approaches toward name gender identification rely exclusively on using the gender distributions from labeled data. In the absence of such labeled data, these methods fail. In this article, we propose a two-stage model that is able to infer the gender of names present in text without requiring explicit name-gender labels. We use coreference resolution as the backbone for our proposed model. To aid coreference resolution where the existing contextual information does not suffice, we use a retrieval-assisted context aggregation framework. We demonstrate that state-of-the-art name gender inference is possible without supervision. Our proposed method matches or outperforms several supervised approaches and commercially used methods on five English language datasets from different domains.
    Date
    22. 3.2023 12:00:14
  15. Dederke, J.; Hirschmann, B.; Johann, D.: Der Data Citation Index von Clarivate : Eine wertvolle Ressource für die Forschung und für Bibliotheken? (2022) 0.03
    0.031073285 = product of:
      0.15536642 = sum of:
        0.15536642 = weight(_text_:index in 50) [ClassicSimilarity], result of:
          0.15536642 = score(doc=50,freq=6.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.836226 = fieldWeight in 50, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.078125 = fieldNorm(doc=50)
      0.2 = coord(1/5)
    
    Abstract
    The Data Citation Index (DCI) is a searchable collection of bibliographic metadata on research data in datasets and data studies from selected repositories. The DCI covers all scholarly disciplines.
    Object
    Data Citation Index
  16. Tay, A.: The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.03
    0.030492827 = product of:
      0.07623207 = sum of:
        0.06279058 = weight(_text_:index in 40) [ClassicSimilarity], result of:
          0.06279058 = score(doc=40,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.33795667 = fieldWeight in 40, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.04032446 = score(doc=40,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Conclusion
    There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy, even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape looks like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
  17. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.03
    0.029319597 = product of:
      0.07329899 = sum of:
        0.040348392 = weight(_text_:context in 5787) [ClassicSimilarity], result of:
          0.040348392 = score(doc=5787,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 5787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5787)
        0.032950602 = weight(_text_:system in 5787) [ClassicSimilarity], result of:
          0.032950602 = score(doc=5787,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 5787, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5787)
      0.4 = coord(2/5)
    
    Abstract
    This study considers the expressiveness (that is, the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. For the comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  18. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.03
    0.029319597 = product of:
      0.07329899 = sum of:
        0.040348392 = weight(_text_:context in 600) [ClassicSimilarity], result of:
          0.040348392 = score(doc=600,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
        0.032950602 = weight(_text_:system in 600) [ClassicSimilarity], result of:
          0.032950602 = score(doc=600,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 600, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
      0.4 = coord(2/5)
    
    Abstract
    Purpose
    The Integrative Levels Classification (ILC) is a comprehensive "freely faceted" knowledge organization system not previously expressed as SKOS (Simple Knowledge Organization System). This paper reports and reflects on work converting the ILC to a SKOS representation.
    Design/methodology/approach
    The design of the ILC representation and the various steps in the conversion to SKOS are described and located within the context of previous work considering the representation of complex classification schemes in SKOS. Various issues and trade-offs emerging from the conversion are discussed. The conversion implementation employed the STELETO transformation tool.
    Findings
    The ILC conversion captures some of the ILC facet structure by a limited extension beyond the SKOS standard. SPARQL examples illustrate how this extension could be used to create faceted, compound descriptors when indexing or cataloguing. Basic query patterns are provided that might underpin search systems. Possible routes for reducing complexity are discussed.
    Originality/value
    Complex classification schemes, such as the ILC, have features which are not straightforward to represent in SKOS and which extend beyond the functionality of the SKOS standard. The ILC's facet indicators are modelled as rdf:Property sub-hierarchies that accompany the SKOS RDF statements. The ILC's top-level fundamental facet relationships are modelled by extensions of the associative relationship - specialised sub-properties of skos:related. An approach for representing faceted compound descriptions in ILC and other faceted classification schemes is proposed.
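    The modelling device described in the findings, facet relationships expressed as specialised sub-properties of skos:related, can be sketched with rdflib. The namespace, property, and concept names below are invented placeholders rather than actual ILC identifiers:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, SKOS

ILC = Namespace("http://example.org/ilc/")   # placeholder namespace
g = Graph()
g.bind("skos", SKOS)
g.bind("ilc", ILC)

# A facet relationship modelled as a specialised sub-property of
# skos:related, mirroring the approach the paper describes.
g.add((ILC.hasAgentFacet, RDF.type, RDF.Property))
g.add((ILC.hasAgentFacet, RDFS.subPropertyOf, SKOS.related))
g.add((ILC.hasAgentFacet, RDFS.label, Literal("has agent facet", lang="en")))

# Two concepts linked through the facet relationship.
g.add((ILC.c1, RDF.type, SKOS.Concept))
g.add((ILC.c2, RDF.type, SKOS.Concept))
g.add((ILC.c1, ILC.hasAgentFacet, ILC.c2))

print(g.serialize(format="turtle"))
```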
  19. Seeber, M.; Vlegels, J.; Cattaneo, M.: Conditions that do or do not disadvantage interdisciplinary research proposals in project evaluation (2022) 0.03
    0.029319597 = product of:
      0.07329899 = sum of:
        0.040348392 = weight(_text_:context in 636) [ClassicSimilarity], result of:
          0.040348392 = score(doc=636,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=636)
        0.032950602 = weight(_text_:system in 636) [ClassicSimilarity], result of:
          0.032950602 = score(doc=636,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 636, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=636)
      0.4 = coord(2/5)
    
    Abstract
    Despite interdisciplinary research playing a pivotal role in modern science, interdisciplinary research proposals appear to have a lower chance of being funded. Scholars have suggested that interdisciplinary research may be disadvantaged in evaluation, and that it should be earmarked specific resources and evaluated by specific panels. However, empirical evidence is limited regarding the conditions under which interdisciplinary proposals are disadvantaged. We explore this issue in the context of the European Cooperation in Science and Technology (COST) research funding framework, which, contrary to common practice, does not organize the evaluation process around disciplinary panels but uses a panel-free system. The sample includes data from five calls, from March 2015 to September 2017, for a total of 1,928 proposals and 5,330 evaluations conducted by 3,050 reviewers. We find that the effect of a proposal's degree of interdisciplinarity is negligible and not significant. We find no variation in this result across scientific fields and the disciplinary expertise of reviewers, and no evidence of disciplinary "turf wars." These results suggest that factors assumed to disadvantage interdisciplinary proposals, such as being inherently more challenging to evaluate and being riskier, are less problematic when the evaluation is organized not around disciplinary panels but with a panel-free system.
  20. Late, E.; Kumpulainen, S.: Interacting with digitised historical newspapers : understanding the use of digital surrogates as primary sources (2022) 0.03
    0.029319597 = product of:
      0.07329899 = sum of:
        0.040348392 = weight(_text_:context in 685) [ClassicSimilarity], result of:
          0.040348392 = score(doc=685,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=685)
        0.032950602 = weight(_text_:system in 685) [ClassicSimilarity], result of:
          0.032950602 = score(doc=685,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 685, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=685)
      0.4 = coord(2/5)
    
    Abstract
    Purpose
    The paper examines academic historians' information interactions with material from digital historical-newspaper collections as the research process unfolds.
    Design/methodology/approach
    The study employed qualitative analysis of in-depth interviews with Finnish history scholars who use digitised historical newspapers as primary sources for their research. A model for task-based information interaction guided the collection and analysis of data.
    Findings
    The study revealed numerous information interactions within activities related to task planning, the search process, selecting and working with the items, and synthesis and reporting. The information interactions differ with the activities involved, which calls for system support mechanisms specific to each activity type. Various activities feature information search, which is an essential research method for those using digital collections in the compilation and analysis of data. Furthermore, the application of quantitative methods and multidisciplinary collaboration may be shaping the culture of history research toward convergence with the research culture of the natural sciences.
    Originality/value
    For sustainable digital humanities infrastructure and digital collections, it is of great importance that system designers understand how and why the collections are accessed and how they are used in real-world contexts. The study enriches understanding of the collections' utilisation and advances a theoretical framework for explicating task-based information interaction.

Languages

  • e 298
  • d 89
  • pt 4

Types

  • a 361
  • el 69
  • m 9
  • p 8
  • x 2
  • r 1
  • s 1