Search (151 results, page 1 of 8)

  • Active filter: year_i:[2020 TO 2030}
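The active filter uses Lucene range syntax: the opening square bracket makes the lower bound inclusive and the closing curly brace makes the upper bound exclusive, so year_i:[2020 TO 2030} matches publication years 2020 through 2029. As a hedged illustration only (the page does not name its backend; the *_i dynamic-field suffix and the Lucene-style score explanations merely suggest a Solr-like server), a filtered query with score explanations could be issued roughly as follows. The endpoint, core name, and query terms are assumptions; the terms are guessed from the weight(_text_:...) clauses shown in the results.

```python
import requests

# Hypothetical Solr endpoint and core; adjust to the real installation.
SOLR_SELECT = "http://localhost:8983/solr/mycore/select"

params = {
    "q": "reference database 22",      # guessed query terms (see the explanation clauses below)
    "fq": "year_i:[2020 TO 2030}",     # [ = inclusive 2020 ... } = exclusive 2030
    "rows": 20,                        # one page of results
    "debugQuery": "true",              # request per-document score explanations
}

resp = requests.get(SOLR_SELECT, params=params)
data = resp.json()
print(data["response"]["numFound"])        # e.g. 151
print(list(data["debug"]["explain"])[:3])  # first few doc ids with explanation trees
```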
  1. Yu, L.; Fan, Z.; Li, A.: ¬A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.07
    0.06731068 = product of:
      0.10096602 = sum of:
        0.037008587 = weight(_text_:reference in 5655) [ClassicSimilarity], result of:
          0.037008587 = score(doc=5655,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.17979822 = fieldWeight in 5655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=5655)
        0.06395743 = sum of:
          0.036538422 = weight(_text_:database in 5655) [ClassicSimilarity], result of:
            0.036538422 = score(doc=5655,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.17865248 = fieldWeight in 5655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
          0.02741901 = weight(_text_:22 in 5655) [ClassicSimilarity], result of:
            0.02741901 = score(doc=5655,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.15476047 = fieldWeight in 5655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication.
    Design/methodology/approach: The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users.
    Findings: The results show that, once an information unit is defined as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is then possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5 and 44 units, respectively.
    Research limitations/implications: The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work in IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that of the Library of Congress's BIBFRAME 2.0 model.
    Practical implications: It calls for reconsideration of existing operational units in a variety of library and information activities.
    Originality/value: The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
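The indented breakdowns shown under each hit follow Lucene's classic tf-idf scoring (the [ClassicSimilarity] tag). As an illustration, the sketch below recomputes the 0.06731068 reported for result 1 from the values printed in its explanation, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); only numbers displayed above are used.

```python
import math

MAX_DOCS   = 44218        # corpus size shown in every idf(...) line
QUERY_NORM = 0.050593734  # queryNorm from the explanation
FIELD_NORM = 0.03125      # fieldNorm(doc=5655)

def clause_score(freq: float, doc_freq: int) -> float:
    """Score of one term clause under Lucene ClassicSimilarity."""
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))   # e.g. 4.0683694 for docFreq=2055
    query_weight = idf * QUERY_NORM                   # e.g. 0.205834
    field_weight = tf * idf * FIELD_NORM              # e.g. 0.17979822
    return query_weight * field_weight

total = (clause_score(2.0, 2055)     # _text_:reference -> ~0.0370086
         + clause_score(2.0, 2109)   # _text_:database  -> ~0.0365384
         + clause_score(2.0, 3622))  # _text_:22        -> ~0.0274190

print(total * 2.0 / 3.0)             # coord(2/3): two of three query clauses matched -> ~0.06731068
```

A fieldNorm of 0.03125 corresponds roughly to a field of 1,024 indexed terms (1/sqrt(1024)), and coord(2/3) down-weights the document because only two of the three query clauses matched.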
  2. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.07
    0.06731068 = product of:
      0.10096602 = sum of:
        0.037008587 = weight(_text_:reference in 566) [ClassicSimilarity], result of:
          0.037008587 = score(doc=566,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.17979822 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.06395743 = sum of:
          0.036538422 = weight(_text_:database in 566) [ClassicSimilarity], result of:
            0.036538422 = score(doc=566,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.17865248 = fieldWeight in 566, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
          0.02741901 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
            0.02741901 = score(doc=566,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.15476047 = fieldWeight in 566, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
      0.6666667 = coord(2/3)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  3. Golub, K.; Tyrkkö, J.; Hansson, J.; Ahlström, I.: Subject indexing in humanities : a comparison between a local university repository and an international bibliographic service (2020) 0.05
    0.05237096 = product of:
      0.07855644 = sum of:
        0.046260733 = weight(_text_:reference in 5982) [ClassicSimilarity], result of:
          0.046260733 = score(doc=5982,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.22474778 = fieldWeight in 5982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5982)
        0.032295708 = product of:
          0.064591415 = sum of:
            0.064591415 = weight(_text_:database in 5982) [ClassicSimilarity], result of:
              0.064591415 = score(doc=5982,freq=4.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.31581596 = fieldWeight in 5982, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5982)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    As the humanities develop in the realm of increasingly pronounced digital scholarship, it is important to provide quality subject access to a vast range of heterogeneous information objects in digital services. The study aims to paint a representative picture of the current state of the use of subject index terms in humanities journal articles, with particular reference to the well-established subject access needs of humanities researchers, in order to identify which improvements are needed in this context.
    Design/methodology/approach: Subject metadata for a sample of 649 peer-reviewed journal articles from across the humanities is compared between a university repository and Scopus, the former reflecting local and national policies and the latter being the most comprehensive international abstract and citation database of research output.
    Findings: The study shows that established bibliographic objectives to ensure subject access for humanities journal articles are supported neither in Scopus, the world's largest commercial abstract and citation database, nor in the local repository of a public university in Sweden. The indexing policies in the two services do not seem to address the needs of humanities scholars for highly granular subject index terms with appropriate facets; no controlled vocabularies for any humanities discipline are used whatsoever.
    Originality/value: In all, not much has changed since the 1990s, when indexing for the humanities was shown to lag behind the sciences. The community of researchers and information professionals, today working together on digital humanities projects, as well as interdisciplinary research teams, should demand that their subject access needs be fulfilled, especially in commercial services such as Scopus and discovery services.
  4. Hartel, J.: ¬The red thread of information (2020) 0.04
    0.04226508 = product of:
      0.063397616 = sum of:
        0.046260733 = weight(_text_:reference in 5839) [ClassicSimilarity], result of:
          0.046260733 = score(doc=5839,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.22474778 = fieldWeight in 5839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5839)
        0.017136881 = product of:
          0.034273762 = sum of:
            0.034273762 = weight(_text_:22 in 5839) [ClassicSimilarity], result of:
              0.034273762 = score(doc=5839,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.19345059 = fieldWeight in 5839, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5839)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: In "The Invisible Substrate of Information Science", a landmark article about the discipline of information science, Marcia J. Bates wrote that "... we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail?
    Design/methodology/approach: Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed.
    Findings: Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968).
    Research limitations/implications: With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena.
    Originality/value: Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
  5. Das, S.; Bagchi, M.; Hussey, P.: How to teach domain ontology-based knowledge graph construction? : an Irish experiment (2023) 0.04
    0.04226508 = product of:
      0.063397616 = sum of:
        0.046260733 = weight(_text_:reference in 1126) [ClassicSimilarity], result of:
          0.046260733 = score(doc=1126,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.22474778 = fieldWeight in 1126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1126)
        0.017136881 = product of:
          0.034273762 = sum of:
            0.034273762 = weight(_text_:22 in 1126) [ClassicSimilarity], result of:
              0.034273762 = score(doc=1126,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.19345059 = fieldWeight in 1126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1126)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Domains represent concepts which belong to specific parts of the world. The particularized meaning of words linguistically encoding such domain concepts is provided by domain-specific resources. The explicit meaning of such words is increasingly captured computationally using domain-specific ontologies, which, even for the same reference domain, are more often than not semantically incompatible. As information systems that rely on domain ontologies expand, there is a growing need not only to design domain ontologies and domain ontology-grounded Knowledge Graphs (KGs) but also to align them to general standards and conventions for interoperability. This often presents an insurmountable challenge to domain experts, who additionally have to learn the construction of domain ontologies and KGs. Until now, several research methodologies have been proposed by different research groups, using different technical approaches and based on scenarios from different domains of application. However, no methodology has been proposed that both facilitates designing conceptually well-founded ontologies and is, equally, grounded in the general pedagogical principles of knowledge organization and thereby flexible enough to teach and reproduce vis-à-vis domain experts. The purpose of this paper is to provide such a general, pedagogically flexible semantic knowledge modelling methodology. We exemplify the methodology with examples and illustrations from a professional-level digital healthcare course, and conclude with an evaluation grounded in technological parameters as well as user experience design principles.
    Date
    20.11.2023 17:19:22
  6. Eadon, Y.M.: ¬(Not) part of the system : resolving epistemic disconnect through archival reference (2020) 0.04
    0.04137686 = product of:
      0.12413058 = sum of:
        0.12413058 = weight(_text_:reference in 23) [ClassicSimilarity], result of:
          0.12413058 = score(doc=23,freq=10.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.60306156 = fieldWeight in 23, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=23)
      0.33333334 = coord(1/3)
    
    Abstract
    Information seeking practices of conspiracists are examined by introducing the new archival user group of "conspiracist researchers." The epistemic commitments of archival knowledge organization (AKO), rooted in provenance and access/secrecy, fundamentally conflict with the epistemic features of conspiracism, namely: mistrust of authority figures and institutions, accompanying overreliance on firsthand inquiry, and a tendency towards indicative mood/confirmation bias. Through interviews with reference personnel working at two state archives in the American west, I illustrate that the reference interaction is a vital turning point for the conspiracist researcher. Reference personnel can build trust with conspiracist researchers by displaying epistemic empathy and subverting hegemonic archival logics. The burden of bridging the epistemic gap through archival user education thus falls almost exclusively onto reference personnel. Domain analysis is presented as one possible starting point for developing an archival knowledge organization system (AKOS) that could be more epistemically flexible.
  7. Rae, A.R.; Mork, J.G.; Demner-Fushman, D.: ¬The National Library of Medicine indexer assignment dataset : a new large-scale dataset for reviewer assignment research (2023) 0.03
    0.03295506 = product of:
      0.09886518 = sum of:
        0.09886518 = sum of:
          0.064591415 = weight(_text_:database in 885) [ClassicSimilarity], result of:
            0.064591415 = score(doc=885,freq=4.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.31581596 = fieldWeight in 885, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
          0.034273762 = weight(_text_:22 in 885) [ClassicSimilarity], result of:
            0.034273762 = score(doc=885,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.19345059 = fieldWeight in 885, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
      0.33333334 = coord(1/3)
    
    Abstract
    MEDLINE is the National Library of Medicine's (NLM) journal citation database. It contains over 28 million references to biomedical and life science journal articles, and a key feature of the database is that all articles are indexed with NLM Medical Subject Headings (MeSH). The library employs a team of MeSH indexers, and in recent years they have been asked to index close to 1 million articles per year in order to keep MEDLINE up to date. An important part of the MEDLINE indexing process is the assignment of articles to indexers. High quality and timely indexing is only possible when articles are assigned to indexers with suitable expertise. This article introduces the NLM indexer assignment dataset: a large dataset of 4.2 million indexer article assignments for articles indexed between 2011 and 2019. The dataset is shown to be a valuable testbed for expert matching and assignment algorithms, and indexer article assignment is also found to be useful domain-adaptive pre-training for the closely related task of reviewer assignment.
    Date
    22. 1.2023 18:49:49
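The abstract presents the dataset as a testbed for expert matching and assignment algorithms without committing to a particular one. Purely as an invented illustration of the simplest baseline such a testbed could evaluate (none of the names or counts below come from the paper), an article could be routed to the indexer whose past assignments overlap most with the article's MeSH headings:

```python
from collections import Counter

# Invented assignment history: MeSH headings each indexer has handled before.
history = {
    "indexer_A": Counter({"Neoplasms": 120, "Genomics": 40}),
    "indexer_B": Counter({"COVID-19": 90, "Vaccines": 60, "Genomics": 10}),
}

def assign(article_mesh: set[str]) -> str:
    """Route the article to the indexer with the largest prior overlap with its MeSH terms."""
    def overlap(indexer: str) -> int:
        return sum(history[indexer][heading] for heading in article_mesh)
    return max(history, key=overlap)

print(assign({"Genomics", "Neoplasms"}))  # -> indexer_A
print(assign({"COVID-19"}))               # -> indexer_B
```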
  8. Li, G.; Siddharth, L.; Luo, J.: Embedding knowledge graph of patent metadata to measure knowledge proximity (2023) 0.03
    0.03197872 = product of:
      0.09593615 = sum of:
        0.09593615 = sum of:
          0.054807637 = weight(_text_:database in 920) [ClassicSimilarity], result of:
            0.054807637 = score(doc=920,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.26797873 = fieldWeight in 920, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.046875 = fieldNorm(doc=920)
          0.041128512 = weight(_text_:22 in 920) [ClassicSimilarity], result of:
            0.041128512 = score(doc=920,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.23214069 = fieldWeight in 920, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=920)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge proximity refers to the strength of association between any two entities in a structural form that embodies certain aspects of a knowledge base. In this work, we operationalize knowledge proximity within the context of the US Patent Database (knowledge base) using a knowledge graph (structural form) named "PatNet", built using patent metadata including citations, inventors, assignees, and domain classifications. We train various graph embedding models using PatNet to obtain the embeddings of entities and relations. The cosine similarity between the corresponding (or transformed) embeddings of entities denotes the knowledge proximity between them. We compare the embedding models in terms of their performance in predicting target entities and explaining domain expansion profiles of inventors and assignees. We then apply the embeddings of the best-preferred model to associate homogeneous (e.g., patent-patent) and heterogeneous (e.g., inventor-assignee) pairs of entities.
    Date
    22. 3.2023 12:06:55
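The abstract defines knowledge proximity as the cosine similarity between entity embeddings. The minimal sketch below only restates that definition; the four-dimensional vectors are invented stand-ins for the embeddings a trained graph-embedding model would produce for two patents.

```python
import numpy as np

def knowledge_proximity(e1: np.ndarray, e2: np.ndarray) -> float:
    """Cosine similarity between two entity embeddings (1.0 = identical direction)."""
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

# Hypothetical embeddings of two patents.
patent_a = np.array([0.12, -0.40, 0.33, 0.08])
patent_b = np.array([0.10, -0.35, 0.30, 0.02])

print(round(knowledge_proximity(patent_a, patent_b), 3))  # close to 1.0 = high proximity
```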
  9. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.03
    0.030530527 = product of:
      0.09159158 = sum of:
        0.09159158 = weight(_text_:reference in 5719) [ClassicSimilarity], result of:
          0.09159158 = score(doc=5719,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.4449779 = fieldWeight in 5719, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5719)
      0.33333334 = coord(1/3)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex Boolean searches.
  10. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.026785422 = product of:
      0.08035626 = sum of:
        0.08035626 = product of:
          0.24106878 = sum of:
            0.24106878 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24106878 = score(doc=862,freq=2.0), product of:
                0.42893425 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050593734 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://arxiv.org/abs/2212.06721
  11. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.03
    0.02664893 = product of:
      0.079946786 = sum of:
        0.079946786 = sum of:
          0.045673028 = weight(_text_:database in 992) [ClassicSimilarity], result of:
            0.045673028 = score(doc=992,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.2233156 = fieldWeight in 992, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.0390625 = fieldNorm(doc=992)
          0.034273762 = weight(_text_:22 in 992) [ClassicSimilarity], result of:
            0.034273762 = score(doc=992,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.19345059 = fieldWeight in 992, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=992)
      0.33333334 = coord(1/3)
    
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
  12. Rockelle Strader, C.: Cataloging to support information literacy : the IFLA Library Reference Model's user tasks in the context of the Framework for Information Literacy for Higher Education (2021) 0.03
    0.026169024 = product of:
      0.07850707 = sum of:
        0.07850707 = weight(_text_:reference in 713) [ClassicSimilarity], result of:
          0.07850707 = score(doc=713,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.38140965 = fieldWeight in 713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=713)
      0.33333334 = coord(1/3)
    
    Abstract
    Cataloging practices, as exemplified by the five user tasks of the IFLA Library Reference Model, can support information literacy practices. The six frames of the Framework for Information Literacy for Higher Education are used as lenses to examine the user tasks. Two themes emerge from this examination: context matters, and catalogers must tailor bibliographic descriptions to meet users' expectations and information needs. Catalogers need to solicit feedback from various user communities to reform cataloging practices to remain current and viable. Such conversations will enrich the catalog and enhance (reclaim?) its position as a primary tool for research and learning. Supplemental data for this article is available online at https://doi.org/10.1080/01639374.2021.1939828.
  13. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.02
    0.022321185 = product of:
      0.06696355 = sum of:
        0.06696355 = product of:
          0.20089066 = sum of:
            0.20089066 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20089066 = score(doc=5669,freq=2.0), product of:
                0.42893425 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050593734 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  14. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.02
    0.022321185 = product of:
      0.06696355 = sum of:
        0.06696355 = product of:
          0.20089066 = sum of:
            0.20089066 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20089066 = score(doc=1000,freq=2.0), product of:
                0.42893425 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050593734 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  15. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: ¬A flexible template generation and matching method with applications for publication reference metadata extraction (2021) 0.02
    0.02180752 = product of:
      0.06542256 = sum of:
        0.06542256 = weight(_text_:reference in 63) [ClassicSimilarity], result of:
          0.06542256 = score(doc=63,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 63, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=63)
      0.33333334 = coord(1/3)
    
    Abstract
    Conventional rule-based approaches use exact template matching to capture linguistic information and necessarily need to enumerate all variations. We propose a novel flexible template generation and matching scheme called the principle-based approach (PBA) based on sequence alignment, and employ it for reference metadata extraction (RME) to demonstrate its effectiveness. The main contributions of this research are threefold. First, we propose an automatic template generation that can capture prominent patterns using the dominating set algorithm. Second, we devise an alignment-based template-matching technique that uses a logistic regression model, which makes it more general and flexible than pure rule-based approaches. Last, we apply PBA to RME on extensive cross-domain corpora and demonstrate its robustness and generality. Experiments reveal that the same set of templates produced by the PBA framework not only deliver consistent performance on various unseen domains, but also surpass hand-crafted knowledge (templates). We use four independent journal style test sets and one conference style test set in the experiments. When compared to renowned machine learning methods, such as conditional random fields (CRF), as well as recent deep learning methods (i.e., bi-directional long short-term memory with a CRF layer, Bi-LSTM-CRF), PBA has the best performance for all datasets.
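The abstract describes the principle-based approach only at a high level. The toy below is not the authors' implementation; it illustrates the general idea of alignment-based template matching by comparing coarse token-label sequences rather than exact strings, with Python's SequenceMatcher standing in for a real alignment algorithm and the label scheme invented for the example.

```python
import re
from difflib import SequenceMatcher

def coarse_labels(text: str) -> list[str]:
    """Map tokens to coarse labels so structurally similar references align."""
    labels = []
    for tok in re.findall(r"\w+|[^\w\s]", text):
        if tok.isdigit() and len(tok) == 4:
            labels.append("YEAR")
        elif tok.isdigit():
            labels.append("NUM")
        elif tok[0].isupper():
            labels.append("CAP")
        elif tok.isalpha():
            labels.append("WORD")
        else:
            labels.append(tok)  # keep punctuation as-is
    return labels

# A "template" induced from one labelled reference, and an unseen reference to match.
template  = coarse_labels("Smith, J. (2018). Title of the paper. Journal Name, 12, 34-56.")
candidate = coarse_labels("Doe, A. (2020). Another study on indexing. Some Journal, 7, 1-20.")

# Alignment score in [0, 1]; the two label sequences happen to be identical here.
print(SequenceMatcher(None, template, candidate).ratio())  # -> 1.0
```

Both references map to the same label sequence, so the toy reports a perfect match even though the surface strings differ; a full system of the kind described would feed such alignment features to a logistic-regression matcher rather than thresholding a single ratio.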
  16. Kelly, M.: Epistemology, epistemic belief, personal epistemology, and epistemics : a review of concepts as they impact information behavior research (2021) 0.02
    0.02180752 = product of:
      0.06542256 = sum of:
        0.06542256 = weight(_text_:reference in 170) [ClassicSimilarity], result of:
          0.06542256 = score(doc=170,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 170, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=170)
      0.33333334 = coord(1/3)
    
    Abstract
    A review of a range of epistemic concepts that are commonly researched was conducted with reference to conventional epistemology and with reference to foundational approaches to justification. These were assessed in relation to previous research undertaken linking information behavior and experience with paradigm, metatheory, and discourse. This research assesses how the epistemic concept is treated, both within information science and within disciplines that have affinities to the topics or agents that have been the subject of inquiry within the field. An attempt is made to clarify the types of connections that are associated with the epistemic concept and to provide a clearer view of how research focused on information behavior might consider the questions underpinning assumptions relating to knowledge and knowing. The symbiotic connection between epistemics and information science is advanced as a suitably nuanced conception of socially organized knowledge from which to define the appropriate level at which knowledge claims can be usefully advanced. It is proposed that fostering a better understanding of epistemics as a research practice might also provide for the development of a range of insights and methods that reflect the dynamic context within which the study of information behavior and information experience is located.
  17. Wu, Z.; Lu, C.; Zhao, Y.; Xie, J.; Zou, D.; Su, X.: ¬The protection of user preference privacy in personalized information retrieval : challenges and overviews (2021) 0.02
    0.02180752 = product of:
      0.06542256 = sum of:
        0.06542256 = weight(_text_:reference in 520) [ClassicSimilarity], result of:
          0.06542256 = score(doc=520,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 520, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reviews a large body of research relevant to user privacy protection in an untrusted network environment, and then analyzes and evaluates its application limitations in personalized information retrieval, in order to establish the constraints that an effective approach to user preference privacy protection in personalized information retrieval should meet, thus providing a basic reference for solving this problem. First, based on the basic framework of a personalized information retrieval platform, we establish a complete set of constraints for user preference privacy protection in terms of security, usability, efficiency, and accuracy. Then, we comprehensively review the technical features of popular methods for user privacy protection and analyze their application limitations in personalized information retrieval against the constraints of preference privacy protection. The results show that personalized information retrieval places higher requirements on users' privacy protection: the security of users' preference privacy must be comprehensively improved on the untrusted server side without changing the platform, algorithm, efficiency, or accuracy of personalized information retrieval. However, existing privacy methods still cannot meet these requirements. This paper is an important attempt at the problem of user preference privacy protection in personalized information retrieval, and it can provide a basic reference and direction for further study of the problem.
  18. Radford, M.L.; Costello, L.; Montague, K.E.: "Death of social encounters" : investigating COVID-19's initial impact on virtual reference services in academic libraries (2022) 0.02
    0.02180752 = product of:
      0.06542256 = sum of:
        0.06542256 = weight(_text_:reference in 749) [ClassicSimilarity], result of:
          0.06542256 = score(doc=749,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31784135 = fieldWeight in 749, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=749)
      0.33333334 = coord(1/3)
    
    Abstract
    This investigation explores the initial impact of the COVID-19 pandemic on live chat virtual reference services (VRS) in academic libraries and on user behaviors from March to December 2020 using Goffman's theoretical framework (1956, 1967, 1971). Data from 300 responses by academic librarians to two longitudinal online surveys and 28 semi-structured interviews were quantitatively and qualitatively analyzed. Results revealed that academic librarians were well-positioned to provide VRS as university information hubs during pandemic shutdowns. Qualitative analysis revealed that participants received gratitude for VRS help, but also experienced frustrations and angst with limited accessibility during COVID-19. Participants reported changes including VRS volume, level of complexity, and question topics. Results reveal the range and frequency of new services with librarians striving to make personal connections with users through VRS, video consultations, video chat, and other strategies. Participants found it difficult to maintain these connections, coping through grit and mutual support when remote work became necessary. They adapted to challenges, including isolation, technology learning curves, and disrupted work routines. Librarians' responses chronicle their innovative approaches, fierce determination, emotional labor, and dedication to helping users and colleagues through this unprecedented time. Results have vital implications for the future of VRS.
  19. Dunsire, G.; Fritz, D.; Fritz, R.: Instructions, interfaces, and interoperable data : the RIMMF experience with RDA revisited (2020) 0.02
    0.02158834 = product of:
      0.06476502 = sum of:
        0.06476502 = weight(_text_:reference in 5751) [ClassicSimilarity], result of:
          0.06476502 = score(doc=5751,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31464687 = fieldWeight in 5751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5751)
      0.33333334 = coord(1/3)
    
    Abstract
    This article presents a case study of RIMMF, a software tool developed to improve the orientation and training of catalogers who use Resource Description and Access (RDA) to maintain bibliographic data. The cataloging guidance and instructions of RDA are based on the Functional Requirements conceptual models that are now consolidated in the IFLA Library Reference Model, but many catalogers are applying RDA in systems that have evolved from inventory and text-processing applications developed from older metadata paradigms. The article describes how RIMMF interacts with the RDA Toolkit and RDA Registry to offer cataloger-friendly multilingual data input and editing interfaces.
  20. Aalberg, T.; O'Neill, E.; Zumer, M.: Extending the LRM Model to integrating resources (2021) 0.02
    0.02158834 = product of:
      0.06476502 = sum of:
        0.06476502 = weight(_text_:reference in 295) [ClassicSimilarity], result of:
          0.06476502 = score(doc=295,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31464687 = fieldWeight in 295, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0546875 = fieldNorm(doc=295)
      0.33333334 = coord(1/3)
    
    Abstract
    Integrating resources are distinct in that they change over time in such a way that their previous content is replaced with updated content. This study examines how integrating resources can be modeled using the entities and relationships of the IFLA Library Reference Model (LRM) and clarifies how they can be identified. While monographs have been extensively analyzed, integrating resources have received very little attention. Applying the model unmodified to integrating resources is neither practical nor theoretically sound. With the addition of two proposed relationships, the model can be extended to accommodate the diachronic relationship intrinsic between expressions and manifestations exhibited by integrating resources.

Languages

  • e 122
  • d 29

Types

  • a 141
  • el 25
  • m 4
  • p 3
  • x 1