Search (184 results, page 2 of 10)

  • Active filter: year_i:[2020 TO 2030}
  1. Berg, A.; Nelimarkka, M.: Do you see what I see? : measuring the semantic differences in image-recognition services' outputs (2023) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 1070) [ClassicSimilarity], result of:
          0.10759281 = score(doc=1070,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 1070, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1070)
      0.33333334 = coord(1/3)
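    The explain tree above is Lucene's ClassicSimilarity breakdown: tf is the square root of the term frequency, idf follows 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord(1/3) indicates that one of the three query clauses matched this document. A minimal Python sketch, using only the values printed above, reproduces the reported numbers:

      import math

      # Values copied from the explain tree (term "systematic", doc 1070).
      freq, doc_freq, max_docs = 2.0, 395, 44218
      query_norm, field_norm = 0.049684696, 0.046875

      tf = math.sqrt(freq)                             # ~1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~5.715473
      query_weight = idf * query_norm                  # ~0.28397155
      field_weight = tf * idf * field_norm             # ~0.3788859
      term_score = query_weight * field_weight         # ~0.10759281
      final_score = term_score * (1 / 3)               # coord(1/3) -> ~0.03586427
      print(round(final_score, 8))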
    
    Abstract
    As scholars increasingly undertake large-scale analysis of visual materials, advanced computational tools show promise for informing that process. One technique in the toolbox is image recognition, made readily accessible via Google Vision AI, Microsoft Azure Computer Vision, and Amazon's Rekognition service. However, concerns about issues such as bias and low reliability have led to warnings against research employing it. A systematic study of cross-service label agreement concretized such issues: using eight datasets, spanning professionally produced and user-generated images, the work showed that image-recognition services disagree on the most suitable labels for images. Beyond supporting caveats expressed in prior literature, the report articulates two mitigation strategies, both involving the use of multiple image-recognition services: highly explorative research could include all the labels, accepting noisier but less restrictive analysis output; alternatively, scholars may employ word-embedding-based approaches to identify concepts that are similar enough for their purposes and restrict the analysis to the labels thus retained.
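    The second mitigation strategy lends itself to a short illustration: treat two labels from different services as equivalent when their word embeddings are sufficiently similar. The sketch below is an illustrative stand-in only (the pretrained GloVe model, the 0.7 threshold, and the label sets are assumptions, not the authors' setup):

      from itertools import product
      import gensim.downloader as api

      # Illustrative pretrained embeddings; any word-vector model could stand in here.
      vectors = api.load("glove-wiki-gigaword-100")

      def matching_labels(labels_a, labels_b, threshold=0.7):
          """Keep pairs of labels from two services whose vectors are close enough."""
          matches = []
          for a, b in product(labels_a, labels_b):
              if a in vectors and b in vectors and vectors.similarity(a, b) >= threshold:
                  matches.append((a, b))
          return matches

      # Hypothetical label sets returned by two image-recognition services.
      print(matching_labels(["dog", "pet", "grass"], ["puppy", "lawn", "building"]))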
  2. Varadarajan, U.; Dutta, B.: Models for narrative information : a study (2022) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 1102) [ClassicSimilarity], result of:
          0.10759281 = score(doc=1102,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 1102, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1102)
      0.33333334 = coord(1/3)
    
    Abstract
    A review of the literature shows that comparatively few studies examine ontology-based narrative models; this gap motivates the current work. A parametric approach was adopted to survey the existing ontology-driven models for narrative information, treating the narrative and ontology components as parameters, so that the relevant literature and the ontology models are covered together. The work follows a systematic literature review methodology for an extensive literature selection, and the models were chosen from the literature using a stratified random sampling technique. The findings give an overview of narrative models across domains and identify the differences and similarities of knowledge representation in ontology-based narrative information models. The paper explores the basic and top-level concepts in the models, discusses narrative theories in the context of ongoing research, and identifies the state-of-the-art literature on ontology-based narrative information.
  3. Hong, Y.; Zeng, M.L.: International Classification of Diseases (ICD) (2022) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 1112) [ClassicSimilarity], result of:
          0.10759281 = score(doc=1112,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 1112, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1112)
      0.33333334 = coord(1/3)
    
    Abstract
    This article presents the history, contents, structures, functions, and applications of the International Classification of Diseases (ICD), which is a global standard maintained by the World Health Organization (WHO). The article aims to present ICD from the knowledge organization perspective and focuses on the current versions, ICD-10 and ICD-11. It also introduces the relationship between ICD and other health knowledge organization systems (KOSs), as well as research and development efforts reported in health informatics. The article concludes that the high-level effort of promoting a unified classification system such as ICD is critical in providing a common language for systematic recording, reporting, analysis, interpretation, and comparison of mortality and morbidity data. It greatly enhances the consistency of coding across languages, cultures, and healthcare systems around the world.
  4. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 1161) [ClassicSimilarity], result of:
          0.10759281 = score(doc=1161,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
      0.33333334 = coord(1/3)
    
    Abstract
    Linked Open Data (LOD) are a set of principles for publishing structured, connected data available for reuse under an open license. The objective of this paper is to analyze the publishing of bibliographic data as LOD, having as a product the elaboration of theoretical-methodological recommendations for the publication of these data, in an approach based on the ten best practices for publishing LOD from the World Wide Web Consortium. The starting point was a systematic literature review, in which initiatives to publish bibliographic data as LOD were identified; an empirical study of these institutions was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
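    For readers unfamiliar with what publishing bibliographic data as LOD produces in practice, a minimal rdflib sketch is shown below; the example namespace, the choice of Dublin Core terms, and the Turtle serialization are illustrative assumptions, not the recommendations derived in the paper:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import DCTERMS, RDF

      EX = Namespace("http://example.org/bib/")   # hypothetical dataset namespace

      g = Graph()
      g.bind("dcterms", DCTERMS)

      record = EX["record/123"]
      g.add((record, RDF.type, DCTERMS.BibliographicResource))
      g.add((record, DCTERMS.title, Literal("Linked open bibliographic data: a proposal")))
      g.add((record, DCTERMS.creator, Literal("Example, A.")))
      g.add((record, DCTERMS.issued, Literal("2024")))

      # Serialize as Turtle, one of the formats commonly used when exposing LOD.
      print(g.serialize(format="turtle"))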
  5. Metzinger, T.: Artificial suffering : an argument for a global moratorium on synthetic phenomenology (2021) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 1212) [ClassicSimilarity], result of:
          0.10759281 = score(doc=1212,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 1212, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1212)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper has a critical and a constructive part. The first part formulates a political demand, based on ethical considerations: until 2050, there should be a global moratorium on synthetic phenomenology, strictly banning all research that directly aims at or knowingly risks the emergence of artificial consciousness on post-biotic carrier systems. The second part lays the first conceptual foundations for an open-ended process with the aim of gradually refining the original moratorium, tying it to an ever more fine-grained, rational, evidence-based, and hopefully ethically convincing set of constraints. The systematic research program defined by this process could lead to an incremental reformulation of the original moratorium. It might result in a repeal of the moratorium even before 2050, in the continuation of a strict ban beyond the year 2050, or in a gradually evolving, more substantial, and ethically refined view of which kinds of conscious experience, if any, we want to implement in AI systems.
  6. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.03
    0.03447506 = product of:
      0.103425175 = sum of:
        0.103425175 = sum of:
          0.05630404 = weight(_text_:indexing in 40) [ClassicSimilarity], result of:
            0.05630404 = score(doc=40,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29604656 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.047121134 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.047121134 = score(doc=40,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.33333334 = coord(1/3)
    
    Date
    17.11.2020 12:22:59
    Theme
    Citation indexing
  7. Rae, A.R.; Mork, J.G.; Demner-Fushman, D.: ¬The National Library of Medicine indexer assignment dataset : a new large-scale dataset for reviewer assignment research (2023) 0.03
    0.030177874 = product of:
      0.09053362 = sum of:
        0.09053362 = sum of:
          0.05687567 = weight(_text_:indexing in 885) [ClassicSimilarity], result of:
            0.05687567 = score(doc=885,freq=4.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29905218 = fieldWeight in 885, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
          0.033657953 = weight(_text_:22 in 885) [ClassicSimilarity], result of:
            0.033657953 = score(doc=885,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.19345059 = fieldWeight in 885, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
      0.33333334 = coord(1/3)
    
    Abstract
    MEDLINE is the National Library of Medicine's (NLM) journal citation database. It contains over 28 million references to biomedical and life science journal articles, and a key feature of the database is that all articles are indexed with NLM Medical Subject Headings (MeSH). The library employs a team of MeSH indexers, and in recent years they have been asked to index close to 1 million articles per year in order to keep MEDLINE up to date. An important part of the MEDLINE indexing process is the assignment of articles to indexers. High quality and timely indexing is only possible when articles are assigned to indexers with suitable expertise. This article introduces the NLM indexer assignment dataset: a large dataset of 4.2 million indexer article assignments for articles indexed between 2011 and 2019. The dataset is shown to be a valuable testbed for expert matching and assignment algorithms, and indexer article assignment is also found to be useful domain-adaptive pre-training for the closely related task of reviewer assignment.
    Date
    22. 1.2023 18:49:49
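    The expert-matching task described in the abstract above can be illustrated with a deliberately simplified sketch: represent each indexer by the text of articles they have indexed before and route a new article to the most similar profile. The toy data and the TF-IDF/cosine approach are illustrative assumptions, not the benchmark method associated with the dataset:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Toy profiles: each indexer is represented by text of articles they indexed before.
      indexer_profiles = {
          "indexer_a": "hypertension blood pressure cardiology statins cardiovascular outcomes",
          "indexer_b": "gene expression rna sequencing transcriptome regulation",
      }
      new_article = "blood pressure lowering drugs and cardiovascular outcomes"

      vectorizer = TfidfVectorizer()
      matrix = vectorizer.fit_transform(list(indexer_profiles.values()) + [new_article])
      similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

      # Assign the article to the indexer whose profile is most similar.
      best_indexer = max(zip(indexer_profiles, similarities), key=lambda pair: pair[1])
      print(best_indexer)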
  8. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.03
    0.030177874 = product of:
      0.09053362 = sum of:
        0.09053362 = sum of:
          0.05687567 = weight(_text_:indexing in 992) [ClassicSimilarity], result of:
            0.05687567 = score(doc=992,freq=4.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29905218 = fieldWeight in 992, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=992)
          0.033657953 = weight(_text_:22 in 992) [ClassicSimilarity], result of:
            0.033657953 = score(doc=992,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.19345059 = fieldWeight in 992, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=992)
      0.33333334 = coord(1/3)
    
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
  9. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5787) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5787,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5787)
      0.33333334 = coord(1/3)
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  10. Jiang, X.; Zhu, X.; Chen, J.: Main path analysis on cyclic citation networks (2020) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5813) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5813,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5813)
      0.33333334 = coord(1/3)
    
    Abstract
    Main path analysis is a well-known network-based method for understanding the evolution of a scientific domain. Most existing methods have two steps, weighting citation arcs based on search path counting and then exploring main paths in a greedy fashion, under the assumption that citation networks are acyclic. The only available proposal that avoids manual cycle removal is to apply a "preprint transformation" that converts a cyclic network into an acyclic counterpart. Through a detailed discussion of the issues concerning this approach, especially deriving the "de-preprinted" main paths for the original network, this article proposes an alternative solution with two-fold contributions. Based on the argument that a publication cannot influence itself through a citation cycle, the SimSPC algorithm is proposed to weight citation arcs by counting simple search paths. A set of algorithms is further proposed for main path exploration and extraction directly from cyclic networks, based on a novel data structure, the main path tree. The experiments on two cyclic citation networks demonstrate the usefulness of the alternative solution. Meanwhile, the experiments show that publications in strongly connected components may sit on the turning points of main path networks, which underlines the need for a systematic way of dealing with citation cycles.
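    The idea of weighting citation arcs by counting simple paths can be shown on a toy network; the brute-force enumeration below only illustrates the principle and is not the SimSPC algorithm or the main path tree structure proposed in the paper:

      from collections import Counter
      import networkx as nx

      # Toy citation network (arcs run from citing to cited paper); B and C form a cycle.
      G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "B"), ("B", "D"), ("A", "C"), ("C", "D")])

      sources = [n for n in G if G.in_degree(n) == 0]   # papers cited by no one in the set
      sinks = [n for n in G if G.out_degree(n) == 0]    # papers citing no one in the set

      # Weight each arc by the number of simple source-to-sink paths traversing it.
      arc_weights = Counter()
      for s in sources:
          for t in sinks:
              for path in nx.all_simple_paths(G, s, t):
                  arc_weights.update(zip(path, path[1:]))

      print(arc_weights.most_common())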
  11. Heinström, J.; Sormunen, E.; Savolainen, R.; Ek, S.: Developing an empirical measure of everyday information mastering (2020) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5914) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5914,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5914, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5914)
      0.33333334 = coord(1/3)
    
    Abstract
    The aim of the study was to develop an empirical measure for everyday information mastering (EIM). EIM describes the ways that individuals, based on their beliefs, attitudes, and expectations, orient themselves to information as a resource of everyday action. The key features of EIM were identified by a conceptual analysis focusing on three EIM frameworks. Four modes of EIM (Proactive, Social, Reactive, and Passive) and their 12 constituents were identified. A survey of 39 items was developed in two pilot studies to operationalize the identified modes as measurable EIM constituents. The respondents in the main study were upper secondary school students (n = 412). Exploratory factor analysis (EFA) was applied to validate subscales for each EIM constituent. Seven subscales emerged: Inquiring and Scanning in the Proactive mode; Social media-centered and Experiential in the Social mode; and Information poor, Overwhelmed, and Blunting in the Passive mode. Two constituents, Serendipitous and Intuitive, were not supported in the EFA. The findings highlight that the core constituents of an individual's everyday information mastering can be operationalized as psychometric scales. The instrument contributes to the systematic empirical study of EIM constituents and their relationships. The study further sheds light on key modes of EIM.
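    As a mechanical illustration of the exploratory-factor-analysis step (not the authors' analysis, and run here on random stand-in data, so the loadings are meaningless), scikit-learn's FactorAnalysis with varimax rotation can extract a fixed number of factors from a respondents-by-items matrix:

      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(0)
      # Stand-in for survey responses: 412 respondents x 39 Likert-type items (1-5).
      responses = rng.integers(1, 6, size=(412, 39)).astype(float)

      efa = FactorAnalysis(n_components=7, rotation="varimax")
      efa.fit(responses)

      loadings = efa.components_.T        # one row per item, one column per factor
      print(loadings.shape)               # (39, 7)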
  12. Schlagwein, D.: Consolidated, systemic conceptualization, and definition of the "sharing economy" (2020) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5921) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5921,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5921)
      0.33333334 = coord(1/3)
    
    Abstract
    The "sharing economy" has recently emerged as a major global phenomenon in practice and is consequently an important research topic. What, precisely, is meant by this term, "sharing economy"? The literature to date offers many, often incomplete and conflicting definitions. This makes it difficult for researchers to lead a coherent discourse, to compare findings and to select appropriate cases. Alternative terms (e.g., "collaborative consumption," "gig economy," and "access economy") are a further complication. To resolve this issue, our article develops a consolidated (based on all prior work) and systemic (relating to the phenomenon in its entire scope) definition of the sharing economy. The definition is based on the detailed analysis of definitions and explanations in 152 sources identified in a systematic literature review. We identify 36 original understandings of the term "sharing economy." Using semantic integration strategies, we consolidate 84 semantic facets in these definitions into 18 characteristics of the sharing economy. Resolving conflicts in the meaning and scope of these characteristics, we arrive at a consolidated, systemic definition. We evaluate the definition's appropriateness and applicability by applying it to cases claimed by the media to be examples of the sharing economy. This article's definition is useful for future research and discourse on the sharing economy.
  13. Thompson, N.; McGill, T.; Bunn, A.; Alexander, R.: Cultural factors and the role of privacy concerns in acceptance of government surveillance (2020) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5940) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5940,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5940, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5940)
      0.33333334 = coord(1/3)
    
    Abstract
    Though there is a tension between citizens' privacy concerns and their acceptance of government surveillance, there is little systematic research in this space and less still in a cross-cultural context. We address the research gap by modeling the factors that drive public acceptance of government surveillance, and by exploring the influence of national culture. The research involved an online survey of 242 Australian and Sri Lankan residents. Data were analyzed using partial least squares, revealing that privacy concerns around initial collection of citizens' data influenced levels of acceptance of surveillance in Australia but not Sri Lanka, whereas concerns about secondary use of data did not influence levels of acceptance in either country. These findings suggest that respondents conflate surveillance with the collection of data and may not consider subsequent secondary use. We also investigate cultural differences, finding that societal collectivism and power distance significantly affect the strength of the relationships between privacy concerns and acceptance of surveillance, on the one hand, and adoption of privacy protections, on the other. Our research also considers the role of trust in government, and perceived need for surveillance. Findings are discussed with their implications for theory and practice.
  14. Kim, L.; Portenoy, J.H.; West, J.D.; Stovel, K.W.: Scientific journals still matter in the era of academic search engines and preprint archives (2020) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5961) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5961,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5961)
      0.33333334 = coord(1/3)
    
    Abstract
    Journals play a critical role in the scientific process because they evaluate the quality of incoming papers and offer an organizing filter for search. However, the role of journals has been called into question because new preprint archives and academic search engines make it easier to find articles independent of the journals that publish them. Research on this issue is complicated by the deeply confounded relationship between article quality and journal reputation. We present an innovative proxy for individual article quality that is divorced from the journal's reputation or impact factor: the number of citations to preprints posted on arXiv.org. Using this measure to study three subfields of physics that were early adopters of arXiv, we show that prior estimates of the effect of journal reputation on an individual article's impact (measured by citations) are likely inflated. While we find that higher-quality preprints in these subfields are now less likely to be published in journals compared to prior years, we find little systematic evidence that the role of journal reputation on article performance has declined.
  15. Patin, B.; Sebastian, M.; Yeon, J.; Bertolini, D.; Grimm, A.: Interrupting epistemicide : a practical framework for naming, identifying, and ending epistemic injustice in the information professions (2021) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 353) [ClassicSimilarity], result of:
          0.08966068 = score(doc=353,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=353)
      0.33333334 = coord(1/3)
    
    Abstract
    The information professions need a paradigmatic shift to address the epistemicide happening within our field and the ways we have systematically undermined knowledge systems falling outside of Western traditions. Epistemicide is the killing, silencing, annihilation, or devaluing of a knowledge system. We argue that epistemicide happens when epistemic injustices are persistent and systematic and collectively work as a structured and systemic oppression of particular ways of knowing. We present epistemicide as a conceptual approach for understanding and analyzing the ways knowledge systems are silenced or devalued within Information Science. We extend Fricker's framework by (a) identifying new types of epistemic injustices and (b) adding to Fricker's concepts of Primary and Secondary Harm the concept of a Third Harm, which occurs at an intergenerational level. Addressing epistemicide is critical for information professionals because we task ourselves with handling knowledge from every field. Acknowledging these epistemic injustices and specific harms, and taking steps to interrupt them, supports the social justice movements already under way. This paper serves as an interruption of epistemic injustice by presenting actions toward justice in the form of operationalized interventions against epistemicide.
  16. Oliphant, T.: Emerging (information) realities and epistemic injustice (2021) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 358) [ClassicSimilarity], result of:
          0.08966068 = score(doc=358,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 358, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=358)
      0.33333334 = coord(1/3)
    
    Abstract
    Emergent realities such as the COVID-19 pandemic and corresponding "infodemic," the resurgence of Black Lives Matter, climate catastrophe, and fake news, misinformation, disinformation, and so on challenge information researchers to reconsider the limitations and potential of the user-centered paradigm that has guided much library and information studies (LIS) research. In order to engage with these emergent realities, understanding who people are in terms of their social identities, social power, and as epistemic agents-that is, knowers, speakers, listeners, and informants-may provide insight into human information interactions. These are matters of epistemic injustice. Drawing heavily from Miranda Fricker's work Epistemic Injustice: Power & the Ethics of Knowing, I use the concept of epistemic injustice (testimonial, systematic, and hermeneutical injustice) to consider people as epistemic beings rather than "users" in order to potentially illuminate new understandings of the subfields of information behavior and information literacy. Focusing on people as knowers, speakers, listeners, and informants rather than "users" presents an opportunity for information researchers, practitioners, and LIS educators to work in service of the epistemic interests of people and in alignment with liberatory aims.
  17. Tian, Y; Gomez, R.; Cifor, M.; Wilson, J.; Morgan, H.: ¬The information practices of law enforcement : passive and active collaboration and its implication for sanctuary laws in Washington state (2021) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 386) [ClassicSimilarity], result of:
          0.08966068 = score(doc=386,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=386)
      0.33333334 = coord(1/3)
    
    Abstract
    Although Washington state sanctuary policies of 2017 prohibit collaboration between local law enforcement and federal immigration enforcement in noncriminal cases, compliance with sanctuary policies has not been systematically studied. We explore information practices and collaboration between local law enforcement and federal immigration enforcement in Grant County, Washington, based on records from November 2017 to May 2019 obtained by the University of Washington Center for Human Rights through Public Records Act (PRA) requests. Qualitative analysis of over 8,000 pages reveals a baseline of passive and active information sharing and collaboration between local law enforcement and federal immigration agencies before Washington sanctuary laws went into effect in May 2019, a practice that needs to stop if agencies are to comply with the laws. We employ a systematic methodology to obtain (through PRA and other Access to Information requests) and analyze official records through qualitative content analysis, to monitor and hold local law enforcement accountable in their compliance with sanctuary laws. This method can be used to examine law enforcement information behaviors in other counties in Washington, and in other states that offer sanctuary protections, as a way to monitor compliance with sanctuary laws and strengthen the protection of immigrants' rights.
  18. Cabanac, G.; Labbé, C.: Prevalence of nonsensical algorithmically generated papers in the scientific literature (2021) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 410) [ClassicSimilarity], result of:
          0.08966068 = score(doc=410,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=410)
      0.33333334 = coord(1/3)
    
    Abstract
    In 2014 leading publishers withdrew more than 120 nonsensical publications automatically generated with the SCIgen program. Casual observations suggested that similar problematic papers are still published and sold, without follow-up retractions. No systematic screening has been performed and the prevalence of such nonsensical publications in the scientific literature is unknown. Our contribution is 2-fold. First, we designed a detector that combs the scientific literature for grammar-based computer-generated papers. Applied to SCIgen, it has an 83.6% precision. Second, we performed a scientometric study of the 243 detected SCIgen-papers from 19 publishers. We estimate the prevalence of SCIgen-papers to be 75 per million papers in Information and Computing Sciences. Only 19% of the 243 problematic papers were dealt with: formal retraction (12) or silent removal (34). Publishers still serve and sometimes sell the remaining 197 papers without any caveat. We found evidence of citation manipulation via edited SCIgen bibliographies. This work reveals metric gaming up to the point of absurdity: fraudsters publish nonsensical algorithmically generated papers featuring genuine references. It stresses the need to screen papers for nonsense before peer review and to chase citation manipulation in published papers. Overall, this is yet another illustration of the harmful effects of the pressure to publish or perish.
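    A deliberately naive stand-in for screening text against known generated output (purely illustrative; the detector described in the abstract is grammar-based and far more precise than this) is nearest-neighbour similarity to a reference set of generated sentences; the reference sentences and the threshold below are made-up placeholders:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Hypothetical reference sentences in the style of generator output.
      reference = [
          "the deployment of congestion control has refined red-black trees and the turing machine",
          "we disconfirm that expert systems and lambda calculus can cooperate to fix this riddle",
      ]
      candidate = "our heuristic for the deployment of red-black trees is maximally efficient"

      vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
      matrix = vec.fit_transform(reference + [candidate])
      score = float(cosine_similarity(matrix[-1], matrix[:-1]).max())

      # Arbitrary threshold: flag texts suspiciously close to known generated sentences.
      print("suspicious" if score > 0.3 else "unremarkable", round(score, 2))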
  19. Luhmann, J.; Burghardt, M.: Digital humanities - A discipline in its own right? : an analysis of the role and position of digital humanities in the academic landscape (2022) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 460) [ClassicSimilarity], result of:
          0.08966068 = score(doc=460,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=460)
      0.33333334 = coord(1/3)
    
    Abstract
    Although digital humanities (DH) has received a lot of attention in recent years, its status as "a discipline in its own right" (Schreibman et al., A companion to digital humanities, pp. xxiii-xxvii, Blackwell, 2004) and its position in the overall academic landscape are still being negotiated. While there are countless essays and opinion pieces that debate the status of DH, little research has been dedicated to exploring the field in a systematic and empirical way (Poole, Journal of Documentation, 2017, 73). This study aims to address this research gap by comparing articles published over the past three decades in three established English-language DH journals (Computers and the Humanities, Literary and Linguistic Computing, Digital Humanities Quarterly) with research articles from journals in 15 other academic disciplines (corpus size: 34,041 articles; 299 million tokens). As a method of analysis, we use latent Dirichlet allocation topic modeling, combined with recent approaches that aggregate topic models by means of hierarchical agglomerative clustering. Our findings indicate that DH is simultaneously a discipline in its own right and a highly interdisciplinary field, with many connecting factors to neighboring disciplines, first and foremost computational linguistics and information science. Detailed descriptive analyses shed some light on the diachronic development of DH and also highlight topics that are characteristic of DH.
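    The analysis pipeline named in the abstract (LDA topic modeling whose topics are then aggregated by hierarchical agglomerative clustering) can be sketched as follows; the toy corpus and parameter values are placeholders, not the study's 34,041-article setup:

      from sklearn.cluster import AgglomerativeClustering
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer

      docs = [
          "topic modeling of digital humanities journal articles",
          "computational linguistics corpora parsing and annotation",
          "information retrieval evaluation in information science",
          "literary computing and stylometry of novels",
      ]

      # Step 1: fit an LDA topic model on a bag-of-words representation.
      X = CountVectorizer(stop_words="english").fit_transform(docs)
      lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

      # Step 2: aggregate topics by clustering their word distributions hierarchically.
      topic_groups = AgglomerativeClustering(n_clusters=2).fit_predict(lda.components_)
      print(topic_groups)    # cluster label assigned to each of the 3 topics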
  20. Granikov, V.; El Sherif, R.; Bouthillier, F.; Pluye, P.: Factors and outcomes of collaborative information seeking : a mixed studies review with a framework synthesis (2022) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 528) [ClassicSimilarity], result of:
          0.08966068 = score(doc=528,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=528)
      0.33333334 = coord(1/3)
    
    Abstract
    Despite being necessary, keeping up to date with new information and trends remains challenging in many fields due to information overload, time constraints, and insufficient evaluation skills. Collaboration, or sharing the effort among group members, may be a solution, but more knowledge is needed. To guide future research on the potential role of collaboration in keeping up to date, we conducted a systematic literature review with a framework synthesis aimed at adapting the conceptual framework for environmental scanning to a collaborative context. Our specific objectives were to identify the factors and outcomes of collaborative information seeking (CIS) and to use them to propose an adapted conceptual framework. Fifty-one empirical studies were included and synthesized using a hybrid thematic synthesis. The adapted framework includes seven types of influencing factors and five types of outcomes. Our review contributes to the theoretical expansion of knowledge on CIS in general and provides a conceptual framework for studying collaboration in keeping up to date. Overall, our findings will be useful to researchers, practitioners, team leaders, and system designers implementing and evaluating collaborative information projects.

Languages

  • e 152
  • d 29
  • pt 1
  • sp 1

Types

  • a 174
  • el 27
  • p 5
  • m 3
  • x 1