Search (128 results, page 2 of 7)

  • year_i:[2020 TO 2030}
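A note on the filter syntax above: Lucene range queries use `[` for an inclusive bound and `}` for an exclusive one, so `year_i:[2020 TO 2030}` matches 2020 <= year < 2030. A minimal sketch of those semantics in plain Python (not a live index query):

```python
# Semantics of the half-open Lucene range filter year_i:[2020 TO 2030}:
# '[' includes the lower bound, '}' excludes the upper bound.
def matches_year_filter(year: int) -> bool:
    return 2020 <= year < 2030

assert matches_year_filter(2020)      # lower bound included
assert matches_year_filter(2029)
assert not matches_year_filter(2030)  # upper bound excluded
```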
  1. Sanfilippo, M.R.; Shvartzshnaider, Y.; Reyes, I.; Nissenbaum, H.; Egelman, S.: Disaster privacy/privacy disaster (2020) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 5960) [ClassicSimilarity], result of:
              0.096938245 = score(doc=5960,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 5960, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5960)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Privacy expectations during disasters differ significantly from nonemergency situations. This paper explores the actual privacy practices of popular disaster apps, highlighting location information flows. Our empirical study compares content analysis of privacy policies and government agency policies, structured by the contextual integrity framework, with static and dynamic app analysis documenting the personal data sent by 15 apps. We identify substantive gaps between regulation and guidance, privacy policies, and information flows, resulting from ambiguities and exploitation of exemptions. Results also indicate gaps between governance and practice, including the following: (a) Many apps ignore self-defined policies; (b) while some policies state they "might" access location data under certain conditions, those conditions are not met as 12 apps included in our study capture location immediately upon initial launch under default settings; and (c) not all third-party data recipients are identified in policy, including instances that violate expectations of trusted third parties.
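The indented tree under each hit is Lucene's ClassicSimilarity (TF-IDF) "explain" output. Its values can be reproduced from the constants it reports; the sketch below recomputes the first hit's score, assuming the standard ClassicSimilarity formulas (which these numbers match):

```python
import math

# Constants copied from the explain tree for doc 5960, term "policy".
doc_freq, max_docs = 563, 44218
freq = 2.0
query_norm = 0.05086421   # reported queryNorm
field_norm = 0.046875     # reported fieldNorm(doc=5960)

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~5.361833
tf = math.sqrt(freq)                             # ~1.4142135
query_weight = idf * query_norm                  # ~0.2727254
field_weight = tf * idf * field_norm             # ~0.35544267
term_score = query_weight * field_weight         # ~0.096938245
final_score = term_score * 0.5 * 0.5             # two coord(1/2) factors
```

The two `coord(1/2)` factors indicate that only one of two query clauses matched this document, halving the score twice down to the displayed 0.024234561.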
  2. Hagen, L.; Patel, M.; Luna-Reyes, L.: Human-supervised data science framework for city governments : a design science approach (2023) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 1016) [ClassicSimilarity], result of:
              0.096938245 = score(doc=1016,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 1016, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1016)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The importance of involving humans in the data science process has been widely discussed in the literature. However, studies lack details on how to involve humans in the process. Using a design science approach, this paper proposes and evaluates a human-supervised data science framework in the context of local governments. Our findings suggest that involving a stakeholder group, public managers in this case, in the data science process enhanced the quality of data science outcomes. Public managers' detailed knowledge of both the data and the context was beneficial for improving future data science infrastructure. In addition, the study suggests that local governments can harness the value of data-driven approaches to policy and decision making through focused investments in improving data and data science infrastructure, which includes the culture and processes necessary to incorporate data science and analytics into the decision-making process.
  3. Zhou, H.; Guns, R.; Engels, T.C.E.: Towards indicating interdisciplinarity : characterizing interdisciplinary knowledge flow (2023) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 1072) [ClassicSimilarity], result of:
              0.096938245 = score(doc=1072,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 1072, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1072)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study contributes to the recent discussions on indicating interdisciplinarity, that is, going beyond catch-all metrics of interdisciplinarity. We propose a contextual framework to improve the granularity and usability of the existing methodology for interdisciplinary knowledge flow (IKF) in which scientific disciplines import and export knowledge from/to other disciplines. To characterize the knowledge exchange between disciplines, we recognize three aspects of IKF under this framework, namely broadness, intensity, and homogeneity. We show how to utilize them to uncover different forms of interdisciplinarity, especially between disciplines with the largest volume of IKF. We apply this framework in two use cases, one at the level of disciplines and one at the level of journals, to show how it can offer a more holistic and detailed viewpoint on the interdisciplinarity of scientific entities than aggregated and context-unaware indicators. We further compare our proposed framework, an indicating process, with established indicators and discuss how such information tools on interdisciplinarity can assist science policy practices such as performance-based research funding systems and panel-based peer review processes.
  4. Danskin, A.; Seeman, D.; Bouchard, M.; Kammerer, K.; Kilpatrick, L.; Mumbower, K.: FAST the inside track : where we are, where do we want to be, and how do we get there? (2023) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 1150) [ClassicSimilarity], result of:
              0.096938245 = score(doc=1150,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 1150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1150)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is an overview of the development of FAST (Faceted Application of Subject Terminology) from its inception in the late 1990s, through its development and implementation to the work being undertaken by OCLC and the FAST Policy and Outreach Committee (FPOC) to develop and promote FAST. FPOC members explain how FAST is used by institutions in Canada, the United Kingdom, and the United States. They cover their experience of implementing FAST and the benefits they have derived. The final section considers the value of FAST as a faceted vocabulary and the potential for future development and linked data.
  5. ¬Der Student aus dem Computer (2023) 0.02
    0.024119893 = product of:
      0.048239786 = sum of:
        0.048239786 = product of:
          0.09647957 = sum of:
            0.09647957 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09647957 = score(doc=1079,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  6. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.020674193 = product of:
      0.041348387 = sum of:
        0.041348387 = product of:
          0.08269677 = sum of:
            0.08269677 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.08269677 = score(doc=4156,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  7. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.02
    0.020674193 = product of:
      0.041348387 = sum of:
        0.041348387 = product of:
          0.08269677 = sum of:
            0.08269677 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.08269677 = score(doc=1203,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  8. Moksness, L.; Olsen, S.O.: Perceived quality and self-identity in scholarly publishing (2020) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 5677) [ClassicSimilarity], result of:
              0.08078188 = score(doc=5677,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 5677, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5677)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The purpose of the study was to understand if and how two proposed facets of self-identity (work-self and career-self) and journals' perceived quality (impact, visibility, and content quality) influence and explain the intention to publish in open access (OA) or non-open access (non-OA) journals. This study integrates attitude and identity theory within a cross-sectional survey design. The sample consists of about 1,600 researchers in Norway, and the data were collected via e-mail invitation using a digital surveying tool and analyzed using structural equation modeling techniques. We determined that perceived impact-quality increases the intention to publish non-OA, while decreasing the intention to publish OA. Content quality is only associated with non-OA journals. Perceived visibility increases the intention to publish OA, while the opposite effect is found for non-OA. Career-self salience has the strongest effect on impact-quality, while content quality is most important when work-self is salient. This research contributes to a deeper understanding of how perceived quality influences the intention to publish in OA and non-OA journals, and how self-identity salience affects different facets of perceived quality in valence and strength. Findings have implications for policy development, implementation, and assessment and may contribute to improving OA adoption.
  9. Koster, L.: Persistent identifiers for heritage objects (2020) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 5718) [ClassicSimilarity], result of:
              0.08078188 = score(doc=5718,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 5718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5718)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Persistent identifiers (PIDs) are essential for accessing and referring to library, archive, and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections, and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects, etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs should also be used for linked data. The second part examines current infrastructural practices, and existing PID systems with their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented, which is used to address a number of practical considerations. This section concludes with a number of recommendations.
  10. Moore, S.A.: Revisiting "the 1990s debutante" : scholar-led publishing and the prehistory of the open access movement (2020) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 5920) [ClassicSimilarity], result of:
              0.08078188 = score(doc=5920,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 5920, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5920)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The movement for open access publishing (OA) is often said to have its roots in the scientific disciplines, having been popularized by scientific publishers and formalized through a range of top-down policy interventions. But there is an often-neglected prehistory of OA that can be found in the early DIY publishers of the late 1980s and early 1990s. Managed entirely by working academics, these journals published research in the humanities and social sciences and stand out for their unique set of motivations and practices. This article explores this separate lineage in the history of the OA movement through a critical-theoretical analysis of the motivations and practices of the early scholar-led publishers. Alongside showing the involvement of the humanities and social sciences in the formation of OA, the analysis reveals the importance that these journals placed on experimental practices, critique of commercial publishing, and the desire to reach new audiences. Understood in today's context, this research is significant for adding complexity to the history of OA, which policymakers, advocates, and publishing scholars should keep in mind as OA goes mainstream.
  11. Borgman, C.L.; Wofford, M.F.; Golshan, M.S.; Darch, P.T.: Collaborative qualitative research at scale : reflections on 20 years of acquiring global data and making data global (2021) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 239) [ClassicSimilarity], result of:
              0.08078188 = score(doc=239,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
A 5-year project to study scientific data uses in geography, starting in 1999, evolved into 20 years of research on data practices in sensor networks, environmental sciences, biology, seismology, undersea science, biomedicine, astronomy, and other fields. By emulating the "team science" approaches of the scientists studied, the UCLA Center for Knowledge Infrastructures accumulated a comprehensive collection of qualitative data about how scientists generate, manage, use, and reuse data across domains. Building upon Paul N. Edwards's model of "making global data" (collecting signals via consistent methods, technologies, and policies) to "make data global" (comparing and integrating those data), the research team has managed and exploited these data as a collaborative resource. This article reflects on the social, technical, organizational, economic, and policy challenges the team has encountered in creating new knowledge from data old and new. We reflect on continuity over generations of students and staff, transitions between grants, transfer of legacy data between software tools, research methods, and the role of professional data managers in the social sciences.
  12. Mathieu, C.: Defining knowledge workers' creation, description, and storage practices as impact on enterprise content management strategy (2022) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 500) [ClassicSimilarity], result of:
              0.08078188 = score(doc=500,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=500)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As part of the effort to digitally transform, organizations are seeking more and better solutions to long-standing enterprise content management challenges. Such solutions rarely investigate the relationship between knowledge workers' daily work to capture information and the perceived or actual value of that information to the enterprise per established content management strategy. The study described in this paper seeks to identify gaps in content management practices versus policy by modeling the conventions by which one organization's knowledge workers typically generate, store, and later recover their daily work products. Thirty-five interviews with knowledge workers at the Jet Propulsion Laboratory were conducted on this subject. The results of these interviews provide an insight as to how knowledge workers interact with enterprise content in their dual roles as both the primary creators and primary consumers of enterprise content. This paper, which outlines various permutations of the digital object creation, description, and storage (CDS) model, provides basic strategies for bringing the value perceptions of knowledge workers into alignment with institutional directives related to improving content findability and reuse in the enterprise.
  13. Pech, G.; Delgado, C.; Sorella, S.P.: Classifying papers into subfields using Abstracts, Titles, Keywords and KeyWords Plus through pattern detection and optimization procedures : an application in Physics (2022) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 744) [ClassicSimilarity], result of:
              0.08078188 = score(doc=744,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Classifying papers according to fields of knowledge is critical to clearly understanding the dynamics of scientific (sub)fields, their leading questions, and trends. Most studies rely on journal categories defined by popular databases such as WoS or Scopus, but some experts find that those categories may not correctly map the existing subfields nor identify the subfield of a specific article. This study addresses the classification problem using data from each paper (Abstract, Title, Keywords, and KeyWords Plus) and the help of experts to identify the existing subfields and the journals exclusive to each subfield. These "exclusive journals" make it possible to obtain, through a pattern detection procedure that uses machine learning techniques (from the software NVivo), a list of the frequent terms that are specific to each subfield. With that list of terms, and with the help of optimization procedures, we can identify to which subfield each paper most likely belongs. This study can contribute to supporting scientific policy-makers, funding, and research institutions (via more accurate academic performance evaluations), to supporting editors in their task of redefining the scopes of journals, and to supporting popular databases in their processes of refining categories.
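The matching step this abstract describes (assigning each paper to the subfield whose specific-term list it shares most terms with) can be sketched as a simple overlap count. The term lists below are illustrative placeholders, not the paper's actual data, which was derived from "exclusive journals" via NVivo:

```python
# Hypothetical subfield-specific term lists (illustrative only).
subfield_terms = {
    "particle physics": {"quark", "gluon", "collider", "hadron"},
    "condensed matter": {"lattice", "superconductor", "phonon", "magnon"},
}

def classify(text: str) -> str:
    """Assign a paper to the subfield with the largest term overlap."""
    tokens = set(text.lower().split())
    return max(subfield_terms, key=lambda sf: len(subfield_terms[sf] & tokens))

paper = "we study quark and gluon jets at the collider"
```

The paper additionally applies optimization procedures on top of this matching; the sketch covers only the basic term-overlap idea.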
  14. Mehra, B.; Jabery, B.S.: "Don't Say Gay" in Alabama : a taxonomic framework of LGBTQ+ information support services in public libraries - An exploratory website content analysis of critical resistance (2023) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 1019) [ClassicSimilarity], result of:
              0.08078188 = score(doc=1019,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 1019, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1019)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The American state of Alabama has recently developed a national notoriety as a toxic place for lesbian, gay, bisexual, transgender, and questioning/queer (LGBTQ+) people owing to several laws that have supported human rights violations and denied their civil liberties. This case study assesses how Alabama's public libraries are providing culturally relevant web access and coverage to LGBTQ+ information to meet their needs/concerns in a region that is oppressive to sexual and gender minorities. In the process, it illustrates public libraries' emerging role as simultaneously impotent against the majority's infringements, while finding creative ways to serve as counter-narrative spaces of resistance representing "voices" of, and from, the margins. This exploratory assessment is based on documenting web-based information for LGBTQ+ people in Alabama's 230 public libraries and identifies seven intersectional examples of information offerings, categorized into three groupings: (a) information sources (collections, resources); (b) information policy/planning (assigned role, strategic representation); (c) connections (internal, external, news/events). It provides a taxonomic framework with representative examples that challenge the regional stereotype of solely deficit marginalization. The discussion provides new opportunities to build collaborations of sharing within Alabama's public library networks to better address LGBTQ+ concerns and inequities in their local and regional communities.
  15. Söderström, K.R.: Global reach, regional strength : spatial patterns of a big science facility (2023) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 1034) [ClassicSimilarity], result of:
              0.08078188 = score(doc=1034,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 1034, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1034)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The European Synchrotron Radiation Facility (ESRF), a leading facility in synchrotron science, plays a crucial role in supporting both the local and the international scientific community by providing advanced instrumentation for their research. However, our understanding of the actual reach of the facility and its spatial dynamics remains limited. Thus, a methodology is proposed where author affiliation links are processed, analyzed, and visualized. A case study that focuses on the ESRF is implemented, where the author affiliation links of 17,870 publications over the period 2011-2021 are processed, revealing 76,850 addresses, of which 11,120 are unique locations. The results of the case study bring to light robust patterns of increased internationalization over time, accompanied by regional agglomeration and the emergence of potential research hotspots. The methodology and results are likely to be of interest to researchers in Spatial Scientometrics, as they address some of the current challenges in the field. Managers, funders, and policy-makers can utilize this method or similar approaches to enrich impact analyses of large-scale science facilities, which is vital for ensuring their sustained support. The code for the methodology, as well as the interactive visualizations, is freely available on GitHub for further exploration and replication of the methodology.
  16. Kodua-Ntim, K.: Narrative review on open access institutional repositories and knowledge sharing in South Africa (2023) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 1050) [ClassicSimilarity], result of:
              0.08078188 = score(doc=1050,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 1050, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1050)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This brief communication conveys a critical assessment of the benefits, challenges, and potential of Open Access Institutional Repositories (OAIRs) for knowledge sharing in South Africa. The review identifies best practices and recommendations to promote and improve their usage. Researchers need training and support to understand guidelines and best practices for depositing their work. Limited funding for OAIRs can be addressed by government funding or exploring alternative models. Legal and policy frameworks must support OAIRs and ensure they comply with international standards. Proper management and indexing policies enhance institutional visibility and information retrieval. OAIRs promote collaboration and cooperation among researchers and provide a platform for knowledge sharing and feedback. Standardized platforms and frameworks ensure digital outputs are accessible and usable for the academic community. Sharing knowledge on self-archiving encourages researchers to deposit their works. Formal reviews must focus on metadata and ensure that articles are from DHET-accredited journals and that theses and dissertations meet institutional requirements. These efforts promote open access and preserve scholarly works for future generations.
  17. Xiao, F.; Chi, Y.; He, D.: Promoting data use through understanding user behaviors : a model for human open government data interaction (2023) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 1190) [ClassicSimilarity], result of:
              0.08078188 = score(doc=1190,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 1190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1190)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent dramatic increases in the ability to generate, collect, and use datasets have inspired numerous academic and policy discussions regarding the emerging field of human data interaction (HDI). Given the challenges in interacting with open government data (OGD) and the existing research gap in this field, our study intends to explore HDI in the OGD domain and investigate ways HDI can further contribute to OGD promotion. Building upon two existing behavioral models, we proposed an initial conceptual model for OGD interaction, then using this model, conducted two studies to empirically examine users' behaviors when interacting with OGD. Ultimately, we refined this model for OGD interaction and invited three experts to validate it to enhance its understandability, comprehensiveness, and reasonableness. This comprehensive model for human OGD interaction will contribute to the theoretical work of the HDI field as well as the practical design of OGD platforms and data literacy education.
  18. Gu, D.; Liu, H.; Zhao, H.; Yang, X.; Li, M.; Lian, C.: ¬A deep learning and clustering-based topic consistency modeling framework for matching health information supply and demand (2024) 0.02
    Abstract
    Improving health literacy through health information dissemination is one of the most economical and effective mechanisms for improving population health. This process needs to fully accommodate the thematic fit between health information supply and demand and to reduce the impact of information overload and supply-demand mismatch on people's willingness to seek health information. We propose a health information topic modeling framework that integrates deep learning methods and clustering techniques to model the supply-side and demand-side topics of health information and to quantify their alignment. To validate the effectiveness of the framework, we conducted an empirical analysis of a dataset of 90,418 textual posts from two prominent social networking platforms. The results show that the overall supply of health information has not yet met demand, that a considerable share of demand, especially for disease-related topics, remains unmet, and that supply and demand are clearly inconsistent even for the same health topics. Public health policy-making departments and content producers can adjust their information selection and dissemination strategies according to the distribution of identified health topics, thereby improving the effectiveness of public health information dissemination.
  19. Koch, C.: Was ist Bewusstsein? (2020) 0.02
    Date
    17. 1.2020 22:15:11
  20. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.02
    Date
    4. 5.2020 17:22:40

Languages

  • e 98
  • d 30

Types

  • a 122
  • el 20
  • m 2
  • p 2
  • x 1