Search (632 results, page 2 of 32)

  • × type_ss:"a"
  • × year_i:[2020 TO 2030}
  1. Candela, G.: An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.03
    0.02822417 = product of:
      0.042336255 = sum of:
        0.01867095 = weight(_text_:on in 997) [ClassicSimilarity], result of:
          0.01867095 = score(doc=997,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=997)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
              0.04733061 = score(doc=997,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=997)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
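The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown. As a rough sketch, assuming the standard ClassicSimilarity formulas idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq), and taking queryNorm as given from the tree rather than recomputing it, the top-level score for entry 1 can be reproduced:

```python
import math

# Constants read off the explain tree for entry 1 (doc 997);
# queryNorm is taken as given rather than recomputed.
QUERY_NORM = 0.04990557
MAX_DOCS = 44218

def idf(doc_freq: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
    # score = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    i = idf(doc_freq)
    return (i * QUERY_NORM) * (math.sqrt(freq) * i * field_norm)

score_on = term_score(2.0, 13325, 0.0546875)          # weight(_text_:on in 997)
score_22 = term_score(2.0, 3622, 0.0546875) * 0.5     # weight(_text_:22 in 997), coord(1/2)
total = (score_on + score_22) * (2.0 / 3.0)           # coord(2/3)
print(total)
```

This matches the tree's top-level value of 0.02822417 to within floating-point rounding; the remaining trees in this result list follow the same pattern with different freq, docFreq, and fieldNorm values.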
    
    Abstract
    In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital content to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and for training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions that wish to publish and assess their collections.
    Date
    22. 6.2023 18:23:31
  2. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: A user sensitive subject protection approach for book search service (2020) 0.03
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is becoming increasingly untrusted; thus, preventing the disclosure of users' book-query privacy has become a matter of growing concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) the feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) the privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
  3. Rha, E.Y.; Belkin, N.: Exploring social aspects of task perception using cognitive sociology : a social cognitive perspective (2020) 0.03
    
    Abstract
    Purpose The purpose of this paper is to explore effects of individuals' social context on their perception of a task, for better understanding of social aspects of task-based information seeking behavior. Design/methodology/approach This study took a qualitative case approach and conducted semi-structured one-on-one interviews with 12 participants. A cross-context comparative approach was chosen to identify effects of the social contexts on individuals. For comparative analysis, the research population was tenured faculty members in two different disciplines, natural sciences and humanities. The interview data were analyzed and coded using NVivo12 through an open coding process. Findings The results demonstrate that the same task type is differently perceived by individuals in different social contexts. Reasons for the different perceptions in the different contexts are associated with social factors of the disciplines, specifically social norms and practices. Originality/value This study uses a novel theoretical framework, cognitive sociology, to examine social aspects of human perception in relation to task-based information seeking behavior, which has been little understood theoretically and empirically in the field of information science.
    Date
    20. 1.2015 18:30:22
  4. Ilhan, A.; Fietkiewicz, K.J.: Data privacy-related behavior and concerns of activity tracking technology users from Germany and the USA (2021) 0.03
    
    Abstract
    Purpose This investigation aims to examine the differences and similarities between activity tracking technology users from two regions (the USA and Germany) in their intended privacy-related behavior. The focus lies on data handling after hypothetical discontinuance of use, data protection and privacy policy seeking, and privacy concerns. Design/methodology/approach The data was collected through an online survey in 2019. In order to identify significant differences between participants from Germany and the USA, the chi-squared test and the Mann-Whitney U test were applied. Findings The intensity of several privacy-related concerns was significantly different between the two groups. The majority of the participants did not inform themselves about the respective data privacy policies or terms and conditions before installing an activity tracking application. The majority of the German participants knew that they could request the deletion of all their collected data. In contrast, only 35% of the 68 participants from the US knew about this option. Research limitations/implications This study intends to raise awareness about managing the collected health and fitness data after users stop using activity tracking technologies. Furthermore, to reduce privacy and security concerns, the involvement of the government, companies and users is necessary to handle and share data more considerately and in a sustainable way. Originality/value This study sheds light on users of activity tracking technologies from a broad perspective (here, participants from the USA and Germany). It incorporates not only concerns and the privacy paradox but also (intended) user behavior, including seeking information on data protection and privacy policy and handling data after hypothetical discontinuance of use of the technology.
    Date
    20. 1.2015 18:30:22
  5. Cerda-Cosme, R.; Méndez, E.: Analysis of shared research data in Spanish scientific papers about COVID-19 : a first approach (2023) 0.03
    
    Abstract
    During the coronavirus pandemic, changes occurred in the way science is done and shared, which motivates meta-research to help understand science communication in crises and improve its effectiveness. The objective is to study how many Spanish scientific papers on COVID-19 published during 2020 share their research data. This is a qualitative and descriptive study applying nine attributes: (a) availability, (b) accessibility, (c) format, (d) licensing, (e) linkage, (f) funding, (g) editorial policy, (h) content, and (i) statistics. We analyzed 1,340 papers; 1,173 (87.5%) did not have research data. A total of 12.5% share their research data, of which 2.1% share their data in repositories, 5% share their data through a simple request, 0.2% do not have permission to share their data, and 5.2% share their data as supplementary material. Only a small percentage share their research data; moreover, this demonstrates researchers' poor knowledge of how to share their research data properly and their lack of knowledge of what research data are.
    Date
    21. 3.2023 19:22:02
  6. Vakkari, P.; Järvelin, K.; Chang, Y.-W.: The association of disciplinary background with the evolution of topics and methods in Library and Information Science research 1995-2015 (2023) 0.03
    
    Abstract
    The paper reports a longitudinal analysis of the topical and methodological development of Library and Information Science (LIS). Its focus is on the effects of researchers' disciplines on these developments. The study extends an earlier cross-sectional study (Vakkari et al., Journal of the Association for Information Science and Technology, 2022a, 73, 1706-1722) by a coordinated dataset representing a content analysis of articles published in 31 scholarly LIS journals in 1995, 2005, and 2015. It is novel in its coverage of authors' disciplines and of topical and methodological aspects in a coordinated dataset spanning two decades, thus allowing trend analysis. The findings include a shrinking trend in the share of LIS from 67% to 36%, while Computer Science and Business and Economics increase their shares from 9% and 6% to 21% and 16%, respectively. The earlier cross-sectional study (Vakkari et al., Journal of the Association for Information Science and Technology, 2022a, 73, 1706-1722) for the year 2015 identified three topical clusters of LIS research, focusing on topical subfields, methodologies, and contributing disciplines. Correspondence analysis confirms their existence already in 1995 and traces their development through the decades. The contributing disciplines infuse their concepts, research questions, and approaches into LIS and may also subsume vital parts of LIS in their own structures of knowledge production.
    Date
    22. 6.2023 18:15:06
  7. Yu, L.; Fan, Z.; Li, A.: A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.03
    
    Abstract
    Purpose The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication. Design/methodology/approach The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users. Findings The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is then possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which in turn consist of 1, 5, and 44 units, respectively. Research limitations/implications The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work by IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that by Library of Congress's BIBFRAME 2.0 model. Practical implications It calls for reconsideration of existing operational units in a variety of library and information activities. Originality/value The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  8. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    
    Source
    https://arxiv.org/abs/2212.06721
  9. Zheng, X.; Chen, J.; Yan, E.; Ni, C.: Gender and country biases in Wikipedia citations to scholarly publications (2023) 0.02
    
    Abstract
    Ensuring Wikipedia cites scholarly publications based on quality and relevance, without biases, is critical to credible and fair knowledge dissemination. We investigate gender- and country-based biases in Wikipedia citation practices using linked data from the Web of Science and a Wikipedia citation dataset. Using coarsened exact matching, we show that publications by women are cited less by Wikipedia than expected, and publications by women are less likely to be cited than those by men. Scholarly publications by authors affiliated with non-Anglosphere countries are also disadvantaged in getting cited by Wikipedia, compared with those by authors affiliated with Anglosphere countries. The level of gender- or country-based inequalities varies by research field, and the gender-country intersectional bias is prominent in math-intensive STEM fields. To ensure the credibility and equality of knowledge presentation, Wikipedia should consider strategies and guidelines to cite scholarly publications independent of the gender and country of authors.
    Date
    22. 1.2023 18:53:32
  10. Ma, Y.: Relatedness and compatibility : the concept of privacy in Mandarin Chinese and American English corpora (2023) 0.02
    
    Abstract
    This study investigates how privacy as an ethical concept exists in two languages: Mandarin Chinese and American English. The exploration relies on two genres of corpora spanning 10 years (2010-2019): social media posts and news articles. A mixed-methods approach combining structural topic modeling (STM) and human interpretation was used to work with the data. Findings show various privacy-related topics across the two languages. Moreover, some of these different topics revealed fundamental incompatibilities for understanding privacy across the two languages. In other words, some of the variations in topics do not just reflect contextual differences; they reveal how the two languages value privacy in different ways that can relate back to each society's ethical tradition. This study is one of the first empirically grounded intercultural explorations of the concept of privacy. It has shown that natural language is a promising means of operationalizing intercultural and comparative privacy research, and it provides an examination of the concept as it is understood in these two languages.
    Date
    22. 1.2023 18:59:40
  11. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.02
    
    Abstract
    Experimentation is the process of trying something out to discover what happens. It is a widespread information practice, yet often bypassed in information-behavior research. This article argues that experimentation complements prior knowledge, documents, and people as an important fourth class of information sources. Relative to the other classes, the distinguishing characteristics of experimentation are that it is a personal (as opposed to interpersonal) source and that it provides "backtalk." When the information seeker tries something out and then attends to the resulting situation, it is as though the materials of the situation talk back: they provide the information seeker with a situated and direct experience of the consequences of the tried-out options. In this way, experimentation involves obtaining information by creating it. It also involves turning material and behavioral processes into information interactions. Thereby, information seeking by experimentation is important to practical information literacy and extends information-behavior research with new insights on the interrelations between creating and seeking information.
    Date
    21. 3.2023 19:22:29
  12. Milard, B.; Pitarch, Y.: Egocentric cocitation networks and scientific papers destinies (2023) 0.02
    
    Abstract
    To what extent is the destiny of a scientific paper shaped by the cocitation network in which it is involved? What are the social contexts that can explain this structuring? Using bibliometric data, interviews with researchers, and social network analysis, this article proposes a typology based on egocentric cocitation networks that displays a quadruple structuring (before and after publication): polarization, clusterization, atomization, and attrition. It shows that the academic capital of the authors and the intellectual resources of their research are key factors in these destinies, as are the social relations between the authors concerned. The circumstances of publication are also correlated with the structuring of the egocentric cocitation networks, showing how socially embedded they are. Finally, the article discusses the contribution of these original networks to the analysis of scientific production and its dynamics.
    Date
    21. 3.2023 19:22:14
  13. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.02
    
    Abstract
    The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and a set of rules of a conventional syntax for subject string construction. The Nuovo soggettario Thesaurus complies with ISO 25964:2011-2013, IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogeneous concepts. This paper addresses the main features and peculiarities that have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
  14. Kim, J.(im); Kim, J.(enna): Effect of forename string on author name disambiguation (2020) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 5930) [ClassicSimilarity], result of:
          0.01886051 = score(doc=5930,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 5930, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5930)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 5930) [ClassicSimilarity], result of:
              0.03380758 = score(doc=5930,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 5930, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5930)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In author name disambiguation, author forenames are used to decide which name instances are disambiguated together and how likely they are to refer to the same author. Despite this crucial role of forenames, their effect on the performance of heuristic (string matching) and algorithmic disambiguation is not well understood. This study assesses the contribution of forenames to author name disambiguation using multiple labeled data sets under varying ratios and lengths of full forenames, reflecting real-world scenarios in which an author is represented by forename variants (synonyms) and some authors share the same forenames (homonyms). The results show that increasing the ratio of full forenames substantially improves both heuristic and machine-learning-based disambiguation. Performance gains from algorithmic disambiguation are pronounced when many forenames are initialized or homonyms are prevalent. As the ratio of full forenames increases, however, these gains become marginal compared to those from string matching. Using a small portion of forename strings does not greatly reduce the performance of either heuristic or algorithmic disambiguation compared to using full-length strings. These findings provide practical suggestions, such as restoring initialized forenames to full-string format via record linkage for improved disambiguation performance.
    Date
    11. 7.2020 13:22:58
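The per-term relevance figures shown with each record above follow Lucene's ClassicSimilarity (TF-IDF) formula. A minimal sketch reproducing the "on"-term score of record 14 from the rounded constants in the explain output (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm):

```python
import math

def classic_similarity_score(freq, idf, field_norm, query_norm):
    """Per-term score as in Lucene ClassicSimilarity explain output:
    score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Constants from the "on" term weight in record 14 (doc 5930)
score = classic_similarity_score(freq=4.0, idf=2.199415,
                                 field_norm=0.0390625, query_norm=0.04990557)
print(round(score, 8))  # → 0.01886051, matching the explain output
```

The small discrepancies in the last digit of some records' figures stem from Lucene computing in single precision while this sketch uses doubles.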
  15. Kang, M.: Dual paths to continuous online knowledge sharing : a repetitive behavior perspective (2020) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 5985) [ClassicSimilarity], result of:
          0.01886051 = score(doc=5985,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 5985, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5985)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 5985) [ClassicSimilarity], result of:
              0.03380758 = score(doc=5985,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 5985, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5985)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose Continuous knowledge sharing by active users, who are highly active in answering questions, is crucial to the sustenance of social question-and-answer (Q&A) sites. The purpose of this paper is to examine such knowledge sharing considering reason-based elaborate decision and habit-based automated cognitive processes. Design/methodology/approach To verify the research hypotheses, survey data on subjective intentions and web-crawled data on objective behavior are utilized. The sample size is 337, with a response rate of 27.2 percent. Negative binomial and hierarchical linear regressions are used given the skewed distribution of the dependent variable (i.e. the number of answers). Findings Both elaborate decision (linking satisfaction, intentions and continuance behavior) and automated cognitive processes (linking past and continuance behavior) are significant and substitutable. Research limitations/implications By measuring both subjective intentions and objective behavior, it verifies a detailed mechanism linking continuance intentions, past behavior and continuous knowledge sharing. The significant influence of automated cognitive processes implies that online knowledge sharing is habitual for active users. Practical implications Understanding that online knowledge sharing is habitual is imperative to maintaining continuous knowledge sharing by active users. Knowledge sharing trends should be monitored to check if the frequency of sharing decreases. Social Q&A sites should intervene to restore knowledge sharing behavior through personalized incentives. Originality/value This is the first study utilizing both subjective intentions and objective behavior data in the context of online knowledge sharing. It also introduces habit-based automated cognitive processes to this context. This approach extends the current understanding of continuous online knowledge sharing behavior.
    Date
    20. 1.2015 18:30:22
  16. Gorichanaz, T.: Sanctuary : an institutional vision for the digital age (2021) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 107) [ClassicSimilarity], result of:
          0.01886051 = score(doc=107,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 107, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=107)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 107) [ClassicSimilarity], result of:
              0.03380758 = score(doc=107,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=107)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose Trends in information technology and contemplative practices compel us to consider the intersections of information and contemplation. The purpose of this paper is to consider these intersections at the level of institutions. Design/methodology/approach First, the notion of institution is defined and discussed, along with information institutions and contemplative institutions. Next, sanctuary is proposed and explored as a vision for institutions in the digital age. Findings Sanctuary is a primordial human institution that has especial urgency in the digital age. This paper develops an info-contemplative framework for sanctuaries, including the elements: stability, silence, refuge, privacy and reform. Research limitations/implications This is a conceptual paper that, though guided by prior empirical and theoretical work, would benefit from application, validation and critique. This paper is meant as a starting point for discussions of institutions for the digital age. Practical implications As much as this paper is meant to prompt further research, it also provides guidance and inspiration for professionals to infuse their work with aspects of sanctuary and be attentive to the tensions inherent in sanctuary. Originality/value This paper builds on discourse at the intersection of information studies and contemplative studies, also connecting this with recent work on information institutions.
    Date
    22. 1.2021 14:20:55
  17. Thelwall, M.; Thelwall, S.: ¬A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 178) [ClassicSimilarity], result of:
          0.01886051 = score(doc=178,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 178, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=178)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 178) [ClassicSimilarity], result of:
              0.03380758 = score(doc=178,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=178)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications Public health campaigns in future may consider encouraging grassroots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  18. Hottenrott, H.; Rose, M.E.; Lawson, C.: ¬The rise of multiple institutional affiliations in academia (2021) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 313) [ClassicSimilarity], result of:
          0.01886051 = score(doc=313,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 313, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=313)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 313) [ClassicSimilarity], result of:
              0.03380758 = score(doc=313,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 313, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=313)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study provides the first systematic, international, large-scale evidence on the extent and nature of multiple institutional affiliations on journal publications. Studying more than 15 million authors and 22 million articles from 40 countries, we document that: in 2019, almost one in three articles was (co-)authored by authors with multiple affiliations, and the share of authors with multiple affiliations increased from around 10% to 16% since 1996. The growth of multiple affiliations is prevalent in all fields and is stronger in high-impact journals. About 60% of multiple affiliations are between institutions from within the academic sector. International co-affiliations, which account for about a quarter of multiple affiliations, most often involve institutions from the United States, China, Germany and the United Kingdom, suggesting a core-periphery network. Network analysis also reveals a number of communities of countries that are more likely to share affiliations. We discuss potential causes and show that the timing of the rise in multiple affiliations can be linked to the introduction of more competitive funding structures such as "excellence initiatives" in a number of countries. We discuss implications for science and science policy.
  19. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 883) [ClassicSimilarity], result of:
          0.01886051 = score(doc=883,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 883, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=883)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 883) [ClassicSimilarity], result of:
              0.03380758 = score(doc=883,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 883, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=883)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging two authoritative academic resources, ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND): LAGOS-AND-BLOCK, created for clustering-based AND research, and LAGOS-AND-PAIRWISE, created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degree of variation of last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host with the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as MAG's author ID system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
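Clustering-based AND pipelines of the kind LAGOS-AND-BLOCK targets typically begin by blocking name instances on a normalized last name plus forename initial, so that only instances within a block are compared. A minimal sketch of that first step (the function and key layout are illustrative, not taken from the paper):

```python
from collections import defaultdict

def block_key(author_name):
    """Illustrative blocking key: normalized last name plus first-forename
    initial, a common starting point for clustering-based disambiguation.
    Expects "Last, Forename(s)" strings."""
    last, _, forename = author_name.partition(",")
    initial = forename.strip()[:1].lower() if forename.strip() else ""
    return (last.strip().lower(), initial)

def build_blocks(citations):
    """Group (author_name, record_id) pairs into candidate blocks."""
    blocks = defaultdict(list)
    for name, rec_id in citations:
        blocks[block_key(name)].append(rec_id)
    return dict(blocks)

records = [("Kim, Jinseok", "r1"), ("Kim, J.", "r2"), ("Kim, Jenna", "r3")]
print(build_blocks(records))  # all three land in the ("kim", "j") block
```

Blocks like this one, containing both synonyms and homonyms, are exactly where the subsequent clustering or pairwise classification has to do its work.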
  20. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 889) [ClassicSimilarity], result of:
          0.01886051 = score(doc=889,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 889, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=889)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 889) [ClassicSimilarity], result of:
              0.03380758 = score(doc=889,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=889)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The automatic summarization of scientific articles differs from other text genres because of the structured format and longer text length. Previous approaches have focused on tackling the lengthy nature of scientific articles, aiming to improve the computational efficiency of summarizing long text using a flat, unstructured abstract. However, the structured format of scientific articles and characteristics of each section have not been fully explored, despite their importance. The lack of a sufficient investigation and discussion of various characteristics for each section and their influence on summarization results has hindered the practical use of automatic summarization for scientific articles. To provide a balanced abstract proportionally emphasizing each section of a scientific article, the community introduced the structured abstract, an abstract with distinct, labeled sections. Using this information, in this study, we aim to understand tasks ranging from data preparation to model evaluation from diverse viewpoints. Specifically, we provide a preprocessed large-scale dataset and propose a summarization method applying the introduction, methods, results, and discussion (IMRaD) format reflecting the characteristics of each section. We also discuss the objective benchmarks and perspectives of state-of-the-art algorithms and present the challenges and research directions in this area.
    Date
    22. 1.2023 18:57:12
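A section-aware summarizer in the IMRaD spirit the abstract describes can be sketched, very naively, as an extractive pass that keeps a lead sentence from each labeled section, yielding a balanced structured abstract. This is an illustration of the idea only, not the authors' model:

```python
def structured_summary(sections):
    """Naive extractive sketch: take the lead sentence of each IMRaD
    section of a scientific article. `sections` maps lowercase section
    labels to their full text."""
    order = ["introduction", "methods", "results", "discussion"]
    return " ".join(
        sections[s].split(". ")[0].rstrip(".") + "."
        for s in order if s in sections
    )

secs = {
    "introduction": "We study X. Background follows.",
    "methods": "We used Y.",
    "results": "Z improved. Details follow.",
    "discussion": "Implications follow.",
}
print(structured_summary(secs))
# → "We study X. We used Y. Z improved. Implications follow."
```

A learned model would rank and select sentences per section instead of taking the first, but the section-proportional output format is the same.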

Languages

  • e 597
  • d 29
  • pt 4
  • sp 1

Types

  • el 48
  • p 1