Search (61 results, page 1 of 4)

  • × language_ss:"e"
  • × type_ss:"el"
  • × year_i:[2020 TO 2030}
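  The three active filters above are Solr facet queries: language_ss:"e", type_ss:"el", and a publication-year range where the closing brace makes the upper bound exclusive ([2020 TO 2030} includes 2020 but not 2030). A minimal sketch of reproducing this filtered search over HTTP, assuming a hypothetical Solr endpoint and core name; only the field names and filter syntax are taken from the facets above:

      import requests

      # Hypothetical Solr endpoint and core name; the field names and filter
      # syntax are taken from the facets shown on this page.
      SOLR_URL = "http://localhost:8983/solr/documents/select"

      params = {
          "q": "*:*",  # the original query terms are not shown on this page
          "fq": ['language_ss:"e"', 'type_ss:"el"', "year_i:[2020 TO 2030}"],
          "rows": 20,
          "wt": "json",
      }
      docs = requests.get(SOLR_URL, params=params).json()["response"]["docs"]
      print(len(docs))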
  1. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.01
    0.013840953 = product of:
      0.03460238 = sum of:
        0.012614433 = weight(_text_:a in 40) [ClassicSimilarity], result of:
          0.012614433 = score(doc=40,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.23593865 = fieldWeight in 40, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.021987949 = product of:
          0.043975897 = sum of:
            0.043975897 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.043975897 = score(doc=40,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
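    The indented tree above (and the analogous tree under each result below) is Lucene's "explain" output for TF-IDF scoring with ClassicSimilarity: per matching term, score = queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf(freq) x idf x fieldNorm with tf(freq) = sqrt(freq); the term scores are summed and scaled by coord (matched clauses / total clauses). A minimal sketch that recomputes the 0.013840953 total for result 1 from the leaf values shown, as a plain Python recomputation rather than a call into Lucene:

      import math

      QUERY_NORM = 0.046368346  # shared queryNorm from the explain output

      def term_score(freq, idf, field_norm, coord=1.0):
          """One ClassicSimilarity leaf: queryWeight * fieldWeight * coord."""
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
          query_weight = idf * QUERY_NORM       # idf * queryNorm
          field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight * coord

      # Leaf values for result 1 (doc 40), read off the tree above:
      s_a  = term_score(14.0, 1.153047, 0.0546875)             # _text_:a
      s_22 = term_score(2.0, 3.5018296, 0.0546875, coord=0.5)  # _text_:22, coord(1/2)

      total = (s_a + s_22) * 0.4  # coord(2/5): 2 of 5 query clauses matched
      print(f"{total:.9f}")       # approximately 0.013840953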
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Type
    a
  2. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.01
    0.011745024 = product of:
      0.02936256 = sum of:
        0.005779455 = weight(_text_:a in 405) [ClassicSimilarity], result of:
          0.005779455 = score(doc=405,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 405, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.023583105 = sum of:
          0.0047362936 = weight(_text_:information in 405) [ClassicSimilarity], result of:
            0.0047362936 = score(doc=405,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.058186423 = fieldWeight in 405, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0234375 = fieldNorm(doc=405)
          0.018846812 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
            0.018846812 = score(doc=405,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.116070345 = fieldWeight in 405, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=405)
      0.4 = coord(2/5)
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease is enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
    Type
    a
  3. Petras, V.: ¬The identity of information science (2023) 0.01
    0.008631657 = product of:
      0.021579143 = sum of:
        0.0068111527 = weight(_text_:a in 1077) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1077,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1077, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1077)
        0.01476799 = product of:
          0.02953598 = sum of:
            0.02953598 = weight(_text_:information in 1077) [ClassicSimilarity], result of:
              0.02953598 = score(doc=1077,freq=28.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3628561 = fieldWeight in 1077, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1077)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: This paper offers a definition of the core of information science, which encompasses most research in the field. The definition provides a unique identity for information science and positions it in the disciplinary universe.
    Design/methodology/approach: After motivating the objective, a definition of the core and an explanation of its key aspects are provided. The definition is related to other definitions of information science before controversial discourse aspects are briefly addressed: discipline vs. field, science vs. humanities, library vs. information science and application vs. theory. Interdisciplinarity as an often-assumed foundation of information science is challenged.
    Findings: Information science is concerned with how information is manifested across space and time. Information is manifested to facilitate and support the representation, access, documentation and preservation of ideas, activities, or practices, and to enable different types of interactions. Research and professional practice encompass the infrastructures - institutions and technology - and phenomena and practices around manifested information across space and time as its core contribution to the scholarly landscape. Information science collaborates with other disciplines to work on complex information problems that need multi- and interdisciplinary approaches to address them.
    Originality/value: The paper argues that new information problems may change the core of the field, but throughout its existence, the discipline has remained quite stable in its central focus, yet proved to be highly adaptive to the tremendous changes in the forms, practices, institutions and technologies around and for manifested information.
    Type
    a
  4. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.01
    0.0074442835 = product of:
      0.018610708 = sum of:
        0.009138121 = weight(_text_:a in 5365) [ClassicSimilarity], result of:
          0.009138121 = score(doc=5365,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 5365, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 5365) [ClassicSimilarity], result of:
              0.018945174 = score(doc=5365,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 5365, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to, and uses of, Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing document corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available works show that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  5. Huurdeman, H.C.; Kamps, J.: Designing multistage search systems to support the information seeking process (2020) 0.01
    0.007189882 = product of:
      0.017974705 = sum of:
        0.0068111527 = weight(_text_:a in 5882) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=5882,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 5882, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5882)
        0.011163551 = product of:
          0.022327103 = sum of:
            0.022327103 = weight(_text_:information in 5882) [ClassicSimilarity], result of:
              0.022327103 = score(doc=5882,freq=16.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27429342 = fieldWeight in 5882, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5882)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Due to the advances in information retrieval in the past decades, search engines have become extremely efficient at acquiring useful sources in response to a user's query. However, for more prolonged and complex information seeking tasks, these search engines are not as well suited. During complex information seeking tasks, various stages may occur, which imply varying support needs for users. However, the implications of theoretical information seeking models for concrete search user interfaces (SUI) design are unclear, both at the level of the individual features and of the whole interface. Guidelines and design patterns for concrete SUIs, on the other hand, provide recommendations for feature design, but these are separated from their role in the information seeking process. This chapter addresses the question of how to design SUIs with enhanced support for the macro-level process, first by reviewing previous research. Subsequently, we outline a framework for complex task support, which explicitly connects the temporal development of complex tasks with different levels of support by SUI features. This is followed by a discussion of concrete system examples which include elements of the three dimensions of our framework in an exploratory search and sensemaking context. Moreover, we discuss the connection of navigation with the search-oriented framework. In our final discussion and conclusion, we provide recommendations for designing more holistic SUIs which potentially evolve along with a user's information seeking process.
    Source
    Understanding and improving information search [See: https://www.researchgate.net/publication/341747751_Designing_Multistage_Search_Systems_to_Support_the_Information_Seeking_Process]
  6. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.01
    0.00711762 = product of:
      0.01779405 = sum of:
        0.0067426977 = weight(_text_:a in 667) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=667,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 667, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 667) [ClassicSimilarity], result of:
              0.022102704 = score(doc=667,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 667, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=667)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
    Type
    a
  7. Jansen, B.; Browne, G.M.: Navigating information spaces : index / mind map / topic map? (2021) 0.01
    0.0068851607 = product of:
      0.017212901 = sum of:
        0.010897844 = weight(_text_:a in 436) [ClassicSimilarity], result of:
          0.010897844 = score(doc=436,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 436, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=436)
        0.006315058 = product of:
          0.012630116 = sum of:
            0.012630116 = weight(_text_:information in 436) [ClassicSimilarity], result of:
              0.012630116 = score(doc=436,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1551638 = fieldWeight in 436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=436)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper discusses the use of wiki technology to provide a navigation structure for a collection of newspaper clippings. We give an overview of the architecture of the wiki, discuss the navigation structure, and pose the question: is the navigation structure an index (and if so, of what type), or is it just a linkage structure or topic map? Does such a distinction really matter? Are these definitions in reality function-based?
  8. Machado, L.; Martínez-Ávila, D.; Barcellos Almeida, M.; Borges, M.M.: Towards a moderate realistic foundation for ontological knowledge organization systems : the question of the naturalness of classifications (2023) 0.01
    0.0067985477 = product of:
      0.016996369 = sum of:
        0.012260076 = weight(_text_:a in 894) [ClassicSimilarity], result of:
          0.012260076 = score(doc=894,freq=18.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22931081 = fieldWeight in 894, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=894)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 894) [ClassicSimilarity], result of:
              0.009472587 = score(doc=894,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 894, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=894)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Several authors emphasize the need for a change in classification theory due to the influence of a dogmatic and monistic ontology supported by an outdated essentialism. These claims tend to focus on the fallibility of knowledge, the need for a pluralistic view, and the theoretical burden of observations. Regardless of the legitimacy of these concerns, there is a risk, when they are not moderate, of falling into the opposite relativistic extreme. Based on a narrative review of the literature, we aim to reflectively discuss the theoretical foundations that can serve as a basis for a realist position supporting pluralistic ontological classifications. The goal is to show that, against rather conventional solutions, objective scientific-based approaches to natural classifications are shown to be viable, allowing a proper distinction between ontological and taxonomic questions. Supported by critical scientific realism, we consider that such an approach is suitable for the development of ontological knowledge organization systems (KOS). We believe that ontological perspectivism can provide the necessary adaptation to the different granularities of reality.
    Source
    Journal of information science. 54(2023) no.x, S.xx-xx
    Type
    a
  9. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.01
    0.0065762657 = product of:
      0.016440663 = sum of:
        0.0076151006 = weight(_text_:a in 5853) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=5853,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 5853, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
        0.008825562 = product of:
          0.017651124 = sum of:
            0.017651124 = weight(_text_:information in 5853) [ClassicSimilarity], result of:
              0.017651124 = score(doc=5853,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21684799 = fieldWeight in 5853, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5853)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis to the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
    Type
    a
  10. Rockelle Strader, C.: Cataloging to support information literacy : the IFLA Library Reference Model's user tasks in the context of the Framework for Information Literacy for Higher Education (2021) 0.01
    0.006548052 = product of:
      0.01637013 = sum of:
        0.005779455 = weight(_text_:a in 713) [ClassicSimilarity], result of:
          0.005779455 = score(doc=713,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=713)
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 713) [ClassicSimilarity], result of:
              0.02118135 = score(doc=713,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 713, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=713)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Cataloging practices, as exemplified by the five user tasks of the IFLA Library Reference Model, can support information literacy practices. The six frames of the Framework for Information Literacy for Higher Education are used as lenses to examine the user tasks. Two themes emerge from this examination: context matters, and catalogers must tailor bibliographic descriptions to meet users' expectations and information needs. Catalogers need to solicit feedback from various user communities to reform cataloging practices to remain current and viable. Such conversations will enrich the catalog and enhance (reclaim?) its position as a primary tool for research and learning. Supplemental data for this article is available online at https://doi.org/10.1080/01639374.2021.1939828.
    Type
    a
  11. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.01
    0.006548052 = product of:
      0.01637013 = sum of:
        0.005779455 = weight(_text_:a in 1045) [ClassicSimilarity], result of:
          0.005779455 = score(doc=1045,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 1045, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 1045) [ClassicSimilarity], result of:
              0.02118135 = score(doc=1045,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 1045, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1045)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of the singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions. (A toy comparison of the two decompositions is sketched after this entry.)
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
    Type
    a
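    Entry 11 above contrasts latent semantic analysis (LSA), a truncated SVD of the (optionally weighted) document-term matrix, with correspondence analysis (CA), an SVD of the standardized residuals, which removes exactly the marginal (row/column frequency) effects the authors identify as irrelevant for retrieval. A toy sketch of both decompositions, illustrative only and assuming numpy; this is not the authors' code or data:

      import numpy as np

      # Toy document-term count matrix (3 documents x 3 terms).
      X = np.array([[2., 0., 1.],
                    [0., 3., 1.],
                    [1., 1., 4.]])

      # LSA: truncated SVD of the raw (or weighted) document-term matrix.
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      lsa_docs = U[:, :2] * s[:2]  # 2-D document coordinates

      # CA: SVD of the standardized residuals, factoring out the margins.
      P = X / X.sum()                        # correspondence matrix
      r = P.sum(axis=1)                      # row masses
      c = P.sum(axis=0)                      # column masses
      S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
      U, s, Vt = np.linalg.svd(S, full_matrices=False)
      ca_docs = (U[:, :2] * s[:2]) / np.sqrt(r)[:, None]  # principal coordinates

      print(lsa_docs)
      print(ca_docs)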
  12. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    0.0064290287 = product of:
      0.016072571 = sum of:
        0.008258085 = weight(_text_:a in 572) [ClassicSimilarity], result of:
          0.008258085 = score(doc=572,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 572, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.007814486 = product of:
          0.015628971 = sum of:
            0.015628971 = weight(_text_:information in 572) [ClassicSimilarity], result of:
              0.015628971 = score(doc=572,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1920054 = fieldWeight in 572, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=572)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze how it can be reduced to match the specifics of a particular user task. Such reduction aims to simplify knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology; each thesaurus contains the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
    Type
    a
  13. Hudon, M.: ¬The status of knowledge organization in library and information science master's programs (2021) 0.01
    0.0064290287 = product of:
      0.016072571 = sum of:
        0.008258085 = weight(_text_:a in 697) [ClassicSimilarity], result of:
          0.008258085 = score(doc=697,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 697, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=697)
        0.007814486 = product of:
          0.015628971 = sum of:
            0.015628971 = weight(_text_:information in 697) [ClassicSimilarity], result of:
              0.015628971 = score(doc=697,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1920054 = fieldWeight in 697, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=697)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The content of master's programs accredited by the American Library Association was examined to assess the status of knowledge organization (KO) as a subject in current training. Data collected show that KO remains very visible in a majority of programs, mainly in the form of required and elective courses focusing on descriptive cataloging, classification, and metadata. Observed tendencies include, however, the recent elimination of the required KO course in several programs, the reality that one-third of KO electives listed in course catalogs have not been scheduled in the past three years, and the fact that two-thirds of those teaching KO specialize in other areas of information science.
    Type
    a
  14. Broughton, V.: Faceted classification in support of diversity : the role of concepts and terms in representing religion (2020) 0.01
    0.005898641 = product of:
      0.014746603 = sum of:
        0.0100103095 = weight(_text_:a in 5992) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=5992,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 5992, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5992)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 5992) [ClassicSimilarity], result of:
              0.009472587 = score(doc=5992,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 5992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5992)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper examines the development of facet analysis as a methodology and the role it plays in building classifications and other knowledge-organization tools. The use of categorical analysis in areas other than library and information science is also considered. The suitability of the faceted approach for humanities documentation is explored through a critical description of the FATKS (Facet Analytical Theory in Managing Knowledge Structure for Humanities) project carried out at University College London. This research focused on building a conceptual model for the subject of religion together with a relational database and search-and-browse interfaces that would support some degree of automatic classification. The paper concludes with a discussion of the differences between the conceptual model and the vocabulary used to populate it, and how, in the case of religion, the choice of terminology can create an apparent bias in the system.
    Type
    a
  15. Patriarca, S.: Information literacy gives us the tools to check sources and to verify factual statements : What does Popper's "Es gibt keine Autoritäten" mean? (2021) 0.01
    0.0055169817 = product of:
      0.013792454 = sum of:
        0.005898632 = weight(_text_:a in 331) [ClassicSimilarity], result of:
          0.005898632 = score(doc=331,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 331, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=331)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 331) [ClassicSimilarity], result of:
              0.015787644 = score(doc=331,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 331, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=331)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    I wonder if you would consider an English perspective on the exchange between Bernd Jörs and Hermann Huemer. In my career in the independent education sector I can recall many discussions and Government reports about cross-curricular issues such as logical reasoning and critical thinking. In the IB system this led to the inclusion in the Diploma of "Theory of Knowledge." In the UK we had "key skills" and "critical thinking." One such key skill is what we now call "information literacy." In his parody of information literacy, Dr Jörs seems to have confused a necessary condition for a sufficient condition. The fact that information competence may be necessary for serious academic study does not of course make it sufficient. When that is understood, the joke about the megalomaniac rather loses its force. (We had better pass over the rant which follows, the sneer at "earth sciences" and the German prejudice towards Austrians.)
    Type
    a
  16. Dhillon, P.; Singh, M.: ¬An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.01
    0.0051638708 = product of:
      0.012909677 = sum of:
        0.008173384 = weight(_text_:a in 981) [ClassicSimilarity], result of:
          0.008173384 = score(doc=981,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 981, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=981)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 981) [ClassicSimilarity], result of:
              0.009472587 = score(doc=981,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=981)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In the blooming area of Internet technology, the concept of the Internet of Things (IoT) holds a distinct position, interconnecting a large number of smart objects. The presented work evaluates trust and reliability in the context of the social IoT (SIoT). The proposed framework is divided into two blocks, namely the Verification Block (VB) and the Evaluation Block (EB). The VB computes various ontology-based relationships for the objects that reflect the security and trustworthiness of an accessed service, while the EB performs feedback analysis, a valuable step that computes and governs the success rate of the service. A support vector machine (SVM) is applied to categorise the trust-based evaluation (a sketch of this step follows after this entry). The security aspect of the proposed approach is comparatively evaluated for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance compared with existing architectures.
    Source
    Journal of information science. 41(2023) Jan., S.1-23
    Type
    a
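    Entry 16 above applies a support vector machine to categorise the trust evaluation. A minimal sketch of such a classification step, with hypothetical feature columns (feedback score, relationship weight, latency) standing in for the paper's ontology-derived features, which are not reproduced here:

      import numpy as np
      from sklearn.svm import SVC

      # Hypothetical training data: rows = observed service interactions,
      # columns = trust features (feedback score, relationship weight, latency).
      X = np.array([[0.9, 0.8, 0.1],
                    [0.2, 0.3, 0.9],
                    [0.8, 0.7, 0.2],
                    [0.1, 0.4, 0.8]])
      y = np.array([1, 0, 1, 0])  # 1 = trustworthy service, 0 = untrustworthy

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict([[0.85, 0.75, 0.15]]))  # expected: [1]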
  17. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.00
    0.0049628555 = product of:
      0.012407139 = sum of:
        0.006092081 = weight(_text_:a in 1004) [ClassicSimilarity], result of:
          0.006092081 = score(doc=1004,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11394546 = fieldWeight in 1004, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1004)
        0.006315058 = product of:
          0.012630116 = sum of:
            0.012630116 = weight(_text_:information in 1004) [ClassicSimilarity], result of:
              0.012630116 = score(doc=1004,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1551638 = fieldWeight in 1004, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
    Type
    a
  18. Frederick, D.E.: ChatGPT: a viral data-driven disruption in the information environment (2023) 0.00
    0.0049571716 = product of:
      0.012392929 = sum of:
        0.0068111527 = weight(_text_:a in 983) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=983,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 983, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=983)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 983) [ClassicSimilarity], result of:
              0.011163551 = score(doc=983,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 983, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=983)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This study aims to introduce librarians to ChatGPT and challenge them to think about how it fits into their work and what learning they will need to do in order to stay relevant in the realm of artificial intelligence.
    Design/methodology/approach: Popular and scientific media sources were monitored over the course of two months to gather current discussions about the uses of and opinions about ChatGPT. These were analyzed in light of historical developments in education and libraries. Additional sources of information on the topic were described and discussed so that the issue is made relevant to librarians and libraries.
    Findings: The potential risks and benefits of ChatGPT are highly relevant for librarians but also currently not fully understood. We are at a very early stage of understanding and using this technology, but it does appear to have the possibility of becoming disruptive to libraries as well as many other aspects of life.
    Originality/value: ChatGPT-3 has only been publicly available since the end of November 2022. We are just now starting to take a deeper dive into what this technology means for libraries. This paper is one of the early ones that provide librarians with some direction in terms of where to focus their interest and attention in learning about it.
    Type
    a
  19. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.00
    0.004709213 = product of:
      0.011773032 = sum of:
        0.008615503 = weight(_text_:a in 851) [ClassicSimilarity], result of:
          0.008615503 = score(doc=851,freq=20.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.16114321 = fieldWeight in 851, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 851) [ClassicSimilarity], result of:
              0.006315058 = score(doc=851,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 851, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=851)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Literature review articles are essential to summarize the related work in a selected field. However, covering all related studies takes much time and effort. This study asks how artificial intelligence can be used in this process. We used ChatGPT to create a literature review article to show the current stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of the digital twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is a first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
    1. Introduction: OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model, designed to generate human-like text responses to user input in a conversational context. It is trained on a large dataset of human conversations, which allows it to understand and respond to a wide range of topics and prompts, and it can be used for tasks such as customer service, content creation, and language translation, creating replies in multiple languages. The chatbot is available through the OpenAI API, which allows developers to access and integrate it into their applications and systems. As a state-of-the-art language model, ChatGPT can generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices; however, it can provide information and guidance on ways to improve people's academic writing skills.
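    The workflow described above (collect abstracts, paraphrase them with ChatGPT, then query the result) can be approximated programmatically. A minimal sketch assuming the openai Python client (version 1.0 or later); note that the study itself used the ChatGPT web interface rather than the API, and the model name below is a placeholder, not taken from the paper:

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      abstract = "..."  # one abstract collected from the Google Scholar results
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name, not from the paper
          messages=[{
              "role": "user",
              "content": "Paraphrase this abstract for a literature review:\n" + abstract,
          }],
      )
      print(resp.choices[0].message.content)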
  20. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    0.003699844 = product of:
      0.00924961 = sum of:
        0.006092081 = weight(_text_:a in 250) [ClassicSimilarity], result of:
          0.006092081 = score(doc=250,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11394546 = fieldWeight in 250, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 250) [ClassicSimilarity], result of:
              0.006315058 = score(doc=250,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 250, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=250)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
    Type
    a

Types

  • a 48
  • p 9