Search (643 results, page 2 of 33)

  • × type_ss:"a"
  • × year_i:[2020 TO 2030}
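The two crossed-out entries above are active filter queries in Solr syntax: `_ss`/`_i` are dynamic-field suffixes, and the closing `}` in `year_i:[2020 TO 2030}` marks an exclusive upper bound. A minimal sketch of how a client might pass these filters as `fq` parameters (the endpoint, free-text query, and paging values are illustrative placeholders, not taken from this page):

```python
from urllib.parse import urlencode

# Hypothetical request reproducing the active filters above.
# The endpoint, q, rows, and start values are placeholders.
params = [
    ("q", "knowledge organization"),   # placeholder free-text query
    ("fq", 'type_ss:"a"'),             # facet filter: record type
    ("fq", "year_i:[2020 TO 2030}"),   # range filter; '}' = exclusive upper bound
    ("rows", "20"),                    # results per page (placeholder)
    ("start", "20"),                   # offset for page 2 (placeholder)
]
query_string = urlencode(params)
request_url = "https://example.org/solr/select?" + query_string
```

Repeating the `fq` key is the usual way to AND several filter queries together in Solr.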
  1. Morrison, H.; Borges, L.; Zhao, X.; Kakou, T.L.; Shanbhoug, A.N.: Change and growth in open access journal publishing and charging trends 2011-2021 (2022) 0.02
    0.022430437 = product of:
      0.044860873 = sum of:
        0.03931248 = weight(_text_:processing in 741) [ClassicSimilarity], result of:
          0.03931248 = score(doc=741,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 741, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=741)
        0.005548396 = product of:
          0.016645188 = sum of:
            0.016645188 = weight(_text_:science in 741) [ClassicSimilarity], result of:
              0.016645188 = score(doc=741,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=741)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
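The nested breakdown above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) model. As a sanity check, the `weight(_text_:processing in 741)` branch can be reproduced in a few lines; `queryNorm` and `fieldNorm` are copied from the explain output rather than recomputed, while `tf` and `idf` follow ClassicSimilarity's documented formulas:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    # ClassicSimilarity: tf(t in d) = sqrt(freq)
    return math.sqrt(freq)

query_norm = 0.043425296   # taken directly from the explain output
field_norm = 0.0390625     # lengthNorm * boost, as stored in the index

idf_processing = idf(2097, 44218)                     # ~4.048147
query_weight = idf_processing * query_norm            # ~0.175792
field_weight = tf(2.0) * idf_processing * field_norm  # ~0.22363065
weight = query_weight * field_weight                  # ~0.03931248
```

The outer `coord(2/4)` and `coord(1/3)` factors then scale the summed term weights by the fraction of query terms that matched.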
    
    Abstract
    This study examines trends in open access article processing charges (APCs) from 2011 to 2021, building on a 2011 study by Solomon and Björk. Two methods are employed: a modified replica, and a status update of the 2011 journals. Data are drawn from multiple sources and datasets are available as open data. Most journals do not charge APCs; this has not changed. The global average per-journal APC increased slightly, from 906 to 958 USD, while the per-article average increased from 904 to 1,626 USD, indicating that authors choose to publish in more expensive journals. Publisher size, type, impact metrics and subject affect charging tendencies, average APC, and pricing trends. Half the journals from the 2011 sample are no longer listed in DOAJ in 2021, due to ceased publication or publisher de-listing. Conclusions include a caution about the potential of the APC model to increase costs beyond inflation. The university sector may be the most promising approach to economically sustainable no-fee OA journals. Universities publish many OA journals and nearly half of OA articles, tend not to charge APCs, and when APCs are charged, the prices are very low on average.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.12, S.1793-1805
  2. Cheng, Y.-Y.; Xia, Y.: ¬A systematic review of methods for aligning, mapping, merging taxonomies in information sciences (2023) 0.02
    
    Abstract
    The purpose of this study is to provide a systematic literature review on taxonomy alignment methods in information science, exploring the common research pipeline and characteristics.
    Design/methodology/approach
    The authors implement a five-step systematic literature review process relating to taxonomy alignment. They take a knowledge organization system (KOS) perspective, specifically examining the KOS level of "taxonomies."
    Findings
    They synthesize the matching dimensions of 28 taxonomy alignment studies in terms of the taxonomy input, approach, and output. For the input dimension, they develop three characteristics: tree shapes, variable names, and symmetry; for approach: methodology, unit of matching, comparison type, and relation type; for output: the number of merged solutions and whether the original taxonomies are preserved in the solutions.
    Research limitations/implications
    The main research implications of this study are threefold: (1) to enhance the understanding of the characteristics of a taxonomy alignment work; (2) to provide a novel categorization of taxonomy alignment approaches into natural language processing, logic-based, and heuristic-based approaches; (3) to provide a methodological guideline on the must-include characteristics for future taxonomy alignment research.
    Originality/value
    There is no existing comprehensive review on the alignment of "taxonomies." Further, no other mapping survey has discussed the comparison from a KOS perspective. Using a KOS lens is critical to understanding the broader picture of what other similar systems of organization are, and enables taxonomies to be defined more precisely.
  3. Al-Khatib, K.; Ghosal, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.02
    
    Source
    Proceedings of the Second Workshop on Scholarly Document Processing,
  4. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.02
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. 
The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
  5. Alipour, O.; Soheili, F.; Khasseh, A.A.: ¬A co-word analysis of global research on knowledge organization: 1900-2019 (2022) 0.02
    
    Abstract
    The study's objective is to analyze the structure of knowledge organization studies conducted worldwide. This applied research has been conducted with a scientometrics approach using the co-word analysis. The research records consisted of all articles published in the journals of Knowledge Organization and Cataloging & Classification Quarterly and keywords related to the field of knowledge organization indexed in Web of Science from 1900 to 2019, in which 17,950 records were analyzed entirely with plain text format. The total number of keywords was 25,480, which was reduced to 12,478 keywords after modifications and removal of duplicates. Then, 115 keywords with a frequency of at least 18 were included in the final analysis, and finally, the co-word network was drawn. BibExcel, UCINET, VOSviewer, and SPSS software were used to draw matrices, analyze co-word networks, and draw dendrograms. Furthermore, strategic diagrams were drawn using Excel software. The keywords "information retrieval," "classification," and "ontology" are among the most frequently used keywords in knowledge organization articles. Findings revealed that "Ontology*Semantic Web", "Digital Library*Information Retrieval" and "Indexing*Information Retrieval" are highly frequent co-word pairs, respectively. The results of hierarchical clustering indicated that the global research on knowledge organization consists of eight main thematic clusters; the largest is specified for the topic of "classification, indexing, and information retrieval." The smallest clusters deal with the topics of "data processing" and "theoretical concepts of information and knowledge organization" respectively. Cluster 1 (cataloging standards and knowledge organization) has the highest density, while Cluster 5 (classification, indexing, and information retrieval) has the highest centrality. 
According to the findings of this research, the keyword "information retrieval" has played a significant role in knowledge organization studies, both as a keyword and co-word pair. In the co-word section, there is a type of related or general topic relationship between co-word pairs. Results indicated that information retrieval is one of the main topics in knowledge organization, while the theoretical concepts of knowledge organization have been neglected. In general, the co-word structure of knowledge organization research indicates the multiplicity of global concepts and topics studied in this field globally.
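The core of the co-word method described above, counting how often keyword pairs co-occur on the same record, can be sketched in a few lines (the keyword lists are toy examples, not data from the study):

```python
from itertools import combinations
from collections import Counter

# Toy co-word analysis: count keyword pairs co-occurring on a record.
records = [
    ["ontology", "semantic web", "information retrieval"],
    ["classification", "indexing", "information retrieval"],
    ["ontology", "semantic web"],
]

pair_counts = Counter()
for keywords in records:
    # sorted(set(...)) canonicalizes pair order and drops duplicates
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1
```

In a real study the resulting pair counts form the co-occurrence matrix that tools such as BibExcel or VOSviewer cluster and visualize.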
  6. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.02
    
    Source
    https://arxiv.org/abs/2212.06721
  7. Petrovich, E.: Science mapping and science maps (2021) 0.01
    
    Abstract
    Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patents' data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
    Date
    27. 5.2022 18:19:29
    Footnote
    Contribution to a special issue on "Science and knowledge organization" with longer overviews of key concepts of knowledge organization.
  8. Dunsire, G.; Fritz, D.; Fritz, R.: Instructions, interfaces, and interoperable data : the RIMMF experience with RDA revisited (2020) 0.01
    
    Abstract
    This article presents a case study of RIMMF, a software tool developed to improve the orientation and training of catalogers who use Resource Description and Access (RDA) to maintain bibliographic data. The cataloging guidance and instructions of RDA are based on the Functional Requirements conceptual models that are now consolidated in the IFLA Library Reference Model, but many catalogers are applying RDA in systems that have evolved from inventory and text-processing applications developed from older metadata paradigms. The article describes how RIMMF interacts with the RDA Toolkit and RDA Registry to offer cataloger-friendly multilingual data input and editing interfaces.
  9. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze how it can be reduced in line with the specifics of a particular user task. Such reduction aims to simplify knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology; a task thesaurus contains the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus, and semantic similarity estimates are used to determine the significance of each concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
  10. Hahn, U.: Automatische Sprachverarbeitung (2023) 0.01
    
    Abstract
    This chapter gives an overview of the machine processing of natural languages (such as German or English; natural language - NL) by computers. Basic concepts of automatic language processing (natural language processing - NLP) originate in linguistics (see Section 2) and have been combined, in an increasingly independent way, with formal methods and technical foundations of computer science into a discipline of its own, computational linguistics (CL; see Sections 3 and 4). Natural language systems (NatS) with application-oriented functional requirements form the core of NLP as shaped by information science, which is often referred to as language technology or, in German, also (now dated) as Informationslinguistik (see Section 5).
  11. Chou, C.; Chu, T.: ¬An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.01
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing in the Project Gutenberg collection, through suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
  12. Jörs, B.: Informationskompetenz oder Information Literacy : Das große Missverständnis und Versäumnis der Bibliotheks- und Informationswissenschaft im Zeitalter der Desinformation. Teil 4: "Informationskompetenz" messbar machen. Ergänzende Anmerkungen zum "16th International Symposium of Information Science" ("ISI 2021", Regensburg 8. März - 10. März 2021) (2021) 0.01
    
    Date
    29. 9.2021 18:17:40
    Source
    Open Password. 2021, Nr.979 vom 29. September 2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzM1OSwiNWZhNTM1ZjgxZDVlIiwwLDAsMzI2LDFd]
  13. Hartel, J.: ¬The red thread of information (2020) 0.01
    
    Abstract
    Purpose
    In "The Invisible Substrate of Information Science," a landmark article about the discipline of information science, Marcia J. Bates wrote that "...we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail?
    Design/methodology/approach
    Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed.
    Findings
    Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely, domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968).
    Research limitations/implications
    With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena.
    Originality/value
    Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
  14. Hudon, M.: ¬The status of knowledge organization in library and information science master's programs (2021) 0.01
    
    Abstract
    The content of master's programs accredited by the American Library Association was examined to assess the status of knowledge organization (KO) as a subject in current training. Data collected show that KO remains very visible in a majority of programs, mainly in the form of required and elective courses focusing on descriptive cataloging, classification, and metadata. Observed tendencies include, however, the recent elimination of the required KO course in several programs, the fact that one-third of the KO electives listed in course catalogs have not been scheduled in the past three years, and the fact that two-thirds of those teaching KO specialize in other areas of information science.
    Date
    27. 9.2022 18:46:29
  15. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.01
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  16. Manley, S.: Letters to the editor and the race for publication metrics (2022) 0.01
    Abstract
    This article discusses how letters to the editor boost publishing metrics for journals and authors, and then examines letters published since 2015 in six elite journals, including the Journal of the Association for Information Science and Technology. The initial findings identify some potentially anomalous use of letters and unusual self-citation patterns. The article proposes that Clarivate Analytics consider slightly reconfiguring the Journal Impact Factor to more fairly account for letters and that journals transparently explain their letter submission policies.
    Date
    6. 4.2022 19:22:26
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.5, S.702-707
  17. Skare, R.: Paratext (2020) 0.01
    Abstract
    This article presents Gérard Genette's concept of the paratext by defining the term and by describing its characteristics. The use of the concept in disciplines other than literary studies and for media other than printed books is discussed. The last section shows the relevance of the concept for library and information science in general and for knowledge organization, in which paratext in particular is connected to the concept "metadata."
    Date
    31.10.2020 18:51:29
  18. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.01
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.745-758
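    The coverage comparison this study describes reduces to set operations: deduplicate the union of the two directory lists, then measure what share of that union each index contains. A minimal sketch, with invented journal identifiers standing in for the real Ulrich/AJOL and index master lists:

    ```python
    # Hypothetical illustration of the study's coverage computation.
    # The journal identifiers below are placeholders, not real data.

    def coverage(master_list: set[str], indexed: set[str]) -> float:
        """Percentage of master-list journals present in an index."""
        if not master_list:
            return 0.0
        return 100 * len(master_list & indexed) / len(master_list)

    ulrich = {"j-afr-med", "j-lagos-hum", "sa-j-sci"}
    ajol = {"sa-j-sci", "j-accra-agri"}
    african_journals = ulrich | ajol  # union removes duplicates

    crossref = {"j-afr-med", "sa-j-sci", "j-accra-agri"}
    print(f"CrossRef coverage: {coverage(african_journals, crossref):.1f}%")
    # → CrossRef coverage: 75.0%
    ```

    The same `coverage` call, applied per index (Web of Science, Scopus, CrossRef), yields the percentages reported in the abstract.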
  19. Bärnreuther, K.: Informationskompetenz-Vermittlung für Schulklassen mit Wikipedia und dem Framework Informationskompetenz in der Hochschulbildung (2021) 0.01
    Date
    30. 6.2021 16:29:52
    Source
    o-bib: Das offene Bibliotheksjournal. 8(2021) Nr.2, S.1-22
  20. Shree, P.: ¬The journey of Open AI GPT models (2020) 0.01
    Abstract
    Generative Pre-trained Transformer (GPT) models by OpenAI have taken the natural language processing (NLP) community by storm by introducing very powerful language models. These models can perform various NLP tasks like question answering, textual entailment, and text summarisation without any supervised training. They need few to no examples to understand a task and perform on par with, or even better than, state-of-the-art models trained in a supervised fashion. This article covers the journey of these models and how they have evolved over a period of two years:
    1. Discussion of the GPT-1 paper (Improving Language Understanding by Generative Pre-Training).
    2. Discussion of the GPT-2 paper (Language Models are Unsupervised Multitask Learners) and its improvements over GPT-1.
    3. Discussion of the GPT-3 paper (Language Models are Few-Shot Learners) and the improvements that have made it one of the most powerful models NLP has seen to date.
    This article assumes familiarity with the basics of NLP terminology and the transformer architecture.
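    The "few to no examples" capability the abstract mentions is usually exercised through few-shot prompting: the task is specified entirely in the prompt via a handful of demonstrations, and the model completes the pattern. A minimal sketch of assembling such a prompt (the task, labels, and example texts are illustrative, not from the article):

    ```python
    # Hypothetical few-shot prompt builder: demonstrations plus an
    # unlabeled query, for a model (not included here) to complete.

    def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
        """Assemble a few-shot prompt from (input, label) demonstrations."""
        blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
        blocks.append(f"Review: {query}\nSentiment:")  # model fills in the label
        return "\n\n".join(blocks)

    prompt = few_shot_prompt(
        [("A wonderful read.", "positive"), ("Dull and overlong.", "negative")],
        "Surprisingly insightful.",
    )
    print(prompt)
    ```

    With zero demonstrations this degenerates to zero-shot prompting; GPT-3's headline result was that adding a few demonstrations, with no gradient updates, often rivals supervised fine-tuning.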

Languages

  • e 573
  • d 67
  • pt 2
  • sp 1

Types

  • el 58
  • p 2