Search (758 results, page 38 of 38)

  • year_i:[2020 TO 2030}
  1. Broughton, V.: Science and knowledge organization : an editorial (2021) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 593) [ClassicSimilarity], result of:
          0.008101207 = score(doc=593,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 593, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=593)
      0.25 = coord(1/4)
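
    The scoring tree above is standard Lucene ClassicSimilarity "explain" output. As a quick cross-check of the arithmetic, the following minimal Python sketch (illustrative only, not the search engine's own code) reproduces the figures shown in the tree for this hit:

      import math

      # Quantities taken directly from the explain tree for doc 593.
      freq = 2.0                 # termFreq of "information" in the field
      doc_freq = 20772           # docFreq: documents containing the term
      max_docs = 44218           # maxDocs: documents in the index
      query_norm = 0.047586527   # queryNorm
      field_norm = 0.0390625     # fieldNorm(doc=593)
      coord = 0.25               # coord(1/4): one of four query clauses matched

      tf = math.sqrt(freq)                           # 1.4142135
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.7554779
      query_weight = idf * query_norm                # 0.083537094
      field_weight = tf * idf * field_norm           # 0.09697737
      score = coord * query_weight * field_weight    # 0.0020253018

      print(f"{score:.10f}")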
    
    Abstract
    The purpose of this article is to identify the most important factors and features in the evolution of thesauri and ontologies through a dialectic model. This model relies on a dialectic process or idea which could be discovered via a dialectic method. This method has focused on identifying the logical relationship between a beginning proposition, or an idea called a thesis, a negation of that idea called the antithesis, and the result of the conflict between the two ideas, called a synthesis. During the creation of knowledge organization systems (KOSs), the identification of logical relations between different ideas has been made possible through the consideration and use of the most influential methods and tools such as dictionaries, Roget's Thesaurus, thesauri, micro-, macro- and metathesauri, ontologies, and lower, middle and upper level ontologies. The analysis process has adapted a historical methodology, more specifically a dialectic method and documentary method as the reasoning process. This supports our arguments and synthesizes a method for the analysis of research results. As confirmed by the research results, the principle of unity has been shown to be the most important factor in the development and evolution of the structure of knowledge organization systems and their types. There are various types of unity when considering the analysis of logical relations. These include the principle of unity of alphabetical order, unity of science, semantic unity, structural unity and conceptual unity. The results have clearly demonstrated a movement from plurality to unity in the assembling of the complex structure of knowledge organization systems to increase information and knowledge storage and retrieval performance.
  2. Hjoerland, B.: Science, Part I : basic conceptions of science and the scientific method (2021) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 594) [ClassicSimilarity], result of:
          0.008101207 = score(doc=594,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 594, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=594)
      0.25 = coord(1/4)
    
    Abstract
    This article is the first in a trilogy about the concept "science". Section 1 considers the historical development of the meaning of the term science and shows its close relation to the terms "knowledge" and "philosophy". Section 2 presents four historical phases in the basic conceptualizations of science: (1) science as representing absolutely certain knowledge based on deductive proof; (2) science as representing absolutely certain knowledge based on "the scientific method"; (3) science as representing fallible knowledge based on "the scientific method"; (4) science without a belief in "the scientific method" as constitutive, whereby the question about the nature of science becomes dramatic. Section 3 presents four basic understandings of the scientific method: rationalism, which gives priority to a priori thinking; empiricism, which gives priority to the collection, description, and processing of data in a neutral way; historicism, which gives priority to the interpretation of data in the light of "paradigms"; and pragmatism, which emphasizes the analysis of the purposes, consequences, and interests of knowledge. The second article in the trilogy focuses on different fields studying science, while the final article presents further developments in the concept of science and the general conclusion. Overall, the trilogy illuminates the most important tensions in different conceptualizations of science, argues for the role of information science and knowledge organization in the study of science, and suggests how "science" should be understood as an object of research in these fields.
  3. Kyprianos, K.; Efthymiou, F.; Kouis, D.: Students' perceptions on cataloging course (2022) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 623) [ClassicSimilarity], result of:
          0.008101207 = score(doc=623,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=623)
      0.25 = coord(1/4)
    
    Abstract
    Cataloging and metadata description is one of the major competencies that a trainee cataloger must master. According to recent research results, library and information studies students experience difficulties understanding the theory, the terminology, and the tools necessary for cataloging. The experimental application of teaching models derived from predominant learning theories, such as behaviorism, cognitivism, and constructivism, may help in detecting the difficulties of a cataloging course and in suggesting efficient solutions. This paper presents in detail three teaching models applied to a cataloging course and investigates their effectiveness, based on a survey of 126 first-year students. The survey employed the Kirkpatrick model, aiming to record undergraduate students' perceptions and feelings about cataloging. The results revealed that, although a positive change in students' behavior towards cataloging has been achieved, they still do not feel very confident about the skills they have acquired. Moreover, students felt that practicing cataloging more frequently would eliminate their difficulties. Finally, they emphasized the need for face-to-face courses, as the survey took place during the coronavirus pandemic, when courses were held via distance learning.
  4. Lee, S.: Pidgin metadata framework as a mediator for metadata interoperability (2021) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 654) [ClassicSimilarity], result of:
          0.008101207 = score(doc=654,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=654)
      0.25 = coord(1/4)
    
    Abstract
    A pidgin metadata framework based on the concept of pidgin metadata is proposed to complement the limitations of existing approaches to metadata interoperability and to achieve more reliable metadata interoperability. The framework consists of three layers, with a hierarchical structure, and reflects the semantic and structural characteristics of various metadata. Layer 1 performs both an external function, serving as an anchor for semantic association between metadata elements, and an internal function, providing semantic categories that can encompass detailed elements. Layer 2 is an arbitrary layer composed of substantial elements from existing metadata and performs a function in which different metadata elements describing the same or similar aspects of information resources are associated with the semantic categories of Layer 1. Layer 3 implements the semantic relationships between Layer 1 and Layer 2 through the Resource Description Framework syntax. With this structure, the pidgin metadata framework can establish the criteria for semantic connection between different elements and fully reflect the complexity and heterogeneity among various metadata. Additionally, it is expected to provide a bibliographic environment that can achieve more reliable metadata interoperability than existing approaches by securing the communication between metadata.
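
    The framework is described here only conceptually. As a rough, purely illustrative sketch of the Layer 3 idea (RDF statements that associate elements from different metadata schemes with a shared Layer 1 semantic category), one might write something like the following; the namespace URIs, the category name, and the mappings are hypothetical and are not taken from the article:

      # Illustrative sketch only; "pidgin" categories and mappings are hypothetical.
      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, SKOS

      PIDGIN = Namespace("http://example.org/pidgin/")     # stand-in for Layer 1 categories
      DC = Namespace("http://purl.org/dc/elements/1.1/")   # Dublin Core, standing in for one Layer 2 scheme
      MODS = Namespace("http://example.org/mods/")          # placeholder for a second Layer 2 scheme

      g = Graph()
      g.bind("pidgin", PIDGIN)
      g.bind("skos", SKOS)

      # Layer 1: a broad semantic category that can encompass detailed elements.
      g.add((PIDGIN.Agent, RDF.type, SKOS.Concept))

      # Layer 3: statements associating Layer 2 elements that describe the same
      # aspect of a resource with the same Layer 1 category.
      g.add((DC.creator, SKOS.broadMatch, PIDGIN.Agent))
      g.add((MODS.name, SKOS.broadMatch, PIDGIN.Agent))

      print(g.serialize(format="turtle"))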
  5. Juneström, A.: Discourses of fact-checking in Swedish news media (2022) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 686) [ClassicSimilarity], result of:
          0.008101207 = score(doc=686,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=686)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: The purpose of this paper is to examine how contemporary fact-checking is discursively constructed in Swedish news media; this serves to gain insight into how this practice is understood in society. Design/methodology/approach: A selection of texts on the topic of fact-checking published by two of Sweden's largest morning newspapers is analyzed through the lens of Fairclough's discourse theoretical framework. Findings: Three key discourses of fact-checking were identified, each of which included multiple sub-discourses. First, a discourse that has been labeled as "the affirmative discourse," representing fact-checking as something positive, was identified. This discourse embraces ideas about fact-checking as something that, for example, strengthens democracy. Second, a contrasting discourse that has been labeled "the adverse discourse" was identified. This discourse represents fact-checking as something precarious that, for example, poses a risk to democracy. Third, a discourse labeled "the agency discourse" was identified. This discourse conveys ideas on whose responsibility it is to conduct fact-checking. Originality/value: A better understanding of the discursive construction of fact-checking provides insights into social practices pertaining to it and the expectations of its role in contemporary society. The results are relevant for journalists and professionals who engage in fact-checking and for others who have a particular interest in fact-checking, e.g. librarians and educators engaged in media and information literacy projects.
  6. Oliver, C.: Introducing RDA : a guide to the basics after 3R (2021) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 716) [ClassicSimilarity], result of:
          0.008101207 = score(doc=716,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=716)
      0.25 = coord(1/4)
    
    Abstract
    Since Oliver's guide was first published in 2010, thousands of LIS students, records managers, catalogers, and other library professionals have relied on its clear, plainspoken explanation of RDA: Resource Description and Access as their first step towards becoming acquainted with the cataloging standard. Now, reflecting the changes to RDA after the completion of the 3R Project, Oliver brings her Special Report up to date. This essential primer concisely: explains what RDA is, its basic features, and the main factors in its development; describes RDA's relationship to the international standards and models that continue to influence its evolution; provides an overview of the latest developments, focusing on the impact of the 3R Project, the results of aligning RDA with IFLA's Library Reference Model (LRM), and the outcomes of internationalization; illustrates how information is organized in the post-3R Toolkit and explains how to navigate through this new structure; and discusses how RDA continues to enable improved resource discovery both in traditional and new applications, including the linked data environment.
  7. Rösch, H.: Informationsethik (2023) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 821) [ClassicSimilarity], result of:
          0.008101207 = score(doc=821,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 821, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=821)
      0.25 = coord(1/4)
    
    Abstract
    The term information ethics (Informationsethik) was coined in the library environment at the end of the 1980s and emerged almost simultaneously in the USA and Germany. Information ethics encompasses all ethically relevant questions that arise in connection with the production, storage, indexing, distribution, and use of information. It belongs to the applied or domain ethics that have emerged in large numbers over recent decades, including, for example, business ethics, medical ethics, ethics of technology, computer ethics, and media ethics. A trend toward ever more specific domain ethics can be observed, such as food ethics or the ethics of algorithms. The division and delimitation of these domain ethics follow no uniform principle, so their number and their designations vary considerably in the literature. Domain ethics partly overlap or sometimes stand in a complementary relationship to one another. Information ethics, for instance, undoubtedly has links to media ethics, the ethics of technology (computer ethics), business ethics, the ethics of science, and of course social ethics. In contrast to general ethics, which deals with overarching, general aspects such as freedom, justice, or truthfulness, applied ethics on the one hand transfer general ethical principles and methods to specific spheres of life and fields of action. On the other hand, they work out specific questions and problems that are characteristic of the respective domain and that receive no consideration in general ethics. Applied ethics are fundamentally practice-oriented. They aim to sensitize the actors in the respective fields of action to ethical questions and to stabilize awareness of a shared value base, ideally documented in a code of ethics.
  8. Ahmed, M.: Automatic indexing for agriculture : designing a framework by deploying Agrovoc, Agris and Annif (2023) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 1024) [ClassicSimilarity], result of:
          0.008101207 = score(doc=1024,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 1024, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1024)
      0.25 = coord(1/4)
    
    Source
    ¬SRELS Journal of Information Management. 60(2023) no.2, S.85-95
  9. Hjoerland, B.: Bibliographical control (2023) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 1131) [ClassicSimilarity], result of:
          0.008101207 = score(doc=1131,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 1131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1131)
      0.25 = coord(1/4)
    
    Abstract
    Section 1 of this article discusses the concept of bibliographical control and makes a distinction between this term, "bibliographical description," and related terms, which are often confused in the literature. It further discusses the function of bibliographical control and criticizes Patrick Wilson's distinction between "exploitative control" and "descriptive control." Section 2 presents projects for establishing bibliographic control from the Library of Alexandria to the Internet and Google, and it is found that these projects have often been dominated by a positivist dream to make all information in the world available to everybody. Section 3 discusses the theoretical problems of providing comprehensive coverage and retrieving documents represented in databases and argues that 100% coverage and retrievability is an unobtainable ideal. It is shown that bibliographical control has been taken very seriously in the field of medicine, where knowledge of the most important findings is of utmost importance. In principle, it is equally important in all other domains. The conclusion states that the alternative to a positivist dream of complete bibliographic control is a pragmatic philosophy aiming at optimizing bibliographic control supporting specific activities, perspectives, and interests.
  10. Bagatini, J.A.; Chaves Guimarães, J.A.: Algorithmic discriminations and their ethical impacts on knowledge organization : a thematic domain-analysis (2023) 0.00
    0.0020253018 = product of:
      0.008101207 = sum of:
        0.008101207 = weight(_text_:information in 1134) [ClassicSimilarity], result of:
          0.008101207 = score(doc=1134,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 1134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1134)
      0.25 = coord(1/4)
    
    Footnote
    Contribution to a special issue: 4th International Conference on the Ethics of Information and Knowledge Organization, June 8-9, University of Lille, France.
  11. Gartner, R.: Metadata in the digital library : building an integrated strategy with XML (2021) 0.00
    0.0017185257 = product of:
      0.0068741026 = sum of:
        0.0068741026 = weight(_text_:information in 732) [ClassicSimilarity], result of:
          0.0068741026 = score(doc=732,freq=4.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.08228803 = fieldWeight in 732, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=732)
      0.25 = coord(1/4)
    
    Abstract
    The range of metadata needed to run a digital library and preserve its collections in the long term is much more extensive and complicated than anything in its traditional counterpart. It includes the same 'descriptive' information which guides users to the resources they require but must supplement this with comprehensive 'administrative' metadata: this encompasses technical details of the files that make up its collections, the documentation of complex intellectual property rights and the extensive set needed to support its preservation in the long-term. To accommodate all of this requires the use of multiple metadata standards, all of which have to be brought together into a single integrated whole.
    Content
    Contents: 1 Introduction, Aims and Definitions -- 1.1 Origins -- 1.2 From information science to libraries -- 1.3 The central place of metadata -- 1.4 The book in outline -- 2 Metadata Basics -- 2.1 Introduction -- 2.2 Three types of metadata -- 2.2.1 Descriptive metadata -- 2.2.2 Administrative metadata -- 2.2.3 Structural metadata -- 2.3 The core components of metadata -- 2.3.1 Syntax -- 2.3.2 Semantics -- 2.3.3 Content rules -- 2.4 Metadata standards -- 2.5 Conclusion -- 3 Planning a Metadata Strategy: Basic Principles -- 3.1 Introduction -- 3.2 Principle 1: Support all stages of the digital curation lifecycle -- 3.3 Principle 2: Support the long-term preservation of the digital object -- 3.4 Principle 3: Ensure interoperability -- 3.5 Principle 4: Control metadata content wherever possible -- 3.6 Principle 5: Ensure software independence -- 3.7 Principle 6: Impose a logical system of identifiers -- 3.8 Principle 7: Use standards whenever possible -- 3.9 Principle 8: Ensure the integrity of the metadata itself -- 3.10 Summary: the basic principles of a metadata strategy -- 4 Planning a Metadata Strategy: Applying the Basic Principles -- 4.1 Introduction -- 4.2 Initial steps: standards as a foundation -- 4.2.1 'Off-the-shelf' standards -- 4.2.2 Mapping out an architecture and serialising it into a standard -- 4.2.3 Devising a local metadata scheme -- 4.2.4 How standards support the basic principles -- 4.3 Identifiers: everything in its place -- 5 XML: The Syntactical Foundation of Metadata -- 5.1 Introduction -- 5.2 What XML looks like -- 5.3 XML schemas -- 5.4 Namespaces -- 5.5 Creating and editing XML -- 5.6 Transforming XML -- 5.7 Why use XML? -- 6 METS: The Metadata Package -- 6.1 Introduction -- 6.2 Why use METS?
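
    As a small taste of the material covered in chapters 5 and 6, the following sketch (not taken from the book, and deliberately skeletal rather than schema-valid METS) shows how a namespaced, METS-like wrapper combining descriptive and administrative sections can be assembled with Python's standard library:

      # Skeletal illustration only: a METS-like package built with the standard library.
      # Real METS wraps embedded metadata in mdWrap/xmlData; this is simplified.
      import xml.etree.ElementTree as ET

      METS_NS = "http://www.loc.gov/METS/"
      DC_NS = "http://purl.org/dc/elements/1.1/"
      ET.register_namespace("mets", METS_NS)
      ET.register_namespace("dc", DC_NS)

      mets = ET.Element(f"{{{METS_NS}}}mets")

      # Descriptive metadata section: guides users to the resource.
      dmd = ET.SubElement(mets, f"{{{METS_NS}}}dmdSec", attrib={"ID": "dmd1"})
      ET.SubElement(dmd, f"{{{DC_NS}}}title").text = "Digitised pamphlet, 1854"

      # Administrative metadata section: technical details supporting preservation.
      amd = ET.SubElement(mets, f"{{{METS_NS}}}amdSec", attrib={"ID": "amd1"})
      ET.SubElement(amd, f"{{{METS_NS}}}techMD", attrib={"ID": "tech1"})

      print(ET.tostring(mets, encoding="unicode"))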
  12. Adler, M.: ¬The strangeness of subject cataloging : afterword (2020) 0.00
    0.0016202416 = product of:
      0.006480966 = sum of:
        0.006480966 = weight(_text_:information in 5887) [ClassicSimilarity], result of:
          0.006480966 = score(doc=5887,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.0775819 = fieldWeight in 5887, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=5887)
      0.25 = coord(1/4)
    
    Abstract
    "I can't presume to know how other catalogers view the systems, information resources, and institutions with which they engage on a daily basis. David Paton gives us a glimpse in this issue of the affective experiences of bibliographers and catalogers of artists' books in South Africa, and it is clear that the emotional range among them is wide. What I can say is that catalogers' feelings and worldviews, whatever they may be, give the library its shape. I think we can agree that the librarians who constructed the Library of Congress Classification around 1900, Melvil Dewey, and the many classifiers around the world past and present, have had particular sets of desires around control and access and order. We all are asked to submit to those desires in our library work, as well as our own pursuit of knowledge and pleasure reading. And every decision regarding the aboutness of a book, or about where to place it within a particular discipline, takes place in a cataloger's affective and experiential world. While the classification provides the outlines, the catalogers color in the spaces with the books, based on their own readings of the book descriptions and their interpretations of the classification scheme. The decisions they make and the structures to which they are bound affect the circulation of books and their readers across the library. Indeed, some of the encounters will be unexpected, strange, frustrating, frightening, shame-inducing, awe-inspiring, and/or delightful. The emotional experiences of students described in Mabee and Fancher's article, as well as those of any visitor to the library, are all affected by classificatory design. One concern is that a library's ordering principles may reinforce or heighten already existing feelings of precarity or marginality. Because the classifications are hidden from patrons' view, it is difficult to measure the way the order affects a person's mind and body. That a person does not consciously register the associations does not mean that they are not affected."
  13. Kragelj, M.; Borstnar, M.K.: Automatic classification of older electronic texts into the Universal Decimal Classification-UDC (2021) 0.00
    0.0016202416 = product of:
      0.006480966 = sum of:
        0.006480966 = weight(_text_:information in 175) [ClassicSimilarity], result of:
          0.006480966 = score(doc=175,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.0775819 = fieldWeight in 175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=175)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: The purpose of this study is to develop a model for automated classification of old digitised texts to the Universal Decimal Classification (UDC), using machine-learning methods. Design/methodology/approach: The general research approach is inherent to design science research, in which the problem of UDC assignment of the old, digitised texts is addressed by developing a machine-learning classification model. A corpus of 70,000 scholarly texts, fully bibliographically processed by librarians, was used to train and test the model, which was used for classification of old texts on a corpus of 200,000 items. Human experts evaluated the performance of the model. Findings: Results suggest that machine-learning models can correctly assign the UDC at some level for almost any scholarly text. Furthermore, the model can be recommended for the UDC assignment of older texts. Ten librarians corroborated this on 150 randomly selected texts. Research limitations/implications: The main limitations of this study were unavailability of labelled older texts and the limited availability of librarians. Practical implications: The classification model can provide a recommendation to the librarians during their classification work; furthermore, it can be implemented as an add-on to full-text search in the library databases. Social implications: The proposed methodology supports librarians by recommending UDC classifiers, thus saving time in their daily work. By automatically classifying older texts, digital libraries can provide a better user experience by enabling structured searches. These contribute to making knowledge more widely available and useable. Originality/value: These findings contribute to the field of automated classification of bibliographical information with the usage of full texts, especially in cases in which the texts are old, unstructured and in which archaic language and vocabulary are used.
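
    The article itself does not publish code. Purely as a generic illustration of the kind of supervised pipeline it describes (full-text features feeding a classifier that recommends a UDC class), a minimal scikit-learn sketch might look like this; the example texts and UDC labels are placeholders, not the authors' corpus or model:

      # Generic illustration of a text-to-UDC recommender; not the authors' model.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Placeholder training data: digitised texts with librarian-assigned UDC classes.
      texts = [
          "A treatise on the cultivation of fruit trees",
          "Observations on the orbits of comets",
      ]
      udc_labels = ["634", "523"]

      model = make_pipeline(
          TfidfVectorizer(max_features=50_000),   # full-text features
          LogisticRegression(max_iter=1000),      # recommends the most likely UDC class
      )
      model.fit(texts, udc_labels)

      print(model.predict(["Notes on pruning apple orchards"]))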
  14. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    0.0016202416 = product of:
      0.006480966 = sum of:
        0.006480966 = weight(_text_:information in 250) [ClassicSimilarity], result of:
          0.006480966 = score(doc=250,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.0775819 = fieldWeight in 250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
      0.25 = coord(1/4)
    
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
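
    The underlying data combine Web of Science, Unpaywall and the other sources listed above; purely as an illustration of the kind of aggregation behind figures such as the 45% overall OA share, a small pandas sketch over made-up records might look like this:

      # Illustration only: OA share per sector and year from made-up records.
      import pandas as pd

      articles = pd.DataFrame({
          "year":   [2010, 2010, 2018, 2018, 2018],
          "sector": ["university", "Max Planck Society", "university",
                     "university", "Leibniz Association"],
          "is_oa":  [False, True, True, True, False],
      })

      oa_share = (
          articles.groupby(["sector", "year"])["is_oa"]
          .mean()                      # share of openly available articles
          .rename("oa_share")
          .reset_index()
      )
      print(oa_share)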
  15. Jörs, B.: Informationskompetenz ist auf domänenspezifisches Vorwissen angewiesen und kann immer nur vorläufig sein : eine Antwort auf Steve Patriarca (2021) 0.00
    0.0016202416 = product of:
      0.006480966 = sum of:
        0.006480966 = weight(_text_:information in 430) [ClassicSimilarity], result of:
          0.006480966 = score(doc=430,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.0775819 = fieldWeight in 430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=430)
      0.25 = coord(1/4)
    
    Abstract
    The very heading of Steve Patriarca's statement shows that the proponents of "information literacy" (Informationskompetenz) still start from the simple and naive assumption that the mere availability of "information literacy" is enough to give "us the tools" to "check sources and verify factual claims". Without once again repeating, like a mantra, the arguments against a "universally valid information literacy", which cannot exist as an independent "competence" (see the recent statements on this misnomer in Open Password nos. 682, 691, 759, 960, 963, 965, 971, 979, etc.), and without pointing again to the views of the neighbouring disciplines (neuroscience, communication science, etc.) on this unfortunate term of library and information science that are included there, only the following shall be briefly clarified here:
  16. Ostrzinski, U.: Deutscher MeSH : ZB MED veröffentlicht aktuelle Jahresversion 2022 - freier Zugang und FAIRe Dateiformate (2022) 0.00
    0.0016202416 = product of:
      0.006480966 = sum of:
        0.006480966 = weight(_text_:information in 625) [ClassicSimilarity], result of:
          0.006480966 = score(doc=625,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.0775819 = fieldWeight in 625, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=625)
      0.25 = coord(1/4)
    
    Content
    MeSH is a polyhierarchical, concept-based subject heading register for biomedical terms and comprises the vocabulary that appears in the NLM databases, for example MEDLINE or PubMed. It is updated annually and published by the U.S. National Library of Medicine. For the German-language version, ZB MED translates the newly added terms and supplements them with additional synonyms. ZB MED first produced the German MeSH in 2020; previously, responsibility lay with the German Institute for Medical Documentation and Information (DIMDI/BfArM).
  17. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.00
    0.0016202416 = product of:
      0.006480966 = sum of:
        0.006480966 = weight(_text_:information in 851) [ClassicSimilarity], result of:
          0.006480966 = score(doc=851,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.0775819 = fieldWeight in 851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
      0.25 = coord(1/4)
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021, and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is the first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
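
    The authors worked through the public chat interface rather than code. Purely as an illustrative sketch of how the same paraphrasing step could be scripted against the OpenAI Python SDK, one might write the following; the model name, prompt, and abstract are placeholders and are not taken from the paper:

      # Illustrative sketch; assumes the openai Python SDK (v1+) and OPENAI_API_KEY set.
      from openai import OpenAI

      client = OpenAI()

      abstract = "A digital twin is a virtual replica of a physical system ..."  # placeholder text

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              {"role": "system",
               "content": "Paraphrase the following abstract for a literature review."},
              {"role": "user", "content": abstract},
          ],
      )
      print(response.choices[0].message.content)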
  18. ¬The library's guide to graphic novels (2020) 0.00
    0.0014177114 = product of:
      0.0056708455 = sum of:
        0.0056708455 = weight(_text_:information in 717) [ClassicSimilarity], result of:
          0.0056708455 = score(doc=717,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.06788416 = fieldWeight in 717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=717)
      0.25 = coord(1/4)
    
    Abstract
    The circ stats say it all: graphic novels' popularity among library users keeps growing, with more being published (and acquired by libraries) each year. The unique challenges of developing and managing a graphic novels collection have led the Association for Library Collections and Technical Services (ALCTS) to craft this guide, presented under the expert supervision of editor Ballestro, who has worked with comics for more than 35 years. Examining the ever-changing ways that graphic novels are created, packaged, marketed, and released, this resource gathers a range of voices from the field to explore such topics as: a cultural history of comics and graphic novels from their World War II origins to today, providing a solid grounding for newbies and fresh insights for all; catching up on the Big Two's reboots: Marvel's 10 and DC's 4; five questions to ask when evaluating nonfiction graphic novels and 30 picks for a core collection; key publishers and cartoonists to consider when adding international titles; developing a collection that supports curriculum and faculty outreach to ensure wide usage, with catalogers' tips for organizing your collection and improving discovery; real-world examples of how libraries treat graphic novels, such as an in-depth profile of the development of Penn Library's Manga collection; how to integrate the emerging field of graphic medicine into the collection; and specialized resources like The Cartoonists of Color and Queer Cartoonists databases, the open access scholarly journal Comic Grid, and the No Flying, No Tights website. Packed with expert guidance and useful information, this guide will assist technical services staff, catalogers, and acquisition and collection management librarians.

Languages

  • e 642
  • d 111
  • pt 3
  • m 2
  • sp 1

Types

  • a 713
  • el 79
  • m 24
  • p 7
  • s 6
  • A 1
  • EL 1
  • x 1

Subjects

Classifications