Search (97 results, page 2 of 5)

  • type_ss:"el"
  • year_i:[2020 TO 2030}
  1. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    0.0055237534 = product of:
      0.01657126 = sum of:
        0.01657126 = product of:
          0.03314252 = sum of:
            0.03314252 = weight(_text_:of in 572) [ClassicSimilarity], result of:
              0.03314252 = score(doc=572,freq=32.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.48376274 = fieldWeight in 572, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=572)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
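    The score breakdown above is Lucene ClassicSimilarity "explain" output. As a quick check, the arithmetic can be reproduced directly; a minimal Python sketch, using the values from the breakdown and the standard ClassicSimilarity formulas (tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm):

      from math import sqrt

      # Values copied from the explain output above.
      freq = 32.0                    # term frequency of "of" in document 572
      idf = 1.5637573                # idf(docFreq=25162, maxDocs=44218)
      query_norm = 0.043811057
      field_norm = 0.0546875

      tf = sqrt(freq)                            # 5.656854
      field_weight = tf * idf * field_norm       # 0.48376274
      query_weight = idf * query_norm            # 0.06850986
      term_score = query_weight * field_weight   # 0.03314252

      # coord(1/2) and coord(1/3): fraction of query clauses that matched.
      print(term_score * 0.5 * (1 / 3))          # ~0.0055237534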
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze how it can be reduced in line with the specifics of a particular user task. Such reduction aims to simplify knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology that contain the subset of ontological concepts and relations usable in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
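    The abstract sketches the pipeline only in prose. As an illustration of the selection step (not the authors' algorithm; a hypothetical greedy stand-in for their combinatorial optimization, with made-up concept names), one might write:

      def build_task_thesaurus(concepts, similarity, threshold):
          """Greedily pick the ontology concepts most similar to the user task.

          concepts:   concept identifiers from the domain ontology
          similarity: dict mapping concept -> semantic-similarity estimate
                      with respect to the user task (0.0 .. 1.0)
          threshold:  minimum similarity for a concept to count as significant
          """
          ranked = sorted(concepts, key=lambda c: similarity[c], reverse=True)
          return [c for c in ranked if similarity[c] >= threshold]

      # Hypothetical toy ontology fragment:
      sim = {"thesaurus": 0.9, "ontology": 0.8, "weather": 0.1}
      print(build_task_thesaurus(list(sim), sim, threshold=0.5))
      # ['thesaurus', 'ontology']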
  2. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.00
    
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasizes the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
  3. Machado, L.M.O.: Ontologies in knowledge organization (2021) 0.00
    
    Abstract
    Within the set of knowledge organization systems (KOS), the term "ontology" is paradigmatic of the terminological ambiguity across typologies. Contributing to this situation is the indiscriminate use of the term "ontology" both for a specific type of KOS and for a process of categorization, owing to interdisciplinary use of the term with different meanings. We present a systematization of different authors' perspectives on ontologies as representational artifacts, seeking to contribute to terminological clarification. Focusing the analysis on the intention, semantics and modulation of ontologies, two broad perspectives on ontologies as artifacts emerge, coexisting across the knowledge organization systems spectrum: on the one hand, ontologies viewed as an evolution, in terms of complexity, of traditional conceptual systems; on the other, ontologies viewed as systems that organize ontological rather than epistemological knowledge. The focus of ontological analysis is the item to be modeled, not the intentions that motivate the construction of the system.
  4. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.00
    
    Abstract
    The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
    Source
    Journal of intelligent information systems [https://doi.org/10.1007/s10844-023-00815-y]
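    To make the contrast concrete: plain LSA takes a truncated SVD of the document-term matrix itself, while CA first removes the row and column margins and decomposes the standardized residuals. A minimal numpy sketch on a toy count matrix (no term weighting, unadjusted singular values; not the authors' evaluation code):

      import numpy as np

      def lsa_embed(X, k):
          """Truncated SVD of the raw document-term matrix (plain LSA)."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U[:, :k] * s[:k]                 # k-dim document coordinates

      def ca_embed(X, k):
          """Correspondence analysis: SVD of standardized residuals,
          which removes the margins LSA keeps in its first dimensions."""
          P = X / X.sum()
          r = P.sum(axis=1, keepdims=True)        # row (document) margins
          c = P.sum(axis=0, keepdims=True)        # column (term) margins
          S = (P - r @ c) / np.sqrt(r @ c)        # standardized residuals
          U, s, Vt = np.linalg.svd(S, full_matrices=False)
          return (U[:, :k] * s[:k]) / np.sqrt(r)  # principal row coordinates

      X = np.array([[3., 0., 1.], [0., 2., 2.], [1., 1., 0.]])  # toy counts
      print(lsa_embed(X, 2))
      print(ca_embed(X, 2))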
  5. Hudon, M.: The status of knowledge organization in library and information science master's programs (2021) 0.00
    
    Abstract
    The content of master's programs accredited by the American Library Association was examined to assess the status of knowledge organization (KO) as a subject in current training. Data collected show that KO remains very visible in a majority of programs, mainly in the form of required and elective courses focusing on descriptive cataloging, classification, and metadata. Observed tendencies include, however, the recent elimination of the required KO course in several programs, the reality that one-third of the KO electives listed in course catalogs have not been scheduled in the past three years, and the fact that two-thirds of those teaching KO specialize in other areas of information science.
  6. Broughton, V.: Faceted classification in support of diversity : the role of concepts and terms in representing religion (2020) 0.00
    
    Abstract
    The paper examines the development of facet analysis as a methodology and the role it plays in building classifications and other knowledge-organization tools. The use of categorical analysis in areas other than library and information science is also considered. The suitability of the faceted approach for humanities documentation is explored through a critical description of the FATKS (Facet Analytical Theory in Managing Knowledge Structure for Humanities) project carried out at University College London. This research focused on building a conceptual model for the subject of religion together with a relational database and search-and-browse interfaces that would support some degree of automatic classification. The paper concludes with a discussion of the differences between the conceptual model and the vocabulary used to populate it, and how, in the case of religion, the choice of terminology can create an apparent bias in the system.
    Source
    The Indexer: the international journal of indexing. 38(2020) no.3, S.247-270
  7. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the Wayback Machine as object and instrument of digital research (2023) 0.00
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
    Source
    Convergence: The International Journal of Research into New Media Technologies [https://www.researchgate.net/publication/369660337_Knowing_Infrastructure_The_Wayback_Machine_as_object_and_instrument_of_digital_research]
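    For readers who want to poke at the object of study themselves: SPN can be triggered for a public URL with a plain HTTP request, and the resulting snapshot located via the Wayback availability API. A minimal sketch (the authenticated SPN API offers many more options than shown; example.com is a placeholder):

      import requests

      url = "https://example.com/"

      # Ask Save Page Now to capture the page.
      requests.get("https://web.archive.org/save/" + url, timeout=60)

      # Look up the closest archived snapshot for the same URL.
      resp = requests.get("https://archive.org/wayback/available",
                          params={"url": url}, timeout=30)
      snap = resp.json().get("archived_snapshots", {}).get("closest", {})
      print(snap.get("timestamp"), snap.get("url"))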
  8. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to, and uses of, Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analysing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular for refining and improving search expressions and for constructing textual corpora. The body of available work shows, however, that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
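    The raw material for the works surveyed is easy to obtain: the MediaWiki API exposes the categories of any article. A minimal sketch (the article title is chosen arbitrarily):

      import requests

      resp = requests.get(
          "https://en.wikipedia.org/w/api.php",
          params={
              "action": "query",
              "prop": "categories",
              "titles": "Information retrieval",
              "cllimit": "max",      # return all categories of the page
              "format": "json",
          },
          timeout=30,
      )
      for page in resp.json()["query"]["pages"].values():
          for cat in page.get("categories", []):
              print(cat["title"])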
  9. Machado, L.; Martínez-Ávila, D.; Barcellos Almeida, M.; Borges, M.M.: Towards a moderate realistic foundation for ontological knowledge organization systems : the question of the naturalness of classifications (2023) 0.00
    
    Abstract
    Several authors emphasize the need for a change in classification theory, due to the influence of a dogmatic and monistic ontology supported by an outdated essentialism. These claims tend to focus on the fallibility of knowledge, the need for a pluralistic view, and the theory-ladenness of observations. Whatever the legitimacy of these concerns, there is a risk, when they are not moderate, of falling into the opposite relativistic extreme. Based on a narrative review of the literature, we aim to discuss reflectively the theoretical foundations that can serve as a basis for a realist position supporting pluralistic ontological classifications. The goal is to show that, contrary to rather conventional solutions, objective, scientifically based approaches to natural classifications are viable, allowing a proper distinction between ontological and taxonomic questions. Supported by critical scientific realism, we consider such an approach suitable for the development of ontological Knowledge Organization Systems (KOS). We believe that ontological perspectivism can provide the necessary adaptation to the different granularities of reality.
    Source
    Journal of information science. 54(2023) no.x, S.xx-xx
  10. Dhillon, P.; Singh, M.: An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.00
    
    Abstract
    In the booming area of Internet technology, the concept of the Internet of Things (IoT) holds a distinct position, interconnecting large numbers of smart objects. The presented work evaluates trust and reliability in the context of the social IoT (SIoT). The proposed framework is divided into two blocks, namely the Verification Block (VB) and the Evaluation Block (EB). VB computes various ontology-based relationships for the objects, reflecting the security and trustworthiness of an accessed service, while EB performs the feedback analysis, a valuable step that computes and governs the success rate of the service. A support vector machine (SVM) is applied to categorise the trust-based evaluation. The security of the proposed approach is evaluated comparatively for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance than existing architectures.
    Source
    Journal of information science. 41(2023) Jan., S.1-23
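    The classification step is standard supervised learning. A minimal, hypothetical sketch of SVM-based trust categorisation with scikit-learn (feature values and labels are invented; the paper's actual features are its ontology-based relationship and feedback measures):

      from sklearn.svm import SVC

      # Each row: [ontology-relationship score, feedback success rate];
      # label 1 = trustworthy service, 0 = untrustworthy.
      X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
      y = [1, 1, 0, 0]

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict([[0.7, 0.6]]))   # -> [1], i.e. classified as trusted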
  11. Shree, P.: The journey of Open AI GPT models (2020) 0.00
    
    Abstract
    Generative Pre-trained Transformer (GPT) models by OpenAI have taken the natural language processing (NLP) community by storm by introducing very powerful language models. These models can perform various NLP tasks, such as question answering, textual entailment and text summarisation, without any supervised training. They need very few to no examples to understand a task, and perform on a par with, or even better than, state-of-the-art models trained in a supervised fashion. This article covers the journey of these models and how they have evolved over a period of two years: 1. discussion of the GPT-1 paper (Improving Language Understanding by Generative Pre-training); 2. discussion of the GPT-2 paper (Language Models are Unsupervised Multitask Learners) and its improvements over GPT-1; 3. discussion of the GPT-3 paper (Language Models are Few-Shot Learners) and the improvements that have made it one of the most powerful models NLP has seen to date. The article assumes familiarity with the basics of NLP terminology and the transformer architecture.
    Source
    https://medium.com/walmartglobaltech/the-journey-of-open-ai-gpt-models-32d95b7b7fb2
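    The few-shot behaviour the article describes can be tried out with any causal language model. A minimal sketch using the freely available GPT-2 via the Hugging Face pipeline API (GPT-2 is far weaker than GPT-3, so expect rough results; the prompt follows the few-shot format popularised by the GPT-3 paper):

      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")

      prompt = ("Translate English to French.\n"
                "sea otter => loutre de mer\n"
                "cheese =>")
      out = generator(prompt, max_new_tokens=5, do_sample=False)
      print(out[0]["generated_text"])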
  12. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.00
    
    Abstract
    "I still don't know what to think about GPT-4, the new large language model (LLM) from OpenAI. On the one hand it is a remarkable product that easily passes the Turing test. If you ask it questions, via the ChatGPT interface, GPT-4 can easily produce fluid sentences largely indistinguishable from those a person might write. But on the other hand, amid the exceptional levels of hype and anticipation, it's hard to know where GPT-4 and other LLMs truly fit in the larger project of making machines intelligent.
    They might appear intelligent, but LLMs are nothing of the sort. They don't understand the meanings of the words they are using, nor the concepts expressed within the sentences they create. When asked how to bring a cow back to life, earlier versions of ChatGPT, for example, which ran on a souped-up version of GPT-3, would confidently provide a list of instructions. So-called hallucinations like this happen because language models have no concept of what a "cow" is or that "death" is a non-reversible state of being. LLMs do not have minds that can think about objects in the world and how they relate to each other. All they "know" is how likely it is that some sets of words will follow other sets of words, having calculated those probabilities from their training data. To make sense of all this, I spoke with Gary Marcus, an emeritus professor of psychology and neural science at New York University, for "Babbage", our science and technology podcast. Last year, as the world was transfixed by the sudden appearance of ChatGPT, he made some fascinating predictions about GPT-4.
    He doesn't dismiss the potential of LLMs to become useful assistants in all sorts of ways; Google and Microsoft have already announced that they will be integrating LLMs into their search and office productivity software. But he talked me through some of his criticisms of the technology's apparent capabilities. At the heart of Dr Marcus's thoughtful critique is an attempt to put LLMs into proper context. Deep learning, the underlying technology that makes LLMs work, is only one piece of the puzzle in the quest for machine intelligence. To reach the level of artificial general intelligence (AGI) that many tech companies strive for (i.e. machines that can plan, reason and solve problems in the way human brains can), they will need to deploy a suite of other AI techniques. These include, for example, the kind of "symbolic AI" that was popular before artificial neural networks and deep learning became all the rage.
    People use symbols to think about the world: if I say the words "cat", "house" or "aeroplane", you know instantly what I mean. Symbols can also be used to describe the way things are behaving (running, falling, flying) or they can represent how things should behave in relation to each other (a "+" means add the numbers before and after). Symbolic AI is a way to embed this human knowledge and reasoning into computer systems. Though the idea has been around for decades, it fell by the wayside a few years ago as deep learning, buoyed by the sudden easy availability of lots of training data and cheap computing power, became more fashionable. In the near future at least, there's no doubt people will find LLMs useful. But whether they represent a critical step on the path towards AGI, or rather just an intriguing detour, remains to be seen."
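    Marcus's point that LLMs only model how likely words are to follow other words can be made literal in a few lines: ask a small causal language model for its next-token distribution. A minimal sketch with GPT-2 as a stand-in (GPT-4's weights are not public):

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      inputs = tok("The cow jumped over the", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits[0, -1]   # scores for the next token
      probs = torch.softmax(logits, dim=-1)

      top = probs.topk(5)                          # five likeliest next tokens
      for p, i in zip(top.values, top.indices):
          print(f"{tok.decode(int(i))!r:>10}  {p.item():.3f}")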
  13. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    
    Abstract
    This is a bibliometric study of the development of the open access availability of scholarly journal articles in Germany that were published in 2010-2018 and are indexed in the Web of Science. Particular attention is paid to whether, and to what extent, the open access profiles of universities and non-university research institutions in Germany differ from one another.
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
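    The kind of per-article OA categorisation used in the study can be reproduced on a small scale with the public Unpaywall API, one of the data sources listed above. A minimal sketch (the DOI is an arbitrary example; Unpaywall asks for your e-mail address as a parameter):

      import requests

      doi = "10.1038/nature12373"
      resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                          params={"email": "you@example.org"}, timeout=30)
      record = resp.json()

      # oa_status is one of: gold, hybrid, bronze, green, closed
      print(record.get("oa_status"), record.get("is_oa"))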
  14. Koster, L.: Persistent identifiers for heritage objects (2020) 0.00
    
    Abstract
    Persistent identifiers (PIDs) are essential for accessing and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article is a general theoretical description of persistent identifiers: what they are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized, followed by the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices and existing PID systems with their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented and used to address a number of practical considerations. This section concludes with a number of recommendations.
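    The mechanics are simple from a client's point of view: a PID is a stable name that a resolver redirects to the object's current location, so links survive URL changes. A minimal sketch resolving a DOI (taken from a source reference elsewhere on this page) through the doi.org proxy; Handle, ARK or URN:NBN resolvers work along the same lines:

      import requests

      pid = "https://doi.org/10.1108/JD-04-2023-0074"

      # Follow the resolver's redirects to the current landing page.
      resp = requests.head(pid, allow_redirects=True, timeout=30)
      print(resp.url)    # the location the PID currently resolves to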
  15. Pankowski, T.: Ontological databases with faceted queries (2022) 0.00
    
    Abstract
    The success of the use of ontology-based systems depends on efficient and user-friendly methods of formulating queries against the ontology. We propose a method to query a class of ontologies, called facet ontologies (fac-ontologies), using a faceted, human-oriented approach. A fac-ontology has two important features: (a) a hierarchical view of it can be defined as a nested facet over the ontology, and this view can be used as a faceted interface to create queries and to explore the ontology; (b) the ontology can be converted into an ontological database, the ABox of which is stored in a database, and the faceted queries are evaluated against this database. We show that the proposed faceted interface makes it possible to formulate queries that are semantically equivalent to $\mathcal{SROIQ}^{Fac}$, a limited version of the $\mathcal{SROIQ}$ description logic. The TBox of a fac-ontology is divided into a set of rules defining intensional predicates and a set of constraint rules to be satisfied by the database. We identify a class of so-called reflexive weak cycles in a set of constraint rules and propose a method to deal with them in the chase procedure. The considerations are illustrated with solutions implemented in the DAFO system (data access based on faceted queries over ontologies).
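    The faceted-interface idea can be illustrated without any description logic machinery: a user's facet selections become a conjunctive filter over ABox instances. A toy, hypothetical sketch (nothing like the actual DAFO implementation):

      # Hypothetical ABox: instances with their facet values.
      abox = [
          {"type": "Hotel", "city": "Poznan", "stars": 4},
          {"type": "Hotel", "city": "Berlin", "stars": 3},
          {"type": "Hostel", "city": "Poznan", "stars": 2},
      ]

      def faceted_query(instances, **facets):
          """Return instances matching every selected facet value."""
          return [x for x in instances
                  if all(x.get(f) == v for f, v in facets.items())]

      print(faceted_query(abox, type="Hotel", city="Poznan"))
      # [{'type': 'Hotel', 'city': 'Poznan', 'stars': 4}]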
  16. Harari, Y.N.: AI has hacked the operating system of human civilisation (2023) 0.00
    
    Abstract
    Storytelling computers will change the course of human history, says the historian and philosopher.
    Source
    https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation?giftId=6982bba3-94bc-441d-9153-6d42468817ad
  17. Frederick, D.E.: ChatGPT: a viral data-driven disruption in the information environment (2023) 0.00
    
    Abstract
    This study aims to introduce librarians to ChatGPT and challenge them to think about how it fits into their work and what learning they will need to do in order to stay relevant in the realm of artificial intelligence.
    Design/methodology/approach: Popular and scientific media sources were monitored over the course of two months to gather current discussions about the uses of, and opinions about, ChatGPT. These were analyzed in light of historical developments in education and libraries. Additional sources of information on the topic are described and discussed to make the issue relevant to librarians and libraries.
    Findings: The potential risks and benefits of ChatGPT are highly relevant for librarians, but are also currently not fully understood. We are at a very early stage of understanding and using this technology, but it does appear to have the potential to become disruptive to libraries as well as to many other aspects of life.
    Originality/value: ChatGPT-3 has only been publicly available since the end of November 2022. We are just now starting to take a deeper dive into what this technology means for libraries. This paper is one of the early ones providing librarians with some direction on where to focus their interest and attention in learning about it.
  18. Petras, V.: The identity of information science (2023) 0.00
    
    Abstract
    Purpose: This paper offers a definition of the core of information science, which encompasses most research in the field. The definition provides a unique identity for information science and positions it in the disciplinary universe.
    Design/methodology/approach: After motivating the objective, a definition of the core and an explanation of its key aspects are provided. The definition is related to other definitions of information science before controversial discourse aspects are briefly addressed: discipline vs. field, science vs. humanities, library vs. information science and application vs. theory. Interdisciplinarity as an often-assumed foundation of information science is challenged.
    Findings: Information science is concerned with how information is manifested across space and time. Information is manifested to facilitate and support the representation, access, documentation and preservation of ideas, activities, or practices, and to enable different types of interactions. Research and professional practice encompass the infrastructures (institutions and technology) and the phenomena and practices around manifested information across space and time as the field's core contribution to the scholarly landscape. Information science collaborates with other disciplines to work on complex information problems that need multi- and interdisciplinary approaches to address them.
    Originality/value: The paper argues that new information problems may change the core of the field, but throughout its existence the discipline has remained quite stable in its central focus, yet proved to be highly adaptive to the tremendous changes in the forms, practices, institutions and technologies around and for manifested information.
    Source
    Journal of documentation. 79(2023) no.n, S.xxx-xxx [ahead-of-print. https://doi.org/10.1108/JD-04-2023-0074]
  19. Shiri, A.; Kelly, E.J.; Kenfield, A.; Woolcott, L.; Masood, K.; Muglia, C.; Thompson, S.: A faceted conceptualization of digital object reuse in digital repositories (2020) 0.00
    
    Abstract
    In this paper, we provide an introduction to the concept of digital object reuse and its various connotations in the context of current digital libraries, archives, and repositories. We will then propose a faceted categorization of the various types, contexts, and cases for digital object reuse in order to facilitate understanding and communication and to provide a conceptual framework for the assessment of digital object reuse by various cultural heritage and cultural memory organizations.
    Footnote
    Conference paper: International Society for Knowledge Organization (ISKO), Proceedings of the Sixteenth International ISKO Conference, Aalborg, Denmark, 2020. Ed. by Marianne Lykke, Tanja Svarre, Mette Skov and Daniel Martínez-Ávila. (Advances in Knowledge Organization, vol. 17).
  20. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: The OpenCitations Data Model (2020) 0.00
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or application context. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
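    For illustration, a single OCDM-style citation can be written down with rdflib using the CiTO vocabulary on which OCDM builds (the entity URIs are invented placeholders, not real OpenCitations identifiers):

      from rdflib import Graph, Namespace, URIRef
      from rdflib.namespace import RDF

      CITO = Namespace("http://purl.org/spar/cito/")
      g = Graph()
      g.bind("cito", CITO)

      # A citation is itself an entity linking a citing and a cited work.
      ci = URIRef("https://example.org/ci/1")
      g.add((ci, RDF.type, CITO.Citation))
      g.add((ci, CITO.hasCitingEntity, URIRef("https://example.org/br/1")))
      g.add((ci, CITO.hasCitedEntity, URIRef("https://example.org/br/2")))

      print(g.serialize(format="turtle"))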

Languages

  • e 59
  • d 36
  • sp 1

Types

  • a 77
  • p 9
  • r 1