Search (61 results, page 2 of 4)

  • language_ss:"e"
  • type_ss:"el"
  • year_i:[2020 TO 2030}
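  The three filters above use Lucene/Solr field-query syntax; the year range mixes an inclusive lower bound "[" with an exclusive upper bound "}", i.e. 2020 through 2029. The sketch below shows how such facet filters could be passed to a Solr-style search endpoint; the endpoint URL, the main query term, and the page size of 20 are illustrative assumptions, not details taken from this site.

      # Sketch only: the field names come from the filters shown above; the host,
      # main query and paging values are hypothetical.
      from urllib.parse import urlencode

      params = [
          ("q", "_text_:a"),                 # hypothetical main query term
          ("fq", 'language_ss:"e"'),         # active facet filters, one fq each
          ("fq", 'type_ss:"el"'),
          ("fq", "year_i:[2020 TO 2030}"),   # "[" inclusive lower, "}" exclusive upper bound
          ("start", 20),                     # offset for page 2 at 20 results per page
          ("rows", 20),
          ("wt", "json"),
      ]
      query_string = urlencode(params)       # repeated fq parameters are preserved
      print("https://example.org/solr/select?" + query_string)  # placeholder host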
  1. Lynch, J.D.; Gibson, J.; Han, M.-J.: Analyzing and normalizing type metadata for a large aggregated digital library (2020) 0.00
    Score detail: 0.0023678814 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 5720), with weight = tf(freq=8.0)=2.828427 × idf=1.153047 × fieldNorm=0.0546875 × queryWeight=0.053105544 (ClassicSimilarity; a worked recomputation follows the result list)
    
    Abstract
    The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata from contributing institutions around the state of Illinois and provides this metadata to the Digital Public Library of America (DPLA) for greater access. The IDHH helps contributors shape their metadata to the standards recommended and required by the DPLA in part by analyzing and enhancing aggregated metadata. In late 2018, the IDHH undertook a project to address a particularly problematic field, Type metadata. This paper walks through the project, detailing the process of gathering and analyzing metadata using the DPLA API and OpenRefine, data remediation through XSL transformations in conjunction with local improvements by contributing institutions, and the DPLA ingestion system's quality controls.
    Type
    a
  2. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.00
    Score detail: 0.0023678814 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 52), with weight = tf(freq=8.0)=2.828427 × idf=1.153047 × fieldNorm=0.0546875 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality-control, mappings - has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.
    Type
    a
  3. Gomez, J.; Allen, K.; Matney, M.; Awopetu, T.; Shafer, S.: Experimenting with a machine generated annotations pipeline (2020) 0.00
    Score detail: 0.0023435948 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 657), with weight = tf(freq=6.0)=2.4494898 × idf=1.153047 × fieldNorm=0.0625 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    The UCLA Library reorganized its software developers into focused subteams with one, the Labs Team, dedicated to conducting experiments. In this article we describe our first attempt at conducting a software development experiment, in which we attempted to improve our digital library's search results with metadata from cloud-based image tagging services. We explore the findings and discuss the lessons learned from our first attempt at running an experiment.
    Type
    a
  4. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.00
    Score detail: 0.0023435948 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 872), with weight = tf(freq=24.0)=4.8989797 × idf=1.153047 × fieldNorm=0.03125 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
    Type
    a
  5. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    Score detail: 0.002269176 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 5365), with weight = tf(freq=10.0)=3.1622777 × idf=1.153047 × fieldNorm=0.046875 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the intrinsic structure of the categories or use them as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  6. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: The OpenCitations Data Model (2020) 0.00
    Score detail: 0.002269176 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 38), with weight = tf(freq=10.0)=3.1622777 × idf=1.153047 × fieldNorm=0.046875 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
    Type
    a
  7. Koster, L.: Persistent identifiers for heritage objects (2020) 0.00
    Score detail: 0.0022374375 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 5718), with weight = tf(freq=14.0)=3.7416575 × idf=1.153047 × fieldNorm=0.0390625 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    Persistent identifiers (PID's) are essential for getting access and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PID's in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PID's that institutions can assign to objects they own or administer themselves. PID's for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PID's can be assigned to. This section concludes with an overview of the requirements that apply if PIDs should also be used for linked data. The second part examines current infrastructural practices, and existing PID systems and their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems a list of requirements for PID systems is presented which is used to address a number of practical considerations. This section concludes with a number of recommendations.
    Type
    a
  8. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.00
    Score detail: 0.0021393995 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 851), with weight = tf(freq=20.0)=4.472136 × idf=1.153047 × fieldNorm=0.03125 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is the first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
    1. Introduction: OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
  9. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.00
    Score detail: 0.0021343792 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 923), with weight = tf(freq=26.0)=5.0990195 × idf=1.153047 × fieldNorm=0.02734375 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    "I still don't know what to think about GPT-4, the new large language model (LLM) from OpenAI. On the one hand it is a remarkable product that easily passes the Turing test. If you ask it questions, via the ChatGPT interface, GPT-4 can easily produce fluid sentences largely indistinguishable from those a person might write. But on the other hand, amid the exceptional levels of hype and anticipation, it's hard to know where GPT-4 and other LLMs truly fit in the larger project of making machines intelligent.
    They might appear intelligent, but LLMs are nothing of the sort. They don't understand the meanings of the words they are using, nor the concepts expressed within the sentences they create. When asked how to bring a cow back to life, earlier versions of ChatGPT, for example, which ran on a souped-up version of GPT-3, would confidently provide a list of instructions. So-called hallucinations like this happen because language models have no concept of what a "cow" is or that "death" is a non-reversible state of being. LLMs do not have minds that can think about objects in the world and how they relate to each other. All they "know" is how likely it is that some sets of words will follow other sets of words, having calculated those probabilities from their training data. To make sense of all this, I spoke with Gary Marcus, an emeritus professor of psychology and neural science at New York University, for "Babbage", our science and technology podcast. Last year, as the world was transfixed by the sudden appearance of ChatGPT, he made some fascinating predictions about GPT-4.
    He doesn't dismiss the potential of LLMs to become useful assistants in all sorts of ways - Google and Microsoft have already announced that they will be integrating LLMs into their search and office productivity software. But he talked me through some of his criticisms of the technology's apparent capabilities. At the heart of Dr Marcus's thoughtful critique is an attempt to put LLMs into proper context. Deep learning, the underlying technology that makes LLMs work, is only one piece of the puzzle in the quest for machine intelligence. To reach the level of artificial general intelligence (AGI) that many tech companies strive for - i.e. machines that can plan, reason and solve problems in the way human brains can - they will need to deploy a suite of other AI techniques. These include, for example, the kind of "symbolic AI" that was popular before artificial neural networks and deep learning became all the rage.
    People use symbols to think about the world: if I say the words "cat", "house" or "aeroplane", you know instantly what I mean. Symbols can also be used to describe the way things are behaving (running, falling, flying) or they can represent how things should behave in relation to each other (a "+" means add the numbers before and after). Symbolic AI is a way to embed this human knowledge and reasoning into computer systems. Though the idea has been around for decades, it fell by the wayside a few years ago as deep learning - buoyed by the sudden easy availability of lots of training data and cheap computing power - became more fashionable. In the near future at least, there's no doubt people will find LLMs useful. But whether they represent a critical step on the path towards AGI, or rather just an intriguing detour, remains to be seen."
  10. Park, J.S.; O'Brien, J.C.; Cai, C.J.; Ringel Morris, M.; Liang, P.; Bernstein, M.S.: Generative agents : interactive simulacra of human behavior (2023) 0.00
    Score detail: 0.0020714647 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 972), with weight = tf(freq=12.0)=3.4641016 × idf=1.153047 × fieldNorm=0.0390625 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
    Type
    a
  11. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.00
    Score detail: 0.0020714647 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 1048), with weight = tf(freq=12.0)=3.4641016 × idf=1.153047 × fieldNorm=0.0390625 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasises the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
    Type
    a
  12. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.00
    Score detail: 0.0020506454 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 572), with weight = tf(freq=6.0)=2.4494898 × idf=1.153047 × fieldNorm=0.0546875 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Type
    a
  13. Hudon, M.: ¬The status of knowledge organization in library and information science master's programs (2021) 0.00
    Score detail: 0.0020506454 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 697), with weight = tf(freq=6.0)=2.4494898 × idf=1.153047 × fieldNorm=0.0546875 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    The content of master's programs accredited by the American Library Association was examined to assess the status of knowledge organization (KO) as a subject in current training. Data collected show that KO remains very visible in a majority of programs, mainly in the form of required and elective courses focusing on descriptive cataloging, classification, and metadata. Observed tendencies include, however, the recent elimination of the required KO course in several programs, the reality that one-third of KO electives listed in course catalogs have not been scheduled in the past three years, and the fact that two-thirds of those teaching KO specialize in other areas of information science.
    Type
    a
  14. Almeida, P. de; Gnoli, C.: Fiction in a phenomenon-based classification (2021) 0.00
    Score detail: 0.0020296127 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 712), with weight = tf(freq=8.0)=2.828427 × idf=1.153047 × fieldNorm=0.046875 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    In traditional classification, fictional works are indexed only by their form, genre, and language, while their subject content is believed to be irrelevant. However, recent research suggests that this may not be the best approach. We tested indexing of a small sample of selected fictional works by Integrative Levels Classification (ILC2), a freely faceted system based on phenomena instead of disciplines, and considered the structure of the resulting classmarks. Issues in the process of subject analysis, such as selection of relevant vs. non-relevant themes and citation order of relevant ones, are identified and discussed. Some phenomena that are covered in scholarly literature can also be identified as relevant themes in fictional literature and expressed in classmarks. This can allow for hybrid search and retrieval systems covering both fiction and nonfiction, which will result in better leveraging of the knowledge contained in fictional works.
    Type
    a
  15. Metz, C.: The new chatbots could change the world : can you trust them? (2022) 0.00
    Score detail: 0.0020296127 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 854), with weight = tf(freq=2.0)=1.4142135 × idf=1.153047 × fieldNorm=0.09375 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Type
    a
  16. Dhillon, P.; Singh, M.: An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.00
    Score detail: 0.0020296127 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 981), with weight = tf(freq=8.0)=2.828427 × idf=1.153047 × fieldNorm=0.046875 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    In the blooming area of Internet technology, the concept of Internet-of-Things (IoT) holds a distinct position that interconnects a large number of smart objects. In the context of social IoT (SIoT), the argument of trust and reliability is evaluated in the presented work. The proposed framework is divided into two blocks, namely Verification Block (VB) and Evaluation Block (EB). VB defines various ontology-based relationships computed for the objects that reflect the security and trustworthiness of an accessed service. EB, in turn, is used for the feedback analysis and proves to be a valuable step that computes and governs the success rate of the service. Support vector machine (SVM) is applied to categorise the trust-based evaluation. The security aspect of the proposed approach is comparatively evaluated for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance compared with existing architectures.
    Type
    a
  17. Kratochwil, F.; Peltonen, H.: Constructivism (2022) 0.00
    Score detail: 0.0020296127 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 829), with weight = tf(freq=18.0)=4.2426405 × idf=1.153047 × fieldNorm=0.03125 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    Constructivism in the social sciences has known several ups and downs over the last decades. It was successful rather early in sociology but hotly contested in International Politics/Relations (IR). Oddly enough, just at the moments it made important inroads into the research agenda and became accepted by the mainstream, the enthusiasm for it waned. Many constructivists - as did mainstream scholars - moved from "grand theory" or even "meta-theory" toward "normal science," or experimented with other (eclectic) approaches, of which the turns to practices, to emotions, to new materialism, to the visual, and to the queer are some of the latest manifestations. In a way, constructivism was "successful," on the one hand, by introducing norms, norm-dynamics, and diffusion; the role of new actors in world politics; and the changing role of institutions into the debates, while losing, on the other hand, much of its critical potential. The latter survived only on the fringes - and in Europe more than in the United States. In IR, curiously, constructivism, which was rooted in various European traditions (philosophy, history, linguistics, social analysis), was originally introduced in Europe via the disciplinary discussions taking place in the United States. Yet, especially in its critical version, it has found a more conducive environment in Europe than in the United States.
    In the United States, soon after its emergence, constructivism became "mainstreamed" by having its analysis of norms reduced to "variable research." In such research, positive examples of for instance the spread of norms were included, but strangely empirical evidence of counterexamples of norm "deaths" (preventive strikes, unlawful combatants, drone strikes, extrajudicial killings) were not. The elective affinity of constructivism and humanitarianism seemed to have transformed the former into the Enlightenment project of "progress." Even Kant was finally pressed into the service of "liberalism" in the U.S. discussion, and his notion of the "practical interest of reason" morphed into the political project of an "end of history." This "slant" has prevented a serious conceptual engagement with the "history" of law and (inter-)national politics and the epistemological problems that are raised thereby. This bowdlerization of constructivism is further buttressed by the fact that in the "knowledge industry" none of the "leading" U.S. departments has a constructivist on board, ensuring thereby the narrowness of conceptual and methodological choices to which the future "professionals" are exposed. This article contextualizes constructivism and its emergence within a changing world and within the evolution of the discipline. The aim is not to provide a definition or a typology of constructivism, since such efforts go against the critical dimension of constructivism. An application of this critique on constructivism itself leads to a reflection on truth, knowledge, and the need for (re-)orientation.
    Type
    a
  18. Advanced online media use (2023) 0.00
    Score detail: 0.001913537 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 954), with weight = tf(freq=4.0)=2.0 × idf=1.153047 × fieldNorm=0.0625 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Content
    "1. Use a range of different media 2. Access paywalled media content 3. Use an advertising and tracking blocker 4. Use alternatives to Google Search 5. Use alternatives to YouTube 6. Use alternatives to Facebook and Twitter 7. Caution with Wikipedia 8. Web browser, email, and internet access 9. Access books and scientific papers 10. Access deleted web content"
    Type
    a
  19. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.00
    Score detail: 0.0018909799 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 5853), with weight = tf(freq=10.0)=3.1622777 × idf=1.153047 × fieldNorm=0.0390625 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis to the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
    Type
    a
  20. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the Wayback Machine as object and instrument of digital research (2023) 0.00
    Score detail: 0.0018909799 = coord(1/2) × coord(1/2) × weight(_text_:a in doc 1084), with weight = tf(freq=10.0)=3.1622777 × idf=1.153047 × fieldNorm=0.0390625 × queryWeight=0.053105544 (ClassicSimilarity)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
    Type
    a
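  Each hit's "Score detail" line above preserves the factors of Lucene's ClassicSimilarity (TF-IDF) explanation reported by the search engine. As a worked check, the sketch below recomputes the first hit's score (Lynch et al. 2020, doc 5720) from the factors listed there; idf=1.153047 derives from docFreq=37942 and maxDocs=44218, and queryNorm=0.046056706 is shared by all hits. This is a reading aid only, not code taken from the search engine.

      import math

      # Factors reported for doc 5720 (result 1); idf and queryNorm are the same for every hit.
      freq = 8.0                    # occurrences of the term "a" in the _text_ field
      idf = 1.153047                # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
      query_norm = 0.046056706
      field_norm = 0.0546875        # stored length/boost normalization for the field

      tf = math.sqrt(freq)                      # 2.828427
      query_weight = idf * query_norm           # 0.053105544
      field_weight = tf * idf * field_norm      # 0.17835285
      weight = query_weight * field_weight      # 0.009471525
      score = weight * 0.5 * 0.5                # two coord(1/2) factors ≈ 0.0023678814 as reported
      print(f"{score:.10f}")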

Types

  • a 48
  • p 9