Search (232 results, page 12 of 12)

  • year_i:[2020 TO 2030}
  1. Li, Y.; Crescenzi, A.; Ward, A.R.; Capra, R.: Thinking inside the box : an evaluation of a novel search-assisting tool for supporting (meta)cognition during exploratory search (2023) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 1040) [ClassicSimilarity], result of:
              0.027226217 = score(doc=1040,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 1040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1040)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
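
    The nested breakdown above (and the analogous one under each hit below) appears to be Lucene's "explain" output for the ClassicSimilarity TF-IDF scorer; the 0.01 shown beside the title is the rounded final value 0.0068065543. As a reading aid only, the following Python sketch reproduces this first hit's numbers from the constants printed in the listing; it does not query any index, and the two coord(1/2) factors are simply taken at face value.

      import math

      # Constants copied from the explain tree for doc 1040 (term "systems").
      freq = 2.0                # termFreq
      idf = 3.0731742           # idf(docFreq=5561, maxDocs=44218); consistent with ln(44218/(5561+1)) + 1
      query_norm = 0.052184064
      field_norm = 0.0390625    # fieldNorm(doc=1040)

      tf = math.sqrt(freq)                       # 1.4142135
      query_weight = idf * query_norm            # 0.16037072
      field_weight = tf * idf * field_norm       # 0.1697705 (fieldWeight)
      term_score = query_weight * field_weight   # 0.027226217, the weight(_text_:systems ...) line
      final_score = term_score * 0.5 * 0.5       # the two coord(1/2) factors
      print(round(final_score, 10))              # ~0.0068065543 (up to float rounding)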
    
    Abstract
    Exploratory searches involve significant cognitively demanding activities aimed at learning and investigation. However, users gain little support from search engines for their cognitive and metacognitive activities (e.g., discovery, synthesis, planning, transformation, monitoring, and reflection) during exploratory searches. To better support the exploratory search process, we designed a new search assistance tool called OrgBox. OrgBox allows users to drag-and-drop information they find during searches into "boxes" and "items" that can be created, labeled, and rearranged on a canvas. We conducted a controlled, within-subjects user study with 24 participants to evaluate OrgBox against a baseline tool, OrgDoc, that supported rich-text features. Our findings show that participants perceived the OrgBox tool to provide more support for grouping and reorganizing information, tracking thought processes, planning and monitoring search and task processes, and gaining a visual overview of the collected information. The usability test reveals users' preferences for simplicity, familiarity, and flexibility in the design of OrgBox, along with technical problems such as delays in response and restrictions on use. Our results have implications for the design of search-assisting systems that encourage cognitive and metacognitive activities during exploratory search processes.
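
    The "boxes" and "items" the abstract describes amount to a small, user-rearrangeable data model: labelled containers on a canvas holding snippets collected during search. Purely as an illustration of that idea (class and field names are invented here, not taken from the paper), a minimal sketch:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Item:                  # a snippet dragged in from a search result
          text: str
          source_url: str

      @dataclass
      class Box:                   # a labelled group that can be renamed or rearranged
          label: str
          items: List[Item] = field(default_factory=list)

      @dataclass
      class Canvas:                # the workspace holding all boxes
          boxes: List[Box] = field(default_factory=list)

          def move(self, item: Item, src: Box, dst: Box) -> None:
              src.items.remove(item)   # regrouping is a cheap list operation,
              dst.items.append(item)   # which is what supports reorganising and reflection

      workspace = Canvas(boxes=[Box("Background"), Box("Open questions")])
      note = Item("definition of metacognition", "https://example.org/source")
      workspace.boxes[0].items.append(note)
      workspace.move(note, workspace.boxes[0], workspace.boxes[1])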
  2. Rafferty, P.: Genre as knowledge organization (2022) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 1093) [ClassicSimilarity], result of:
              0.027226217 = score(doc=1093,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 1093, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1093)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article examines genre as knowledge organization. Genres are fluid and historically changing categories, and there are different views about the scope and membership of specific genres. The literature generally agrees that genre is a matter of discrimination and taxonomy, and that it is concerned with organising things into recognisable classes, existing as part of the relationship between texts and readers. Genre can be thought of as a sorting mechanism, and genres are not only a matter of codes and conventions but also call into play systems of use and social institutions. This article explores the history of genre analysis across a broad range of disciplines, including literary studies, rhetorical and social action studies, and English for academic and professional purposes. It considers genre theory as a framework for librarianship and knowledge organization and explores the use of genre within librarianship and knowledge organization. Finally, the article discusses the Library of Congress Genre/Form Terms for Library and Archival Materials which, itself an evolving and changing standard, offers a step towards standardisation regarding genre terms and the scope of genre categories.
  3. Bragato Barros, T.H.: Michel Pêcheux's discourse analysis : an approach to domain analyses (2023) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 1116) [ClassicSimilarity], result of:
              0.027226217 = score(doc=1116,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 1116, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1116)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article discusses the aspects and points of contact between discourse analysis and knowledge organization, examining how Michel Pêcheux's discourse analysis can contribute to domain analyses. Discourse analysis (DA) grew out of the theoretical-methodological development of social and scientific movements that took place in France from the 1960s onwards; this paper seeks to discuss aspects of discourse analysis and the possibilities of its use in the universe of knowledge organization (KO). Little structural and transversal work has been done on discourse itself, especially where the words "discourse" and "analysis" appear in the titles, abstracts, keywords, etc. of chapters, books, and journals that have KO in their scope. That is mainly because those works are recent and belong to fields far from those which have traditionally dealt with discourse. Therefore, viewing discourse as a theoretical contribution to KO means that a new framework should inform the analyses carried out in the construction of systems, approaches, and studies, precisely because it sees in terms not only their concepts, as is the traditional route in KO, but also their ideology, and understands the construction of meaning as something historical as well as social. Pêcheux's discourse theory thus offers a major contribution to domain analyses.
  4. Hjoerland, B.: Education in knowledge organization (KO) (2023) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 1124) [ClassicSimilarity], result of:
              0.027226217 = score(doc=1124,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 1124, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1124)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article provides analyses, describes dilemmas, and suggests ways forward in the teaching of knowledge organization (KO). The general assumption of the article is that theoretical problems in KO must be the point of departure for teaching KO. Section 2 addresses the teaching of practical, applied and professional KO, focusing on learning about specific knowledge organization systems (KOS), specific standards, and specific methods for organizing knowledge, but provides arguments for not isolating these aspects from theoretical issues. Section 3 is about teaching theoretical and academic KO, in which the focus is on examining the bases on which KOSs and knowledge organization processes such as classifying and indexing are founded. This basically concerns concepts and conceptual relations and should not be based on prejudices about the superiority of either humans or computers for KO. Section 4 is about the study of education in KO, which is considered important because it is about how the field is monitoring itself and how it should be shaping its own future. Section 5 is about the role of the ISKO Encyclopedia of Knowledge Organization in KO education, emphasizing the need for an interdisciplinary source that may help improve conceptual clarity in the field. The conclusion suggests some specific recommendations for curricula in KO based on the author's view of KO.
  5. Moreira dos Santos Macula, B.C.: The Universal Decimal Classification in the organization of knowledge : representing the concept of ethics (2023) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 1128) [ClassicSimilarity], result of:
              0.027226217 = score(doc=1128,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 1128, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1128)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Training in knowledge organization (KO) involves an understanding of theories for the construction, maintenance, use, and evaluation of logical documentary languages. Teaching these KO concepts in LIS programs is related basically to accessing documents and retrieving their intellectual content. This study focuses on access to documents and on exploring the ethical theme in all its dimensions as applied to the teaching of an undergraduate discipline that is part of a Bachelor of Library Science degree offered at the Federal University of Minas Gerais (UFMG). As a methodology, a Project-based Pedagogy strategy is used in the teaching of a discipline called "Classification Systems: UDC", in which students classify a documentary resource from a collection on ethics. The teaching of bibliographic classification requires students to learn how to use the mechanisms available to form a notation as well as how to use a syntax schema (tables) appropriately. Students also learn to determine a place for the document in the collection, considering the knowledge represented in the collection as a whole. Altogether, such a practice can help students to understand the theory underlying a classification system. The results show that the students were able to understand the basic concepts of knowledge organization. The students were also able to observe that the elements of the different tables of a classification tool are essential mechanisms for the organization of knowledge in other contexts, especially for specific purposes.
  6. Silva, S.E.; Reis, L.P.; Fernandes, J.M.; Sester Pereira, A.D.: A multi-layer framework for semantic modeling (2020) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 5712) [ClassicSimilarity], result of:
              0.021780973 = score(doc=5712,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 5712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5712)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose The purpose of this paper is to introduce a multi-level framework for semantic modeling (MFSM) based on four signification levels: objects, classes of entities, instances and domains. In addition, four fundamental propositions of the signification process underpin these levels, namely, classification, decomposition, instantiation and contextualization. Design/methodology/approach The deductive approach guided the design of this modeling framework. The authors empirically validated the MFSM in two ways. First, the authors identified the signification processes used in articles that deal with semantic modeling. The authors then applied the MFSM to model the semantic context of the literature about lean manufacturing, a field of management science. Findings The MFSM presents a highly consistent approach to the signification process, integrates the semantic modeling literature into a new and comprehensive view, and permits the modeling of any semantic context, thus facilitating the development of knowledge organization systems based on semantic search. Research limitations/implications The use of the MFSM is manual and thus requires considerable effort from the team that decides to model a semantic context. In this paper, the modeling was carried out by specialists; in the future it should also be applied with lay users. Practical implications The MFSM opens up avenues for a new form of document classification, for the development of tools based on semantic search, and for investigating how users conduct their searches. Social implications The MFSM can be used to model archives semantically in public or private settings. In the future, it can be incorporated into search engines for more efficient user searches. Originality/value The MFSM provides a new and comprehensive approach to the elementary levels and activities in the process of signification. In addition, this new framework presents a new way to model any semantic context by classifying its objects.
  7. Adler, M.: The strangeness of subject cataloging : afterword (2020) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 5887) [ClassicSimilarity], result of:
              0.021780973 = score(doc=5887,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    "I can't presume to know how other catalogers view the systems, information resources, and institutions with which they engage on a daily basis. David Paton gives us a glimpse in this issue of the affective experiences of bibliographers and catalogers of artists' books in South Africa, and it is clear that the emotional range among them is wide. What I can say is that catalogers' feelings and worldviews, whatever they may be, give the library its shape. I think we can agree that the librarians who constructed the Library of Congress Classification around 1900, Melvil Dewey, and the many classifiers around the world past and present, have had particular sets of desires around control and access and order. We all are asked to submit to those desires in our library work, as well as our own pursuit of knowledge and pleasure reading. And every decision regarding the aboutness of a book, or about where to place it within a particular discipline, takes place in a cataloger's affective and experiential world. While the classification provides the outlines, the catalogers color in the spaces with the books, based on their own readings of the book descriptions and their interpretations of the classification scheme. The decisions they make and the structures to which they are bound affect the circulation of books and their readers across the library. Indeed, some of the encounters will be unexpected, strange, frustrating, frightening, shame-inducing, awe-inspiring, and/or delightful. The emotional experiences of students described in Mabee and Fancher's article, as well as those of any visitor to the library, are all affected by classificatory design. One concern is that a library's ordering principles may reinforce or heighten already existing feelings of precarity or marginality. Because the classifications are hidden from patrons' view, it is difficult to measure the way the order affects a person's mind and body. That a person does not consciously register the associations does not mean that they are not affected."
  8. Ma, J.; Lund, B.: The evolution and shift of research topics and methods in library and information science (2021) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 357) [ClassicSimilarity], result of:
              0.021780973 = score(doc=357,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 357, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=357)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Employing approaches adopted from studies of library and information science (LIS) research trends performed by Järvelin et al., this content analysis systematically examines the evolution and distribution of LIS research topics and data collection methods at 6-year increments from 2006 to 2018. Bibliographic data were collected for 3,422 articles published in LIS journals in the years 2006, 2012, and 2018. While the classification schemes provided in the Järvelin studies do not indicate much change, an analysis of subtopics, data sources, and keywords indicates a substantial impact of social media and data science on the discipline, which emerged at some point between the years of 2012 and 2018. These findings suggest a type of shift in the focus of LIS research, with social media and data science topics playing a role in well over one-third of articles published in 2018, compared with approximately 5% in 2012 and virtually none in 2006. The shift in LIS research foci based on these two technologies/approaches appears similar in extent to those produced by the introduction of information systems in library science in the 1960s, or the Internet in the 1990s, suggesting that these recent advancements are fundamental to the identity of LIS as a discipline.
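
    The shift reported above is, operationally, a change in the share of sampled articles whose subtopics or keywords touch social media or data science across the three sampled years. The sketch below illustrates that kind of share computation; the records and the marker-keyword list are invented placeholders, not data from the study.

      from collections import defaultdict

      # Hypothetical (year, keyword set) records standing in for the coded articles.
      records = [
          (2006, {"cataloguing", "reference services"}),
          (2012, {"information behaviour", "social media"}),
          (2018, {"data science", "machine learning"}),
          (2018, {"bibliometrics", "altmetrics"}),
      ]
      markers = {"social media", "data science", "machine learning", "altmetrics"}

      totals, hits = defaultdict(int), defaultdict(int)
      for year, keywords in records:
          totals[year] += 1
          if keywords & markers:        # article touches at least one marker topic
              hits[year] += 1

      for year in sorted(totals):
          print(f"{year}: {hits[year] / totals[year]:.0%} of sampled articles")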
  9. Zhao, D.; Strotmann, A.: Intellectual structure of information science 2011-2020 : an author co-citation analysis (2022) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 610) [ClassicSimilarity], result of:
              0.021780973 = score(doc=610,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 610, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=610)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose This study continues a long history of author co-citation analysis of the intellectual structure of information science into the time period of 2011-2020. It also examines changes in this structure from 2006-2010 through 2011-2015 to 2016-2020. Results will contribute to a better understanding of the information science research field. Design/methodology/approach The well-established procedures and techniques for author co-citation analysis were followed. Full records of research articles in core information science journals published during 2011-2020 were retrieved and downloaded from the Web of Science database. About 150 of the most highly cited authors in each of the two five-year time periods were selected from this dataset to represent the field, and their co-citation counts were calculated. Each co-citation matrix was input into SPSS for factor analysis, and results were visualized in Pajek. Factors were interpreted as specialties and labeled upon an examination of articles written by authors who load primarily on each factor. Findings The two-camp structure of information science continued to be clearly present. Bibliometric indicators for research evaluation dominated the Knowledge Domain Analysis camp during both five-year time periods, whereas interactive information retrieval (IR) dominated the IR camp during 2011-2015 but shared dominance with information behavior during 2016-2020. Bridging between the two camps became increasingly weaker and was only provided by the scholarly communication specialty during 2016-2020. The IR systems specialty drifted further away from the IR camp. The information behavior specialty experienced a deep slump during 2011-2020 in its evolution process. Altmetrics grew to dominate the Webometrics specialty and brought it to a sharp increase during 2016-2020. Originality/value Author co-citation analysis (ACA) is effective in revealing the intellectual structures of research fields. Most related studies used term-based methods to identify individual research topics but did not examine the interrelationships between these topics or the overall structure of the field. The few studies that did discuss the overall structure paid little attention to the effect of changes to the source journals on the results. The present study does not have these problems and continues the long history of benchmark contributions to a better understanding of the information science field using ACA.
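
    The procedure summarised above reduces, at the counting stage, to tallying how often pairs of the selected authors appear together in the reference lists of the retrieved articles; the resulting symmetric matrix is then factor-analysed and visualised. The sketch below shows only that counting step, with invented author names and reference data (the SPSS factor analysis and Pajek visualisation used in the study are not reproduced).

      from collections import Counter
      from itertools import combinations

      # Hypothetical pre-selected "most highly cited" authors and, per citing
      # article, the set of those authors found in its reference list.
      selected = {"Author A", "Author B", "Author C"}
      reference_lists = [
          {"Author A", "Author B"},
          {"Author A", "Author B", "Author C"},
          {"Author B", "Author C"},
      ]

      cocitations = Counter()
      for cited in reference_lists:
          for a, b in combinations(sorted(cited & selected), 2):
              cocitations[(a, b)] += 1      # the pair is co-cited by this article

      for (a, b), n in sorted(cocitations.items()):
          print(f"{a} / {b}: co-cited in {n} articles")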
  10. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 872) [ClassicSimilarity], result of:
              0.021780973 = score(doc=872,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 872, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=872)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
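
    "Few-shot" here means the task is specified purely through text: a handful of demonstrations are concatenated in front of the new input and the model is asked to continue the pattern, with no gradient updates or fine-tuning. A toy illustration of such a prompt for the word-unscrambling task mentioned in the abstract (the prompt wording is invented here, not taken from the paper):

      # Build a few-shot prompt as plain text; the "learning" is in-context only.
      demonstrations = [
          ("elppa", "apple"),
          ("ananab", "banana"),
          ("yrrehc", "cherry"),
      ]
      query = "egnaro"

      prompt = "Unscramble the letters into an English word.\n"
      for scrambled, answer in demonstrations:
          prompt += f"scrambled: {scrambled}\nword: {answer}\n"
      prompt += f"scrambled: {query}\nword:"   # the model should complete with "orange"

      print(prompt)   # this string would be sent to the language model as-is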
  11. Haines, J.; Du, J.T.; Trevorrow, A.E.: Cultural use of ICT4D to promote Indigenous knowledge continuity of Ngarrindjeri stories and communal practices (2023) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 1092) [ClassicSimilarity], result of:
              0.021780973 = score(doc=1092,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 1092, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1092)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    While there is considerable interest in information and communication technologies for development (ICT4D) in Indigenous communities, such technology remains limited to those who can afford it and have the skills and knowledge to implement it and access appropriate digital tools. Hence, Indigenous communities are continually stigmatized as marginalized, leading to a cultural misrepresentation of histories that contributes to the continuing information disparity between Indigenous and Western knowledge systems, particularly the insufficient technology infrastructure designed for traditional users. In this article, ICT4D was conceptualized as a digital platform to support Senior Ngarrindjeri Elder Aunty Ellen Trevorrow in continuing her practice of weaving and storytelling throughout the pandemic. In this context, community-based participatory research (CBPR) principles, within the structure of video ethnography, were used to design and implement the ICT4D project culturally and ethically. Video recordings, image data, transcriptions, and the Ngarrindjeri ICT4D Pondi (Murray Cod) framework were drawn on to support the findings and the aim of illustrating Aunty Ellen's knowledge-sharing process to online learners. Likewise, the results demonstrate the positive and negative impacts of COVID-19 on the continuity and orality of Aunty Ellen's cultural stories and practices. Securing the future continuity of Aunty Ellen's knowledge must take account of the inconsistency of technological infrastructure in regional areas, her waning health, and the interconnectedness of oral expertise, all of which pose challenges. This study is a small step toward a better understanding of the value of oral knowledge; it shows that creating e-learning weaving instructional videos is valuable for the future digital management of Indigenous knowledge relevant to LIS.
  12. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.00
    0.004764588 = product of:
      0.009529176 = sum of:
        0.009529176 = product of:
          0.019058352 = sum of:
            0.019058352 = weight(_text_:systems in 923) [ClassicSimilarity], result of:
              0.019058352 = score(doc=923,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.118839346 = fieldWeight in 923, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=923)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    People use symbols to think about the world: if I say the words "cat", "house" or "aeroplane", you know instantly what I mean. Symbols can also be used to describe the way things are behaving (running, falling, flying) or they can represent how things should behave in relation to each other (a "+" means add the numbers before and after). Symbolic AI is a way to embed this human knowledge and reasoning into computer systems. Though the idea has been around for decades, it fell by the wayside a few years ago as deep learning, buoyed by the sudden easy availability of lots of training data and cheap computing power, became more fashionable. In the near future at least, there's no doubt people will find LLMs useful. But whether they represent a critical step on the path towards AGI, or rather just an intriguing detour, remains to be seen.
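
    As a toy illustration of what "embedding human knowledge and reasoning into computer systems" looks like in the symbolic tradition, the sketch below derives a new fact from explicit, human-readable facts and a rule, with no training data involved; the facts and the rule are invented for illustration only.

      # Symbolic reasoning sketch: knowledge is stored as explicit triples and rules.
      facts = {("cat", "is_a", "animal"), ("animal", "can", "move")}

      def transitivity_rule(known):
          # if X is_a Y and Y can Z, then X can Z
          return {(x, "can", z)
                  for (x, r1, y1) in known if r1 == "is_a"
                  for (y2, r2, z) in known if r2 == "can" and y1 == y2}

      derived = set(facts)
      while True:                               # forward-chain until a fixed point
          new = transitivity_rule(derived) - derived
          if not new:
              break
          derived |= new

      print(("cat", "can", "move") in derived)  # True, derived purely symbolically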

Languages

  • e 197
  • d 31
  • pt 3

Types

  • a 218
  • el 35
  • p 5
  • m 3
  • A 1
  • EL 1
  • x 1