Search (820 results, page 1 of 41)

  • Filter: language_ss:"e"
  • Filter: year_i:[2020 TO 2030}
  1. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.06
    0.057391338 = product of:
      0.086087 = sum of:
        0.009385608 = weight(_text_:a in 950) [ClassicSimilarity], result of:
          0.009385608 = score(doc=950,freq=16.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18016359 = fieldWeight in 950, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.076701395 = sum of:
          0.046094913 = weight(_text_:de in 950) [ClassicSimilarity], result of:
            0.046094913 = score(doc=950,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.23740499 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
          0.030606484 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.030606484 = score(doc=950,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose With the shift to an information-based society and to the de-centralisation of information, information overload has attracted a growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while there have been many proposed definitions, there is no consensus. The goal of this work was to define the concept of "information overload". In order to do so, a concept analysis using Rodgers' approach was performed. Design/methodology/approach A concept analysis using Rodgers' approach based on a corpus of documents published between 2010 and September 2020 was conducted. One surrogate for "information overload", which is "cognitive overload" was identified. The corpus of documents consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science, and were retrieved from three databases: Association for Computing Machinery (ACM) Digital Library, SCOPUS and Library and Information Science Abstracts (LISA). Findings The themes identified from the authors' concept analysis allowed us to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. In terms of manifestations, they found that information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed them to provide a definition of information overload. Originality/value Through the authors' concept analysis, they were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
    Type
    a
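    The score breakdown above is Lucene ClassicSimilarity (TF-IDF) explain output: each matching term clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf = sqrt(termFreq), and idf = 1 + ln(maxDocs / (docFreq + 1)); the clause sum is then scaled by the coordination factor, here coord(2/3). A minimal Python sketch, reusing the constants printed in the explain tree for doc 950, reproduces the final score of 0.057391338 (the clause nesting in the live query is slightly deeper than shown here):

      import math

      def tf(freq):                      # ClassicSimilarity: tf = sqrt(termFreq)
          return math.sqrt(freq)

      def idf(doc_freq, max_docs):       # idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause(freq, doc_freq, max_docs, query_norm, field_norm):
          query_weight = idf(doc_freq, max_docs) * query_norm
          field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
          return query_weight * field_weight

      QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.045180224, 0.0390625, 44218

      s_a  = clause(16.0, 37942, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.009385608
      s_de = clause( 2.0,  1634, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.046094913
      s_22 = clause( 2.0,  3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.030606484

      # coord(2/3): two of the three top-level query clauses matched
      print((s_a + s_de + s_22) * 2.0 / 3.0)  # ~0.057391338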
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.04181507 = product of:
      0.0627226 = sum of:
        0.05381863 = product of:
          0.21527451 = sum of:
            0.21527451 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21527451 = score(doc=862,freq=2.0), product of:
                0.38303843 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045180224 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
        0.00890397 = weight(_text_:a in 862) [ClassicSimilarity], result of:
          0.00890397 = score(doc=862,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1709182 = fieldWeight in 862, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.6666667 = coord(2/3)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
    Type
    a
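    The detector referenced in the abstract is the RoBERTa-based GPT-2 Output Detector that OpenAI released in 2019. A hedged sketch of how such a check can be run with the Hugging Face transformers library follows; the checkpoint id "roberta-base-openai-detector" and its Real/Fake labels are assumptions about the published community model, not details from this record, so verify both before relying on them:

      from transformers import pipeline

      # Assumption: "roberta-base-openai-detector" is the published RoBERTa
      # checkpoint of OpenAI's 2019 GPT-2 Output Detector; check the model id
      # on the Hugging Face hub before use.
      detector = pipeline("text-classification", model="roberta-base-openai-detector")

      passage = "Text whose origin (human- or machine-written) we want to score."
      result = detector(passage)[0]
      print(result["label"], result["score"])  # e.g. "Fake" with a confidence value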
  3. Henshaw, Y.; Wu, S.: RILM Index (Répertoire International de Littérature Musicale) (2021) 0.04
    0.0373464 = product of:
      0.0560196 = sum of:
        0.010387965 = weight(_text_:a in 587) [ClassicSimilarity], result of:
          0.010387965 = score(doc=587,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19940455 = fieldWeight in 587, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=587)
        0.045631636 = product of:
          0.09126327 = sum of:
            0.09126327 = weight(_text_:de in 587) [ClassicSimilarity], result of:
              0.09126327 = score(doc=587,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.47003788 = fieldWeight in 587, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=587)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    RILM Index is a partially controlled vocabulary designed to index scholarly writings on music and related subjects, created and curated by Répertoire International de Littérature Musicale (RILM). It has been developed over 50 years and has served the music community as a primary research tool. This analytical review of the characteristics of RILM Index reveals several issues, related to the Index's history, that impinge on its usefulness. An in-progress thesaurus is presented as a possible solution to these issues. RILM Index, despite being imperfect, provides a foundation for developing an ontological structure for both indexing and information retrieval purposes.
    Type
    a
  4. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.03
    0.03257776 = product of:
      0.048866637 = sum of:
        0.009753809 = weight(_text_:a in 1161) [ClassicSimilarity], result of:
          0.009753809 = score(doc=1161,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18723148 = fieldWeight in 1161, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.03911283 = product of:
          0.07822566 = sum of:
            0.07822566 = weight(_text_:de in 1161) [ClassicSimilarity], result of:
              0.07822566 = score(doc=1161,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.4028896 = fieldWeight in 1161, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1161)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Linked Open Data (LOD) is a set of principles for publishing structured, connected data for reuse under an open license. The objective of this paper is to analyze the publication of bibliographic data as LOD and to produce theoretical-methodological recommendations for publishing such data, in an approach based on the World Wide Web Consortium's ten best practices for publishing LOD. The starting point was a systematic literature review, in which initiatives for publishing bibliographic data as LOD were identified; an empirical study of the institutions behind these initiatives was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
    Type
    a
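    The W3C best practices the paper builds on amount to modeling records as RDF, minting stable dereferenceable URIs, and serving the triples under an open license. As a rough illustration (not the authors' recommendations), a bibliographic record could be expressed as RDF with Python's rdflib; the example.org URI and the choice of Dublin Core and BIBO properties are assumptions made for this sketch:

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      # BIBO is a common bibliographic ontology; the URI below is illustrative.
      BIBO = Namespace("http://purl.org/ontology/bibo/")

      g = Graph()
      g.bind("dcterms", DCTERMS)
      g.bind("bibo", BIBO)

      # Hypothetical URI for one record; real LOD publishing would mint stable,
      # dereferenceable URIs in the institution's own domain.
      work = URIRef("http://example.org/bib/1161")
      g.add((work, RDF.type, BIBO.Article))
      g.add((work, DCTERMS.title,
             Literal("Proposal for the publication of linked open bibliographic data", lang="en")))
      g.add((work, DCTERMS.issued, Literal("2024")))

      print(g.serialize(format="turtle"))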
  5. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.03
    0.02843627 = product of:
      0.042654403 = sum of:
        0.010387965 = weight(_text_:a in 668) [ClassicSimilarity], result of:
          0.010387965 = score(doc=668,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19940455 = fieldWeight in 668, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=668)
        0.032266438 = product of:
          0.064532876 = sum of:
            0.064532876 = weight(_text_:de in 668) [ClassicSimilarity], result of:
              0.064532876 = score(doc=668,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33236697 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
    Type
    a
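    As a toy counterpart to the term extractors compared in the paper, candidate terms can be proposed by plain n-gram frequency over a corpus. This is a naive baseline sketch with invented sentences, not one of the four extractors the authors evaluate:

      import re
      from collections import Counter

      def candidate_terms(sentences, n=2, min_freq=2):
          """Propose n-gram candidate terms by raw frequency (a naive baseline)."""
          counts = Counter()
          for s in sentences:
              tokens = re.findall(r"[a-z]+", s.lower())
              counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
          return [(" ".join(g), c) for g, c in counts.most_common() if c >= min_freq]

      corpus = [
          "A natural transformation between two functors assigns a morphism to each object.",
          "Every adjoint functor pair induces a natural transformation called the unit.",
          "The natural transformation is a morphism in a functor category.",
      ]
      print(candidate_terms(corpus))  # e.g. [('natural transformation', 3), ...]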
  6. Pérez Pozo, Á.; Rosa, J. de la; Ros, S.; González-Blanco, E.; Hernández, L.; Sisto, M. de: ¬A bridge too far for artificial intelligence? : automatic classification of stanzas in Spanish poetry (2022) 0.03
    0.027148133 = product of:
      0.0407222 = sum of:
        0.008128175 = weight(_text_:a in 468) [ClassicSimilarity], result of:
          0.008128175 = score(doc=468,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15602624 = fieldWeight in 468, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=468)
        0.032594025 = product of:
          0.06518805 = sum of:
            0.06518805 = weight(_text_:de in 468) [ClassicSimilarity], result of:
              0.06518805 = score(doc=468,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33574134 = fieldWeight in 468, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The use of artificial intelligence and natural language processing techniques has increased considerably in the last few decades. Historically, the focus has been primarily on texts in prose form, mostly leaving aside figurative or poetic uses of language due to their rich semantics and syntactic complexity. The creation and analysis of poetry have commonly been carried out by hand, with a few computer-assisted approaches. In the Spanish context, the promise of machine learning is starting to pan out in specific tasks such as metrical annotation and syllabification. However, one task remains unexplored and underdeveloped: stanza classification. Classifying the inner structures of the verses a poem is built upon is an especially relevant task for poetry studies, since it complements the structural information of a poem. In this work, we analyzed different computational approaches to stanza classification in the Spanish poetic tradition. These experiments show that the task remains hard for computer systems, whether based on classical machine learning or on statistical language models, and that neither can yet compete with traditional computational paradigms based on expert knowledge.
    Type
    a
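    The classical machine-learning side of such experiments is typically a feature pipeline feeding a linear classifier over stanza labels. A generic scikit-learn sketch with invented toy stanzas follows; the paper's actual features, corpora, and models are not detailed in this record, so this only illustrates the task setup:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy stanzas and labels, purely illustrative; real experiments would use
      # annotated corpora of Spanish poetry and metrical features.
      stanzas = [
          "verso uno / verso dos / verso tres / verso cuatro",
          "linea a / linea b / linea c",
          "verso uno / verso dos / verso tres / verso cuatro",
          "linea a / linea b / linea c",
      ]
      labels = ["cuarteto", "terceto", "cuarteto", "terceto"]

      model = make_pipeline(
          TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
          LogisticRegression(max_iter=1000),
      )
      model.fit(stanzas, labels)
      print(model.predict(["verso w / verso x / verso y / verso z"]))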
  7. Al-Khatib, K.; Ghosa, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.03
    0.02687528 = product of:
      0.04031292 = sum of:
        0.008046483 = weight(_text_:a in 568) [ClassicSimilarity], result of:
          0.008046483 = score(doc=568,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1544581 = fieldWeight in 568, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=568)
        0.032266438 = product of:
          0.064532876 = sum of:
            0.064532876 = weight(_text_:de in 568) [ClassicSimilarity], result of:
              0.064532876 = score(doc=568,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33236697 = fieldWeight in 568, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=568)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Argument mining targets structures in natural language related to interpretation and persuasion. Most scholarly discourse involves interpreting experimental evidence and attempting to persuade other scientists to adopt the same conclusions, which could benefit from argument mining techniques. However, while various argument mining studies have addressed student essays and news articles, those that target scientific discourse are still scarce. This paper surveys existing work in argument mining of scholarly discourse, and provides an overview of current models, data, tasks, and applications. We identify a number of key challenges confronting argument mining in the scientific domain, and suggest some possible solutions and future directions.
    Type
    a
  8. Zhang, Y.; Ren, P.; Rijke, M. de: ¬A taxonomy, data set, and benchmark for detecting and classifying malevolent dialogue responses (2021) 0.03
    0.02546151 = product of:
      0.038192265 = sum of:
        0.010535319 = weight(_text_:a in 356) [ClassicSimilarity], result of:
          0.010535319 = score(doc=356,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20223314 = fieldWeight in 356, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=356)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 356) [ClassicSimilarity], result of:
              0.055313893 = score(doc=356,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 356, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=356)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Conversational interfaces are increasingly popular as a way of connecting people to information. With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses that are inappropriate in terms of content and dialogue acts. Previous studies on the topic of detecting and classifying inappropriate content are mostly focused on a specific category of malevolence or on single sentences instead of an entire dialogue. We make three contributions to advance research on the malevolent dialogue response detection and classification (MDRDC) task. First, we define the task and present a hierarchical malevolent dialogue taxonomy. Second, we create a labeled multiturn dialogue data set and formulate the MDRDC task as a hierarchical classification task. Last, we apply state-of-the-art text classification methods to the MDRDC task, and report on experiments aimed at assessing the performance of these approaches.
    Type
    a
  9. Sales, R. de; Martínez-Ávila, D.; Chaves Guimarães, J.A.: James Duff Brown : a librarian committed to the public library and the subject classification (2021) 0.03
    0.02546151 = product of:
      0.038192265 = sum of:
        0.010535319 = weight(_text_:a in 590) [ClassicSimilarity], result of:
          0.010535319 = score(doc=590,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20223314 = fieldWeight in 590, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=590)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 590) [ClassicSimilarity], result of:
              0.055313893 = score(doc=590,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 590, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=590)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Two decades into the 21st century, and despite all the advances in the area, some very important names from past centuries still do not have the recognition they deserve in the global history of library and information science and, specifically, of knowledge organization. Although acknowledged in British librarianship, the name of James Duff Brown (1862-1914) has yet to receive proper recognition on a global scale. His contributions to a free and more democratic library had a prominent place in the works and projects he developed during his time at the libraries of Clerkenwell and Islington in London. Free access to the library shelves, an architecture centered on books and people, and more dynamic classifications were dreams fulfilled by Brown. With this biographical article, we hope to live up to his legacy and pay homage to a true librarian and an advocate of the public library and subject classification.
    Type
    a
  10. Dunn, H.; Bourcier, P.: Nomenclature for museum cataloging (2020) 0.02
    0.024940504 = product of:
      0.037410755 = sum of:
        0.009753809 = weight(_text_:a in 5483) [ClassicSimilarity], result of:
          0.009753809 = score(doc=5483,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18723148 = fieldWeight in 5483, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5483)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 5483) [ClassicSimilarity], result of:
              0.055313893 = score(doc=5483,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present an overview of Nomenclature's history, characteristics, structure, use, management, development process, limitations, and future. Nomenclature for Museum Cataloging is a bilingual (English/French) structured and controlled list of object terms organized in a classification system to provide a basis for indexing and cataloging collections of human-made objects. It includes illustrations and bibliographic references as well as a user guide. It is used in the creation and management of object records in human history collections within museums and other organizations, and it focuses on objects relevant to North American history and culture. First published in 1978, Nomenclature is the most extensively used museum classification and controlled vocabulary for historical and ethnological collections in North America and thereby represents a de facto standard in the field. An online reference version of Nomenclature was made available in 2018, and it will be available under open license in 2020.
    Type
    a
  11. Fremery, W. de; Buckland, M.K.: Copy theory (2022) 0.02
    0.024940504 = product of:
      0.037410755 = sum of:
        0.009753809 = weight(_text_:a in 487) [ClassicSimilarity], result of:
          0.009753809 = score(doc=487,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18723148 = fieldWeight in 487, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=487)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 487) [ClassicSimilarity], result of:
              0.055313893 = score(doc=487,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=487)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In information science, writing, printing, telecommunication, and digital computing have been central concerns because of their ability to distribute information. Overlooked is the obvious fact that these technologies fashion copies, and the theorizing of copies has been neglected. We may think a copy is the same as what it copies, but no two objects can really be the same. "The same" means similar enough to be an acceptable substitute for some purpose. The differences between usefully similar things are also often important, in forensic analysis, for example, or in inferential processes. Status as a copy is only one form of relationship between objects, but copies are so integral to information science that they demand a theory. Indeed, theorizing copies provides a basis for a more complete and unified view of information science.
    Type
    a
  12. Brito, M. de: Social affects engineering and ethics (2023) 0.02
    0.024940504 = product of:
      0.037410755 = sum of:
        0.009753809 = weight(_text_:a in 1135) [ClassicSimilarity], result of:
          0.009753809 = score(doc=1135,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18723148 = fieldWeight in 1135, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1135)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 1135) [ClassicSimilarity], result of:
              0.055313893 = score(doc=1135,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 1135, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1135)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This text proposes a multidisciplinary reflection on the subject of ethics, based on philosophical approaches, using Spinoza's work, Ethics, as a foundation. The power of Spinoza's geometric reasoning and deterministic logic, compatible with formal grammars and programming languages, provides a favorable framework for this purpose. In an information society characterized by an abundance of data and a diversity of perspectives, complex thinking is an essential tool for developing an ethical construct that can deal with the uncertainty and contradictions in the field. Acknowledging the natural complexity of ethics in interpersonal relationships, the use of AI techniques appears unavoidable. Artificial intelligence in KOS offers the potential for processing complex questions through the formal modeling of concepts in ethical discourse. By formalizing problems, we hope to unleash the potential of ethical analysis; by addressing complexity analysis, we propose a mechanism for understanding problems and empowering solutions.
    Type
    a
  13. Fugmann, R.: What is information? : an information veteran looks back (2022) 0.02
    0.02482874 = product of:
      0.03724311 = sum of:
        0.0066366266 = weight(_text_:a in 1085) [ClassicSimilarity], result of:
          0.0066366266 = score(doc=1085,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.12739488 = fieldWeight in 1085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1085)
        0.030606484 = product of:
          0.061212968 = sum of:
            0.061212968 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
              0.061212968 = score(doc=1085,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.38690117 = fieldWeight in 1085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1085)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    18. 8.2022 19:22:57
    Type
    a
  14. Fremery, W. De; Buckland, M.K.: Context, relevance, and labor (2022) 0.02
    0.023747265 = product of:
      0.035620898 = sum of:
        0.007963953 = weight(_text_:a in 4240) [ClassicSimilarity], result of:
          0.007963953 = score(doc=4240,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 4240, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4240)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 4240) [ClassicSimilarity], result of:
              0.055313893 = score(doc=4240,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 4240, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4240)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Since information science concerns the transmission of records, it concerns context. The transmission of documents ensures their arrival in new contexts. Documents and their copies are spread across times and places. The amount of labor required to discover and retrieve relevant documents is also shaped by context. Thus, any serious consideration of communication and of information technologies quickly leads to a concern with context, relevance, and labor. Information scientists have developed many theories of context, relevance, and labor, but not a framework for organizing them and describing their relationship with one another. We propose that the words context and relevance can be used to articulate a useful framework for considering the diversity of approaches to context and relevance in information science, as well as their relations with each other and with labor.
    Type
    a
  15. Libraries, archives and museums as democratic spaces in a digital age (2020) 0.02
    0.023747265 = product of:
      0.035620898 = sum of:
        0.007963953 = weight(_text_:a in 417) [ClassicSimilarity], result of:
          0.007963953 = score(doc=417,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 417, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=417)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 417) [ClassicSimilarity], result of:
              0.055313893 = score(doc=417,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=417)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Libraries, archives and museums have traditionally been a part of the public sphere's infrastructure. They have done so by providing public access to culture and knowledge, by being agents for enlightenment and by being public meeting places in their communities. Digitization and globalization pose new challenges for upholding a sustainable public sphere. Can libraries, archives and museums contribute to meeting these challenges?
    Editor
    Audunson, R. et al.
    Imprint
    Berlin : Walter de Gruyter GmbH
  16. Almeida, P. de; Gnoli, C.: Fiction in a phenomenon-based classification (2021) 0.02
    0.023747265 = product of:
      0.035620898 = sum of:
        0.007963953 = weight(_text_:a in 712) [ClassicSimilarity], result of:
          0.007963953 = score(doc=712,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 712, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=712)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 712) [ClassicSimilarity], result of:
              0.055313893 = score(doc=712,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=712)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In traditional classification, fictional works are indexed only by their form, genre, and language, while their subject content is believed to be irrelevant. However, recent research suggests that this may not be the best approach. We tested indexing of a small sample of selected fictional works with the Integrative Levels Classification (ILC2), a freely faceted system based on phenomena instead of disciplines, and considered the structure of the resulting classmarks. Issues in the process of subject analysis, such as the selection of relevant vs. non-relevant themes and the citation order of relevant ones, are identified and discussed. Some phenomena that are covered in scholarly literature can also be identified as relevant themes in fictional literature and expressed in classmarks. This can allow for hybrid search and retrieval systems covering both fiction and nonfiction, which will result in better leveraging of the knowledge contained in fictional works.
    Type
    a
  17. Trompf, G.W.: Auguste Comte's classification of the sciences (2023) 0.02
    0.023747265 = product of:
      0.035620898 = sum of:
        0.007963953 = weight(_text_:a in 1119) [ClassicSimilarity], result of:
          0.007963953 = score(doc=1119,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 1119, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1119)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 1119) [ClassicSimilarity], result of:
              0.055313893 = score(doc=1119,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 1119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1119)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Auguste Comte is ostensibly the world's most famous classifier of the sciences in modern history. His whole life was dedicated to establishing a classification that conformed to the 'positivist' (non-theological and non-metaphysical) principles he settled on after working with early nineteenth-century French social reformer Henri de Saint-Simon. This article first probes the background to Comte's classifying of the sciences, discussing French and German influences, and the effect of the phrenological movement on his special attitude to psychology and social life. Central sections of the article concern the basic and most mature ordering of the sciences according to his fundamental Course of lectures on classification (1830-42), the development of a tableau to cover psychological issues, and attempts at tables to synthesize his ordering and draw out their implications for socio-political reform and the Church of Humanity he founded. Concluding sections cover key binding principles of his classificatory work, as well as matters of reception, influence, and critical response.
    Biographed
    Comte, A.
    Type
    a
  18. Araújo, P.C. de; Gutierres Castanha, R.C.; Hjoerland, B.: Citation indexing and indexes (2021) 0.02
    0.023035955 = product of:
      0.03455393 = sum of:
        0.006896985 = weight(_text_:a in 444) [ClassicSimilarity], result of:
          0.006896985 = score(doc=444,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.13239266 = fieldWeight in 444, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=444)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 444) [ClassicSimilarity], result of:
              0.055313893 = score(doc=444,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 444, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=444)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A citation index is a bibliographic database that provides citation links between documents. The first modern citation index was suggested by the researcher Eugene Garfield in 1955 and created by him in 1964, and it represents an important innovation in knowledge organization and information retrieval. This article describes citation indexes in general, covering the modern citation indexes (Web of Science, Scopus, Google Scholar, Microsoft Academic, Crossref and Dimensions) as well as some special citation indexes and predecessors of the modern citation index such as Shepard's Citations. We present comparative studies of the major ones and survey theoretical problems related to the role of citation indexes as subject access points (SAP), recognizing the implications for knowledge organization and information retrieval. Finally, studies on citation behavior are presented, and the influence of citation indexes on knowledge organization, information retrieval and the scientific information ecosystem is recognized.
    Type
    a
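    At its core, a citation index stores directed links from citing to cited documents, so citation counts and cited-by lookups become simple graph queries. A minimal in-memory sketch follows (illustrative only, not the schema of any product named above):

      from collections import defaultdict

      class CitationIndex:
          """Toy citation index: forward (references) and inverted (cited-by) links."""
          def __init__(self):
              self.references = defaultdict(set)  # doc -> docs it cites
              self.cited_by = defaultdict(set)    # doc -> docs citing it

          def add_citation(self, citing, cited):
              self.references[citing].add(cited)
              self.cited_by[cited].add(citing)

          def citation_count(self, doc):
              return len(self.cited_by[doc])

      idx = CitationIndex()
      idx.add_citation("paper_B", "paper_A")
      idx.add_citation("paper_C", "paper_A")
      print(idx.citation_count("paper_A"))  # 2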
  19. Jiang, X.; Zhu, X.; Chen, J.: Main path analysis on cyclic citation networks (2020) 0.02
    0.022702038 = product of:
      0.034053057 = sum of:
        0.011005601 = weight(_text_:a in 5813) [ClassicSimilarity], result of:
          0.011005601 = score(doc=5813,freq=22.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.21126054 = fieldWeight in 5813, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5813)
        0.023047457 = product of:
          0.046094913 = sum of:
            0.046094913 = weight(_text_:de in 5813) [ClassicSimilarity], result of:
              0.046094913 = score(doc=5813,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.23740499 = fieldWeight in 5813, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5813)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Main path analysis is a well-known network-based method for understanding the evolution of a scientific domain. Most existing methods have two steps, weighting citation arcs by search path counting and then exploring main paths in a greedy fashion, under the assumption that citation networks are acyclic. The only available proposal that avoids manual cycle removal is the "preprint transformation", which converts a cyclic network into an acyclic counterpart. Through a detailed discussion of the issues with this approach, especially deriving the "de-preprinted" main paths for the original network, this article proposes an alternative solution with two contributions. Based on the argument that a publication cannot influence itself through a citation cycle, the SimSPC algorithm is proposed to weight citation arcs by counting simple search paths. A set of algorithms is further proposed for exploring and extracting main paths directly from cyclic networks, based on a novel data structure, the main path tree. Experiments on two cyclic citation networks demonstrate the usefulness of the alternative solution. The experiments also show that publications in strongly connected components may sit at the turning points of main path networks, which underlines the need for a systematic way of dealing with citation cycles.
    Type
    a
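    The arc weights that main path analysis starts from are search path counts (SPC): on an acyclic network, the weight of arc (u, v) is the number of source-to-sink paths through it, i.e. the number of paths from any source to u times the number of paths from v to any sink. A sketch of that acyclic baseline follows; SimSPC, the paper's contribution, instead counts simple paths so that cycles need no manual removal, and is not reproduced here:

      from functools import lru_cache

      def spc_arc_weights(edges):
          """Search path count (SPC) weights on an acyclic citation network."""
          nodes = {n for e in edges for n in e}
          succ = {n: [] for n in nodes}
          pred = {n: [] for n in nodes}
          for u, v in edges:
              succ[u].append(v)
              pred[v].append(u)

          @lru_cache(maxsize=None)
          def to_sinks(n):      # number of paths from n to any sink
              return 1 if not succ[n] else sum(to_sinks(m) for m in succ[n])

          @lru_cache(maxsize=None)
          def from_sources(n):  # number of paths from any source to n
              return 1 if not pred[n] else sum(from_sources(m) for m in pred[n])

          return {(u, v): from_sources(u) * to_sinks(v) for u, v in edges}

      edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
      print(spc_arc_weights(edges))  # each arc lies on exactly one of the two A-to-D paths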
  20. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.02
    0.022477165 = product of:
      0.033715747 = sum of:
        0.012291206 = weight(_text_:a in 40) [ClassicSimilarity], result of:
          0.012291206 = score(doc=40,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.23593865 = fieldWeight in 40, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.02142454 = product of:
          0.04284908 = sum of:
            0.04284908 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.04284908 = score(doc=40,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established incumbents is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape looks like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Type
    a

Types

  • a 785
  • el 61
  • m 17
  • p 13
  • s 3
  • A 1
  • EL 1
  • x 1

Subjects

Classifications