Search (874 results, page 1 of 44)

  • Filter: year_i:[2020 TO 2030} (Lucene range syntax: 2020 inclusive, 2030 exclusive)
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.06
    Score
    0.063893676 = coord(2/3) × (weight(_text_:3a) 0.07803193 + weight(_text_:of) 0.017808583)
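    To read such Lucene ClassicSimilarity breakdowns: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm, and coord(m/n) rescales for the fraction of query clauses matched. Reconstructed from the figures reported for this record:

      \mathrm{score} = \tfrac{2}{3}\Bigl(\tfrac{1}{3}\,\underbrace{(8.478011 \cdot 0.049130294)}_{\text{queryWeight}} \cdot \underbrace{(\sqrt{2} \cdot 8.478011 \cdot 0.046875)}_{\text{fieldWeight}} + 0.017808583\Bigr) = 0.063893676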
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
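    As an illustration of the detector step described above, here is a minimal sketch, assuming the community-hosted RoBERTa GPT-2 output detector on the Hugging Face hub; the model id, the "Real"/"Fake" label names, and the sample strings are assumptions for illustration, not details taken from the paper:

      # Sketch: classify passages as human-written ("Real") or generated ("Fake").
      from transformers import pipeline

      detector = pipeline("text-classification",
                          model="openai-community/roberta-base-openai-detector")

      samples = {
          "turing_1950": "I propose to consider the question, 'Can machines think?'",
          "chatbot": "The question of whether machines can think invites definitions.",
      }
      for name, text in samples.items():
          out = detector(text)[0]
          print(name, out["label"], round(out["score"], 3))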
    Source
    https://arxiv.org/abs/2212.06721
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus [Assignment of DDC subject groups by means of a subject-heading thesaurus] (2021) 0.05
    Score
    0.047775652 = coord(2/3) × (weight(_text_:3a) 0.06502661 + weight(_text_:of) 0.006636867)
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  3. Morris, V.: Automated language identification of bibliographic resources (2020) 0.04
    Score
    0.035091337 = coord(2/3) × (weight(_text_:of) 0.026011098 + weight(_text_:22) 0.026625905)
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
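    A minimal sketch of automated language-code assignment in the spirit of this project, assuming the off-the-shelf langid package rather than the British Library's in-house classifier; the record data and the confidence threshold are illustrative:

      from langid.langid import LanguageIdentifier, model

      # Normalised probabilities let us keep only high-confidence predictions.
      identifier = LanguageIdentifier.from_modelstring(model, norm_probs=True)

      records = {
          "rec001": "A history of the English civil war",
          "rec002": "Grammaire des langues romanes",
      }
      CONFIDENCE = 0.997  # mirrors the 99.7% confidence reported above
      for rec_id, title in records.items():
          lang, prob = identifier.classify(title)
          if prob >= CONFIDENCE:
              print(rec_id, "->", lang)          # e.g. rec001 -> en
          else:
              print(rec_id, "-> needs manual review")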
  4. Geras, A.; Siudem, G.; Gagolewski, M.: Should we introduce a dislike button for academic articles? (2020) 0.03
    Score
    0.031705577 = coord(2/3) × (weight(_text_:of) 0.027588936 + weight(_text_:22) 0.019969428)
    Abstract
    There is a mutual resemblance between the behavior of users of Stack Exchange and the dynamics of the citation accumulation process in the scientific community, which enabled us to tackle the seemingly intractable problem of assessing the impact of introducing "negative" citations. Although the most frequent reason to cite an article is to highlight the connection between the two publications, researchers sometimes mention an earlier work to cast a negative light. While computing citation-based scores, for instance the h-index, information about the reason why an article was mentioned is neglected. Therefore, it can be questioned whether these indices describe scientific achievements accurately. In this article we shed light on the problem of "negative" citations, analyzing data from Stack Exchange and, to draw more universal conclusions, deriving an approximation of citation scores. We show that the quantified influence of introducing negative citations is of lesser importance and that they could be used as an indicator of where the attention of the scientific community is allocated.
    Date
    6. 1.2020 18:10:22
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.2, S.221-229
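    To make the indicator question concrete, here is a small sketch contrasting the standard h-index with a "net" variant that subtracts negative citations; the signed-citation data and the netting rule are purely illustrative, not the authors' model:

      def h_index(citation_counts):
          """Largest h such that h papers have at least h citations each."""
          counts = sorted(citation_counts, reverse=True)
          h = 0
          for i, c in enumerate(counts, start=1):
              if c >= i:
                  h = i
          return h

      papers = [(25, 3), (18, 0), (5, 4), (4, 1), (3, 0)]    # (positive, negative)
      plain = h_index([p + n for p, n in papers])            # citing reason ignored
      net = h_index([max(p - n, 0) for p, n in papers])      # negatives subtracted
      print(plain, net)                                      # 4 vs 3 on these toy numbers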
  5. Kuehn, E.F.: The information ecosystem concept in information literacy : a theoretical approach and definition (2023) 0.03
    Score
    0.03092255 = coord(2/3) × (weight(_text_:of) 0.026414396 + weight(_text_:22) 0.019969428)
    Abstract
    Despite the prominence of the concept of the information ecosystem (hereafter IE) in information literacy documents and literature, it is under-theorized. This article proposes a general definition of IE for information literacy. After reviewing the current use of the IE concept in the Association of College and Research Libraries (ACRL) Framework for Information Literacy and other information literacy sources, existing definitions of IE and similar concepts from other fields (e.g., "evidence ecosystems") are examined. These form the basis of the definition of IE proposed in the article for the field of information literacy: "all structures, entities, and agents related to the flow of semantic information relevant to a research domain, as well as the information itself."
    Date
    22. 3.2023 11:52:50
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.434-443
  6. Li, G.; Siddharth, L.; Luo, J.: Embedding knowledge graph of patent metadata to measure knowledge proximity (2023) 0.03
    Score
    0.03092255 = coord(2/3) × (weight(_text_:of) 0.026414396 + weight(_text_:22) 0.019969428)
    Abstract
    Knowledge proximity refers to the strength of association between any two entities in a structural form that embodies certain aspects of a knowledge base. In this work, we operationalize knowledge proximity within the context of the US Patent Database (knowledge base) using a knowledge graph (structural form) named "PatNet", built using patent metadata, including citations, inventors, assignees, and domain classifications. We train various graph embedding models using PatNet to obtain the embeddings of entities and relations. The cosine similarity between the corresponding (or transformed) embeddings of entities denotes the knowledge proximity between them. We compare the embedding models in terms of their performance in predicting target entities and explaining domain expansion profiles of inventors and assignees. We then apply the embeddings of the best-preferred model to associate homogeneous (e.g., patent-patent) and heterogeneous (e.g., inventor-assignee) pairs of entities.
    Date
    22. 3.2023 12:06:55
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.476-490
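    A small sketch of the proximity computation described above: cosine similarity between entity embeddings. The toy vectors stand in for embeddings learned from a graph like PatNet; entity names and values are hypothetical:

      import numpy as np

      # Toy 4-dimensional embeddings; real ones would come from a trained
      # graph-embedding model over the patent knowledge graph.
      embeddings = {
          "inventor_A": np.array([0.9, 0.1, 0.3, 0.0]),
          "assignee_X": np.array([0.8, 0.2, 0.4, 0.1]),
          "patent_123": np.array([0.0, 0.9, 0.1, 0.7]),
      }

      def knowledge_proximity(a, b):
          """Cosine similarity between the embeddings of two entities."""
          u, v = embeddings[a], embeddings[b]
          return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

      print(knowledge_proximity("inventor_A", "assignee_X"))  # near 1: proximate
      print(knowledge_proximity("inventor_A", "patent_123"))  # nearer 0: distant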
  7. Hartel, J.: The red thread of information (2020) 0.03
    Score
    0.03088144 = coord(2/3) × (weight(_text_:of) 0.02968097 + weight(_text_:22) 0.016641192)
    Abstract
    Purpose
    In The Invisible Substrate of Information Science, a landmark article about the discipline of information science, Marcia J. Bates wrote that "...we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail?
    Design/methodology/approach
    Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed.
    Findings
    Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely, domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968).
    Research limitations/implications
    With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena.
    Originality/value
    Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
    Source
    Journal of documentation. 76(2020) no.3, S.647-656
  8. Asubiaro, T.V.; Onaolapo, S.: A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.03
    Score
    0.03088144 = coord(2/3) × (weight(_text_:of) 0.02968097 + weight(_text_:22) 0.016641192)
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.745-758
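    The coverage arithmetic in a study like this reduces to set operations over journal identifier lists; a sketch with made-up ISSN sets (the actual matching was done against Ulrich's, AJOL, and the databases' master lists):

      # Hypothetical ISSN sets standing in for the compiled journal lists.
      african = {"1111-0001", "1111-0002", "1111-0003", "1111-0004"}
      wos = {"1111-0002"}
      scopus = {"1111-0002", "1111-0003"}
      crossref = {"1111-0001", "1111-0002", "1111-0003"}

      for name, master in [("WoS", wos), ("Scopus", scopus), ("CrossRef", crossref)]:
          covered = african & master   # journals present in both lists
          print(f"{name}: {len(covered)}/{len(african)} "
                f"({100 * len(covered) / len(african):.1f}%)")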
  9. Tay, A.: The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.03
    Score
    0.03070492 = coord(2/3) × (weight(_text_:of) 0.02275971 + weight(_text_:22) 0.023297668)
    Abstract
    Conclusion
    There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established incumbents is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape looks like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  10. Wu, P.F.: Veni, vidi, vici? : On the rise of scrape-and-report scholarship in online reviews research (2023) 0.03
    Score
    0.03070492 = coord(2/3) × (weight(_text_:of) 0.02275971 + weight(_text_:22) 0.023297668)
    Abstract
    JASIST has in recent years received many submissions reporting data analytics based on "Big Data" of online reviews scraped from various platforms. By outlining major issues in this type of scrape-and-report scholarship and providing a set of recommendations, this essay encourages online reviews researchers to look at Big Data with a critical eye and treat online reviews as a sociotechnical "thing" produced within the fabric of sociomaterial life.
    Date
    22. 1.2023 18:33:53
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.145-149
  11. Hjoerland, B.: Table of contents (ToC) (2022) 0.03
    Score
    0.030380417 = coord(2/3) × (weight(_text_:of) 0.028929431 + weight(_text_:22) 0.016641192)
    Abstract
    A table of contents (ToC) is a kind of document representation, as well as a paratext and a finding device for the document it represents. ToCs are very common in books and some other kinds of documents, but not in all kinds. This article discusses the definition and functions of ToCs, normative guidelines for their design, and the history and forms of ToCs in different kinds of documents and media. A main part of the article is about the role of ToCs in information searching, in current awareness services, and as items added to bibliographic records. The introduction and the conclusion focus on the core theoretical issues concerning ToCs: should they be document-oriented or request-oriented, neutral or policy-oriented, objective or subjective? It is concluded that because of the special functions of ToCs, the arguments for the request-oriented (policy-oriented, subjective) view are weaker than they are in relation to indexing and knowledge organization in general. Apart from the level of granularity, the evaluation of a ToC is difficult to separate from the evaluation of the structuring and naming of the elements of the structure of the document it represents.
    Date
    18.11.2023 13:47:22
    Series
    Reviews of concepts in knowledge organization
  12. Ma, Y.: Relatedness and compatibility : the concept of privacy in Mandarin Chinese and American English corpora (2023) 0.03
    Score
    0.030103043 = coord(2/3) × (weight(_text_:of) 0.025185138 + weight(_text_:22) 0.019969428)
    Abstract
    This study investigates how privacy as an ethical concept exists in two languages: Mandarin Chinese and American English. The exploration relies on two genres of corpora spanning ten years (2010-2019): social media posts and news articles. A mixed-methods approach combining structural topic modeling (STM) and human interpretation was used to work with the data. Findings show various privacy-related topics across the two languages. Moreover, some of these topics revealed fundamental incompatibilities in understanding privacy across the two languages. In other words, some of the variations in topics do not just reflect contextual differences; they reveal how the two languages value privacy in different ways that relate back to each society's ethical tradition. This study is one of the first empirically grounded intercultural explorations of the concept of privacy. It shows that natural language is promising for operationalizing intercultural and comparative privacy research, and it provides an examination of the concept as it is understood in these two languages.
    Date
    22. 1.2023 18:59:40
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.249-272
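    The paper's method, structural topic modeling, is an R-ecosystem technique (the stm package). As a rough Python stand-in, a plain LDA pass over toy documents shows the mechanics of extracting privacy-related topics; gensim is assumed, the documents are invented, and note that STM additionally conditions topics on covariates such as genre and year, which plain LDA does not:

      from gensim import corpora
      from gensim.models import LdaModel

      docs = [
          ["privacy", "data", "platform", "consent"],
          ["privacy", "family", "reputation", "face"],
          ["data", "breach", "platform", "regulation"],
      ]  # toy tokenised posts/articles
      dictionary = corpora.Dictionary(docs)
      bow = [dictionary.doc2bow(d) for d in docs]

      lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=2,
                     random_state=0, passes=10)
      for topic_id, words in lda.print_topics(num_words=4):
          print(topic_id, words)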
  13. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.03
    Score
    0.030103043 = coord(2/3) × (weight(_text_:of) 0.025185138 + weight(_text_:22) 0.019969428)
    Abstract
    The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and a set of rules of a conventional syntax for subject string construction. The Nuovo soggettario Thesaurus complies with ISO 25964:2011-2013, IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogeneous concepts. This paper addresses the main features and peculiarities which have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
    Footnote
    Contribution to a special issue: Implementation of Faceted Vocabularies.
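    Since the Thesaurus is published in SKOS, its facet hierarchies can be walked with standard RDF tooling. A sketch using rdflib over a tiny inline SKOS fragment; the concept URIs and labels are invented placeholders, not the Nuovo soggettario's actual data:

      from rdflib import Graph
      from rdflib.namespace import SKOS

      # Tiny inline SKOS fragment standing in for the published open data.
      data = """
      @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
      <urn:c1> skos:prefLabel "Materia"@it ; skos:narrower <urn:c2> .
      <urn:c2> skos:prefLabel "Metalli"@it ; skos:broader <urn:c1> .
      """
      g = Graph()
      g.parse(data=data, format="turtle")

      # Print each broader -> narrower pair in the facet hierarchy.
      for broader, narrower in g.subject_objects(SKOS.narrower):
          print(g.value(broader, SKOS.prefLabel), "->",
                g.value(narrower, SKOS.prefLabel))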
  14. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.03
    Score
    0.029337129 = coord(2/3) × (weight(_text_:of) 0.027364502 + weight(_text_:22) 0.016641192)
    Abstract
    Purpose
    With the shift to an information-based society and to the de-centralisation of information, information overload has attracted a growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while there have been many proposed definitions, there is no consensus. The goal of this work was to define the concept of "information overload". In order to do so, a concept analysis using Rodgers' approach was performed.
    Design/methodology/approach
    A concept analysis using Rodgers' approach based on a corpus of documents published between 2010 and September 2020 was conducted. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science, and were retrieved from three databases: Association for Computing Machinery (ACM) Digital Library, SCOPUS, and Library and Information Science Abstracts (LISA).
    Findings
    The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations, and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals, and the information environment. In terms of manifestations, they found that information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed them to provide a definition of information overload.
    Originality/value
    Through the concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
    Source
    Journal of documentation. 79(2023) no.1, S.144-159
  15. Ma, L.: Information, platformized (2023) 0.03
    Score
    0.029241432 = coord(2/3) × (weight(_text_:of) 0.023892717 + weight(_text_:22) 0.019969428)
    Abstract
    Scholarly publications are often regarded as "information" by default. They are collected, organized, preserved, and made accessible as knowledge records. However, the instances of article retraction, misconduct and malpractices of researchers and the replication crisis have raised concerns about the informativeness and evidential qualities of information. Among many factors, knowledge production has moved away from "normal science" under the systemic influences of platformization involving the datafication and commodification of scholarly articles, research profiles and research activities. This article aims to understand the platformization of information by examining how research practices and knowledge production are steered by market and platform mechanisms in four ways: (a) ownership of information; (b) metrics for sale; (c) relevance by metrics, and (d) market-based competition. In conclusion, the article argues that information is platformized when platforms hold the dominating power in determining what kinds of information can be disseminated and rewarded and when informativeness is decoupled from the normative agreement or consensus co-constructed and co-determined in an open and public discourse.
    Date
    22. 1.2023 19:01:47
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.273-282
  16. Milard, B.; Pitarch, Y.: Egocentric cocitation networks and scientific papers destinies (2023) 0.03
    Score
    0.029241432 = coord(2/3) × (weight(_text_:of) 0.023892717 + weight(_text_:22) 0.019969428)
    Abstract
    To what extent is the destiny of a scientific paper shaped by the cocitation network in which it is involved? What are the social contexts that can explain this structuring? Using bibliometric data, interviews with researchers, and social network analysis, this article proposes a typology based on egocentric cocitation networks that displays a quadruple structuring (before and after publication): polarization, clusterization, atomization, and attrition. It shows that the academic capital of the authors and the intellectual resources of their research are key factors in these destinies, as are the social relations between the authors concerned. The circumstances of publishing are also correlated with the structuring of the egocentric cocitation networks, showing how socially embedded they are. Finally, the article discusses the contribution of these original networks to the analysis of scientific production and its dynamics.
    Date
    21. 3.2023 19:22:14
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.415-433
  17. Lorentzen, D.G.: Bridging polarised Twitter discussions : the interactions of the users in the middle (2021) 0.03
    Score
    0.028330466 = coord(2/3) × (weight(_text_:of) 0.022526272 + weight(_text_:22) 0.019969428)
    Abstract
    Purpose
    The purpose of the paper is to analyse the interactions of bridging users in Twitter discussions about vaccination.
    Design/methodology/approach
    Conversational threads were collected through filtering the Twitter stream using keywords and the most active participants in the conversations. Following data collection and anonymisation of tweets and user profiles, a retweet network was created to find users bridging the main clusters. Four conversations were selected, ranging from 456 to 1,983 tweets long, and then analysed through content analysis.
    Findings
    Although different opinions met in the discussions, a consensus was rarely built. Many sub-threads involved insults and criticism, and participants seemed not interested in shifting their positions. However, examples of reasoned discussions were also found.
    Originality/value
    The study analyses conversations on Twitter, which is rarely studied. The focus on the interactions of bridging users adds to the uniqueness of the paper.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 73(2021) no.1, S.129-143
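    A sketch of how "bridging users" can be surfaced from a retweet network: detect the main communities, then rank users spanning them by betweenness centrality. networkx is assumed, and the edge list is an invented miniature of two polarised clusters joined by one user:

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.Graph()  # undirected retweet ties; weights could count retweets
      G.add_edges_from([
          ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # one cluster
          ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # opposing cluster
          ("a2", "m1"), ("m1", "b2"),                 # m1 sits in the middle
      ])

      communities = list(greedy_modularity_communities(G))
      print([sorted(c) for c in communities])

      # Users with the highest betweenness bridge the polarised clusters.
      centrality = nx.betweenness_centrality(G)
      for user, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
          print(user, round(score, 3))  # m1 should rank first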
  18. Park, Y.J.: A socio-technological model of search information divide in US cities (2021) 0.03
    Score
    0.028330466 = coord(2/3) × (weight(_text_:of) 0.022526272 + weight(_text_:22) 0.019969428)
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 73(2021) no.2 S.144-159
  19. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.03
    Score
    0.028230444 = coord(2/3) × (weight(_text_:of) 0.025704475 + weight(_text_:22) 0.016641192)
    Abstract
    The automatic summarization of scientific articles differs from other text genres because of the structured format and longer text length. Previous approaches have focused on tackling the lengthy nature of scientific articles, aiming to improve the computational efficiency of summarizing long text using a flat, unstructured abstract. However, the structured format of scientific articles and characteristics of each section have not been fully explored, despite their importance. The lack of a sufficient investigation and discussion of various characteristics for each section and their influence on summarization results has hindered the practical use of automatic summarization for scientific articles. To provide a balanced abstract proportionally emphasizing each section of a scientific article, the community introduced the structured abstract, an abstract with distinct, labeled sections. Using this information, in this study, we aim to understand tasks ranging from data preparation to model evaluation from diverse viewpoints. Specifically, we provide a preprocessed large-scale dataset and propose a summarization method applying the introduction, methods, results, and discussion (IMRaD) format reflecting the characteristics of each section. We also discuss the objective benchmarks and perspectives of state-of-the-art algorithms and present the challenges and research directions in this area.
    Date
    22. 1.2023 18:57:12
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.234-248
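    A toy sketch of the section-aware idea: give each IMRaD section its own sentence budget and pick the highest-scoring sentences per section by word frequency. This is a naive extractive stand-in to show the mechanics, not the authors' model, and the article data is invented:

      from collections import Counter

      article = {  # hypothetical IMRaD-sectioned article
          "introduction": ["Summarization of long papers is hard.",
                           "Prior work flattens the document."],
          "methods": ["We label sentences by section.",
                      "A budget is assigned per section."],
          "results": ["Section-aware budgets improve balance."],
          "discussion": ["Structured abstracts aid readers."],
      }
      budget = {"introduction": 1, "methods": 1, "results": 1, "discussion": 1}

      def score(sentence, freqs):
          """Naive salience: sum of corpus-wide word frequencies."""
          return sum(freqs[w] for w in sentence.lower().split())

      freqs = Counter(w for sents in article.values()
                        for s in sents for w in s.lower().split())
      abstract = []
      for section, sents in article.items():
          ranked = sorted(sents, key=lambda s: score(s, freqs), reverse=True)
          abstract.extend(ranked[:budget[section]])  # keep top sentences per section
      print(" ".join(abstract))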
  20. Candela, G.: An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.03
    Score
    0.027920596 = coord(2/3) × (weight(_text_:of) 0.018583227 + weight(_text_:22) 0.023297668)
    Abstract
    In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital contents to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and for training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions that wish to publish and assess their collections.
    Date
    22. 6.2023 18:23:31
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.866-878
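    One concrete form such an automatic check can take: query a collection's SPARQL endpoint for resources lacking human-readable labels. SPARQLWrapper is assumed, and the endpoint URL and graph shape are placeholders, not the article's actual test suite:

      from SPARQLWrapper import SPARQLWrapper, JSON

      # Placeholder endpoint; substitute an institution's own SPARQL service.
      sparql = SPARQLWrapper("https://example.org/sparql")
      sparql.setQuery("""
          PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
          SELECT (COUNT(?s) AS ?missing) WHERE {
              ?s a ?type .
              FILTER NOT EXISTS { ?s rdfs:label ?label }
          }
      """)
      sparql.setReturnFormat(JSON)
      result = sparql.query().convert()
      # A simple completeness metric: resources without any rdfs:label.
      print(result["results"]["bindings"][0]["missing"]["value"])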

Languages

  • e 810
  • d 57
  • pt 4
  • sp 1

Types

  • a 823
  • el 97
  • m 23
  • p 13
  • x 4
  • s 3
  • A 1
  • EL 1
  • r 1
