Search (820 results, page 1 of 41)

  • language_ss:"e"
  • year_i:[2020 TO 2030}
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.06
    0.05940105 = sum of:
      0.054862697 = product of:
        0.21945079 = sum of:
          0.21945079 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.21945079 = score(doc=862,freq=2.0), product of:
              0.39046928 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046056706 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.25 = coord(1/4)
      0.004538352 = product of:
        0.009076704 = sum of:
          0.009076704 = weight(_text_:a in 862) [ClassicSimilarity], result of:
            0.009076704 = score(doc=862,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1709182 = fieldWeight in 862, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.5 = coord(1/2)
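The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, the top clause of result 1 can be reproduced from the reported factors (all constants are taken from the explain tree; the variable names are illustrative):

```python
import math

# Factors reported in the explain tree for doc 862, term "3a"
doc_freq, max_docs = 24, 44218
query_norm = 0.046056706
field_norm = 0.046875
term_freq = 2.0

# ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
idf = 1.0 + math.log(max_docs / (doc_freq + 1))         # ~8.478011

# queryWeight = idf * queryNorm
query_weight = idf * query_norm                          # ~0.39046928

# fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
field_weight = math.sqrt(term_freq) * idf * field_norm   # ~0.56201804

# clause score = queryWeight * fieldWeight,
# then scaled by coord(1/4) = 0.25 (1 of 4 query clauses matched)
score = query_weight * field_weight                      # ~0.21945079
contribution = score * 0.25                              # ~0.054862697
```

The same arithmetic, with the per-term idf, tf, and fieldNorm values shown in each tree, reproduces every weight(...) clause in the result list.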
    
    Abstract
This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
https://arxiv.org/abs/2212.06721
    Type
    a
  2. Fugmann, R.: What is information? : an information veteran looks back (2022) 0.03
    0.034582928 = product of:
      0.069165856 = sum of:
        0.069165856 = sum of:
          0.006765375 = weight(_text_:a in 1085) [ClassicSimilarity], result of:
            0.006765375 = score(doc=1085,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.12739488 = fieldWeight in 1085, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=1085)
          0.06240048 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
            0.06240048 = score(doc=1085,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.38690117 = fieldWeight in 1085, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=1085)
      0.5 = coord(1/2)
    
    Date
    18. 8.2022 19:22:57
    Type
    a
  3. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.03
    0.02810499 = product of:
      0.05620998 = sum of:
        0.05620998 = sum of:
          0.012529651 = weight(_text_:a in 40) [ClassicSimilarity], result of:
            0.012529651 = score(doc=40,freq=14.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.23593865 = fieldWeight in 40, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.043680333 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.043680333 = score(doc=40,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.5 = coord(1/2)
    
    Abstract
Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Type
    a
  4. Morris, V.: Automated language identification of bibliographic resources (2020) 0.03
    0.02766634 = product of:
      0.05533268 = sum of:
        0.05533268 = sum of:
          0.0054123 = weight(_text_:a in 5749) [ClassicSimilarity], result of:
            0.0054123 = score(doc=5749,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.10191591 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
          0.04992038 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
            0.04992038 = score(doc=5749,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 19:04:22
    Type
    a
  5. Wu, P.F.: Veni, vidi, vici? : On the rise of scrape-and-report scholarship in online reviews research (2023) 0.03
    0.026575929 = product of:
      0.053151857 = sum of:
        0.053151857 = sum of:
          0.009471525 = weight(_text_:a in 896) [ClassicSimilarity], result of:
            0.009471525 = score(doc=896,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.17835285 = fieldWeight in 896, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=896)
          0.043680333 = weight(_text_:22 in 896) [ClassicSimilarity], result of:
            0.043680333 = score(doc=896,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 896, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=896)
      0.5 = coord(1/2)
    
    Abstract
JASIST has in recent years received many submissions reporting data analytics based on "Big Data" of online reviews scraped from various platforms. By outlining major issues in this type of scrape-and-report scholarship and providing a set of recommendations, this essay encourages online reviews researchers to look at Big Data with a critical eye and treat online reviews as a sociotechnical "thing" produced within the fabric of sociomaterial life.
    Date
    22. 1.2023 18:33:53
    Type
    a
  6. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.03
    0.025188856 = product of:
      0.05037771 = sum of:
        0.05037771 = sum of:
          0.00669738 = weight(_text_:a in 997) [ClassicSimilarity], result of:
            0.00669738 = score(doc=997,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.12611452 = fieldWeight in 997, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=997)
          0.043680333 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
            0.043680333 = score(doc=997,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=997)
      0.5 = coord(1/2)
    
    Abstract
In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital contents to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and for training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions who wish to publish and assess their collections.
    Date
    22. 6.2023 18:23:31
    Type
    a
  7. Manley, S.: Letters to the editor and the race for publication metrics (2022) 0.02
    0.024208048 = product of:
      0.048416097 = sum of:
        0.048416097 = sum of:
          0.0047357627 = weight(_text_:a in 547) [ClassicSimilarity], result of:
            0.0047357627 = score(doc=547,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.089176424 = fieldWeight in 547, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=547)
          0.043680333 = weight(_text_:22 in 547) [ClassicSimilarity], result of:
            0.043680333 = score(doc=547,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 547, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=547)
      0.5 = coord(1/2)
    
    Date
    6. 4.2022 19:22:26
    Type
    a
  8. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.02
    0.024089992 = product of:
      0.048179984 = sum of:
        0.048179984 = sum of:
          0.010739701 = weight(_text_:a in 5996) [ClassicSimilarity], result of:
            0.010739701 = score(doc=5996,freq=14.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.20223314 = fieldWeight in 5996, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
          0.037440285 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
            0.037440285 = score(doc=5996,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
      0.5 = coord(1/2)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
    Type
    a
  9. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.02
    0.023691658 = product of:
      0.047383316 = sum of:
        0.047383316 = sum of:
          0.00994303 = weight(_text_:a in 1181) [ClassicSimilarity], result of:
            0.00994303 = score(doc=1181,freq=12.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.18723148 = fieldWeight in 1181, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=1181)
          0.037440285 = weight(_text_:22 in 1181) [ClassicSimilarity], result of:
            0.037440285 = score(doc=1181,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 1181, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1181)
      0.5 = coord(1/2)
    
    Abstract
The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and some rules of a conventional syntax for subject string construction. The Nuovo soggettario Thesaurus is in compliance with ISO 25964: 2011-2013, IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogeneous concepts. This paper addresses the main features and peculiarities which have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
    Type
    a
  10. Geras, A.; Siudem, G.; Gagolewski, M.: Should we introduce a dislike button for academic articles? (2020) 0.02
    0.023258494 = product of:
      0.04651699 = sum of:
        0.04651699 = sum of:
          0.009076704 = weight(_text_:a in 5620) [ClassicSimilarity], result of:
            0.009076704 = score(doc=5620,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1709182 = fieldWeight in 5620, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=5620)
          0.037440285 = weight(_text_:22 in 5620) [ClassicSimilarity], result of:
            0.037440285 = score(doc=5620,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 5620, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5620)
      0.5 = coord(1/2)
    
    Abstract
There is a mutual resemblance between the behavior of users of the Stack Exchange and the dynamics of the citations accumulation process in the scientific community, which enabled us to tackle the outwardly intractable problem of assessing the impact of introducing "negative" citations. Although the most frequent reason to cite an article is to highlight the connection between the two publications, researchers sometimes mention an earlier work to cast a negative light. While computing citation-based scores, for instance the h-index, information about the reason why an article was mentioned is neglected. Therefore, it can be questioned whether these indices describe scientific achievements accurately. In this article we shed light on the problem of "negative" citations by analyzing data from Stack Exchange and, to draw more universal conclusions, deriving an approximation of citation scores. Here we show that the quantified influence of introducing negative citations is of lesser importance and that they could be used as an indicator of where the attention of the scientific community is allocated.
    Date
    6. 1.2020 18:10:22
    Type
    a
  11. Park, Y.J.: ¬A socio-technological model of search information divide in US cities (2021) 0.02
    0.022779368 = product of:
      0.045558736 = sum of:
        0.045558736 = sum of:
          0.008118451 = weight(_text_:a in 184) [ClassicSimilarity], result of:
            0.008118451 = score(doc=184,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 184, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=184)
          0.037440285 = weight(_text_:22 in 184) [ClassicSimilarity], result of:
            0.037440285 = score(doc=184,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=184)
      0.5 = coord(1/2)
    
    Abstract
    Purpose The purpose of the paper is to analyse the interactions of bridging users in Twitter discussions about vaccination. Design/methodology/approach Conversational threads were collected through filtering the Twitter stream using keywords and the most active participants in the conversations. Following data collection and anonymisation of tweets and user profiles, a retweet network was created to find users bridging the main clusters. Four conversations were selected, ranging from 456 to 1,983 tweets long, and then analysed through content analysis. Findings Although different opinions met in the discussions, a consensus was rarely built. Many sub-threads involved insults and criticism, and participants seemed not interested in shifting their positions. However, examples of reasoned discussions were also found. Originality/value The study analyses conversations on Twitter, which is rarely studied. The focus on the interactions of bridging users adds to the uniqueness of the paper.
    Date
    20. 1.2015 18:30:22
    Type
    a
  12. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.02
    0.022779368 = product of:
      0.045558736 = sum of:
        0.045558736 = sum of:
          0.008118451 = weight(_text_:a in 915) [ClassicSimilarity], result of:
            0.008118451 = score(doc=915,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 915, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=915)
          0.037440285 = weight(_text_:22 in 915) [ClassicSimilarity], result of:
            0.037440285 = score(doc=915,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=915)
      0.5 = coord(1/2)
    
    Abstract
    Experimentation is the process of trying something out to discover what happens. It is a widespread information practice, yet often bypassed in information-behavior research. This article argues that experimentation complements prior knowledge, documents, and people as an important fourth class of information sources. Relative to the other classes, the distinguishing characteristics of experimentation are that it is a personal-as opposed to interpersonal-source and that it provides "backtalk." When the information seeker tries something out and then attends to the resulting situation, it is as though the materials of the situation talk back: They provide the information seeker with a situated and direct experience of the consequences of the tried-out options. In this way, experimentation involves obtaining information by creating it. It also involves turning material and behavioral processes into information interactions. Thereby, information seeking by experimentation is important to practical information literacy and extends information-behavior research with new insights on the interrelations between creating and seeking information.
    Date
    21. 3.2023 19:22:29
    Type
    a
  13. Milard, B.; Pitarch, Y.: Egocentric cocitation networks and scientific papers destinies (2023) 0.02
    0.022779368 = product of:
      0.045558736 = sum of:
        0.045558736 = sum of:
          0.008118451 = weight(_text_:a in 918) [ClassicSimilarity], result of:
            0.008118451 = score(doc=918,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 918, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=918)
          0.037440285 = weight(_text_:22 in 918) [ClassicSimilarity], result of:
            0.037440285 = score(doc=918,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 918, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=918)
      0.5 = coord(1/2)
    
    Abstract
To what extent is the destiny of a scientific paper shaped by the cocitation network in which it is involved? What are the social contexts that can explain this structuring? Using bibliometric data, interviews with researchers, and social network analysis, this article proposes a typology based on egocentric cocitation networks that displays a quadruple structuring (before and after publication): polarization, clusterization, atomization, and attrition. It shows that the academic capital of the authors and the intellectual resources of their research are key factors of these destinies, as are the social relations between the authors concerned. The circumstances of the publishing are also correlated with the structuring of the egocentric cocitation networks, showing how socially embedded they are. Finally, the article discusses the contribution of these original networks to the analysis of scientific production and its dynamics.
    Date
    21. 3.2023 19:22:14
    Type
    a
  14. Kuehn, E.F.: ¬The information ecosystem concept in information literacy : a theoretical approach and definition (2023) 0.02
    0.022779368 = product of:
      0.045558736 = sum of:
        0.045558736 = sum of:
          0.008118451 = weight(_text_:a in 919) [ClassicSimilarity], result of:
            0.008118451 = score(doc=919,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 919, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=919)
          0.037440285 = weight(_text_:22 in 919) [ClassicSimilarity], result of:
            0.037440285 = score(doc=919,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 919, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=919)
      0.5 = coord(1/2)
    
    Abstract
    Despite the prominence of the concept of the information ecosystem (hereafter IE) in information literacy documents and literature, it is under-theorized. This article proposes a general definition of IE for information literacy. After reviewing the current use of the IE concept in the Association of College and Research Libraries (ACRL) Framework for Information Literacy and other information literacy sources, existing definitions of IE and similar concepts (e.g., "evidence ecosystems") will be examined from other fields. These will form the basis of the definition of IE proposed in the article for the field of information literacy: "all structures, entities, and agents related to the flow of semantic information relevant to a research domain, as well as the information itself."
    Date
    22. 3.2023 11:52:50
    Type
    a
  15. Li, G.; Siddharth, L.; Luo, J.: Embedding knowledge graph of patent metadata to measure knowledge proximity (2023) 0.02
    0.022779368 = product of:
      0.045558736 = sum of:
        0.045558736 = sum of:
          0.008118451 = weight(_text_:a in 920) [ClassicSimilarity], result of:
            0.008118451 = score(doc=920,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 920, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=920)
          0.037440285 = weight(_text_:22 in 920) [ClassicSimilarity], result of:
            0.037440285 = score(doc=920,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 920, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=920)
      0.5 = coord(1/2)
    
    Abstract
Knowledge proximity refers to the strength of association between any two entities in a structural form that embodies certain aspects of a knowledge base. In this work, we operationalize knowledge proximity within the context of the US Patent Database (knowledge base) using a knowledge graph (structural form) named "PatNet" built using patent metadata, including citations, inventors, assignees, and domain classifications. We train various graph embedding models using PatNet to obtain the embeddings of entities and relations. The cosine similarity between the corresponding (or transformed) embeddings of entities denotes the knowledge proximity between them. We compare the embedding models in terms of their performance in predicting target entities and explaining domain expansion profiles of inventors and assignees. We then apply the embeddings of the best-preferred model to associate homogeneous (e.g., patent-patent) and heterogeneous (e.g., inventor-assignee) pairs of entities.
    Date
    22. 3.2023 12:06:55
    Type
    a
  16. Das, S.; Paik, J.H.: Gender tagging of named entities using retrieval-assisted multi-context aggregation : an unsupervised approach (2023) 0.02
    Abstract
    Inferring the gender of named entities present in a text has several practical applications in information sciences. Existing approaches to name gender identification rely exclusively on the gender distributions from labeled data. In the absence of such labeled data, these methods fail. In this article, we propose a two-stage model that is able to infer the gender of names present in text without requiring explicit name-gender labels. We use coreference resolution as the backbone for our proposed model. To aid coreference resolution where the existing contextual information does not suffice, we use a retrieval-assisted context aggregation framework. We demonstrate that state-of-the-art name gender inference is possible without supervision. Our proposed method matches or outperforms several supervised approaches and commercially used methods on five English language datasets from different domains.
    Date
    22. 3.2023 12:00:14
    Type
    a
  17. Zhang, X.; Wang, D.; Tang, Y.; Xiao, Q.: How question type influences knowledge withholding in social Q&A community (2023) 0.02
    Abstract
    Social question-and-answer (Q&A) communities are becoming increasingly important for knowledge acquisition. However, some users withhold knowledge, which can hinder the effectiveness of these platforms. Based on social exchange theory, the study investigates how different types of questions influence knowledge withholding, with question difficulty and user anonymity as boundary conditions. Two experiments were conducted to test hypotheses. Results indicate that informational questions are more likely to lead to knowledge withholding than conversational ones, as they elicit more fear of negative evaluation and fear of exploitation. The study also examines the interplay of question difficulty and user anonymity with question type. Overall, this study significantly extends the existing literature on counterproductive knowledge behavior by exploring the antecedents of knowledge withholding in social Q&A communities.
    Date
    22. 9.2023 13:51:47
    Type
    a
  18. Lorentzen, D.G.: Bridging polarised Twitter discussions : the interactions of the users in the middle (2021) 0.02
    Abstract
    Purpose: The purpose of the paper is to analyse the interactions of bridging users in Twitter discussions about vaccination. Design/methodology/approach: Conversational threads were collected by filtering the Twitter stream using keywords and the most active participants in the conversations. Following data collection and anonymisation of tweets and user profiles, a retweet network was created to find users bridging the main clusters. Four conversations were selected, ranging from 456 to 1,983 tweets in length, and analysed through content analysis. Findings: Although different opinions met in the discussions, a consensus was rarely built. Many sub-threads involved insults and criticism, and participants seemed uninterested in shifting their positions. However, examples of reasoned discussion were also found. Originality/value: The study analyses conversations on Twitter, which are rarely studied. The focus on the interactions of bridging users adds to the uniqueness of the paper.
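    The core idea of a "bridging" user, an account whose retweet edges reach both main clusters, can be sketched in a few lines. The edge list, cluster labels, and user names below are illustrative assumptions, not the paper's data or method:

```python
def bridging_users(edges, cluster):
    """Return users whose outgoing retweet edges reach more than one cluster."""
    reached = {}
    for src, dst in edges:
        side = cluster.get(dst)
        if side is not None:
            reached.setdefault(src, set()).add(side)
    return sorted(u for u, sides in reached.items() if len(sides) > 1)

# Hypothetical retweet edges (retweeter, retweeted) and cluster labels.
edges = [
    ("alice", "bob"), ("bob", "alice"),
    ("carol", "dave"), ("dave", "carol"),
    ("erin", "alice"), ("erin", "carol"),  # erin retweets both sides
]
cluster = {"alice": "pro", "bob": "pro", "carol": "anti", "dave": "anti", "erin": None}
bridges = bridging_users(edges, cluster)
```

    In this toy network only "erin" touches both clusters; on real data the cluster labels would come from a community-detection step on the retweet network.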
    Date
    20. 1.2015 18:30:22
    Type
    a
  19. Cooke, N.A.; Kitzie, V.L.: Outsiders-within-Library and Information Science : reprioritizing the marginalized in critical sociocultural work (2021) 0.02
    Abstract
    While there are calls for new paradigms within the profession, there are also existing subgenres that fit this bill, if they were fully acknowledged. This essay argues that underrepresented and otherwise marginalized scholars have already produced significant work within social, cultural, and community-oriented paradigms; social justice and advocacy; and diversity, equity, and inclusion. This work has not been sufficiently valued or promoted. Furthermore, the surrounding structural conditions have led to the work of underrepresented scholars being dismissed, harshly reviewed and rejected, or erased, and to the stigmatization and delegitimization of their work. These scholars are "outsiders-within-LIS." By identifying the outsiders-within-LIS through the frame of standpoint theories, the authors suggest that a new paradigm does not need to be created; rather, an existing paradigm needs to be recognized and reprioritized. This reprioritized paradigm of critical sociocultural work has enriched, and will continue to creatively enrich and expand, the field and decolonize LIS curricula.
    Date
    18. 9.2021 13:22:27
    Type
    a
  20. Zheng, X.; Chen, J.; Yan, E.; Ni, C.: Gender and country biases in Wikipedia citations to scholarly publications (2023) 0.02
    Abstract
    Ensuring Wikipedia cites scholarly publications based on quality and relevancy without biases is critical to credible and fair knowledge dissemination. We investigate gender- and country-based biases in Wikipedia citation practices using linked data from the Web of Science and a Wikipedia citation dataset. Using coarsened exact matching, we show that publications by women are cited less by Wikipedia than expected, and publications by women are less likely to be cited than those by men. Scholarly publications by authors affiliated with non-Anglosphere countries are also disadvantaged in getting cited by Wikipedia, compared with those by authors affiliated with Anglosphere countries. The level of gender- or country-based inequalities varies by research field, and the gender-country intersectional bias is prominent in math-intensive STEM fields. To ensure the credibility and equality of knowledge presentation, Wikipedia should consider strategies and guidelines to cite scholarly publications independent of the gender and country of authors.
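    The matching step mentioned in the abstract, coarsened exact matching, bins covariates and keeps only strata that contain both treated and control units. A minimal sketch under assumed data follows; the records, covariate, and bin cut-offs are illustrative, not the study's:

```python
from collections import defaultdict

def cem_strata(records, coarsen):
    """Group records by coarsened covariate signature; keep strata
    that contain both treated and control units."""
    strata = defaultdict(list)
    for r in records:
        strata[coarsen(r)].append(r)
    return {k: v for k, v in strata.items()
            if {r["treated"] for r in v} == {True, False}}

# Hypothetical records: treated = publication authored by a woman,
# covariate = citation count, coarsened into low/mid/high bins.
records = [
    {"treated": True,  "citations": 3},
    {"treated": False, "citations": 5},
    {"treated": True,  "citations": 40},    # no control in its bin: pruned
    {"treated": False, "citations": 120},   # no treated in its bin: pruned
]
coarsen = lambda r: ("low" if r["citations"] < 10
                     else "mid" if r["citations"] < 100 else "high")
matched = cem_strata(records, coarsen)
```

    Only the "low" stratum survives here, since it is the only bin with both a treated and a control publication; outcome comparisons are then made within surviving strata.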
    Date
    22. 1.2023 18:53:32
    Type
    a

Types

  • a 785
  • el 61
  • m 17
  • p 13
  • s 3
  • A 1
  • EL 1
  • x 1