Search (58 results, page 1 of 3)

  • language_ss:"e"
  • type_ss:"el"
  • year_i:[2020 TO 2030}
  1. Patriarca, S.: Information literacy gives us the tools to check sources and to verify factual statements : What does Popper's "Es gibt keine Autoritäten" mean? (2021) 0.01
    Abstract
    I wonder if you would consider an English perspective on the exchange between Bernd Jörs and Hermann Huemer. In my career in the independent education sector I can recall many discussions and Government reports about cross-curricular issues such as logical reasoning and critical thinking. In the IB system this led to the inclusion in the Diploma of "Theory of Knowledge." In the UK we had "key skills" and "critical thinking." One such key skill is what we now call "information literacy." In his parody of information literacy, Dr Jörs seems to have confused a necessary condition for a sufficient condition. The fact that information competence may be necessary for serious academic study does not of course make it sufficient. When that is understood, the joke about the megalomaniac rather loses its force. (We had better pass over the rant which follows, the sneer at "earth sciences" and the German prejudice towards Austrians.)
    Content
    On: Bernd Jörs, "Zukunft der Informationswissenschaft und Kritischer Rationalismus - Gegen die Selbstüberschätzung der Vertreter der 'Informationskompetenz' eine Rückkehr zu Karl R. Popper geboten", in: Open Password, 30 August 2021, and Herbert Huemer, "Informationskompetenz als Kompetenz für lebenslanges Lernen", in: Open Password, no. 965, 25 August 2021. Huemer was responding to Bernd Jörs's article "Wie sich 'Informationskompetenz' methodisch-operativ untersuchen lässt" in Open Password of 20 August 2021.
    Footnote
    Cf. the reply: Jörs, B.: Informationskompetenz ist auf domänenspezifisches Vorwissen angewiesen und kann immer nur vorläufig sein: eine Antwort auf Steve Patriarca. In: Open Password, 2021, no. 998, 15 November 2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzM3NiwiYTRlYWIxNTJhOTU4IiwwLDAsMzM5LDFd].
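The relevance figure shown after each hit (0.01 for hit 1) is a Lucene ClassicSimilarity tf-idf score: each matched query term contributes queryWeight × fieldWeight, and the sum is scaled by the coordination factor (here 5 of 30 query terms matched). A minimal sketch, recomputing hit 1's score from the per-term statistics the engine reports; the helper name is ours, not Lucene's:

```python
import math

def classic_similarity(terms, matched, total_query_terms):
    """Recompute a Lucene ClassicSimilarity score from explain statistics.

    Per term: queryWeight = idf * queryNorm
              fieldWeight = sqrt(freq) * idf * fieldNorm
    The summed products are scaled by coord = matched / total_query_terms.
    """
    score = 0.0
    for t in terms:
        query_weight = t["idf"] * t["query_norm"]
        field_weight = math.sqrt(t["freq"]) * t["idf"] * t["field_norm"]
        score += query_weight * field_weight
    return score * matched / total_query_terms

# Term statistics behind hit 1 (doc 331): und (twice), informationswissenschaft, in, s
doc331 = [
    {"freq": 4.0,  "idf": 2.216367,  "query_norm": 0.021569785, "field_norm": 0.0390625},
    {"freq": 2.0,  "idf": 4.504705,  "query_norm": 0.021569785, "field_norm": 0.0390625},
    {"freq": 18.0, "idf": 1.3602545, "query_norm": 0.021569785, "field_norm": 0.0390625},
    {"freq": 4.0,  "idf": 2.216367,  "query_norm": 0.021569785, "field_norm": 0.0390625},
    {"freq": 4.0,  "idf": 1.0872376, "query_norm": 0.021569785, "field_norm": 0.0390625},
]
print(classic_similarity(doc331, matched=5, total_query_terms=30))  # about 0.0082, shown rounded as 0.01
```

Note how heavily the rarest term dominates: "informationswissenschaft" (idf 4.5) contributes about half the score despite occurring only twice.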
  2. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease are enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
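The change-point analysis used above to date shifts in brain evolution can be illustrated with a deliberately simplified mean-shift version; this is a toy stand-in, not the authors' actual model, and the volume series below is hypothetical:

```python
def best_changepoint(ys):
    """Toy mean-shift change-point detector: choose the split index that
    minimizes the total within-segment squared error around each segment mean."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(ys)), key=lambda k: sse(ys[:k]) + sse(ys[k:]))

# Hypothetical brain-volume estimates ordered in time (cm^3):
volumes = [1500, 1490, 1505, 1495, 1350, 1340, 1360]
print(best_changepoint(volumes))  # splits right before the drop, at index 4
```

Real change-point methods add model selection (how many change points?) and uncertainty estimates, but the core idea is this comparison of segment fits.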
    Footnote
    Cf. also: Rötzer, F.: Warum schrumpft das Gehirn des Menschen seit ein paar Tausend Jahren? At: https://krass-und-konkret.de/wissenschaft-technik/warum-schrumpft-das-gehirn-des-menschen-seit-ein-paar-tausend-jahren/. "... for a few thousand years, some say for 10,000, that is, since the beginnings of agriculture, sedentary life and the founding of cities, and the invention of writing, the human brain has, surprisingly, been shrinking again. ... It is generally assumed that with the first tools, and above all beginning with the invention of writing, cognitive functions, memory in particular, were externalized, though at the price of having to develop new capacities, such as reading and writing. Memory comprises individual experiences, but also collective knowledge to which all members of a community contribute and into which the knowledge and experiences of their ancestors are inscribed. In the digital age the externalization and unburdening of brains goes much further still, because with AI not only knowledge content but also cognitive abilities, such as searching, gathering, analysing and evaluating information for decision-making, are externalized, while the externalized brains, such as the Internet, learn and expand collectively in real time. Through neural implants, finally, people could be connected directly to the externalized brains, and could also directly extend their cognitive capacities by incorporating prostheses, new sensors, or machines and robots, even remote ones, into the augmented body of the brain.
    The researchers see these developments as background, but want to use a comparison with brain evolution in ants to explain why humans today have developed smaller brains than their ancestors of 100,000 years ago. The decrease in brain size could, so the hypothesis goes, 'result from the externalization of knowledge and the advantages of group-level decision-making, owing in part to the advent of social systems of distributed cognition and of the storage and sharing of information'."
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  3. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    Abstract
    This is a bibliometric study of the development of the open access availability of scholarly journal articles from Germany that were published in the period 2010-2018 and are indexed in the Web of Science. Particular attention is paid to the question of whether, and to what extent, the open access profiles of universities and non-university research institutions in Germany differ from one another.
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all articles considered during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. By enabling data-driven decision-making on the implementation of OA in Germany at the institutional level, the results of this study can furthermore serve as a baseline for assessing the impact that recent transformative agreements with major publishers are likely to have on scholarly communication.
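The multi-source classification described above can be sketched as a simple decision rule, checking the gold-OA journal list first and repository copies second. The field names below are illustrative stand-ins, not the study's actual schema:

```python
def classify_oa(article):
    """Hedged sketch of per-article OA classification in the spirit of the
    study: fully OA journal first, then repository-based (green) OA, else
    closed. All field names are assumptions for illustration."""
    if article.get("in_gold_oa_journal_list"):       # e.g. an ISSN-Gold-OA match
        return "gold"
    for loc in article.get("oa_locations", []):       # e.g. Unpaywall-style locations
        if loc.get("host_type") == "repository":
            return ("green-subject" if loc.get("is_subject_repository")
                    else "green-institutional")
    return "closed"

print(classify_oa({"oa_locations": [{"host_type": "repository"}]}))  # green-institutional
```

A real pipeline would additionally deduplicate locations per article and pick one category by a fixed precedence, which is what makes the reported shares sum sensibly.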
    Footnote
    The article is accompanied by an interactive data supplement with which the OA shares can be compared at the level of individual institutions: https://subugoe.github.io/oauni/articles/supplement.html. The work arose from a collaboration between the BMBF projects OAUNI and OASE in the funding line "Quantitative Wissenschaftsforschung". https://www.wihoforschung.de/de/quantitative-wissenschaftsforschung-1573.php.
  4. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.00
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it to match the specifics of a particular user task. Such reduction aims to simplify knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology, containing the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying the optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
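The selection step can be sketched greedily: score each ontology concept by its best semantic similarity to any task term and keep the top k. This is a stand-in for the paper's combinatorial optimization, and the character-overlap similarity below is a toy assumption, not a real semantic measure:

```python
def task_thesaurus(concepts, task_terms, similarity, k):
    """Keep the k ontology concepts most significant for the user task,
    scoring each concept by its best similarity to any task term."""
    ranked = sorted(concepts,
                    key=lambda c: max(similarity(c, t) for t in task_terms),
                    reverse=True)
    return ranked[:k]

def jaccard(a, b):
    """Toy character-set overlap, purely for illustration."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

print(task_thesaurus(["thesaurus", "ontology", "retrieval"],
                     ["ontologies"], jaccard, k=1))  # -> ['ontology']
```

The exact optimization in the paper additionally balances thesaurus size against information loss, which a plain top-k cut ignores.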
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  5. Petras, V.: ¬The identity of information science (2023) 0.00
    Abstract
    Purpose: This paper offers a definition of the core of information science, which encompasses most research in the field. The definition provides a unique identity for information science and positions it in the disciplinary universe. Design/methodology/approach: After motivating the objective, a definition of the core and an explanation of its key aspects are provided. The definition is related to other definitions of information science before controversial discourse aspects are briefly addressed: discipline vs. field, science vs. humanities, library vs. information science and application vs. theory. Interdisciplinarity as an often-assumed foundation of information science is challenged. Findings: Information science is concerned with how information is manifested across space and time. Information is manifested to facilitate and support the representation, access, documentation and preservation of ideas, activities, or practices, and to enable different types of interactions. Research and professional practice encompass the infrastructures (institutions and technology) and phenomena and practices around manifested information across space and time as its core contribution to the scholarly landscape. Information science collaborates with other disciplines to work on complex information problems that need multi- and interdisciplinary approaches to address them. Originality/value: The paper argues that new information problems may change the core of the field, but throughout its existence, the discipline has remained quite stable in its central focus, yet proved to be highly adaptive to the tremendous changes in the forms, practices, institutions and technologies around and for manifested information.
    Field
    Informationswissenschaft
    Footnote
    Contribution to a Festschrift for Michael Buckland.
  6. Shiri, A.; Kelly, E.J.; Kenfield, A.; Woolcott, L.; Masood, K.; Muglia, C.; Thompson, S.: ¬A faceted conceptualization of digital object reuse in digital repositories (2020) 0.00
    Abstract
    In this paper, we provide an introduction to the concept of digital object reuse and its various connotations in the context of current digital libraries, archives, and repositories. We will then propose a faceted categorization of the various types, contexts, and cases for digital object reuse in order to facilitate understanding and communication and to provide a conceptual framework for the assessment of digital object reuse by various cultural heritage and cultural memory organizations.
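A faceted categorization like the one proposed can be pictured as records keyed by facet, queryable on any combination of facets. The facet names and values below are illustrative only, not the authors' final scheme:

```python
# Illustrative facets for one reuse case; the paper derives its own categorization.
reuse_case = {
    "object_type": "image",
    "reuse_type": "derivative",      # vs. e.g. verbatim republication
    "context": "education",
    "organization": "digital library",
}

def matches(case, **facets):
    """True if the reuse case has the given value on every queried facet."""
    return all(case.get(f) == v for f, v in facets.items())

print(matches(reuse_case, context="education", object_type="image"))  # True
```

The point of the faceted structure is exactly this kind of composability: assessment questions ("derivative reuse in education") become facet conjunctions rather than entries in a flat typology.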
    Footnote
    Conference paper: International Society for Knowledge Organization (ISKO), Proceedings of the Sixteenth International ISKO Conference, 2020, Aalborg, Denmark. Ed. by Marianne Lykke, Tanja Svarre, Mette Skov and Daniel Martínez-Ávila. (Advances in Knowledge Organization, vol. 17).
  7. Advanced online media use (2023) 0.00
    Abstract
    Ten recommendations for the advanced use of online media. With links to historical articles and further reading.
  8. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.00
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their various arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
  9. Machado, L.; Martínez-Ávila, D.; Barcellos Almeida, M.; Borges, M.M.: Towards a moderate realistic foundation for ontological knowledge organization systems : the question of the naturalness of classifications (2023) 0.00
    Abstract
    Several authors emphasize the need for a change in classification theory due to the influence of a dogmatic and monistic ontology supported by an outdated essentialism. These claims tend to focus on the fallibility of knowledge, the need for a pluralistic view, and the theory-ladenness of observations. Regardless of the legitimacy of these concerns, there is a risk, when they are not kept moderate, of falling into the opposite relativistic extreme. Based on a narrative review of the literature, we aim to reflectively discuss the theoretical foundations that can serve as a basis for a realist position supporting pluralistic ontological classifications. The goal is to show that, contrary to rather conventional solutions, objective science-based approaches to natural classification are viable, allowing a proper distinction between ontological and taxonomic questions. Supported by critical scientific realism, we consider that such an approach is suitable for the development of ontological Knowledge Organization Systems (KOS). We believe that ontological perspectivism can provide the necessary adaptation to the different granularities of reality.
  10. Koster, L.: Persistent identifiers for heritage objects (2020) 0.00
    Abstract
    Persistent identifiers (PIDs) are essential for accessing and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices and existing PID systems, with their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented and used to address a number of practical considerations. This section concludes with a number of recommendations.
  11. Broughton, V.: Faceted classification in support of diversity : the role of concepts and terms in representing religion (2020) 0.00
    
    Abstract
    The paper examines the development of facet analysis as a methodology and the role it plays in building classifications and other knowledge-organization tools. The use of categorical analysis in areas other than library and information science is also considered. The suitability of the faceted approach for humanities documentation is explored through a critical description of the FATKS (Facet Analytical Theory in Managing Knowledge Structure for Humanities) project carried out at University College London. This research focused on building a conceptual model for the subject of religion together with a relational database and search-and-browse interfaces that would support some degree of automatic classification. The paper concludes with a discussion of the differences between the conceptual model and the vocabulary used to populate it, and how, in the case of religion, the choice of terminology can create an apparent bias in the system.
    Source
    ¬The Indexer: the international journal of indexing. 38(2020) no.3, S.247-270
  12. Chessum, K.; Haiming, L.; Frommholz, I.: ¬A study of search user interface design based on Hofstede's six cultural dimensions (2022) 0.00
    
  13. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.00
    
    Abstract
    While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality-control, mappings - has not yet surfaced, and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.
  14. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.00
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Pages
    S.169-185
  15. Gomez, J.; Allen, K.; Matney, M.; Awopetu, T.; Shafer, S.: Experimenting with a machine generated annotations pipeline (2020) 0.00
    
    Abstract
    The UCLA Library reorganized its software developers into focused subteams with one, the Labs Team, dedicated to conducting experiments. In this article we describe our first attempt at conducting a software development experiment, in which we attempted to improve our digital library's search results with metadata from cloud-based image tagging services. We explore the findings and discuss the lessons learned from our first attempt at running an experiment.
  16. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.00
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See DOI: 10.1007/978-3-030-62466-8_28.
  17. Dhillon, P.; Singh, M.: ¬An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.00
    
    Abstract
    In the blooming area of Internet technology, the concept of Internet-of-Things (IoT) holds a distinct position that interconnects a large number of smart objects. In the context of social IoT (SIoT), the argument of trust and reliability is evaluated in the presented work. The proposed framework is divided into two blocks, namely Verification Block (VB) and Evaluation Block (EB). VB defines various ontology-based relationships computed for the objects that reflect the security and trustworthiness of an accessed service. While, EB is used for the feedback analysis and proves to be a valuable step that computes and governs the success rate of the service. Support vector machine (SVM) is applied to categorise the trust-based evaluation. The security aspect of the proposed approach is comparatively evaluated for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance compared with existing architectures.
    Source
    Journal of information science. 41(2023) Jan., S.1-23
  18. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.00
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. 
The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
    Pages
    357 S
  19. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.00
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
  20. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.00
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.

Types

  • a 45
  • p 9