Search (212 results, page 1 of 11)

  • Filter: year_i:[2020 TO 2030}
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.19
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT on their ability to reproduce human-level comprehension and compelling text generation. Two task challenges, summarization and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and from the sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019 and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
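    The paper's own readability metric is not reproduced here; as a purely hypothetical illustration of the kind of writing-mechanics scoring the abstract describes, a minimal Flesch reading-ease sketch (the syllable counter is a rough vowel-group heuristic, not the paper's method):

```python
import re

def syllable_count(word: str) -> int:
    # Rough heuristic: count vowel groups, treating a trailing 'e' as silent.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch: 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

    Higher scores indicate easier text; comparing Turing's prose with chatbot output would amount to comparing such scores.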
    Source
    https://arxiv.org/abs/2212.06721
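    The relevance figure printed after each result title (0.19, 0.13, ...) is produced by Lucene's classic tf-idf similarity. A minimal sketch of one term's contribution under the standard ClassicSimilarity formulas (queryNorm and fieldNorm are engine-side normalization constants):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                               # tf = sqrt(termFreq)
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight                 # one term's contribution
```

    A document's total score then sums the contributions of its matching query terms and applies a coord factor for the fraction of query terms matched.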
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.13
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  3. Lewandowski, D.: Suchmaschinen (2023) 0.09
    Abstract
    A search engine (also: web search engine, universal search engine) is a computer system that gathers content from the World Wide Web (WWW) by crawling and makes it searchable through a user interface, presenting the results ranked by the relevance the system assumes them to have. Unlike other information systems, search engines therefore do not build on a clearly delimited collection of data but assemble it from the documents scattered across the WWW. This collection is made accessible through a user interface designed so that lay users can operate the search engine without difficulty. The hits returned for a query are sorted so that the documents most relevant from the system's point of view are shown first. This sorting relies on complex evaluation procedures that rest on numerous assumptions about the relevance of documents with respect to queries.
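    The crawl-index-rank pipeline described above can be sketched as a toy inverted index; the page corpus here is a hard-coded stand-in for content a crawler would fetch over HTTP, and the summed-term-frequency ranking is a deliberately naive proxy for real relevance scoring:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for crawled pages.
pages = {
    "a.html": "web search engines crawl the web",
    "b.html": "a search engine ranks results by relevance",
    "c.html": "crawling builds the index",
}

# Indexing: term -> {page: term frequency}
index = defaultdict(Counter)
for url, text in pages.items():
    for term in text.lower().split():
        index[term][url] += 1

def search(query: str) -> list:
    # Naive ranking: summed term frequency as a stand-in for relevance.
    scores = Counter()
    for term in query.lower().split():
        for url, tf in index[term].items():
            scores[url] += tf
    return [url for url, _ in scores.most_common()]
```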
  4. Lewandowski, D.: Suchmaschinen verstehen : 3. vollständig überarbeitete und erweiterte Aufl. (2021) 0.08
    RSWK
    Suchmaschine
    World Wide Web Recherche
    Subject
    Suchmaschine
    World Wide Web Recherche
  5. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.04
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
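    A hand-rolled sketch of the SKOS constructs named above (preferred/alternate labels, broader, concept scheme), using plain (subject, predicate, object) tuples; the example.org URIs are hypothetical, and real applications would use an RDF library instead:

```python
SKOS = "http://www.w3.org/2004/02/skos/core#"

scheme  = "http://example.org/scheme/animals"      # hypothetical URIs
animals = "http://example.org/concept/animals"
mammals = "http://example.org/concept/mammals"

triples = [
    (animals, SKOS + "prefLabel",    ("Animals", "en")),   # preferred label
    (animals, SKOS + "altLabel",     ("Fauna", "en")),     # alternate label
    (animals, SKOS + "topConceptOf", scheme),              # broadest concept
    (mammals, SKOS + "prefLabel",    ("Mammals", "en")),
    (mammals, SKOS + "broader",      animals),             # hierarchy link
    (mammals, SKOS + "inScheme",     scheme),
]

def narrower(concept: str) -> list:
    # skos:narrower is the inverse of skos:broader
    return [s for s, p, o in triples
            if p == SKOS + "broader" and o == concept]
```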
  6. Peters, I.: Folksonomies & Social Tagging (2023) 0.04
    Abstract
    Research on, and the use of, folksonomies and social tagging as user-centered forms of subject indexing and knowledge representation peaked in the roughly ten years from about 2005. This was driven by the development and spread of the Social Web and the growing use of social media platforms (see chapter E 8, Social Media and Social Web). Both led to a rapid increase in the amount of potential information findable on or via the World Wide Web and generated strong demand for scalable methods of subject indexing.
  7. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.04
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and thereby becomes more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps broaden the potential of data visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into semantic web technologies; in addition, it elucidates the issues as well as the solutions regarding the Semantic Web. The chapter highlights the semantic web architecture in detail while also comparing it with the traditional search system, classifying the architecture into three major pillars: RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology, and it illustrates different approaches of semantic web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
    Theme
    Semantic Web
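    The contrast the chapter draws between traditional keyword search and semantic search can be illustrated with a toy triple store and a wildcard triple-pattern query; the facts are illustrative, and the wildcard stands in for a SPARQL variable:

```python
# Minimal triple store: (subject, predicate, object) statements.
triples = [
    ("TimBernersLee", "invented", "WorldWideWeb"),
    ("TimBernersLee", "proposed", "SemanticWeb"),
    ("SemanticWeb",   "extends",  "WorldWideWeb"),
]

def query(s=None, p=None, o=None):
    # None acts as a wildcard, like ?x in SPARQL
    return [(s2, p2, o2) for (s2, p2, o2) in triples
            if s in (None, s2) and p in (None, p2) and o in (None, o2)]
```

    Unlike a keyword match, the pattern query can answer structured questions such as "everything stated about TimBernersLee".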
  8. Hong, H.; Ye, Q.: Crowd characteristics and crowd wisdom : evidence from an online investment community (2020) 0.04
    0.035573866 = product of:
      0.106721595 = sum of:
        0.02566505 = weight(_text_:web in 5763) [ClassicSimilarity], result of:
          0.02566505 = score(doc=5763,freq=2.0), product of:
            0.14235806 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.043621145 = queryNorm
            0.18028519 = fieldWeight in 5763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5763)
        0.08105654 = weight(_text_:2.0 in 5763) [ClassicSimilarity], result of:
          0.08105654 = score(doc=5763,freq=2.0), product of:
            0.252991 = queryWeight, product of:
              5.799733 = idf(docFreq=363, maxDocs=44218)
              0.043621145 = queryNorm
            0.320393 = fieldWeight in 5763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.799733 = idf(docFreq=363, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5763)
      0.33333334 = coord(2/6)
    
    Abstract
    Fueled by the explosive growth of Web 2.0 and social media, online investment communities have become a popular venue for individual investors to interact with each other. Investor opinions extracted from online investment communities capture "crowd wisdom" and have begun to play an important role in financial markets. Existing research confirms the importance of crowd wisdom in stock predictions, but fails to investigate factors influencing crowd performance (that is, crowd prediction accuracy). In order to help improve crowd performance, our research strives to investigate the impact of crowd characteristics on crowd performance. We conduct an empirical study using a large data set collected from a popular online investment community, StockTwits. Our findings show that experience diversity, participant independence, and network decentralization are all positively related to crowd performance. Furthermore, crowd size moderates the influence of crowd characteristics on crowd performance. From a theoretical perspective, our work enriches extant literature by empirically testing the relationship between crowd characteristics and crowd performance. From a practical perspective, our findings help investors better evaluate social sensors embedded in user-generated stock predictions, based upon which they can make better investment decisions.
  9. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.03
    Abstract
    Linked Open Data (LOD) is a set of principles for publishing structured, connected data available for reuse under an open license. The objective of this paper is to analyze the publication of bibliographic data as LOD, producing theoretical-methodological recommendations for publishing such data, in an approach based on the World Wide Web Consortium's ten best practices for publishing LOD. The starting point was a systematic literature review in which initiatives for publishing bibliographic data as LOD were identified. An empirical study of these institutions was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
  10. Habermas, J.: Überlegungen und Hypothesen zu einem erneuten Strukturwandel der politischen Öffentlichkeit : Ein neuer Strukturwandel der Öffentlichkeit? Hrsg.: M. Seeliger u. S. Sevignani (2021) 0.03
    Footnote
    Cf.: El Ouassil, S.: Habermas und die Demokratie 2.0: Philosoph über Soziale Medien. At: https://www.spiegel.de/kultur/juergen-habermas-strukturwandel-der-oeffentlichkeit-in-der-2-0-version-a-2e683f52-3ccd-4985-a750-5e1a1823ad08.
  11. Sfakakis, M.; Zapounidou, S.; Papatheodorou, C.: Mapping derivative relationships from BIBFRAME 2.0 to RDA (2020) 0.03
    Abstract
    The mapping from BIBFRAME 2.0 to Resource Description and Access (RDA) is studied focusing on core entities, inherent relationships, and derivative relationships. The proposed mapping rules are evaluated with two gold datasets. Findings indicate that 1) core entities, inherent and derivative relationships may be mapped to RDA, 2) the use of the bf:hasExpression property may cluster bf:Works with the same ideational content and enable their mapping to RDA Works with their Expressions, and 3) cataloging policies have a significant impact on the interoperability between RDA and BIBFRAME datasets. This work complements the investigation of semantic interoperability between the two models previously presented in this journal.
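    The bf:hasExpression clustering idea in finding 2 can be sketched roughly as follows; the record shapes and identifiers are hypothetical and are not the paper's actual mapping rules:

```python
from collections import defaultdict

# BIBFRAME Works sharing a bf:hasExpression target are grouped under one
# RDA Work, whose cluster members become that Work's Expressions.
bf_works = [
    {"id": "bf:w1", "hasExpression": "bf:w0"},
    {"id": "bf:w2", "hasExpression": "bf:w0"},   # same ideational content as w1
    {"id": "bf:w3", "hasExpression": "bf:w9"},
]

def cluster_to_rda(works: list) -> dict:
    clusters = defaultdict(list)
    for w in works:
        clusters[w["hasExpression"]].append(w["id"])
    # one RDA Work per cluster; the members are its Expressions
    return {f"rda:Work<{key}>": members for key, members in clusters.items()}
```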
  12. Asubiaro, T.V.; Onaolapo, S.: A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.03
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
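    The database shares reported in the abstract follow directly from the raw journal counts; a quick arithmetic check:

```python
# Raw counts from the study: 2,229 active African journals in total.
total_african_journals = 2229
coverage = {"Web of Science": 166, "Scopus": 174, "CrossRef": 1017}

def pct(n: int, total: int) -> float:
    return round(100 * n / total, 1)

shares = {db: pct(n, total_african_journals) for db, n in coverage.items()}
# shares == {"Web of Science": 7.4, "Scopus": 7.8, "CrossRef": 45.6}
```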
  13. Yu, L.; Fan, Z.; Li, A.: A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.03
    Abstract
    Purpose: The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication. Design/methodology/approach: The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users. Findings: The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5, and 44 units, respectively. Research limitations/implications: The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work by IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that by the Library of Congress's BIBFRAME 2.0 model. Practical implications: It calls for reconsideration of existing operational units in a variety of library and information activities. Originality/value: The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  14. Lee, H.S.; Arnott Smith, C.: A comparative mixed methods study on health information seeking among US-born/US-dwelling, Korean-born/US-dwelling, and Korean-born/Korean-dwelling mothers (2022) 0.02
    Abstract
    More knowledge and a better understanding of health information seeking are necessary, especially in these unprecedented times due to the COVID-19 pandemic. Using Sonnenwald's theoretical concept of information horizons, this study aimed to uncover patterns in mothers' source preferences related to their children's health. Online surveys were completed by 851 mothers (255 US-born/US-dwelling, 300 Korean-born/US-dwelling, and 296 Korean-born/Korean-dwelling), and supplementary in-depth interviews with 24 mothers were conducted and analyzed. Results indicate that there were remarkable differences between the mothers' information source preference and their actual source use. Moreover, there were many similarities between the two Korean-born groups concerning health information-seeking behavior. For instance, those two groups sought health information more frequently than US-born/US-dwelling mothers. Their sources frequently included blogs or online forums as well as friends with children, whereas US-born/US-dwelling mothers frequently used doctors or nurses as information sources. Mothers in the two Korean-born samples preferred the World Wide Web most as their health information source, while the US-born/US-dwelling mothers preferred doctors the most. Based on these findings, information professionals should guide mothers of specific ethnicities and nationalities to trustworthy sources considering both their usage and preferences.
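    The scoring trees in this listing are Lucene's ClassicSimilarity "explain" output. As a sanity check, the arithmetic for entry 14 can be reproduced in a few lines (a minimal sketch; the formulas follow Lucene's documented ClassicSimilarity, and all constants are copied from the tree above):

    ```python
    import math

    # Constants as printed in the scoring tree for doc 614.
    MAX_DOCS = 44218
    QUERY_NORM = 0.043621145
    FIELD_NORM = 0.0390625

    def idf(doc_freq: int, max_docs: int = MAX_DOCS) -> float:
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def tf(freq: float) -> float:
        # ClassicSimilarity: tf(t in d) = sqrt(freq)
        return math.sqrt(freq)

    # Per-term score = queryWeight * fieldWeight, where
    # queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm.
    field_weight = tf(2.0) * idf(1430) * FIELD_NORM          # "wide", freq=2
    query_weight = idf(1430) * QUERY_NORM
    score_wide = query_weight * field_weight

    # Same for "web" (docFreq=4597, freq=2), then the document score
    # sums the matching terms and applies coord(2/6) = matched/total clauses.
    score_web = (idf(4597) * QUERY_NORM) * (tf(2.0) * idf(4597) * FIELD_NORM)
    doc_score = (score_wide + score_web) * (2 / 6)

    print(round(field_weight, 8))  # ~0.24476713, as in the tree
    print(round(score_wide, 8))    # ~0.04730731
    print(round(doc_score, 8))     # ~0.02432412, the 0.02 shown for entry 14
    ```

    The same recipe reproduces every tree on this page; only docFreq, freq, fieldNorm, and the coord fraction change per entry.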
  15. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.02
    0.023833144 = product of:
      0.07149943 = sum of:
        0.050814208 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.050814208 = score(doc=40,freq=4.0), product of:
            0.14235806 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.043621145 = queryNorm
            0.35694647 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.02068522 = product of:
          0.04137044 = sum of:
            0.04137044 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.04137044 = score(doc=40,freq=2.0), product of:
                0.15275382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043621145 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy, even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  16. Meineck, S.: Gesichter-Suchmaschine PimEyes bricht das Schweigen : Neuer Chef (2022) 0.02
    0.022239868 = product of:
      0.1334392 = sum of:
        0.1334392 = product of:
          0.2668784 = sum of:
            0.2668784 = weight(_text_:suchmaschine in 418) [ClassicSimilarity], result of:
              0.2668784 = score(doc=418,freq=6.0), product of:
                0.24664505 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.043621145 = queryNorm
                1.0820342 = fieldWeight in 418, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.078125 = fieldNorm(doc=418)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    PimEyes undermines the anonymity of anyone whose face can be found on the internet. After broad criticism, the Polish search engine relocated to the Seychelles. Now PimEyes has a new CEO and is speaking out publicly.
    Source
    https://netzpolitik.org/2022/neuer-chef-gesichter-suchmaschine-pimeyes-bricht-das-schweigen/?utm_source=pocket-newtab-global-de-DE
  17. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Why are coauthored academic articles more cited : higher quality or larger audience? (2023) 0.02
    0.020694155 = product of:
      0.062082466 = sum of:
        0.04730731 = weight(_text_:wide in 995) [ClassicSimilarity], result of:
          0.04730731 = score(doc=995,freq=2.0), product of:
            0.19327477 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.043621145 = queryNorm
            0.24476713 = fieldWeight in 995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=995)
        0.014775158 = product of:
          0.029550316 = sum of:
            0.029550316 = weight(_text_:22 in 995) [ClassicSimilarity], result of:
              0.029550316 = score(doc=995,freq=2.0), product of:
                0.15275382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043621145 = queryNorm
                0.19345059 = fieldWeight in 995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=995)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Collaboration is encouraged because it is believed to improve academic research, supported by indirect evidence in the form of coauthored articles being more cited. Nevertheless, this might not reflect quality but increased self-citations or the "audience effect": citations from increased awareness through multiple author networks. We address this with the first science-wide investigation into whether author numbers associate with journal article quality, using expert peer quality judgments for 122,331 articles from the 2014-20 UK national assessment. Spearman correlations between author numbers and quality scores show moderately strong positive associations (0.2-0.4) in the health, life, and physical sciences, but weak or no positive associations in engineering and social sciences, with weak negative/positive or no associations in various arts and humanities, and a possible negative association for decision sciences. This gives the first systematic evidence that a greater number of authors associates with higher quality journal articles in the majority of academia outside the arts and humanities, at least for the UK. Positive associations between team size and citation counts in areas with little association between team size and quality also show that audience effects or other nonquality factors account for the higher citation rates of coauthored articles in some fields.
    Date
    22. 6.2023 18:11:50
  18. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.02
    0.020694155 = product of:
      0.062082466 = sum of:
        0.04730731 = weight(_text_:wide in 1012) [ClassicSimilarity], result of:
          0.04730731 = score(doc=1012,freq=2.0), product of:
            0.19327477 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.043621145 = queryNorm
            0.24476713 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.014775158 = product of:
          0.029550316 = sum of:
            0.029550316 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.029550316 = score(doc=1012,freq=2.0), product of:
                0.15275382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043621145 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has been emerging. However, these statistically important phrases are contributing increasingly less to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers to quickly grasp the paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers to bridge the semantic gap between them and the information producers, and verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avgs of , , and on the Paper with Code dataset are up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
  19. Weiß, E.-M.: ChatGPT soll es richten : Microsoft baut KI in Suchmaschine Bing ein (2023) 0.02
    0.02009808 = product of:
      0.12058848 = sum of:
        0.12058848 = product of:
          0.24117696 = sum of:
            0.24117696 = weight(_text_:suchmaschine in 866) [ClassicSimilarity], result of:
              0.24117696 = score(doc=866,freq=10.0), product of:
                0.24664505 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.043621145 = queryNorm
                0.9778302 = fieldWeight in 866, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=866)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    ChatGPT, the artificial intelligence of the moment, was developed by OpenAI. And OpenAI has received substantial backing from Microsoft in the past. Now it is time to profit: the AI is to be built into the Bing search engine, putting it in direct competition with Google's search algorithms and intelligence. Bing has not been particularly successful there so far. As "The Information" reports, citing two insiders, Microsoft plans to build ChatGPT into its search engine Bing. The new, intelligent search could be available as early as March. Microsoft had previously announced the integration of the image generator DALL·E 2 into its search engine at its in-house Ignite conference, though without a concrete launch date. If you ask ChatGPT itself, the chatbot does not yet confirm its future role. But it does know about the potential advantages.
    Source
    https://www.heise.de/news/ChatGPT-soll-es-richten-Microsoft-baut-KI-in-Suchmaschine-Bing-ein-7447837.html
  20. Huber, W.: Menschen, Götter und Maschinen : eine Ethik der Digitalisierung (2022) 0.02
    0.019459296 = product of:
      0.058377884 = sum of:
        0.037845846 = weight(_text_:wide in 752) [ClassicSimilarity], result of:
          0.037845846 = score(doc=752,freq=2.0), product of:
            0.19327477 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.043621145 = queryNorm
            0.1958137 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.02053204 = weight(_text_:web in 752) [ClassicSimilarity], result of:
          0.02053204 = score(doc=752,freq=2.0), product of:
            0.14235806 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.043621145 = queryNorm
            0.14422815 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
      0.33333334 = coord(2/6)
    
    Content
    Preface -- 1. The digital age -- A turning point -- The dominance of the printed book comes to an end -- When does the digital age begin? -- 2. Between euphoria and apocalypse -- Digitalization. Just. Do it. -- Euphoria -- Apocalypse -- The ethics of responsibility -- The human being as the subject of ethics -- Responsibility as a principle -- 3. Digitalized everyday life in a globalized world -- From the World Wide Web to the Internet of Things -- Mobile internet and digital education -- Digital platforms and their strategies -- Big data and informational self-determination -- 4. Boundary crossings -- The erosion of the private -- The deformation of the public -- The lowering of inhibition thresholds -- The disappearance of reality -- Truth in the infosphere -- 5. The future of work -- Industrial revolutions -- Work 4.0 -- Ethics 4.0 -- 6. Digital intelligence -- Can computers write poetry? -- Stronger than humans? -- Machine learning -- A lasting difference -- Ethical principles for dealing with digital intelligence -- Medicine as an example -- 7. Human dignity in the digital age -- Affronts or revolutions -- Transhumanism and posthumanism -- Is there empathy without humans? -- Who is autonomous: human or machine? -- A humanism of responsibility -- 8. The future of Homo sapiens -- The deification of the human being -- Homo deus -- God and humanity in the digital age -- The transformation of humanity -- Bibliography -- Index of names.

Languages

  • e 160
  • d 51
  • pt 1

Types

  • a 189
  • el 41
  • m 9
  • p 6
  • s 3
  • x 2
  • A 1
  • EL 1