Search (246 results, page 1 of 13)

  • year_i:[2020 TO 2030}
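    The bracket syntax in this facet is Lucene/Solr range-query notation: the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive, so the filter keeps publication years 2020 through 2029. A hedged sketch of how such a filter is typically passed to a Solr-style endpoint follows; the host, core name, and query terms are placeholders, not details taken from this page.

      import requests

      params = {
          "q": "semantic web",                # placeholder; the original query terms are not shown here
          "fq": "year_i:[2020 TO 2030}",      # inclusive 2020, exclusive 2030
          "rows": 20,
          "wt": "json",
      }
      resp = requests.get("http://localhost:8983/solr/literature/select",
                          params=params, timeout=10)
      print(resp.json()["response"]["numFound"])   # e.g. 246 hits, as in the listing above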
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.25
    0.24772668 = product of:
      0.49545336 = sum of:
        0.06432813 = product of:
          0.19298437 = sum of:
            0.19298437 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.19298437 = score(doc=862,freq=2.0), product of:
                0.34337753 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04050213 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.19298437 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.19298437 = score(doc=862,freq=2.0), product of:
            0.34337753 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04050213 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.19298437 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.19298437 = score(doc=862,freq=2.0), product of:
            0.34337753 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04050213 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.045156486 = product of:
          0.09031297 = sum of:
            0.09031297 = weight(_text_:2.0 in 862) [ClassicSimilarity], result of:
              0.09031297 = score(doc=862,freq=2.0), product of:
                0.23490155 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.04050213 = queryNorm
                0.3844716 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
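    The indented tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch of how those numbers combine, the following Python snippet recomputes the contribution of the "3a" term to document 862 from the values shown in the tree; the formulas are the standard ClassicSimilarity ones, and the last decimals can drift slightly because Lucene does this arithmetic in single precision.

      import math

      # Values taken from the explain tree for term "3a" in doc 862
      doc_freq, max_docs = 24, 44218
      query_norm = 0.04050213
      field_norm = 0.046875            # length normalization stored at index time
      term_freq = 2.0

      idf = math.log(max_docs / (doc_freq + 1)) + 1      # ~8.478011
      tf = math.sqrt(term_freq)                          # ~1.4142135
      query_weight = idf * query_norm                    # ~0.34337753
      field_weight = tf * idf * field_norm               # ~0.56201804
      term_score = query_weight * field_weight           # ~0.19298437

      # The document total is the sum of such clause scores scaled by
      # coord(matching clauses / total clauses); here coord(4/8) = 0.5.
      print(round(term_score, 6), 0.49545336 * 0.5)      # -> 0.192984 0.24772668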
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019 and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work is a metric and a simple grammatical feature set for understanding the writing mechanics of chatbots, used to evaluate their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
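    The abstract above scores text against the OpenAI GPT-2 Output Detector. A hedged sketch of how such a check can be reproduced with the openly released RoBERTa-based detector is shown below; the model identifier and the label strings are assumptions about that public checkpoint, not details taken from the paper.

      from transformers import pipeline

      # Assumption: the public GPT-2 output detector checkpoint on the Hugging Face hub
      detector = pipeline("text-classification",
                          model="openai-community/roberta-base-openai-detector")

      sample = "This research revisits the classic Turing test ..."
      result = detector(sample, truncation=True)[0]
      # result looks like {"label": "Real", "score": 0.97}; label names depend on the checkpoint
      print(result["label"], round(result["score"], 3))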
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.20
    0.19953866 = product of:
      0.39907733 = sum of:
        0.05360677 = product of:
          0.1608203 = sum of:
            0.1608203 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.1608203 = score(doc=1000,freq=2.0), product of:
                0.34337753 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04050213 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.1608203 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.1608203 = score(doc=1000,freq=2.0), product of:
            0.34337753 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04050213 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.1608203 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.1608203 = score(doc=1000,freq=2.0), product of:
            0.34337753 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04050213 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.023829939 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
          0.023829939 = score(doc=1000,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.18028519 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5 = coord(4/8)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
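    The presentation link above was originally wrapped in a Google redirect URL, with the real target percent-encoded in its url parameter. A small sketch of how such a wrapper can be unpacked with the standard library follows; the helper name is invented for illustration.

      from urllib.parse import urlparse, parse_qs

      def unwrap_google_redirect(link: str) -> str:
          """Return the target hidden in a google.com/url?...&url=... redirect."""
          query = parse_qs(urlparse(link).query)
          # parse_qs percent-decodes the embedded target once
          return query["url"][0] if "url" in query else link

      wrapped = ("https://www.google.com/url?sa=i&source=web"
                 "&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510"
                 "%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1")
      print(unwrap_google_redirect(wrapped))
      # -> https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1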
  3. Lewandowski, D.: Suchmaschinen verstehen : 3. vollständig überarbeitete und erweiterte Aufl. (2021) 0.13
    0.128211 = product of:
      0.256422 = sum of:
        0.11385475 = weight(_text_:recherche in 4016) [ClassicSimilarity], result of:
          0.11385475 = score(doc=4016,freq=6.0), product of:
            0.21953142 = queryWeight, product of:
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.04050213 = queryNorm
            0.5186262 = fieldWeight in 4016, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
        0.0467477 = weight(_text_:world in 4016) [ClassicSimilarity], result of:
          0.0467477 = score(doc=4016,freq=4.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.30028677 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
        0.062118948 = weight(_text_:wide in 4016) [ClassicSimilarity], result of:
          0.062118948 = score(doc=4016,freq=4.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.34615302 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
        0.033700623 = weight(_text_:web in 4016) [ClassicSimilarity], result of:
          0.033700623 = score(doc=4016,freq=4.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.25496176 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
      0.5 = coord(4/8)
    
    Abstract
    Search engines are now taken for granted as tools for researching information. But how exactly do they work? The book looks at search engines from four perspectives: technology, usage, search, and societal significance. It offers a clearly structured and accessible introduction to the topic, with numerous figures that allow the material to be grasped quickly. Ranking procedures and user behavior are presented, along with fundamental discussions of the search engine market, search engine optimization, and the role of search engines as technical information intermediaries. The book is aimed at anyone who wants to gain a comprehensive understanding of these search tools, including search engine optimizers, developers, information scientists, librarians, and online marketing managers. For the third edition the text was completely revised, and all statistics and sources were brought up to date.
    RSWK
    World Wide Web Recherche
    Subject
    World Wide Web Recherche
  4. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.07
    0.067563586 = product of:
      0.18016957 = sum of:
        0.056097243 = weight(_text_:world in 1094) [ClassicSimilarity], result of:
          0.056097243 = score(doc=1094,freq=4.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.36034414 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.07454273 = weight(_text_:wide in 1094) [ClassicSimilarity], result of:
          0.07454273 = score(doc=1094,freq=4.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.4153836 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.0495296 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
          0.0495296 = score(doc=1094,freq=6.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.37471575 = fieldWeight in 1094, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
      0.375 = coord(3/8)
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
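    As a small, hedged illustration of the SKOS data model described above (concepts with preferred and alternate labels, broader/narrower relations, and a concept scheme), the following snippet builds a two-concept vocabulary with the rdflib Python library; the URIs and labels are invented for the example.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/vocab/")   # hypothetical vocabulary namespace
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      scheme = EX["animals"]
      g.add((scheme, RDF.type, SKOS.ConceptScheme))

      mammal, cat = EX["mammal"], EX["cat"]
      for concept, label in [(mammal, "Mammals"), (cat, "Cats")]:
          g.add((concept, RDF.type, SKOS.Concept))
          g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
          g.add((concept, SKOS.inScheme, scheme))

      g.add((cat, SKOS.altLabel, Literal("Felines", lang="en")))
      g.add((cat, SKOS.broader, mammal))            # hierarchy: cat -> mammal
      g.add((mammal, SKOS.narrower, cat))
      g.add((scheme, SKOS.hasTopConcept, mammal))   # broadest conceptual level

      print(g.serialize(format="turtle"))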
  5. Peters, I.: Folksonomies & Social Tagging (2023) 0.06
    0.062083878 = product of:
      0.16555701 = sum of:
        0.046277866 = weight(_text_:world in 796) [ClassicSimilarity], result of:
          0.046277866 = score(doc=796,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.29726875 = fieldWeight in 796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.061494615 = weight(_text_:wide in 796) [ClassicSimilarity], result of:
          0.061494615 = score(doc=796,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.342674 = fieldWeight in 796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.05778453 = weight(_text_:web in 796) [ClassicSimilarity], result of:
          0.05778453 = score(doc=796,freq=6.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.43716836 = fieldWeight in 796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
      0.375 = coord(3/8)
    
    Abstract
    Research on and use of folksonomies and social tagging as user-centered forms of subject indexing and knowledge representation peaked in the roughly ten years from about 2005 onward. This was driven by the development and spread of the Social Web and the growing use of social media platforms (see chapter E 8, Social Media and Social Web). Both led to a rapid increase in the amount of potential information findable on or via the World Wide Web and generated strong demand for scalable methods of subject indexing.
  6. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.05
    0.053424664 = product of:
      0.14246577 = sum of:
        0.026444495 = weight(_text_:world in 79) [ClassicSimilarity], result of:
          0.026444495 = score(doc=79,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.16986786 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.03513978 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.03513978 = score(doc=79,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.08088149 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.08088149 = score(doc=79,freq=36.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.375 = coord(3/8)
    
    Abstract
    With the tremendous growth in data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Data, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and is therefore more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web broadens the potential of data visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights into Semantic Web technologies and to elucidate the issues and solutions surrounding the Semantic Web. The chapter presents the Semantic Web architecture in detail while also comparing it with traditional search systems, and classifies the architecture into three major pillars, i.e., RDF, ontology, and XML. Moreover, it describes different Semantic Web tools used in the framework and technology, and illustrates different approaches of Semantic Web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
    Theme
    Semantic Web
  7. Lewandowski, D.: Suchmaschinen (2023) 0.05
    0.04980643 = product of:
      0.13281715 = sum of:
        0.03966674 = weight(_text_:world in 793) [ClassicSimilarity], result of:
          0.03966674 = score(doc=793,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.25480178 = fieldWeight in 793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
        0.05270967 = weight(_text_:wide in 793) [ClassicSimilarity], result of:
          0.05270967 = score(doc=793,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.29372054 = fieldWeight in 793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
        0.040440746 = weight(_text_:web in 793) [ClassicSimilarity], result of:
          0.040440746 = score(doc=793,freq=4.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.3059541 = fieldWeight in 793, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
      0.375 = coord(3/8)
    
    Abstract
    A search engine (also: web search engine, universal search engine) is a computer system that gathers content from the World Wide Web (WWW) by crawling and makes it searchable through a user interface, listing the results in an order determined by the relevance the system assumes for them. This means that, unlike other information systems, search engines are not built on a clearly delimited collection but assemble their data from the documents scattered across the WWW. This collection is made accessible through a user interface designed so that lay users can operate the search engine without difficulty. The hits returned for a query are sorted so that the documents the system regards as most relevant are shown to users first. This relies on complex ranking procedures that rest on numerous assumptions about the relevance of documents with respect to queries.
  8. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.05
    0.045364626 = product of:
      0.120972335 = sum of:
        0.03966674 = weight(_text_:world in 1161) [ClassicSimilarity], result of:
          0.03966674 = score(doc=1161,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.25480178 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.05270967 = weight(_text_:wide in 1161) [ClassicSimilarity], result of:
          0.05270967 = score(doc=1161,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.29372054 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.028595924 = weight(_text_:web in 1161) [ClassicSimilarity], result of:
          0.028595924 = score(doc=1161,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.21634221 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
      0.375 = coord(3/8)
    
    Abstract
    Linked Open Data (LOD) is a set of principles for publishing structured, connected data that are available for reuse under an open license. The objective of this paper is to analyze the publication of bibliographic data as LOD, producing theoretical-methodological recommendations for publishing such data, in an approach based on the World Wide Web Consortium's ten best practices for publishing LOD. The starting point was a systematic literature review, in which initiatives for publishing bibliographic data as LOD were identified. An empirical study of these institutions was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
  9. Lee, H.S.; Arnott Smith, C.: ¬A comparative mixed methods study on health information seeking among US-born/US-dwelling, Korean-born/US-dwelling, and Korean-born/Korean-dwelling mothers (2022) 0.04
    0.03780385 = product of:
      0.100810274 = sum of:
        0.03305562 = weight(_text_:world in 614) [ClassicSimilarity], result of:
          0.03305562 = score(doc=614,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.21233483 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
        0.043924723 = weight(_text_:wide in 614) [ClassicSimilarity], result of:
          0.043924723 = score(doc=614,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.24476713 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
        0.023829939 = weight(_text_:web in 614) [ClassicSimilarity], result of:
          0.023829939 = score(doc=614,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.18028519 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
      0.375 = coord(3/8)
    
    Abstract
    More knowledge and a better understanding of health information seeking are necessary, especially in these unprecedented times due to the COVID-19 pandemic. Using Sonnenwald's theoretical concept of information horizons, this study aimed to uncover patterns in mothers' source preferences related to their children's health. Online surveys were completed by 851 mothers (255 US-born/US-dwelling, 300 Korean-born/US-dwelling, and 296 Korean-born/Korean-dwelling), and supplementary in-depth interviews with 24 mothers were conducted and analyzed. Results indicate that there were remarkable differences between the mothers' information source preference and their actual source use. Moreover, there were many similarities between the two Korean-born groups concerning health information-seeking behavior. For instance, those two groups sought health information more frequently than US-born/US-dwelling mothers. Their sources frequently included blogs or online forums as well as friends with children, whereas US-born/US-dwelling mothers frequently used doctors or nurses as information sources. Mothers in the two Korean-born samples preferred the World Wide Web most as their health information source, while the US-born/US-dwelling mothers preferred doctors the most. Based on these findings, information professionals should guide mothers of specific ethnicities and nationalities to trustworthy sources considering both their usage and preferences.
  10. Bärnreuther, K.: Informationskompetenz-Vermittlung für Schulklassen mit Wikipedia und dem Framework Informationskompetenz in der Hochschulbildung (2021) 0.03
    0.032004215 = product of:
      0.12801686 = sum of:
        0.111554414 = weight(_text_:recherche in 299) [ClassicSimilarity], result of:
          0.111554414 = score(doc=299,freq=4.0), product of:
            0.21953142 = queryWeight, product of:
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.04050213 = queryNorm
            0.50814784 = fieldWeight in 299, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.046875 = fieldNorm(doc=299)
        0.016462438 = product of:
          0.032924876 = sum of:
            0.032924876 = weight(_text_:22 in 299) [ClassicSimilarity], result of:
              0.032924876 = score(doc=299,freq=2.0), product of:
                0.14183156 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04050213 = queryNorm
                0.23214069 = fieldWeight in 299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=299)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The Framework for Information Literacy for Higher Education also lends itself as a framework for library instruction aimed at school classes: although it was designed for offerings at colleges and universities, the upper grades of German Gymnasien prepare students for academic careers, so library offerings for prospective students can and should already be aligned with the Framework. Teaching information literacy to pupils in a practical, real-world way can succeed with the Framework as the didactic frame and, in practice, with the example of Wikipedia, the online encyclopedia that is as popular with learners and teachers as it is frequently criticized. Not least because of the numerous pandemic-related library closures, prospective school leavers should be trained to become reflective and critical users of online search. Within the Framework, information literacy can be taught hands-on using Wikipedia as an example, and participants in our courses can transfer what they learn from Wikipedia to online research in general and to all other areas of scholarly work.
    Source
    o-bib: Das offene Bibliotheksjournal. 8(2021) Nr.2, S.1-22
  11. Christensen, A.: Wissenschaftliche Literatur entdecken : was bibliothekarische Discovery-Systeme von der Konkurrenz lernen und was sie ihr zeigen können (2022) 0.03
    0.0313474 = product of:
      0.1253896 = sum of:
        0.092027694 = weight(_text_:recherche in 833) [ClassicSimilarity], result of:
          0.092027694 = score(doc=833,freq=2.0), product of:
            0.21953142 = queryWeight, product of:
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.04050213 = queryNorm
            0.41920057 = fieldWeight in 833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.0546875 = fieldNorm(doc=833)
        0.033361915 = weight(_text_:web in 833) [ClassicSimilarity], result of:
          0.033361915 = score(doc=833,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.25239927 = fieldWeight in 833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=833)
      0.25 = coord(2/8)
    
    Abstract
    In recent years the range of academic search engines for finding scholarly literature in all fields has grown considerably, complementing popular commercial offerings such as Web of Science or Scopus. The article sets out the main differences between library discovery systems and academic search engines such as BASE, Dimensions, or OpenAlex and discusses ways in which the two can benefit from each other. These development perspectives concern aspects such as the contextualization of knowledge, data modeling, automatic data enrichment, and the tailoring of search spaces.
  12. Huber, W.: Menschen, Götter und Maschinen : eine Ethik der Digitalisierung (2022) 0.03
    0.030243086 = product of:
      0.08064823 = sum of:
        0.026444495 = weight(_text_:world in 752) [ClassicSimilarity], result of:
          0.026444495 = score(doc=752,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.16986786 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.03513978 = weight(_text_:wide in 752) [ClassicSimilarity], result of:
          0.03513978 = score(doc=752,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.1958137 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.01906395 = weight(_text_:web in 752) [ClassicSimilarity], result of:
          0.01906395 = score(doc=752,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.14422815 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
      0.375 = coord(3/8)
    
    Content
    Preface -- 1. The digital age -- A turning point -- The dominance of print comes to an end -- When does the digital age begin? -- 2. Between euphoria and apocalypse -- Digitalization. Just. Do it. -- Euphoria -- Apocalypse -- The ethics of responsibility -- The human being as the subject of ethics -- Responsibility as a principle -- 3. Digitalized everyday life in a globalized world -- From the World Wide Web to the Internet of Things -- Mobile internet and digital education -- Digital platforms and their strategies -- Big data and informational self-determination -- 4. Crossing boundaries -- The erosion of the private -- The deformation of the public -- The lowering of inhibition thresholds -- The disappearance of reality -- Truth in the infosphere -- 5. The future of work -- Industrial revolutions -- Work 4.0 -- Ethics 4.0 -- 6. Digital intelligence -- Can computers write poetry? -- Stronger than humans? -- Machine learning -- A lasting difference -- Ethical principles for dealing with digital intelligence -- Medicine as an example -- 7. Human dignity in the digital age -- Affronts or revolutions -- Transhumanism and posthumanism -- Is there empathy without humans? -- Who is autonomous: human or machine? -- A humanism of responsibility -- 8. The future of Homo sapiens -- The deification of the human being -- Homo deus -- God and humanity in the digital age -- The transformation of humanity -- Bibliography -- Index of persons.
  13. Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021) 0.03
    0.026562989 = product of:
      0.07083464 = sum of:
        0.036957305 = weight(_text_:world in 423) [ClassicSimilarity], result of:
          0.036957305 = score(doc=423,freq=10.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.23739755 = fieldWeight in 423, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.021962361 = weight(_text_:wide in 423) [ClassicSimilarity], result of:
          0.021962361 = score(doc=423,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.122383565 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.011914969 = weight(_text_:web in 423) [ClassicSimilarity], result of:
          0.011914969 = score(doc=423,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.09014259 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
      0.375 = coord(3/8)
    
    Abstract
    In 2021, sharing content is easier than ever. Our lingua franca is visual: memes, infographics, TikToks. Our references cross borders and platforms, shared and remixed a hundred different ways in minutes. Digital culture is collective by default and brings us together all around the world. But as the internet reaches its "dirty 30s," what happens when pieces of digital culture that have been saved, screenshotted, and reposted for years need to retire? Let's dig into the story of one of these artifacts: the Lenna image. The Lenna image may be relatively unknown in pop culture today, but in the engineering world, it remains an icon. I first encountered the image in an undergrad class, then grad school, and then all over the sites and software I use every day as a tech worker, like GitHub, OpenCV, Stack Overflow, and Quora. To understand where the image is today, you have to understand how it got here. So, I decided to scrape Google Scholar, search, and reverse image search results to track down thousands of instances of the image across the internet (see more in the methods section).
    Lena Forsén, the real human behind the Lenna image, was first published in Playboy in 1972. Soon after, USC engineers searching for a suitable test image for their image processing research sought inspiration from the magazine. They deemed Lenna the right fit and scanned the image into digital, RGB existence. From here, the story of the image follows the story of the internet. Lenna was one of the first inhabitants of ARPANet, the internet's predecessor, and then the world wide web. While the image's reach was limited to a few research papers in the '70s and '80s, in 1991, Lenna was featured on the cover of an engineering journal alongside another popular test image, Peppers. This caught the attention of Playboy, which threatened a copyright infringement lawsuit. Engineers who had grown attached to Lenna fought back. Ultimately, they prevailed, and as a Playboy VP reflected on the drama: "We decided we should exploit this because it is a phenomenon." The Playboy controversy canonized Lenna in engineering folklore and prompted an explosion of conversation about the image. Image hits on the internet rose to a peak number in 1995.
    But despite this progress, almost 2 years later, the use of Lenna continues. The image appears on the internet in 30+ different languages in the last decade, including 10+ languages in 2021. The image's spread across digital geographies has mirrored this geographical growth, moving from mostly .org domains before 1990 to over 100 different domains today, notably .com and .edu, along with others. Within the .edu world, the Lenna image continues to appear in homework questions, class slides and to be hosted on educational and research sites, ensuring that it is passed down to new generations of engineers. Whether it's due to institutional negligence or defiance, it seems that for now, the image is here to stay.
    Content
    "Having known Lenna for almost a decade, I have struggled to understand what the story of the image means for what tech culture is and what it is becoming. To me, the crux of the Lenna story is how little power we have over our data and how it is used and abused. This threat seems disproportionately higher for women who are often overrepresented in internet content, but underrepresented in internet company leadership and decision making. Given this reality, engineering and product decisions will continue to consciously (and unconsciously) exclude our needs and concerns. While social norms are changing towards non-consensual data collection and data exploitation, digital norms seem to be moving in the opposite direction. Advancements in machine learning algorithms and data storage capabilities are only making data misuse easier. Whether the outcome is revenge porn or targeted ads, surveillance or discriminatory AI, if we want a world where our data can retire when it's outlived its time, or when it's directly harming our lives, we must create the tools and policies that empower data subjects to have a say in what happens to their data. including allowing their data to die."
  14. Hoeber, O.; Harvey, M.; Dewan Sagar, S.A.; Pointon, M.: ¬The effects of simulated interruptions on mobile search tasks (2022) 0.03
    0.026476596 = product of:
      0.07060426 = sum of:
        0.03305562 = weight(_text_:world in 563) [ClassicSimilarity], result of:
          0.03305562 = score(doc=563,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.21233483 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=563)
        0.023829939 = weight(_text_:web in 563) [ClassicSimilarity], result of:
          0.023829939 = score(doc=563,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.18028519 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=563)
        0.013718699 = product of:
          0.027437398 = sum of:
            0.027437398 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.027437398 = score(doc=563,freq=2.0), product of:
                0.14183156 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04050213 = queryNorm
                0.19345059 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    While it is clear that using a mobile device can interrupt real-world activities such as walking or driving, the effects of interruptions on mobile device use have been under-studied. We are particularly interested in how the ambient distraction of walking while using a mobile device, combined with the occurrence of simulated interruptions of different levels of cognitive complexity, affect web search activities. We have established an experimental design to study how the degree of cognitive complexity of simulated interruptions influences both objective and subjective search task performance. In a controlled laboratory study (n = 27), quantitative and qualitative data were collected on mobile search performance, perceptions of the interruptions, and how participants reacted to the interruptions, using a custom mobile eye-tracking app, a questionnaire, and observations. As expected, more cognitively complex interruptions resulted in increased overall task completion times and higher perceived impacts. Interestingly, the effect on the resumption lag or the actual search performance was not significant, showing the resiliency of people to resume their tasks after an interruption. Implications from this study enhance our understanding of how interruptions objectively and subjectively affect search task performance, motivating the need for providing explicit mobile search support to enable recovery from interruptions.
    Date
    3. 5.2022 13:22:33
  15. Scheven, E.: Qualitätssicherung in der GND (2021) 0.02
    0.02383583 = product of:
      0.09534332 = sum of:
        0.078880884 = weight(_text_:recherche in 314) [ClassicSimilarity], result of:
          0.078880884 = score(doc=314,freq=2.0), product of:
            0.21953142 = queryWeight, product of:
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.04050213 = queryNorm
            0.35931477 = fieldWeight in 314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4202437 = idf(docFreq=531, maxDocs=44218)
              0.046875 = fieldNorm(doc=314)
        0.016462438 = product of:
          0.032924876 = sum of:
            0.032924876 = weight(_text_:22 in 314) [ClassicSimilarity], result of:
              0.032924876 = score(doc=314,freq=2.0), product of:
                0.14183156 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04050213 = queryNorm
                0.23214069 = fieldWeight in 314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=314)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    What might the acronym GND stand for? Giving free rein to the imagination, one arrives at expansions such as "Golfer nehmen Datteln", "Gerne noch Details", "Glück nach Dauerstress", "Größter Nutzen Deutschlands", and many more. A more serious search leads to Gesamtnutzungsdauer (total useful life) or to a concept from electrical engineering: the voltage supplied by a power source is always referenced to a base level. In German this base level is called Masse, but in English it is called ground, or GND, and technicians know its circuit symbol. In the information-science field, however, GND stands for the Gemeinsame Normdatei (Integrated Authority File), which has had a symbol of its own since 2020. Since the Gemeinsame Normdatei (below simply GND) is also an instrument of subject indexing, its strengths and weaknesses influence the quality of subject indexing. This article is therefore devoted to quality assurance in the GND.
    Date
    23. 9.2021 19:12:22
  16. Information : a historical companion (2021) 0.02
    0.022007193 = product of:
      0.08802877 = sum of:
        0.05288899 = weight(_text_:world in 492) [ClassicSimilarity], result of:
          0.05288899 = score(doc=492,freq=8.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.33973572 = fieldWeight in 492, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=492)
        0.03513978 = weight(_text_:wide in 492) [ClassicSimilarity], result of:
          0.03513978 = score(doc=492,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.1958137 = fieldWeight in 492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=492)
      0.25 = coord(2/8)
    
    Abstract
    Written by an international team of experts (including Jeremy Adelman, Lorraine Daston, Devin Fitzgerald, John-Paul Ghobrial, Lisa Gitelman, Earle Havens, Randolph C. Head, Niv Horesh, Sarah Igo, Richard R. John, Lauren Kassell, Pamela Long, Erin McGuirl, David McKitterick, Elias Muhanna, Thomas S. Mullaney, Carla Nappi, Craig Robertson, Daniel Rosenberg, Neil Safier, Haun Saussy, Will Slauter, Jacob Soll, Heidi Tworek, Siva Vaidhyanathan, Alexandra Walsham), the book's inspired and original long- and short-form contributions reconstruct the rise of human approaches to creating, managing, and sharing facts and knowledge. Thirteen full-length chapters discuss the role of information in pivotal epochs and regions, with chief emphasis on Europe and North America, but also substantive treatment of other parts of the world as well as current global interconnections. More than 100 alphabetical entries follow, focusing on specific tools, methods, and concepts, from ancient coins to the office memo, and censorship to plagiarism. The result is a wide-ranging, deeply immersive collection that will appeal to anyone drawn to the story behind our modern mania for an informed existence.
    Content
    Cover -- Contents -- Introduction -- Alphabetical List of Entries -- Thematic List of Entries -- Contributors -- PART ONE -- 1. Premodern Regimes and Practices -- 2. Realms of Information in the Medieval Islamic World -- 3. Information in Early Modern East Asia -- 4. Information in Early Modern Europe -- 5. Networks and the Making of a Connected World in the Sixteenth Century -- 6. Records, Secretaries, and the European Information State, circa 1400-1700 -- 7. Periodicals and the Commercialization of Information in the Early Modern Era -- 8. Documents, Empire, and Capitalism in the Nineteenth Century -- 9. Nineteenth-Century Media Technologies -- 10. Networking: Information Circles the Modern World -- 11. Publicity, Propaganda, and Public Opinion: From the Titanic Disaster to the Hungarian Uprising -- 12. Communication, Computation, and Information -- 13. Search -- PART TWO -- Alphabetical Entries -- Glossary -- Index.
  17. Schreur, P.E.: ¬The use of Linked Data and artificial intelligence as key elements in the transformation of technical services (2020) 0.02
    0.019909944 = product of:
      0.07963978 = sum of:
        0.046277866 = weight(_text_:world in 125) [ClassicSimilarity], result of:
          0.046277866 = score(doc=125,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.29726875 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
        0.033361915 = weight(_text_:web in 125) [ClassicSimilarity], result of:
          0.033361915 = score(doc=125,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.25239927 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
      0.25 = coord(2/8)
    
    Abstract
    Library Technical Services have benefited from numerous stimuli. Although initially looked at with suspicion, transitions such as the move from catalog cards to the MARC formats have proven enormously helpful to libraries and their patrons. Linked data and Artificial Intelligence (AI) hold the same promise. Through the conversion of metadata surrogates (cataloging) to linked open data, libraries can represent their resources on the Semantic Web. But in order to provide some form of controlled access to unstructured data, libraries must reach beyond traditional cataloging to new tools such as AI to provide consistent access to a growing world of full-text resources.
  18. Zhu, L.; Xu, A.; Deng, S.; Heng, G.; Li, X.: Entity management using Wikidata for cultural heritage information (2024) 0.02
    0.019909944 = product of:
      0.07963978 = sum of:
        0.046277866 = weight(_text_:world in 975) [ClassicSimilarity], result of:
          0.046277866 = score(doc=975,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.29726875 = fieldWeight in 975, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
        0.033361915 = weight(_text_:web in 975) [ClassicSimilarity], result of:
          0.033361915 = score(doc=975,freq=2.0), product of:
            0.13217913 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04050213 = queryNorm
            0.25239927 = fieldWeight in 975, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
      0.25 = coord(2/8)
    
    Abstract
    Entity management in a Linked Open Data (LOD) environment is a process of associating a unique, persistent, and dereferenceable Uniform Resource Identifier (URI) with a single entity. It allows data from various sources to be reused and connected to the Web. It can help improve data quality and enable more efficient workflows. This article describes a semi-automated entity management project conducted by the "Wikidata: WikiProject Chinese Culture and Heritage Group," explores the challenges and opportunities in describing Chinese women poets and historical places in Wikidata, the largest crowdsourcing LOD platform in the world, and discusses lessons learned and future opportunities.
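    As a hedged sketch of the kind of entity lookup the project above relies on, the snippet below queries Wikidata's public SPARQL endpoint for a few women poets with Chinese citizenship; the property and item identifiers (P106/P21/P27, Q49757/Q6581072/Q148) are assumptions about suitable Wikidata IDs, not values taken from the article.

      import requests

      # Assumed IDs: P106 occupation / Q49757 poet, P21 sex or gender / Q6581072 female,
      # P27 country of citizenship / Q148 People's Republic of China
      query = """
      SELECT ?poet ?poetLabel WHERE {
        ?poet wdt:P106 wd:Q49757 ;
              wdt:P21 wd:Q6581072 ;
              wdt:P27 wd:Q148 .
        SERVICE wikibase:label { bd:serviceParam wikibase:language "en,zh". }
      }
      LIMIT 5
      """

      resp = requests.get("https://query.wikidata.org/sparql",
                          params={"query": query, "format": "json"},
                          headers={"User-Agent": "entity-management-demo/0.1"},
                          timeout=30)
      for row in resp.json()["results"]["bindings"]:
          # each binding pairs a dereferenceable entity URI with a human-readable label
          print(row["poet"]["value"], row["poetLabel"]["value"])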
  19. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.02
    0.019245084 = product of:
      0.07698034 = sum of:
        0.03305562 = weight(_text_:world in 5853) [ClassicSimilarity], result of:
          0.03305562 = score(doc=5853,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.21233483 = fieldWeight in 5853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
        0.043924723 = weight(_text_:wide in 5853) [ClassicSimilarity], result of:
          0.043924723 = score(doc=5853,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.24476713 = fieldWeight in 5853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
      0.25 = coord(2/8)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis to the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
  20. González-Teruel, A.; Ávila Araújo, C.A.; Sabelli, M.: Diffusion of theories and theoretical models in the Ibero-American research on information behavior (2022) 0.02
    0.019245084 = product of:
      0.07698034 = sum of:
        0.03305562 = weight(_text_:world in 529) [ClassicSimilarity], result of:
          0.03305562 = score(doc=529,freq=2.0), product of:
            0.15567686 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.04050213 = queryNorm
            0.21233483 = fieldWeight in 529, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=529)
        0.043924723 = weight(_text_:wide in 529) [ClassicSimilarity], result of:
          0.043924723 = score(doc=529,freq=2.0), product of:
            0.17945516 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04050213 = queryNorm
            0.24476713 = fieldWeight in 529, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=529)
      0.25 = coord(2/8)
    
    Abstract
    Ibero-American research on information behavior (IB) lacks the visibility typical of other parts of the world, and little is known about it in countries outside the area. The objective of this paper has therefore been to analyze the way in which Ibero-American research incorporates various theoretical references to empirical research on IB. The results point to the existence of different focuses of research in the past 10 years, in the sense of a reduced empirical approach and a moderate to minimal use of theories in the design of such research. Furthermore, the most cited theories and models of IB at an international level are those most widely applied in this geographical area, and the use of a wide variety of theoretical frameworks has been demonstrated, which gives the research under review a cognitive, but also sociocultural, perspective. Future research should further elaborate on this issue, including other types of documents, such as conference papers, books, and theses, while taking into account the publication landscape of the geographical area in question.

Languages

  • e 194
  • d 51
  • pt 1

Types

  • a 215
  • el 45
  • m 15
  • p 7
  • s 3
  • x 2
  • A 1
  • EL 1

Subjects