Search (254 results, page 1 of 13)

  • Filter: year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.26
    0.25678995 = product of:
      0.5135799 = sum of:
        0.04716474 = product of:
          0.14149421 = sum of:
            0.14149421 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.14149421 = score(doc=1000,freq=2.0), product of:
                0.30211318 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.035634913 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.14149421 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.14149421 = score(doc=1000,freq=2.0), product of:
            0.30211318 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.035634913 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.14149421 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.14149421 = score(doc=1000,freq=2.0), product of:
            0.30211318 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.035634913 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.02096625 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
          0.02096625 = score(doc=1000,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.14149421 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.14149421 = score(doc=1000,freq=2.0), product of:
            0.30211318 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.035634913 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.02096625 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
          0.02096625 = score(doc=1000,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5 = coord(6/12)
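
     The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a rough illustration, the sketch below recomputes one of the listed term weights and the overall document score from the factors shown; the Python function and variable names are ours, and only the numeric values are copied from the explanation.

       # Minimal sketch of how ClassicSimilarity assembles the factors shown above
       # (values taken from the explanation for doc=1000); names are illustrative.
       def term_weight(freq, idf, query_norm, field_norm):
           query_weight = idf * query_norm       # "queryWeight" = idf * queryNorm
           tf = freq ** 0.5                      # "tf" = sqrt(termFreq)
           field_weight = tf * idf * field_norm  # "fieldWeight" = tf * idf * fieldNorm
           return query_weight * field_weight    # one clause's score contribution

       # weight(_text_:2f in 1000): freq=2.0, idf=8.478011, queryNorm=0.035634913, fieldNorm=0.0390625
       print(term_weight(2.0, 8.478011, 0.035634913, 0.0390625))  # ~0.14149421

       # The document score sums the matched clauses (the first value already
       # includes the nested coord(1/3)) and multiplies by coord(6/12) = 0.5.
       clauses = [0.04716474, 0.14149421, 0.14149421, 0.02096625, 0.14149421, 0.02096625]
       print(sum(clauses) * 0.5)  # ~0.25678995, displayed as 0.26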
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.25
    0.25237784 = product of:
      0.6057068 = sum of:
        0.056597687 = product of:
          0.16979305 = sum of:
            0.16979305 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.16979305 = score(doc=862,freq=2.0), product of:
                0.30211318 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.035634913 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.16979305 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16979305 = score(doc=862,freq=2.0), product of:
            0.30211318 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.035634913 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.16979305 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16979305 = score(doc=862,freq=2.0), product of:
            0.30211318 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.035634913 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.16979305 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16979305 = score(doc=862,freq=2.0), product of:
            0.30211318 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.035634913 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.039729945 = product of:
          0.07945989 = sum of:
            0.07945989 = weight(_text_:2.0 in 862) [ClassicSimilarity], result of:
              0.07945989 = score(doc=862,freq=2.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.3844716 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
  3. Peters, I.: Folksonomies & Social Tagging (2023) 0.15
    0.15120287 = product of:
      0.3628869 = sum of:
        0.16638474 = weight(_text_:tagging in 796) [ClassicSimilarity], result of:
          0.16638474 = score(doc=796,freq=6.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.7908621 = fieldWeight in 796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.050840456 = weight(_text_:web in 796) [ClassicSimilarity], result of:
          0.050840456 = score(doc=796,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.040716566 = weight(_text_:world in 796) [ClassicSimilarity], result of:
          0.040716566 = score(doc=796,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.05410469 = weight(_text_:wide in 796) [ClassicSimilarity], result of:
          0.05410469 = score(doc=796,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.050840456 = weight(_text_:web in 796) [ClassicSimilarity], result of:
          0.050840456 = score(doc=796,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
      0.41666666 = coord(5/12)
    
    Abstract
    Research on and the use of folksonomies and social tagging as user-centered forms of subject indexing and knowledge representation reached their peak in the roughly ten years from about 2005 onward. This was driven by the development and spread of the Social Web and the growing use of social media platforms (see chapter E 8, Social Media and Social Web). Both led to a rapid increase in the amount of potential information findable on or via the World Wide Web and generated strong demand for scalable methods of subject indexing.
    Theme
    Social tagging
  4. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.07
    0.06736527 = product of:
      0.20209579 = sum of:
        0.043577533 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
          0.043577533 = score(doc=1094,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 1094, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.049355935 = weight(_text_:world in 1094) [ClassicSimilarity], result of:
          0.049355935 = score(doc=1094,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.36034414 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.06558479 = weight(_text_:wide in 1094) [ClassicSimilarity], result of:
          0.06558479 = score(doc=1094,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.4153836 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.043577533 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
          0.043577533 = score(doc=1094,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 1094, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
      0.33333334 = coord(4/12)
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
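
     As a rough illustration of the data model described above, the sketch below builds a tiny SKOS vocabulary with the Python rdflib library; the http://example.org namespace and the concepts are invented for the example.

       # Hypothetical two-concept SKOS scheme built with rdflib (example.org names are invented).
       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/vocab/")
       g = Graph()
       g.bind("skos", SKOS)
       g.bind("ex", EX)

       g.add((EX.animals, RDF.type, SKOS.ConceptScheme))                # concept scheme, identified by a URI
       g.add((EX.mammal, RDF.type, SKOS.Concept))
       g.add((EX.mammal, SKOS.prefLabel, Literal("Mammal", lang="en")))
       g.add((EX.mammal, SKOS.topConceptOf, EX.animals))                # broadest conceptual level
       g.add((EX.cat, RDF.type, SKOS.Concept))
       g.add((EX.cat, SKOS.prefLabel, Literal("Cat", lang="en")))
       g.add((EX.cat, SKOS.altLabel, Literal("Domestic cat", lang="en")))
       g.add((EX.cat, SKOS.broader, EX.mammal))                         # "broader"/"narrower" hierarchy
       g.add((EX.cat, SKOS.inScheme, EX.animals))

       print(g.serialize(format="turtle"))                              # RDF (Turtle) serialization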
  5. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.07
    0.0655024 = product of:
      0.19650719 = sum of:
        0.071161814 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.071161814 = score(doc=79,freq=36.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.02326661 = weight(_text_:world in 79) [ClassicSimilarity], result of:
          0.02326661 = score(doc=79,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.030916965 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.030916965 = score(doc=79,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.071161814 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.071161814 = score(doc=79,freq=36.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.33333334 = coord(4/12)
    
    Abstract
    With the tremendous growth in data volume and data being produced every second on millions of devices across the globe, there is a pressing need to manage the unstructured data available on web pages efficiently. The Semantic Web, also described as a Web of Trust, structures the data scattered across the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on processing web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and thereby becomes more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and it has since come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts universally through pictorial representation, and the Semantic Web broadens the potential of data visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into semantic web technologies; it also elucidates the issues as well as the solutions regarding the semantic web. The chapter presents the semantic web architecture in detail while comparing it with traditional search systems, and classifies the architecture into three major pillars, i.e., RDF, Ontology, and XML. Moreover, it describes the different semantic web tools used in the framework and technology, illustrates different approaches of semantic web search engines, and, besides stating numerous challenges faced by the semantic web, also presents their solutions.
    Theme
    Semantic Web
  6. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.06
    0.06074629 = product of:
      0.18223886 = sum of:
        0.025159499 = weight(_text_:web in 640) [ClassicSimilarity], result of:
          0.025159499 = score(doc=640,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.097019956 = weight(_text_:log in 640) [ClassicSimilarity], result of:
          0.097019956 = score(doc=640,freq=2.0), product of:
            0.22837062 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.035634913 = queryNorm
            0.42483553 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.034899916 = weight(_text_:world in 640) [ClassicSimilarity], result of:
          0.034899916 = score(doc=640,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.025159499 = weight(_text_:web in 640) [ClassicSimilarity], result of:
          0.025159499 = score(doc=640,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
      0.33333334 = coord(4/12)
    
    Abstract
    Involving users in early phases of software development has become a common strategy, as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is a common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure allowing experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log file analyses to enable large-scale A/B experiments for academic search.
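
     To make the living-lab idea concrete, here is a toy sketch (not the STELLA API; all names and file paths are invented) of how search sessions might be split between a production and an experimental ranker, and how interactions could be appended to a log for later A/B analysis.

       # Toy A/B bucketing and interaction logging for a living-lab search system.
       import hashlib, json, time

       def assign_arm(session_id: str) -> str:
           """Deterministically bucket a session into 'production' or 'experimental'."""
           digest = hashlib.sha256(session_id.encode()).hexdigest()
           return "experimental" if int(digest, 16) % 2 else "production"

       def log_interaction(session_id: str, query: str, clicked_doc: str, logfile="ab_interactions.log"):
           record = {
               "ts": time.time(),
               "session": session_id,
               "arm": assign_arm(session_id),
               "query": query,
               "clicked": clicked_doc,
           }
           with open(logfile, "a") as fh:          # one JSON record per line for later analysis
               fh.write(json.dumps(record) + "\n")

       log_interaction("sess-42", "semantic web", "doc-79")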
  7. Lewandowski, D.: Suchmaschinen verstehen : 3. vollständig überarbeitete und erweiterte Aufl. (2021) 0.05
    0.05169515 = product of:
      0.15508544 = sum of:
        0.029650755 = weight(_text_:web in 4016) [ClassicSimilarity], result of:
          0.029650755 = score(doc=4016,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
        0.041129943 = weight(_text_:world in 4016) [ClassicSimilarity], result of:
          0.041129943 = score(doc=4016,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.30028677 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
        0.054653995 = weight(_text_:wide in 4016) [ClassicSimilarity], result of:
          0.054653995 = score(doc=4016,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.34615302 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
        0.029650755 = weight(_text_:web in 4016) [ClassicSimilarity], result of:
          0.029650755 = score(doc=4016,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
      0.33333334 = coord(4/12)
    
    RSWK
    World Wide Web Recherche
    Subject
    World Wide Web Recherche
  8. Lewandowski, D.: Suchmaschinen (2023) 0.05
    0.050812393 = product of:
      0.15243718 = sum of:
        0.035580907 = weight(_text_:web in 793) [ClassicSimilarity], result of:
          0.035580907 = score(doc=793,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 793, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
        0.034899916 = weight(_text_:world in 793) [ClassicSimilarity], result of:
          0.034899916 = score(doc=793,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
        0.046375446 = weight(_text_:wide in 793) [ClassicSimilarity], result of:
          0.046375446 = score(doc=793,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.29372054 = fieldWeight in 793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
        0.035580907 = weight(_text_:web in 793) [ClassicSimilarity], result of:
          0.035580907 = score(doc=793,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 793, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
      0.33333334 = coord(4/12)
    
    Abstract
    A search engine (also: web search engine, universal search engine) is a computer system that gathers content from the World Wide Web (WWW) by crawling and makes it searchable through a user interface, with the results presented in an order reflecting the relevance assumed by the system. This means that, unlike other information systems, search engines do not build on a clearly delimited body of data but assemble it from the documents scattered across the WWW. This body of data is made accessible through a user interface designed so that lay users can operate the search engine without difficulty. The hits returned for a query are sorted so that the documents the system considers most relevant are shown to users first. This sorting relies on complex ranking procedures that rest on numerous assumptions about the relevance of documents with respect to queries.
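
     As an illustration of the definition above, the following toy sketch indexes a handful of "crawled" pages in an inverted index and ranks results for a query by a naive term-frequency relevance score; it is a schematic stand-in for real crawling and ranking, and all pages and URLs are invented.

       # Toy search engine: a tiny "crawled" corpus, an inverted index, naive ranking.
       from collections import Counter, defaultdict

       pages = {  # stand-in for documents gathered by a crawler
           "http://example.org/a": "web search engines crawl the world wide web",
           "http://example.org/b": "relevance ranking orders web documents for a query",
           "http://example.org/c": "a user interface makes the index searchable",
       }

       index = defaultdict(Counter)            # term -> {url: term frequency}
       for url, text in pages.items():
           for term in text.lower().split():
               index[term][url] += 1

       def search(query):
           scores = Counter()
           for term in query.lower().split():
               for url, tf in index.get(term, {}).items():
                   scores[url] += tf           # naive relevance: summed term frequencies
           return scores.most_common()         # most relevant documents first

       print(search("web ranking"))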
  9. San Segundo, R.; Martínez-Ávila, D.; Frías Montoya, J.A.: Ethical issues in control by algorithms : the user is the content (2023) 0.05
    0.046623982 = product of:
      0.18649593 = sum of:
        0.11533412 = weight(_text_:filter in 1132) [ClassicSimilarity], result of:
          0.11533412 = score(doc=1132,freq=2.0), product of:
            0.24899386 = queryWeight, product of:
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.035634913 = queryNorm
            0.4632007 = fieldWeight in 1132, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.046875 = fieldNorm(doc=1132)
        0.035580907 = weight(_text_:web in 1132) [ClassicSimilarity], result of:
          0.035580907 = score(doc=1132,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 1132, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1132)
        0.035580907 = weight(_text_:web in 1132) [ClassicSimilarity], result of:
          0.035580907 = score(doc=1132,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 1132, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1132)
      0.25 = coord(3/12)
    
    Abstract
    In this paper we discuss some ethical issues and challenges of the use of algorithms on the web from the perspective of knowledge organization. We review some of the problems that these algorithms and the filter bubbles pose for the users. We contextualize these issues within the user-based approaches to knowledge organization in a larger sense. We review some of the technologies that have been developed to counter these problems as well as initiatives from the knowledge organization field. We conclude with the necessity of adopting a critical and ethical stance towards the use of algorithms on the web and the need for an education in knowledge organization that addresses these issues.
  10. Sun, J.; Zhu, M.; Jiang, Y.; Liu, Y.; Wu, L.L.: Hierarchical attention model for personalized tag recommendation : peer effects on information value perception (2021) 0.05
    0.046543892 = product of:
      0.13963167 = sum of:
        0.068615906 = weight(_text_:tagging in 98) [ClassicSimilarity], result of:
          0.068615906 = score(doc=98,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.326146 = fieldWeight in 98, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0390625 = fieldNorm(doc=98)
        0.02096625 = weight(_text_:web in 98) [ClassicSimilarity], result of:
          0.02096625 = score(doc=98,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 98, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=98)
        0.029083263 = weight(_text_:world in 98) [ClassicSimilarity], result of:
          0.029083263 = score(doc=98,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 98, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=98)
        0.02096625 = weight(_text_:web in 98) [ClassicSimilarity], result of:
          0.02096625 = score(doc=98,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 98, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=98)
      0.33333334 = coord(4/12)
    
    Abstract
    With the development of Web-based social networks, many personalized tag recommendation approaches based on multi-information have been proposed. Due to the differences in users' preferences, different users care about different kinds of information. In the meantime, different elements within each kind of information are differentially informative for user tagging behaviors. In this context, how to effectively integrate different elements and different information separately becomes a key part of tag recommendation. However, the existing methods ignore this key part. In order to address this problem, we propose a deep neural network for tag recommendation. Specifically, we model two important attentive aspects with a hierarchical attention model. For different user-item pairs, the bottom layered attention network models the influence of different elements on the features representation of the information while the top layered attention network models the attentive scores of different information. To verify the effectiveness of the proposed method, we conduct extensive experiments on two real-world data sets. The results show that using attention network and different kinds of information can significantly improve the performance of the recommendation model, and verify the effectiveness and superiority of our proposed model.
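
     The two-level attention idea described above can be sketched as follows. This is a schematic PyTorch reconstruction under our own assumptions, not the authors' published model; layer names, dimensions, and the toy input are all invented.

       # Hierarchical (two-level) attention: a bottom layer weights the elements
       # inside each information source, a top layer weights the sources themselves.
       import torch
       import torch.nn as nn
       import torch.nn.functional as F

       class Attention(nn.Module):
           """Additive attention that pools a set of vectors into one context vector."""
           def __init__(self, dim: int):
               super().__init__()
               self.proj = nn.Linear(dim, dim)
               self.score = nn.Linear(dim, 1, bias=False)

           def forward(self, x):                                # x: (batch, n, dim)
               weights = F.softmax(self.score(torch.tanh(self.proj(x))), dim=1)
               return (weights * x).sum(dim=1)                  # (batch, dim)

       class HierarchicalTagRecommender(nn.Module):
           def __init__(self, n_tags: int, dim: int = 64, n_sources: int = 3):
               super().__init__()
               self.bottom = nn.ModuleList([Attention(dim) for _ in range(n_sources)])
               self.top = Attention(dim)
               self.out = nn.Linear(dim, n_tags)

           def forward(self, sources):
               # sources: one tensor per information type, each (batch, n_elements, dim)
               pooled = torch.stack([att(x) for att, x in zip(self.bottom, sources)], dim=1)
               fused = self.top(pooled)                         # weight the sources
               return self.out(fused)                           # tag scores: (batch, n_tags)

       # Toy usage: three information sources with different numbers of elements.
       model = HierarchicalTagRecommender(n_tags=100)
       batch = [torch.randn(4, 5, 64), torch.randn(4, 8, 64), torch.randn(4, 3, 64)]
       print(model(batch).shape)                                # torch.Size([4, 100])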
  11. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.04
    0.043864787 = product of:
      0.13159436 = sum of:
        0.025159499 = weight(_text_:web in 1161) [ClassicSimilarity], result of:
          0.025159499 = score(doc=1161,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.034899916 = weight(_text_:world in 1161) [ClassicSimilarity], result of:
          0.034899916 = score(doc=1161,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.046375446 = weight(_text_:wide in 1161) [ClassicSimilarity], result of:
          0.046375446 = score(doc=1161,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.29372054 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.025159499 = weight(_text_:web in 1161) [ClassicSimilarity], result of:
          0.025159499 = score(doc=1161,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
      0.33333334 = coord(4/12)
    
    Abstract
    Linked Open Data (LOD) is a set of principles for publishing structured, connected data that is available for reuse under an open license. The objective of this paper is to analyze the publication of bibliographic data as LOD, with the product being theoretical-methodological recommendations for publishing these data, in an approach based on the World Wide Web Consortium's ten best practices for publishing LOD. The starting point was a systematic literature review, in which initiatives to publish bibliographic data as LOD were identified. An empirical study of these institutions was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
  12. Lee, H.S.; Arnott Smith, C.: ¬A comparative mixed methods study on health information seeking among US-born/US-dwelling, Korean-born/US-dwelling, and Korean-born/Korean-dwelling mothers (2022) 0.04
    0.03655399 = product of:
      0.10966197 = sum of:
        0.02096625 = weight(_text_:web in 614) [ClassicSimilarity], result of:
          0.02096625 = score(doc=614,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
        0.029083263 = weight(_text_:world in 614) [ClassicSimilarity], result of:
          0.029083263 = score(doc=614,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
        0.038646206 = weight(_text_:wide in 614) [ClassicSimilarity], result of:
          0.038646206 = score(doc=614,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.24476713 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
        0.02096625 = weight(_text_:web in 614) [ClassicSimilarity], result of:
          0.02096625 = score(doc=614,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
      0.33333334 = coord(4/12)
    
    Abstract
    More knowledge and a better understanding of health information seeking are necessary, especially in these unprecedented times due to the COVID-19 pandemic. Using Sonnenwald's theoretical concept of information horizons, this study aimed to uncover patterns in mothers' source preferences related to their children's health. Online surveys were completed by 851 mothers (255 US-born/US-dwelling, 300 Korean-born/US-dwelling, and 296 Korean-born/Korean-dwelling), and supplementary in-depth interviews with 24 mothers were conducted and analyzed. Results indicate that there were remarkable differences between the mothers' information source preference and their actual source use. Moreover, there were many similarities between the two Korean-born groups concerning health information-seeking behavior. For instance, those two groups sought health information more frequently than US-born/US-dwelling mothers. Their sources frequently included blogs or online forums as well as friends with children, whereas US-born/US-dwelling mothers frequently used doctors or nurses as information sources. Mothers in the two Korean-born samples preferred the World Wide Web most as their health information source, while the US-born/US-dwelling mothers preferred doctors the most. Based on these findings, information professionals should guide mothers of specific ethnicities and nationalities to trustworthy sources considering both their usage and preferences.
  13. Huber, W.: Menschen, Götter und Maschinen : eine Ethik der Digitalisierung (2022) 0.03
    0.029243192 = product of:
      0.08772957 = sum of:
        0.016773 = weight(_text_:web in 752) [ClassicSimilarity], result of:
          0.016773 = score(doc=752,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.02326661 = weight(_text_:world in 752) [ClassicSimilarity], result of:
          0.02326661 = score(doc=752,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.030916965 = weight(_text_:wide in 752) [ClassicSimilarity], result of:
          0.030916965 = score(doc=752,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.1958137 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.016773 = weight(_text_:web in 752) [ClassicSimilarity], result of:
          0.016773 = score(doc=752,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
      0.33333334 = coord(4/12)
    
    Content
    Preface -- 1. The digital age -- A turning point -- The dominance of the printed book comes to an end -- When does the digital age begin? -- 2. Between euphoria and apocalypse -- Digitalization. Just. Do it. -- Euphoria -- Apocalypse -- Ethics of responsibility -- The human being as the subject of ethics -- Responsibility as a principle -- 3. Digitalized everyday life in a globalized world -- From the World Wide Web to the Internet of Things -- Mobile internet and digital education -- Digital platforms and their strategies -- Big data and informational self-determination -- 4. Crossing boundaries -- The erosion of the private -- The deformation of the public -- The lowering of inhibition thresholds -- The disappearance of reality -- Truth in the infosphere -- 5. The future of work -- Industrial revolutions -- Work 4.0 -- Ethics 4.0 -- 6. Digital intelligence -- Can computers write poetry? -- Stronger than humans? -- Machine learning -- A lasting difference -- Ethical principles for dealing with digital intelligence -- Medicine as an example -- 7. Human dignity in the digital age -- Affronts or revolutions -- Transhumanism and posthumanism -- Is there empathy without humans? -- Who is autonomous: human or machine? -- A humanism of responsibility -- 8. The future of Homo sapiens -- The deification of man -- Homo deus -- God and man in the digital age -- The transformation of humanity -- Bibliography -- Index of persons.
  14. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.03
    0.028695831 = product of:
      0.114783324 = sum of:
        0.051356614 = weight(_text_:web in 992) [ClassicSimilarity], result of:
          0.051356614 = score(doc=992,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.4416067 = fieldWeight in 992, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=992)
        0.051356614 = weight(_text_:web in 992) [ClassicSimilarity], result of:
          0.051356614 = score(doc=992,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.4416067 = fieldWeight in 992, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=992)
        0.012070097 = product of:
          0.024140194 = sum of:
            0.024140194 = weight(_text_:22 in 992) [ClassicSimilarity], result of:
              0.024140194 = score(doc=992,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.19345059 = fieldWeight in 992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=992)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
  15. Hoeber, O.; Harvey, M.; Dewan Sagar, S.A.; Pointon, M.: ¬The effects of simulated interruptions on mobile search tasks (2022) 0.03
    0.027695287 = product of:
      0.08308586 = sum of:
        0.02096625 = weight(_text_:web in 563) [ClassicSimilarity], result of:
          0.02096625 = score(doc=563,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=563)
        0.029083263 = weight(_text_:world in 563) [ClassicSimilarity], result of:
          0.029083263 = score(doc=563,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=563)
        0.02096625 = weight(_text_:web in 563) [ClassicSimilarity], result of:
          0.02096625 = score(doc=563,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=563)
        0.012070097 = product of:
          0.024140194 = sum of:
            0.024140194 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.024140194 = score(doc=563,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.19345059 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    While it is clear that using a mobile device can interrupt real-world activities such as walking or driving, the effects of interruptions on mobile device use have been under-studied. We are particularly interested in how the ambient distraction of walking while using a mobile device, combined with the occurrence of simulated interruptions of different levels of cognitive complexity, affect web search activities. We have established an experimental design to study how the degree of cognitive complexity of simulated interruptions influences both objective and subjective search task performance. In a controlled laboratory study (n = 27), quantitative and qualitative data were collected on mobile search performance, perceptions of the interruptions, and how participants reacted to the interruptions, using a custom mobile eye-tracking app, a questionnaire, and observations. As expected, more cognitively complex interruptions resulted in increased overall task completion times and higher perceived impacts. Interestingly, the effect on the resumption lag or the actual search performance was not significant, showing the resiliency of people to resume their tasks after an interruption. Implications from this study enhance our understanding of how interruptions objectively and subjectively affect search task performance, motivating the need for providing explicit mobile search support to enable recovery from interruptions.
    Date
    3. 5.2022 13:22:33
  16. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.02
    0.024980063 = product of:
      0.09992025 = sum of:
        0.04151106 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.04151106 = score(doc=40,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35694647 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.04151106 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.04151106 = score(doc=40,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35694647 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.016898135 = product of:
          0.03379627 = sum of:
            0.03379627 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.03379627 = score(doc=40,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see how the landscape will look in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  17. Schreur, P.E.: ¬The use of Linked Data and artificial intelligence as key elements in the transformation of technical services (2020) 0.02
    0.024855517 = product of:
      0.09942207 = sum of:
        0.02935275 = weight(_text_:web in 125) [ClassicSimilarity], result of:
          0.02935275 = score(doc=125,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
        0.040716566 = weight(_text_:world in 125) [ClassicSimilarity], result of:
          0.040716566 = score(doc=125,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
        0.02935275 = weight(_text_:web in 125) [ClassicSimilarity], result of:
          0.02935275 = score(doc=125,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
      0.25 = coord(3/12)
    
    Abstract
    Library Technical Services have benefited from numerous stimuli. Although initially looked at with suspicion, transitions such as the move from catalog cards to the MARC formats have proven enormously helpful to libraries and their patrons. Linked data and Artificial Intelligence (AI) hold the same promise. Through the conversion of metadata surrogates (cataloging) to linked open data, libraries can represent their resources on the Semantic Web. But in order to provide some form of controlled access to unstructured data, libraries must reach beyond traditional cataloging to new tools such as AI to provide consistent access to a growing world of full-text resources.
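    As a rough illustration of the conversion the abstract describes (a sketch with made-up identifiers, not Schreur's actual workflow), a catalog surrogate can be restated as linked open data triples, here with the rdflib library and Dublin Core terms:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import DCTERMS

      EX = Namespace("http://example.org/resource/")   # placeholder namespace, not a real authority

      g = Graph()
      book = EX["record-123"]                          # hypothetical URI minted for one catalog record
      g.add((book, DCTERMS.title, Literal("Example title")))
      g.add((book, DCTERMS.creator, Literal("Example, Author")))
      g.add((book, DCTERMS.issued, Literal("2020")))

      print(g.serialize(format="turtle"))              # the record as reusable linked data
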
  18. Zhu, L.; Xu, A.; Deng, S.; Heng, G.; Li, X.: Entity management using Wikidata for cultural heritage information (2024) 0.02
    0.024855517 = product of:
      0.09942207 = sum of:
        0.02935275 = weight(_text_:web in 975) [ClassicSimilarity], result of:
          0.02935275 = score(doc=975,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 975, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
        0.040716566 = weight(_text_:world in 975) [ClassicSimilarity], result of:
          0.040716566 = score(doc=975,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 975, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
        0.02935275 = weight(_text_:web in 975) [ClassicSimilarity], result of:
          0.02935275 = score(doc=975,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 975, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
      0.25 = coord(3/12)
    
    Abstract
    Entity management in a Linked Open Data (LOD) environment is a process of associating a unique, persistent, and dereferenceable Uniform Resource Identifier (URI) with a single entity. It allows data from various sources to be reused and connected to the Web. It can help improve data quality and enable more efficient workflows. This article describes a semi-automated entity management project conducted by the "Wikidata: WikiProject Chinese Culture and Heritage Group," explores the challenges and opportunities in describing Chinese women poets and historical places in Wikidata, the largest crowdsourcing LOD platform in the world, and discusses lessons learned and future opportunities.
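    To make "dereferenceable" concrete: every Wikidata item has a persistent entity URI, and the data behind that URI can be retrieved in machine-readable form through the Special:EntityData endpoint. A minimal sketch using an arbitrary well-known item (Q42), not one of the poets or places from the project described above:

      import requests

      qid = "Q42"  # arbitrary example item; the project's own QIDs are not given here
      entity_uri = f"http://www.wikidata.org/entity/{qid}"            # persistent, unique URI
      data_url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"

      entity = requests.get(data_url, timeout=10).json()["entities"][qid]
      print(entity_uri)
      print(entity["labels"]["en"]["value"])   # human-readable label behind the URI
      print(len(entity["claims"]))             # number of statements attached to the entity
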
  19. Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021) 0.02
    0.024268476 = product of:
      0.07280543 = sum of:
        0.010483125 = weight(_text_:web in 423) [ClassicSimilarity], result of:
          0.010483125 = score(doc=423,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.09014259 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.032516077 = weight(_text_:world in 423) [ClassicSimilarity], result of:
          0.032516077 = score(doc=423,freq=10.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.23739755 = fieldWeight in 423, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.019323103 = weight(_text_:wide in 423) [ClassicSimilarity], result of:
          0.019323103 = score(doc=423,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.122383565 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.010483125 = weight(_text_:web in 423) [ClassicSimilarity], result of:
          0.010483125 = score(doc=423,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.09014259 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
      0.33333334 = coord(4/12)
    
    Abstract
    In 2021, sharing content is easier than ever. Our lingua franca is visual: memes, infographics, TikToks. Our references cross borders and platforms, shared and remixed a hundred different ways in minutes. Digital culture is collective by default and brings us together all around the world. But as the internet reaches its "dirty 30s," what happens when pieces of digital culture that have been saved, screenshotted, and reposted for years need to retire? Let's dig into the story of one of these artifacts: the Lenna image. The Lenna image may be relatively unknown in pop culture today, but in the engineering world, it remains an icon. I first encountered the image in an undergrad class, then in grad school, and then all over the sites and software I use every day as a tech worker, like GitHub, OpenCV, Stack Overflow, and Quora. To understand where the image is today, you have to understand how it got here. So, I decided to scrape Google Scholar, search, and reverse image search results to track down thousands of instances of the image across the internet (see more in the methods section).
    Lena Forsén, the real human behind the Lenna image, was first published in Playboy in 1972. Soon after, USC engineers searching for a suitable test image for their image processing research sought inspiration from the magazine. They deemed Lenna the right fit and scanned the image into digital, RGB existence. From here, the story of the image follows the story of the internet. Lenna was one of the first inhabitants of ARPANet, the internet's predecessor, and then the World Wide Web. While the image's reach was limited to a few research papers in the '70s and '80s, in 1991, Lenna was featured on the cover of an engineering journal alongside another popular test image, Peppers. This caught the attention of Playboy, which threatened a copyright infringement lawsuit. Engineers who had grown attached to Lenna fought back. Ultimately, they prevailed, and as a Playboy VP reflected on the drama: "We decided we should exploit this because it is a phenomenon." The Playboy controversy canonized Lenna in engineering folklore and prompted an explosion of conversation about the image. Image hits on the internet rose to a peak in 1995.
    But despite this progress, almost 2 years later, the use of Lenna continues. The image has appeared on the internet in 30+ different languages in the last decade, including 10+ languages in 2021. The image's spread across digital geographies has mirrored this linguistic growth, moving from mostly .org domains before 1990 to over 100 different domains today, notably .com and .edu, along with others. Within the .edu world, the Lenna image continues to appear in homework questions and class slides, and to be hosted on educational and research sites, ensuring that it is passed down to new generations of engineers. Whether it's due to institutional negligence or defiance, it seems that, for now, the image is here to stay.
    Content
    "Having known Lenna for almost a decade, I have struggled to understand what the story of the image means for what tech culture is and what it is becoming. To me, the crux of the Lenna story is how little power we have over our data and how it is used and abused. This threat seems disproportionately higher for women who are often overrepresented in internet content, but underrepresented in internet company leadership and decision making. Given this reality, engineering and product decisions will continue to consciously (and unconsciously) exclude our needs and concerns. While social norms are changing towards non-consensual data collection and data exploitation, digital norms seem to be moving in the opposite direction. Advancements in machine learning algorithms and data storage capabilities are only making data misuse easier. Whether the outcome is revenge porn or targeted ads, surveillance or discriminatory AI, if we want a world where our data can retire when it's outlived its time, or when it's directly harming our lives, we must create the tools and policies that empower data subjects to have a say in what happens to their data. including allowing their data to die."
  20. Wang, H.; Song, Y.-Q.; Wang, L.-T.: Memory model for web ad effect based on multimodal features (2020) 0.02
    0.01976717 = product of:
      0.11860302 = sum of:
        0.05930151 = weight(_text_:web in 5512) [ClassicSimilarity], result of:
          0.05930151 = score(doc=5512,freq=16.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5099235 = fieldWeight in 5512, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5512)
        0.05930151 = weight(_text_:web in 5512) [ClassicSimilarity], result of:
          0.05930151 = score(doc=5512,freq=16.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5099235 = fieldWeight in 5512, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5512)
      0.16666667 = coord(2/12)
    
    Abstract
    Web ad effect evaluation is a challenging problem in web marketing research. Although the analysis of web ad effectiveness has achieved excellent results, there are still some deficiencies. First, there is a lack of in-depth study of the relevance between advertisements and web content. Second, there is no thorough analysis of the impact of user and advertising features on browsing behavior. Third, the evaluation indices for web advertisement effectiveness are inadequate. Given the above problems, we conducted our work by studying the observer's behavioral patterns based on multimodal features. First, we analyze the correlation between ads and links with different search results and further assess the influence of relevance on the observer's attention to web ads using eye-movement features. Then we investigate the user's behavioral sequence and propose the directional frequent-browsing pattern algorithm for mining the user's most commonly used browsing patterns. Finally, we offer the novel use of "memory" as a new measure of advertising effectiveness and build an advertising memory model with integrated multimodal features for predicting the efficacy of web ads. Extensive experiments demonstrate the superiority of our method.

Languages

  • e 206
  • d 47
  • pt 1

Types

  • a 225
  • el 44
  • m 13
  • p 7
  • s 3
  • x 2
  • A 1
  • EL 1

Subjects