Search (82 results, page 1 of 5)

  • type_ss:"el"
  • year_i:[2020 TO 2030}
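  The two active filters use Lucene query syntax: type_ss:"el" restricts the type facet, and year_i:[2020 TO 2030} is a range with an inclusive lower bound ([) and an exclusive upper bound (}). As a minimal sketch of how such a filtered search is typically issued - assuming a Solr front end, which the ClassicSimilarity explanations below make plausible but do not prove; host, core name and query term are placeholders:

      import requests

      # Hypothetical Solr endpoint; host and core name are assumptions.
      SOLR_SELECT = "http://localhost:8983/solr/literature/select"

      params = {
          "q": "web",                        # assumed query term
          "fq": ['type_ss:"el"',             # facet filter: electronic documents
                 "year_i:[2020 TO 2030}"],   # [ inclusive, } exclusive bound
          "rows": 20,                        # 20 hits per page, as on this page
          "debugQuery": "true",              # emits score explanations like those below
          "wt": "json",
      }

      response = requests.get(SOLR_SELECT, params=params)
      print(response.json()["response"]["numFound"])  # e.g. 82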
  1. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.06
    0.06218951 = product of:
      0.15547377 = sum of:
        0.03437535 = product of:
          0.0687507 = sum of:
            0.0687507 = weight(_text_:web in 79) [ClassicSimilarity], result of:
              0.0687507 = score(doc=79,freq=36.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.6119082 = fieldWeight in 79, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=79)
          0.5 = coord(1/2)
        0.02247829 = weight(_text_:world in 79) [ClassicSimilarity], result of:
          0.02247829 = score(doc=79,freq=2.0), product of:
            0.1323281 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03442753 = queryNorm
            0.16986786 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.029869435 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.029869435 = score(doc=79,freq=2.0), product of:
            0.15254007 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03442753 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.0687507 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.0687507 = score(doc=79,freq=36.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.4 = coord(4/10)
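    The breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output: each leaf weight is tf * idf * fieldNorm scaled by the query weight (idf * queryNorm), and coord(4/10) downweights the total because only 4 of 10 query clauses matched. A minimal sketch that re-derives the first "web" clause from the printed inputs (idf = 1 + ln(maxDocs / (docFreq + 1)) is the ClassicSimilarity definition):

      import math

      # Values printed in the explain tree for the "web" clause of doc 79.
      freq      = 36.0        # termFreq of "web" in the field
      docFreq   = 4597
      maxDocs   = 44218
      queryNorm = 0.03442753
      fieldNorm = 0.03125

      tf  = math.sqrt(freq)                        # 6.0
      idf = 1 + math.log(maxDocs / (docFreq + 1))  # 3.2635105...

      queryWeight = idf * queryNorm                # 0.11235461...
      fieldWeight = tf * idf * fieldNorm           # 0.6119082...
      clause      = queryWeight * fieldWeight

      # The explain halves this leaf (0.5 = coord(1/2)) before summing
      # all four matching clauses and applying coord(4/10).
      print(round(clause, 7))                      # 0.0687507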
    
    Abstract
    With the tremendous growth in data volume and the data produced every second on millions of devices across the globe, there is a pressing need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources, and thereby become more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way towards becoming a more intelligent and intuitive web. Data Visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps broaden the potential of Data Visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into semantic web technologies; in addition, it elucidates both the issues and the solutions surrounding the semantic web. The chapter highlights the semantic web architecture in detail while also comparing it with the traditional search system. It classifies the semantic web architecture into three major pillars, i.e. RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology, and illustrates different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also illustrates the solutions.
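    As a hedged illustration of how the three pillars named in the abstract fit together, the sketch below states one RDF triple, types its subject against a minimal RDFS class, and serializes the graph as XML; the ex: namespace and ex:Chapter class are invented for the example, not taken from the chapter:

      from rdflib import Graph, Literal, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/")  # hypothetical namespace

      g = Graph()
      # Ontology pillar: declare a minimal class with RDFS.
      g.add((EX.Chapter, RDF.type, RDFS.Class))
      # RDF pillar: one subject-predicate-object statement.
      g.add((EX.semanticWebChapter, RDF.type, EX.Chapter))
      g.add((EX.semanticWebChapter, RDFS.label,
             Literal("Semantic Web and data visualization")))

      # XML pillar: RDF/XML is one standard serialization of the same graph.
      print(g.serialize(format="xml"))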
    Theme
    Semantic Web
  2. Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021) 0.03
    0.026109869 = product of:
      0.06527467 = sum of:
        0.005063968 = product of:
          0.010127936 = sum of:
            0.010127936 = weight(_text_:web in 423) [ClassicSimilarity], result of:
              0.010127936 = score(doc=423,freq=2.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.09014259 = fieldWeight in 423, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=423)
          0.5 = coord(1/2)
        0.031414364 = weight(_text_:world in 423) [ClassicSimilarity], result of:
          0.031414364 = score(doc=423,freq=10.0), product of:
            0.1323281 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03442753 = queryNorm
            0.23739755 = fieldWeight in 423, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.018668398 = weight(_text_:wide in 423) [ClassicSimilarity], result of:
          0.018668398 = score(doc=423,freq=2.0), product of:
            0.15254007 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03442753 = queryNorm
            0.122383565 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.010127936 = weight(_text_:web in 423) [ClassicSimilarity], result of:
          0.010127936 = score(doc=423,freq=2.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.09014259 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
      0.4 = coord(4/10)
    
    Abstract
    In 2021, sharing content is easier than ever. Our lingua franca is visual: memes, infographics, TikToks. Our references cross borders and platforms, shared and remixed a hundred different ways in minutes. Digital culture is collective by default and brings us together all around the world. But as the internet reaches its "dirty 30s," what happens when pieces of digital culture that have been saved, screenshotted, and reposted for years need to retire? Let's dig into the story of one of these artifacts: the Lenna image. The Lenna image may be relatively unknown in pop culture today, but in the engineering world, it remains an icon. I first encountered the image in an undergrad class, then grad school, and then all over the sites and software I use every day as a tech worker, like Github, OpenCV, Stack Overflow, and Quora. To understand where the image is today, you have to understand how it got here. So, I decided to scrape Google Scholar, search, and reverse image search results to track down thousands of instances of the image across the internet (see more in the methods section).
    Lena Forsén, the real human behind the Lenna image, was first published in Playboy in 1972. Soon after, USC engineers searching for a suitable test image for their image processing research sought inspiration from the magazine. They deemed Lenna the right fit and scanned the image into digital, RGB existence. From here, the story of the image follows the story of the internet. Lenna was one of the first inhabitants of ARPANet, the internet's predecessor, and then the world wide web. While the image's reach was limited to a few research papers in the '70s and '80s, in 1991, Lenna was featured on the cover of an engineering journal alongside another popular test image, Peppers. This caught the attention of Playboy, which threatened a copyright infringement lawsuit. Engineers who had grown attached to Lenna fought back. Ultimately, they prevailed, and as a Playboy VP reflected on the drama: "We decided we should exploit this because it is a phenomenon." The Playboy controversy canonized Lenna in engineering folklore and prompted an explosion of conversation about the image. Image hits on the internet rose to a peak in 1995.
    But despite this progress, almost 2 years later, the use of Lenna continues. The image has appeared on the internet in 30+ different languages in the last decade, including 10+ languages in 2021. The image's spread across digital geographies has mirrored this linguistic growth, moving from mostly .org domains before 1990 to over 100 different domains today, notably .com and .edu, along with others. Within the .edu world, the Lenna image continues to appear in homework questions, class slides and to be hosted on educational and research sites, ensuring that it is passed down to new generations of engineers. Whether it's due to institutional negligence or defiance, it seems that for now, the image is here to stay.
    Content
    "Having known Lenna for almost a decade, I have struggled to understand what the story of the image means for what tech culture is and what it is becoming. To me, the crux of the Lenna story is how little power we have over our data and how it is used and abused. This threat seems disproportionately higher for women who are often overrepresented in internet content, but underrepresented in internet company leadership and decision making. Given this reality, engineering and product decisions will continue to consciously (and unconsciously) exclude our needs and concerns. While social norms are changing towards non-consensual data collection and data exploitation, digital norms seem to be moving in the opposite direction. Advancements in machine learning algorithms and data storage capabilities are only making data misuse easier. Whether the outcome is revenge porn or targeted ads, surveillance or discriminatory AI, if we want a world where our data can retire when it's outlived its time, or when it's directly harming our lives, we must create the tools and policies that empower data subjects to have a say in what happens to their data. including allowing their data to die."
  3. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.02
    0.021312179 = product of:
      0.07104059 = sum of:
        0.02005229 = product of:
          0.04010458 = sum of:
            0.04010458 = weight(_text_:web in 40) [ClassicSimilarity], result of:
              0.04010458 = score(doc=40,freq=4.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.35694647 = fieldWeight in 40, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
        0.04010458 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.04010458 = score(doc=40,freq=4.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.35694647 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.010883729 = product of:
          0.032651186 = sum of:
            0.032651186 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.032651186 = score(doc=40,freq=2.0), product of:
                0.12055935 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03442753 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.33333334 = coord(1/3)
      0.3 = coord(3/10)
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  4. Arndt, O.: Totale Telematik (2020) 0.02
    0.01998553 = product of:
      0.09992765 = sum of:
        0.084379464 = product of:
          0.16875893 = sum of:
            0.16875893 = weight(_text_:seite in 5907) [ClassicSimilarity], result of:
              0.16875893 = score(doc=5907,freq=4.0), product of:
                0.19283076 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03442753 = queryNorm
                0.87516606 = fieldWeight in 5907, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5907)
          0.5 = coord(1/2)
        0.0155481845 = product of:
          0.046644554 = sum of:
            0.046644554 = weight(_text_:22 in 5907) [ClassicSimilarity], result of:
              0.046644554 = score(doc=5907,freq=2.0), product of:
                0.12055935 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03442753 = queryNorm
                0.38690117 = fieldWeight in 5907, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5907)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Content
    See also: https://heise.de/-4790095. See also the sequel: Arndt, O.: Erosion der bürgerlichen Freiheiten, at: https://www.heise.de/tp/features/Erosion-der-buergerlichen-Freiheiten-4790106.html?seite=all.
    Date
    22. 6.2020 19:11:24
    Source
    https://www.heise.de/tp/features/Totale-Telematik-4790095.html?seite=all
  5. Arndt, O.: Erosion der bürgerlichen Freiheiten (2020) 0.02
    0.01998553 = product of:
      0.09992765 = sum of:
        0.084379464 = product of:
          0.16875893 = sum of:
            0.16875893 = weight(_text_:seite in 82) [ClassicSimilarity], result of:
              0.16875893 = score(doc=82,freq=4.0), product of:
                0.19283076 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03442753 = queryNorm
                0.87516606 = fieldWeight in 82, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.078125 = fieldNorm(doc=82)
          0.5 = coord(1/2)
        0.0155481845 = product of:
          0.046644554 = sum of:
            0.046644554 = weight(_text_:22 in 82) [ClassicSimilarity], result of:
              0.046644554 = score(doc=82,freq=2.0), product of:
                0.12055935 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03442753 = queryNorm
                0.38690117 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=82)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Content
    See also: https://www.heise.de/-4790106. See also the predecessor: Arndt, O.: Totale Telematik, at: https://www.heise.de/tp/features/Totale-Telematik-4790095.html?seite=all.
    Date
    22. 6.2020 19:16:24
    Source
    https://www.heise.de/tp/features/Erosion-der-buergerlichen-Freiheiten-4790106.html?seite=all
  6. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.02
    0.015042695 = product of:
      0.07521348 = sum of:
        0.05966529 = product of:
          0.11933058 = sum of:
            0.11933058 = weight(_text_:seite in 5846) [ClassicSimilarity], result of:
              0.11933058 = score(doc=5846,freq=2.0), product of:
                0.19283076 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03442753 = queryNorm
                0.6188358 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.5 = coord(1/2)
        0.0155481845 = product of:
          0.046644554 = sum of:
            0.046644554 = weight(_text_:22 in 5846) [ClassicSimilarity], result of:
              0.046644554 = score(doc=5846,freq=2.0), product of:
                0.12055935 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03442753 = queryNorm
                0.38690117 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Date
    4. 5.2020 17:22:40
    Source
    https://www.heise.de/tp/features/Ueber-Impfstoffe-zur-digitalen-Identitaet-4713041.html?seite=all
  7. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.01
    0.014735363 = product of:
      0.07367682 = sum of:
        0.024558939 = product of:
          0.049117878 = sum of:
            0.049117878 = weight(_text_:web in 52) [ClassicSimilarity], result of:
              0.049117878 = score(doc=52,freq=6.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.43716836 = fieldWeight in 52, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=52)
          0.5 = coord(1/2)
        0.049117878 = weight(_text_:web in 52) [ClassicSimilarity], result of:
          0.049117878 = score(doc=52,freq=6.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.43716836 = fieldWeight in 52, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=52)
      0.2 = coord(2/10)
    
    Abstract
    While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality-control, mappings - has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.
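    The crawl-version-archive cycle the abstract describes can be sketched roughly as follows; the fetch interval, hashing scheme and file layout are illustrative assumptions, not Archivo's actual implementation:

      import hashlib
      import time
      import urllib.request

      def snapshot(url: str) -> bytes:
          """Fetch the current serialization of an ontology."""
          with urllib.request.urlopen(url) as resp:
              return resp.read()

      def watch(url: str, interval_hours: float = 8.0) -> None:
          """Archive a new version whenever the ontology's content changes."""
          last_digest = None
          while True:
              body = snapshot(url)
              digest = hashlib.sha256(body).hexdigest()
              if digest != last_digest:
                  # A real archiver would also rate the snapshot and publish it
                  # to a versioned store (Archivo uses the DBpedia Databus).
                  stamp = time.strftime("%Y.%m.%d-%H%M%S")
                  with open(f"archive-{stamp}.owl", "wb") as f:
                      f.write(body)
                  last_digest = digest
              time.sleep(interval_hours * 3600)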
  8. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.01
    0.013762249 = product of:
      0.04587416 = sum of:
        0.012153522 = product of:
          0.024307044 = sum of:
            0.024307044 = weight(_text_:web in 39) [ClassicSimilarity], result of:
              0.024307044 = score(doc=39,freq=2.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.21634221 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=39)
          0.5 = coord(1/2)
        0.024307044 = weight(_text_:web in 39) [ClassicSimilarity], result of:
          0.024307044 = score(doc=39,freq=2.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.21634221 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=39)
        0.0094135925 = product of:
          0.028240776 = sum of:
            0.028240776 = weight(_text_:29 in 39) [ClassicSimilarity], result of:
              0.028240776 = score(doc=39,freq=2.0), product of:
                0.12110529 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03442753 = queryNorm
                0.23319192 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=39)
          0.33333334 = coord(1/3)
      0.3 = coord(3/10)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with the participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
    Date
    17.11.2020 11:29:00
  9. Advanced online media use (2023) 0.01
    0.013750142 = product of:
      0.06875071 = sum of:
        0.022916902 = product of:
          0.045833804 = sum of:
            0.045833804 = weight(_text_:web in 954) [ClassicSimilarity], result of:
              0.045833804 = score(doc=954,freq=4.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.4079388 = fieldWeight in 954, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=954)
          0.5 = coord(1/2)
        0.045833804 = weight(_text_:web in 954) [ClassicSimilarity], result of:
          0.045833804 = score(doc=954,freq=4.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.4079388 = fieldWeight in 954, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=954)
      0.2 = coord(2/10)
    
    Content
    "1. Use a range of different media 2. Access paywalled media content 3. Use an advertising and tracking blocker 4. Use alternatives to Google Search 5. Use alternatives to YouTube 6. Use alternatives to Facebook and Twitter 7. Caution with Wikipedia 8. Web browser, email, and internet access 9. Access books and scientific papers 10. Access deleted web content"
  10. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.01
    0.013086932 = product of:
      0.06543466 = sum of:
        0.028097862 = weight(_text_:world in 5853) [ClassicSimilarity], result of:
          0.028097862 = score(doc=5853,freq=2.0), product of:
            0.1323281 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03442753 = queryNorm
            0.21233483 = fieldWeight in 5853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
        0.037336797 = weight(_text_:wide in 5853) [ClassicSimilarity], result of:
          0.037336797 = score(doc=5853,freq=2.0), product of:
            0.15254007 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03442753 = queryNorm
            0.24476713 = fieldWeight in 5853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
      0.2 = coord(2/10)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis to the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
  11. Lauck, D.: So funktioniert die Warn-App (2020) 0.01
    0.012942942 = product of:
      0.12942941 = sum of:
        0.12942941 = weight(_text_:gestaltung in 5870) [ClassicSimilarity], result of:
          0.12942941 = score(doc=5870,freq=2.0), product of:
            0.2008246 = queryWeight, product of:
              5.8332562 = idf(docFreq=351, maxDocs=44218)
              0.03442753 = queryNorm
            0.6444898 = fieldWeight in 5870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8332562 = idf(docFreq=351, maxDocs=44218)
              0.078125 = fieldNorm(doc=5870)
      0.1 = coord(1/10)
    
    Content
    Presented as an FAQ list. Continuation of the earlier Tagesschau reports on the development of the app.
  12. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.01
    0.012630313 = product of:
      0.06315156 = sum of:
        0.02105052 = product of:
          0.04210104 = sum of:
            0.04210104 = weight(_text_:web in 38) [ClassicSimilarity], result of:
              0.04210104 = score(doc=38,freq=6.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.37471575 = fieldWeight in 38, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
        0.04210104 = weight(_text_:web in 38) [ClassicSimilarity], result of:
          0.04210104 = score(doc=38,freq=6.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.37471575 = fieldWeight in 38, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.2 = coord(2/10)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
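    As a hedged sketch of what an OCDM-style citation record looks like in practice, the snippet below builds one citation with rdflib using terms from the SPAR CiTO ontology, which OCDM reuses; the exact property names and the identifier scheme are assumptions based on CiTO, not quoted from the paper:

      from rdflib import Graph, Literal, Namespace, RDF, URIRef
      from rdflib.namespace import XSD

      CITO = Namespace("http://purl.org/spar/cito/")

      g = Graph()
      ci = URIRef("https://example.org/citation/0123-4567")  # hypothetical IRI

      g.add((ci, RDF.type, CITO.Citation))
      g.add((ci, CITO.hasCitingEntity,
             URIRef("http://dx.doi.org/10.1000/example.citing")))
      g.add((ci, CITO.hasCitedEntity,
             URIRef("http://dx.doi.org/10.1000/example.cited")))
      g.add((ci, CITO.hasCitationCreationDate,
             Literal("2020-11-02", datatype=XSD.date)))

      print(g.serialize(format="turtle"))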
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See: DOI: 10.1007/978-3-030-62466-8_28.
  13. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.01
    0.012153523 = product of:
      0.060767613 = sum of:
        0.020255871 = product of:
          0.040511742 = sum of:
            0.040511742 = weight(_text_:web in 1084) [ClassicSimilarity], result of:
              0.040511742 = score(doc=1084,freq=8.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.36057037 = fieldWeight in 1084, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1084)
          0.5 = coord(1/2)
        0.040511742 = weight(_text_:web in 1084) [ClassicSimilarity], result of:
          0.040511742 = score(doc=1084,freq=8.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.36057037 = fieldWeight in 1084, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1084)
      0.2 = coord(2/10)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
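    Save Page Now, the tool the study documents, is triggered over plain HTTP; a minimal sketch follows. The endpoint form https://web.archive.org/save/<url> reflects the public service as commonly used, but treat it (and the response header read here) as assumptions rather than a documented API:

      import urllib.request

      def save_page_now(url: str) -> str:
          """Ask the Wayback Machine to snapshot `url`; return the archive URL."""
          req = urllib.request.Request(
              "https://web.archive.org/save/" + url,
              headers={"User-Agent": "spn-sketch/0.1"},  # identify the client
          )
          with urllib.request.urlopen(req) as resp:
              # The service has historically pointed at the stored copy via
              # Content-Location; fall back to the final URL otherwise.
              return resp.headers.get("Content-Location", resp.geturl())

      print(save_page_now("https://example.com/"))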
  14. Westphalen, A. von: Konkurrenz oder Kooperation? : Das ist die entscheidende Frage (2020) 0.01
    0.012056738 = product of:
      0.06028369 = sum of:
        0.047732234 = product of:
          0.09546447 = sum of:
            0.09546447 = weight(_text_:seite in 5351) [ClassicSimilarity], result of:
              0.09546447 = score(doc=5351,freq=2.0), product of:
                0.19283076 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03442753 = queryNorm
                0.49506867 = fieldWeight in 5351, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5351)
          0.5 = coord(1/2)
        0.012551456 = product of:
          0.037654366 = sum of:
            0.037654366 = weight(_text_:29 in 5351) [ClassicSimilarity], result of:
              0.037654366 = score(doc=5351,freq=2.0), product of:
                0.12110529 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03442753 = queryNorm
                0.31092256 = fieldWeight in 5351, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5351)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Date
    29. 6.2019 17:46:17
    Source
    https://www.heise.de/tp/features/Konkurrenz-oder-Kooperation-Das-ist-die-entscheidende-Frage-4647091.html?seite=all
  15. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    0.010682274 = product of:
      0.05341137 = sum of:
        0.045566708 = product of:
          0.13670012 = sum of:
            0.13670012 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.13670012 = score(doc=5669,freq=2.0), product of:
                0.291877 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03442753 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.00784466 = product of:
          0.023533981 = sum of:
            0.023533981 = weight(_text_:29 in 5669) [ClassicSimilarity], result of:
              0.023533981 = score(doc=5669,freq=2.0), product of:
                0.12110529 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03442753 = queryNorm
                0.19432661 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  16. Schuler, K.: Corona-Apps : was heißt hier anonym? Eine Begriffsklärung und ein Plädoyer (2020) 0.01
    0.009060059 = product of:
      0.09060059 = sum of:
        0.09060059 = weight(_text_:gestaltung in 5849) [ClassicSimilarity], result of:
          0.09060059 = score(doc=5849,freq=2.0), product of:
            0.2008246 = queryWeight, product of:
              5.8332562 = idf(docFreq=351, maxDocs=44218)
              0.03442753 = queryNorm
            0.45114288 = fieldWeight in 5849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8332562 = idf(docFreq=351, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5849)
      0.1 = coord(1/10)
    
    Abstract
    Everyone, it seems, is currently preoccupied with the possibilities and the design of so-called corona apps. In the process, the term anonymization is used almost inflationarily, creating the impression that tracing with these apps is harmless. Even computer scientists, who really ought to know what is logically possible given the app's objective - and what is not - express themselves along these lines. Objections are regularly brushed aside. Anyone who wants to know and decide what a corona app should and can accomplish must therefore also examine what actually lies behind the terms discussed here and what the consequences are. Only their precise use provides the basis for asking the right questions and for determining effective safeguards.
  17. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.01
    0.008507467 = product of:
      0.04253733 = sum of:
        0.0141791105 = product of:
          0.028358221 = sum of:
            0.028358221 = weight(_text_:web in 5719) [ClassicSimilarity], result of:
              0.028358221 = score(doc=5719,freq=2.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.25239927 = fieldWeight in 5719, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5719)
          0.5 = coord(1/2)
        0.028358221 = weight(_text_:web in 5719) [ClassicSimilarity], result of:
          0.028358221 = score(doc=5719,freq=2.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.25239927 = fieldWeight in 5719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5719)
      0.2 = coord(2/10)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex Boolean searches.
  18. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.01
    0.008507467 = product of:
      0.04253733 = sum of:
        0.0141791105 = product of:
          0.028358221 = sum of:
            0.028358221 = weight(_text_:web in 53) [ClassicSimilarity], result of:
              0.028358221 = score(doc=53,freq=8.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.25239927 = fieldWeight in 53, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=53)
          0.5 = coord(1/2)
        0.028358221 = weight(_text_:web in 53) [ClassicSimilarity], result of:
          0.028358221 = score(doc=53,freq=8.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.25239927 = fieldWeight in 53, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.2 = coord(2/10)
    
    Content
    # Community action on individual ontologies We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version - Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo. The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.
    # Community action on all ontologies (quality, FAIRness, conformity) Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies. 1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies (a minimal validation sketch follows below). Anyone can implement their SHACL tests and add them to the SHACL library on Github. We believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well. 2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de <mailto:hellmann@informatik.uni-leipzig.de> if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aides, mapping management tools and quality checks.
    # How does Archivo work? Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus. # Archivo's mission Archivo's mission is to improve FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline, it is fully automated, machine-readable and enforces interoperability with its star rating. - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology. - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentralized web of ontologies.
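    As a companion to the SHACL point above, here is a minimal validation sketch using the pySHACL library; the shape tested - every owl:Class must carry an rdfs:label - is an invented example, not one of Archivo's actual tests:

      from pyshacl import validate
      from rdflib import Graph

      # Invented SHACL shape: every owl:Class needs at least one rdfs:label.
      shapes = Graph().parse(data="""
      @prefix sh:   <http://www.w3.org/ns/shacl#> .
      @prefix owl:  <http://www.w3.org/2002/07/owl#> .
      @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
      @prefix ex:   <http://example.org/shapes#> .

      ex:LabelShape a sh:NodeShape ;
          sh:targetClass owl:Class ;
          sh:property [ sh:path rdfs:label ; sh:minCount 1 ] .
      """, format="turtle")

      ontology = Graph().parse("my-ontology.owl")  # placeholder file name

      conforms, report_graph, report_text = validate(ontology, shacl_graph=shapes)
      print(conforms)
      print(report_text)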
  19. Bredemeier, W.: Trend des Jahrzehnts 2011 - 2020 : Die Entfaltung und Degeneration des Social Web (2021) 0.01
    0.008507467 = product of:
      0.04253733 = sum of:
        0.0141791105 = product of:
          0.028358221 = sum of:
            0.028358221 = weight(_text_:web in 293) [ClassicSimilarity], result of:
              0.028358221 = score(doc=293,freq=2.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.25239927 = fieldWeight in 293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=293)
          0.5 = coord(1/2)
        0.028358221 = weight(_text_:web in 293) [ClassicSimilarity], result of:
          0.028358221 = score(doc=293,freq=2.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.25239927 = fieldWeight in 293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=293)
      0.2 = coord(2/10)
    
  20. Christensen, A.: Wissenschaftliche Literatur entdecken : was bibliothekarische Discovery-Systeme von der Konkurrenz lernen und was sie ihr zeigen können (2022) 0.01
    0.008507467 = product of:
      0.04253733 = sum of:
        0.0141791105 = product of:
          0.028358221 = sum of:
            0.028358221 = weight(_text_:web in 833) [ClassicSimilarity], result of:
              0.028358221 = score(doc=833,freq=2.0), product of:
                0.11235461 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03442753 = queryNorm
                0.25239927 = fieldWeight in 833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=833)
          0.5 = coord(1/2)
        0.028358221 = weight(_text_:web in 833) [ClassicSimilarity], result of:
          0.028358221 = score(doc=833,freq=2.0), product of:
            0.11235461 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03442753 = queryNorm
            0.25239927 = fieldWeight in 833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=833)
      0.2 = coord(2/10)
    
    Abstract
    In recent years, the range of academic search engines for researching scholarly literature across all fields has grown strongly, complementing popular commercial offerings such as Web of Science or Scopus. The article outlines the essential differences between library discovery systems and academic search engines such as Base, Dimensions or Open Alex, and discusses ways in which the two can benefit from each other. These development perspectives concern aspects such as the contextualization of knowledge, data modelling, automatic data enrichment, and the delimitation of search spaces.

Languages

  • d 58
  • e 24

Types

  • a 62
  • p 3