Search (43 results, page 1 of 3)

  • × type_ss:"el"
  • × year_i:[2020 TO 2030}
  1. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.05
    0.052835584 = product of:
      0.14089489 = sum of:
        0.026152909 = weight(_text_:world in 79) [ClassicSimilarity], result of:
          0.026152909 = score(doc=79,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.16986786 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.034752317 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.034752317 = score(doc=79,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.07998967 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.07998967 = score(doc=79,freq=36.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.375 = coord(3/8)
    
    Abstract
    With the tremendous growth in data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because of its ability to integrate data from disparate sources, which makes it more user-friendly, the Semantic Web is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data Visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of Data Visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights into semantic web technologies; in addition, it elucidates the issues as well as the solutions regarding the semantic web. The purpose of this chapter is to highlight the semantic web architecture in detail while also comparing it with the traditional search system. It classifies the semantic web architecture into three major pillars, i.e. RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology. It attempts to illustrate different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also illustrates the solutions.
    Theme
    Semantic Web
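A note on the relevance figures: the score breakdowns shown with each hit are Lucene "explain" output for the ClassicSimilarity (TF-IDF) model. Each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm; the per-term contributions are summed and scaled by the coordination factor coord (matched clauses / total clauses). The short Python sketch below is only a hedged reconstruction of that arithmetic, not the retrieval engine itself; it reproduces the 0.052835584 score of the first hit from the numbers in its explanation tree.

    import math

    QUERY_NORM = 0.040055543   # queryNorm reported in the explanation tree above
    FIELD_NORM = 0.03125       # fieldNorm(doc=79)

    def term_score(freq, idf, field_norm=FIELD_NORM, query_norm=QUERY_NORM):
        # One term's contribution: queryWeight * fieldWeight
        query_weight = idf * query_norm                     # e.g. 3.8436708 * queryNorm = 0.15396032
        field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) = sqrt(freq)
        return query_weight * field_weight

    # Terms matched in doc 79, with freq and idf copied from the tree above.
    terms = [
        ("world", 2.0, 3.8436708),
        ("wide", 2.0, 4.4307585),
        ("web", 36.0, 3.2635105),
    ]

    raw_sum = sum(term_score(freq, idf) for _term, freq, idf in terms)
    coord = 3 / 8   # 3 of the 8 query clauses matched
    print(raw_sum * coord)   # ~0.052835584, the score reported for hit 1
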
  2. Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021) 0.03
    0.026270097 = product of:
      0.07005359 = sum of:
        0.036549803 = weight(_text_:world in 423) [ClassicSimilarity], result of:
          0.036549803 = score(doc=423,freq=10.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.23739755 = fieldWeight in 423, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.021720199 = weight(_text_:wide in 423) [ClassicSimilarity], result of:
          0.021720199 = score(doc=423,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.122383565 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
        0.0117835915 = weight(_text_:web in 423) [ClassicSimilarity], result of:
          0.0117835915 = score(doc=423,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.09014259 = fieldWeight in 423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=423)
      0.375 = coord(3/8)
    
    Abstract
    In 2021, sharing content is easier than ever. Our lingua franca is visual: memes, infographics, TikToks. Our references cross borders and platforms, shared and remixed a hundred different ways in minutes. Digital culture is collective by default and brings us together all around the world. But as the internet reaches its "dirty 30s," what happens when pieces of digital culture that have been saved, screenshotted, and reposted for years need to retire? Let's dig into the story of one of these artifacts: the Lenna image. The Lenna image may be relatively unknown in pop culture today, but in the engineering world, it remains an icon. I first encountered the image in an undergrad class, then in grad school, and then all over the sites and software I use every day as a tech worker, such as GitHub, OpenCV, Stack Overflow, and Quora. To understand where the image is today, you have to understand how it got here. So, I decided to scrape Google Scholar, search, and reverse image search results to track down thousands of instances of the image across the internet (see more in the methods section).
    Lena Forsén, the real human behind the Lenna image, was first published in Playboy in 1972. Soon after, USC engineers searching for a suitable test image for their image processing research sought inspiration from the magazine. They deemed Lenna the right fit and scanned the image into digital, RGB existence. From here, the story of the image follows the story of the internet. Lenna was one of the first inhabitants of ARPANet, the internet's predecessor, and then the world wide web. While the image's reach was limited to a few research papers in the '70s and '80s, in 1991, Lenna was featured on the cover of an engineering journal alongside another popular test image, Peppers. This caught the attention of Playboy, which threatened a copyright infringement lawsuit. Engineers who had grown attached to Lenna fought back. Ultimately, they prevailed, and as a Playboy VP reflected on the drama: "We decided we should exploit this because it is a phenomenon." The Playboy controversy canonized Lenna in engineering folklore and prompted an explosion of conversation about the image. Image hits on the internet rose to a peak number in 1995.
    But despite this progress, almost 2 years later, the use of Lenna continues. The image appears on the internet in 30+ different languages in the last decade, including 10+ languages in 2021. The image's spread across digital geographies has mirrored this geographical growth, moving from mostly .org domains before 1990 to over 100 different domains today, notably .com and .edu, along with others. Within the .edu world, the Lenna image continues to appear in homework questions, class slides and to be hosted on educational and research sites, ensuring that it is passed down to new generations of engineers. Whether it's due to institutional negligence or defiance, it seems that for now, the image is here to stay.
    Content
    "Having known Lenna for almost a decade, I have struggled to understand what the story of the image means for what tech culture is and what it is becoming. To me, the crux of the Lenna story is how little power we have over our data and how it is used and abused. This threat seems disproportionately higher for women who are often overrepresented in internet content, but underrepresented in internet company leadership and decision making. Given this reality, engineering and product decisions will continue to consciously (and unconsciously) exclude our needs and concerns. While social norms are changing towards non-consensual data collection and data exploitation, digital norms seem to be moving in the opposite direction. Advancements in machine learning algorithms and data storage capabilities are only making data misuse easier. Whether the outcome is revenge porn or targeted ads, surveillance or discriminatory AI, if we want a world where our data can retire when it's outlived its time, or when it's directly harming our lives, we must create the tools and policies that empower data subjects to have a say in what happens to their data, including allowing their data to die."
  3. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.02
    0.019032884 = product of:
      0.07613154 = sum of:
        0.032691136 = weight(_text_:world in 5853) [ClassicSimilarity], result of:
          0.032691136 = score(doc=5853,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.21233483 = fieldWeight in 5853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
        0.043440398 = weight(_text_:wide in 5853) [ClassicSimilarity], result of:
          0.043440398 = score(doc=5853,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.24476713 = fieldWeight in 5853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
      0.25 = coord(2/8)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis to the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
  4. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.02
    0.016413763 = product of:
      0.06565505 = sum of:
        0.046660647 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.046660647 = score(doc=40,freq=4.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.35694647 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.018994406 = product of:
          0.03798881 = sum of:
            0.03798881 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.03798881 = score(doc=40,freq=2.0), product of:
                0.14026769 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040055543 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their various arenas. They have strong brand recognition, a head start in development and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  5. Scherschel, F.A.: Corona-Tracking : SAP und Deutsche Telekom veröffentlichen erste Details zur Tracing- und Warn-App (2020) 0.01
    0.011164645 = product of:
      0.08931716 = sum of:
        0.08931716 = weight(_text_:2.0 in 5857) [ClassicSimilarity], result of:
          0.08931716 = score(doc=5857,freq=2.0), product of:
            0.23231146 = queryWeight, product of:
              5.799733 = idf(docFreq=363, maxDocs=44218)
              0.040055543 = queryNorm
            0.3844716 = fieldWeight in 5857, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.799733 = idf(docFreq=363, maxDocs=44218)
              0.046875 = fieldNorm(doc=5857)
      0.125 = coord(1/8)
    
    Abstract
    On behalf of the German federal government, SAP and Deutsche Telekom are currently developing a contact-tracing app within Apple's and Google's Exposure Notification framework. The so-called Corona-Warn-App and all server components it uses are to be made available on GitHub as open-source software under the Apache 2.0 license ahead of the app's release, which is planned for next month. The project leads have now published the first documents describing how the app is intended to work: https://github.com/corona-warn-app/cwa-documentation.
  6. Metz, C.: ¬The new chatbots could change the world : can you trust them? (2022) 0.01
    0.009807341 = product of:
      0.07845873 = sum of:
        0.07845873 = weight(_text_:world in 854) [ClassicSimilarity], result of:
          0.07845873 = score(doc=854,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.50960356 = fieldWeight in 854, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.09375 = fieldNorm(doc=854)
      0.125 = coord(1/8)
    
  7. Huge "foundation models" are turbo-charging AI progress : The world that Bert built (2022) 0.01
    0.009807341 = product of:
      0.07845873 = sum of:
        0.07845873 = weight(_text_:world in 922) [ClassicSimilarity], result of:
          0.07845873 = score(doc=922,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.50960356 = fieldWeight in 922, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.09375 = fieldNorm(doc=922)
      0.125 = coord(1/8)
    
  8. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.01
    0.008551175 = product of:
      0.0342047 = sum of:
        0.026064238 = weight(_text_:wide in 405) [ClassicSimilarity], result of:
          0.026064238 = score(doc=405,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.14686027 = fieldWeight in 405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.00814046 = product of:
          0.01628092 = sum of:
            0.01628092 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.01628092 = score(doc=405,freq=2.0), product of:
                0.14026769 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040055543 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease is enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  9. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.01
    0.0071434234 = product of:
      0.057147387 = sum of:
        0.057147387 = weight(_text_:web in 52) [ClassicSimilarity], result of:
          0.057147387 = score(doc=52,freq=6.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.43716836 = fieldWeight in 52, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=52)
      0.125 = coord(1/8)
    
    Abstract
    While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality-control, mappings - has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.
  10. Advanced online media use (2023) 0.01
    0.0066658063 = product of:
      0.05332645 = sum of:
        0.05332645 = weight(_text_:web in 954) [ClassicSimilarity], result of:
          0.05332645 = score(doc=954,freq=4.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.4079388 = fieldWeight in 954, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=954)
      0.125 = coord(1/8)
    
    Content
    "1. Use a range of different media 2. Access paywalled media content 3. Use an advertising and tracking blocker 4. Use alternatives to Google Search 5. Use alternatives to YouTube 6. Use alternatives to Facebook and Twitter 7. Caution with Wikipedia 8. Web browser, email, and internet access 9. Access books and scientific papers 10. Access deleted web content"
  11. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    0.0066269613 = product of:
      0.05301569 = sum of:
        0.05301569 = product of:
          0.15904707 = sum of:
            0.15904707 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.15904707 = score(doc=5669,freq=2.0), product of:
                0.33959135 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.040055543 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia ranks second with 2.3 million articles, and the French-language Wikipedia is third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog see also: In view of last week's publication of the 6-millionth article in the English-language Wikipedia, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not an accusation against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive, undeclared paid editing are clearly not working. *"Since the volunteer authors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF does not appear to be able to counter this, the only viable way forward for the authors would be to prohibit, for the time being, the creation of new articles about companies,"* writes user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  12. Franke, T.; Zoubir, M.: Technology for the people? : humanity as a compass for the digital transformation (2020) 0.01
    0.0065160594 = product of:
      0.052128475 = sum of:
        0.052128475 = weight(_text_:wide in 830) [ClassicSimilarity], result of:
          0.052128475 = score(doc=830,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.29372054 = fieldWeight in 830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=830)
      0.125 = coord(1/8)
    
    Abstract
    How do we define what technology is for humans? One perspective suggests that it is a tool enabling the use of valuable resources such as time, food, health and mobility. One could say that in its cultural history, humanity has developed a wide range of artefacts which enable the effective utilisation of these resources for the fulfilment of physiological, but also psychological, needs. This paper explores how this perspective may be used as an orientation for future technological innovation. Hence, the goal is to provide an accessible discussion of such a psychological perspective on technology development that could pave the way towards a truly human-centred digital transformation.
  13. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review: : digital twin in healthcare (2022) 0.01
    0.0061434 = product of:
      0.0491472 = sum of:
        0.0491472 = weight(_text_:wide in 851) [ClassicSimilarity], result of:
          0.0491472 = score(doc=851,freq=4.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.2769224 = fieldWeight in 851, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
      0.125 = coord(1/8)
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of the last three years (2020, 2021 and 2022) papers were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the Ithenticate tool. This article is the first attempt to show the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by the ChatGPT. 1. Introduction OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
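The abstract above notes that ChatGPT is reachable through the OpenAI API for integration into other applications. As a purely illustrative, hedged sketch (the model name, prompt and client usage are assumptions of this note, not details taken from the study), a paraphrasing request of the kind the authors describe might look roughly like this in Python:

    from openai import OpenAI  # assumes the official openai Python package is installed

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    abstract = "Digital twins are virtual replicas of physical assets used in healthcare ..."

    # Ask a chat model to paraphrase one abstract, as the study describes doing by hand in the ChatGPT UI.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You paraphrase academic abstracts."},
            {"role": "user", "content": "Paraphrase the following abstract:\n" + abstract},
        ],
    )
    print(response.choices[0].message.content)
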
  14. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.01
    0.006122934 = product of:
      0.048983473 = sum of:
        0.048983473 = weight(_text_:web in 38) [ClassicSimilarity], result of:
          0.048983473 = score(doc=38,freq=6.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.37471575 = fieldWeight in 38, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.125 = coord(1/8)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See: DOI: 10.1007/978-3-030-62466-8_28.
  15. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.01
    0.0058917957 = product of:
      0.047134366 = sum of:
        0.047134366 = weight(_text_:web in 1084) [ClassicSimilarity], result of:
          0.047134366 = score(doc=1084,freq=8.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.36057037 = fieldWeight in 1084, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1084)
      0.125 = coord(1/8)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
  16. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.00
    0.004954487 = product of:
      0.039635897 = sum of:
        0.039635897 = weight(_text_:world in 923) [ClassicSimilarity], result of:
          0.039635897 = score(doc=923,freq=6.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.2574423 = fieldWeight in 923, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.02734375 = fieldNorm(doc=923)
      0.125 = coord(1/8)
    
    Abstract
    They might appear intelligent, but LLMs are nothing of the sort. They don't understand the meanings of the words they are using, nor the concepts expressed within the sentences they create. When asked how to bring a cow back to life, earlier versions of ChatGPT, for example, which ran on a souped-up version of GPT-3, would confidently provide a list of instructions. So-called hallucinations like this happen because language models have no concept of what a "cow" is or that "death" is a non-reversible state of being. LLMs do not have minds that can think about objects in the world and how they relate to each other. All they "know" is how likely it is that some sets of words will follow other sets of words, having calculated those probabilities from their training data. To make sense of all this, I spoke with Gary Marcus, an emeritus professor of psychology and neural science at New York University, for "Babbage", our science and technology podcast. Last year, as the world was transfixed by the sudden appearance of ChatGPT, he made some fascinating predictions about GPT-4.
    People use symbols to think about the world: if I say the words "cat", "house" or "aeroplane", you know instantly what I mean. Symbols can also be used to describe the way things are behaving (running, falling, flying) or they can represent how things should behave in relation to each other (a "+" means add the numbers before and after). Symbolic AI is a way to embed this human knowledge and reasoning into computer systems. Though the idea has been around for decades, it fell by the wayside a few years ago as deep learning, buoyed by the sudden easy availability of lots of training data and cheap computing power, became more fashionable. In the near future at least, there's no doubt people will find LLMs useful. But whether they represent a critical step on the path towards AGI, or rather just an intriguing detour, remains to be seen."
  17. Kratochwil, F.; Peltonen, H.: Constructivism (2022) 0.00
    0.004623225 = product of:
      0.0369858 = sum of:
        0.0369858 = weight(_text_:world in 829) [ClassicSimilarity], result of:
          0.0369858 = score(doc=829,freq=4.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.24022943 = fieldWeight in 829, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=829)
      0.125 = coord(1/8)
    
    Abstract
    Constructivism in the social sciences has known several ups and downs over the last decades. It was successful rather early in sociology but hotly contested in International Politics/Relations (IR). Oddly enough, just at the moments it made important inroads into the research agenda and became accepted by the mainstream, the enthusiasm for it waned. Many constructivists-as did mainstream scholars-moved from "grand theory" or even "meta-theory" toward "normal science," or experimented with other (eclectic) approaches, of which the turns to practices, to emotions, to new materialism, to the visual, and to the queer are some of the latest manifestations. In a way, constructivism was "successful," on the one hand, by introducing norms, norm-dynamics, and diffusion; the role of new actors in world politics; and the changing role of institutions into the debates, while losing, on the other hand, much of its critical potential. The latter survived only on the fringes-and in Europe more than in the United States. In IR, curiously, constructivism, which was rooted in various European traditions (philosophy, history, linguistics, social analysis), was originally introduced in Europe via the disciplinary discussions taking place in the United States. Yet, especially in its critical version, it has found a more conducive environment in Europe than in the United States.
    In the United States, soon after its emergence, constructivism became "mainstreamed" by having its analysis of norms reduced to "variable research." In such research, positive examples of for instance the spread of norms were included, but strangely empirical evidence of counterexamples of norm "deaths" (preventive strikes, unlawful combatants, drone strikes, extrajudicial killings) were not. The elective affinity of constructivism and humanitarianism seemed to have transformed the former into the Enlightenment project of "progress." Even Kant was finally pressed into the service of "liberalism" in the U.S. discussion, and his notion of the "practical interest of reason" morphed into the political project of an "end of history." This "slant" has prevented a serious conceptual engagement with the "history" of law and (inter-)national politics and the epistemological problems that are raised thereby. This bowdlerization of constructivism is further buttressed by the fact that in the "knowledge industry" none of the "leading" U.S. departments has a constructivist on board, ensuring thereby the narrowness of conceptual and methodological choices to which the future "professionals" are exposed. This article contextualizes constructivism and its emergence within a changing world and within the evolution of the discipline. The aim is not to provide a definition or a typology of constructivism, since such efforts go against the critical dimension of constructivism. An application of this critique on constructivism itself leads to a reflection on truth, knowledge, and the need for (re-)orientation.
  18. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.00
    0.0041242572 = product of:
      0.032994058 = sum of:
        0.032994058 = weight(_text_:web in 5719) [ClassicSimilarity], result of:
          0.032994058 = score(doc=5719,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.25239927 = fieldWeight in 5719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5719)
      0.125 = coord(1/8)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex boolean searches.
  19. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    0.0041242572 = product of:
      0.032994058 = sum of:
        0.032994058 = weight(_text_:web in 53) [ClassicSimilarity], result of:
          0.032994058 = score(doc=53,freq=8.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.25239927 = fieldWeight in 53, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.125 = coord(1/8)
    
    Content
    # Community action on individual ontologies We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer just release a patched version - Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo. The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.
    # Community action on all ontologies (quality, FAIRness, conformity) Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies. 1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on Github. We believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well. 2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de <mailto:hellmann@informatik.uni-leipzig.de> if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aides, mapping management tools and quality checks.
    # How does Archivo work? Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered Archivo checks them every 8 hours. When changes are detected, Archivo downloads and rates and archives the latest snapshot persistently on the DBpedia Databus. # Archivo's mission Archivo's mission is to improve FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline, it is fully automated, machine-readable and enforces interoperability with its star rating. - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology. - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore adding a layer of reliability to the decentral web of ontologies.
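The SHACL continuous-integration testing mentioned above can also be tried locally before submitting a fix. The following is a minimal, hedged sketch (the toy ontology, the shape, and the use of the rdflib and pyshacl packages are illustrative assumptions, not Archivo's actual test suite): it checks that every owl:Class in a small ontology carries at least one rdfs:label.

    from rdflib import Graph
    from pyshacl import validate  # assumes the pyshacl package is installed

    # A toy ontology fragment with a single labelled class.
    ontology_ttl = """
    @prefix ex:   <http://example.org/onto#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex: a owl:Ontology .
    ex:Thing a owl:Class ; rdfs:label "Thing" .
    """

    # A small SHACL shape: every owl:Class must have at least one rdfs:label.
    shapes_ttl = """
    @prefix sh:   <http://www.w3.org/ns/shacl#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix exs:  <http://example.org/shapes#> .

    exs:ClassLabelShape a sh:NodeShape ;
        sh:targetClass owl:Class ;
        sh:property [ sh:path rdfs:label ; sh:minCount 1 ] .
    """

    data_graph = Graph().parse(data=ontology_ttl, format="turtle")
    shacl_graph = Graph().parse(data=shapes_ttl, format="turtle")

    conforms, _report_graph, report_text = validate(data_graph, shacl_graph=shacl_graph)
    print(conforms)      # True for this toy ontology
    print(report_text)   # human-readable validation report
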
  20. Bredemeier, W.: Trend des Jahrzehnts 2011 - 2020 : Die Entfaltung und Degeneration des Social Web (2021) 0.00
    0.0041242572 = product of:
      0.032994058 = sum of:
        0.032994058 = weight(_text_:web in 293) [ClassicSimilarity], result of:
          0.032994058 = score(doc=293,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.25239927 = fieldWeight in 293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=293)
      0.125 = coord(1/8)
    

Languages

  • e 22
  • d 21

Types

  • a 33
  • p 3