Search (60 results, page 1 of 3)

  • year_i:[2020 TO 2030}
  • type_ss:"el"
  1. Option für Metager als Standardsuchmaschine, Suchmaschine nach dem Peer-to-Peer-Prinzip (2021) 0.02
    0.0198628 = product of:
      0.099314004 = sum of:
        0.033407938 = weight(_text_:kommunikation in 431) [ClassicSimilarity], result of:
          0.033407938 = score(doc=431,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.22716287 = fieldWeight in 431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.03125 = fieldNorm(doc=431)
        0.06590606 = weight(_text_:schutz in 431) [ClassicSimilarity], result of:
          0.06590606 = score(doc=431,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.31906208 = fieldWeight in 431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.03125 = fieldNorm(doc=431)
      0.2 = coord(2/10)
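    The tree above is standard Lucene ClassicSimilarity explain output: each term's weight is queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm (tf = sqrt(termFreq)), and the document score multiplies the summed matching weights by the coordination factor. A minimal sketch that reproduces the numbers for result 1 from the values in the tree (Python is assumed here purely for illustration; it is not part of the search application):
    
    import math
    
    def term_weight(freq, idf, query_norm, field_norm):
        # ClassicSimilarity: weight = queryWeight * fieldWeight
        query_weight = idf * query_norm                     # idf * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) * idf * fieldNorm
        return query_weight * field_weight
    
    QUERY_NORM = 0.028611459  # queryNorm from the tree above
    
    # doc 431: "kommunikation" (idf 5.140109) and "schutz" (idf 7.2195506), freq 2.0 each
    w_kommunikation = term_weight(2.0, 5.140109, QUERY_NORM, 0.03125)
    w_schutz = term_weight(2.0, 7.2195506, QUERY_NORM, 0.03125)
    
    score = 0.2 * (w_kommunikation + w_schutz)  # coord(2/10) * sum of matching weights
    print(round(score, 7))                      # 0.0198628, the displayed score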
    
    Content
    On the Volla Phone, too, it will soon be possible to choose MetaGer as the default search engine. The Volla Phone is a product of "Hallo Welt Systeme UG" in Remscheid. The smartphone's developers pursue the approach of claiming as little of the user's attention as possible: technology should not distract or push itself into the foreground, but remain a mere tool in the background. Features such as detailed privacy settings, a log-free VPN, and open-source apps from an alternative app store additionally protect the user's privacy - entirely without Google services. Through the partnership with MetaGer, Volla Phone users can protect their privacy in the area of search as well. More at: https://suma-ev.de/mit-metager-auf-dem-volla-phone-suchen/
    YaCy: a search engine based on the peer-to-peer principle. YaCy is a decentralized, free search engine. Its special feature: the free search engine does not run on the central servers of a single operator but works on the peer-to-peer (P2P) principle. This rests on YaCy users locally indexing the web pages they visit on their own computers. Each user thereby "crawls" a small index of their own, which they can share through communication with other YaCy peers. The software ensures that the small decentralized crawlers of individual users ultimately yield a global combined index; the more users take part in this decentralized search, the larger the shared index to which each individual user then has access (a toy sketch of the principle follows below). YaCy recently joined the set of search engines we query, so we are now also part of the search engine's index.
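    The following toy model (an illustrative assumption, not YaCy's actual code or data structures) shows how a global index can emerge as the union of small per-peer indexes:
    
    from collections import defaultdict
    
    class Peer:
        def __init__(self):
            self.index = defaultdict(set)  # term -> URLs, the peer's small local index
    
        def crawl(self, url, text):
            # Each user locally indexes the pages they visit
            for term in text.lower().split():
                self.index[term].add(url)
    
    def merge(peers):
        # The global combined index emerges as the union of all local indexes
        combined = defaultdict(set)
        for peer in peers:
            for term, urls in peer.index.items():
                combined[term] |= urls
        return combined
    
    a, b = Peer(), Peer()
    a.crawl("https://example.org/one", "decentralized free search engine")
    b.crawl("https://example.org/two", "peer to peer search")
    print(sorted(merge([a, b])["search"]))  # both peers contribute postings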
  2. Molor-Erdene, B.: Schutz der Privatsphäre oder der Gesundheit? (2020) 0.02
    0.018641049 = product of:
      0.18641049 = sum of:
        0.18641049 = weight(_text_:schutz in 5821) [ClassicSimilarity], result of:
          0.18641049 = score(doc=5821,freq=4.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.9024438 = fieldWeight in 5821, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0625 = fieldNorm(doc=5821)
      0.1 = coord(1/10)
    
    Source
    https://www.heise.de/tp/features/Schutz-der-Privatsphaere-oder-der-Gesundheit-4695908.html?view=print
  3. Krekeler, H.: Blockchain : Anwendungen im Dokumentenmanagement (2021) 0.01
    0.013181212 = product of:
      0.13181213 = sum of:
        0.13181213 = weight(_text_:schutz in 200) [ClassicSimilarity], result of:
          0.13181213 = score(doc=200,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.63812417 = fieldWeight in 200, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0625 = fieldNorm(doc=200)
      0.1 = coord(1/10)
    
    Abstract
    In business practice, the Documentchain (https://de.documentchain.org), a decentralized blockchain developed specifically for document management, offers the possibility of permanently depositing encrypted descriptions and hash values of a document file, together with a timestamp, in the distributed database and of checking them against the original document later. In this way, proof is provided of the date since which a document has existed. This opens up a wide range of possibilities, for it is not only a matter of meeting retention requirements but, in particular, of protecting copyright. (A minimal sketch of the idea follows below.)
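    A minimal sketch of the proof-of-existence idea (illustrative Python; the Documentchain's actual record format, encryption, and API are not modeled here): hash the document, deposit hash and timestamp, and later compare a fresh hash of the original against the deposited one.
    
    import hashlib
    import time
    
    def fingerprint(path):
        # SHA-256 hash value of the document file
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    
    # On the Documentchain this record would be written to the distributed
    # ledger; a plain dict stands in for it in this sketch.
    record = {"hash": fingerprint("contract.pdf"), "timestamp": int(time.time())}
    
    def verify(path, record):
        # The document is proven unchanged since the recorded time
        # exactly when the hashes match.
        return fingerprint(path) == record["hash"]
    
    print(verify("contract.pdf", record))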
  4. Räwel, J.: Automatisierte Kommunikation (2023) 0.01
    0.01181149 = product of:
      0.1181149 = sum of:
        0.1181149 = weight(_text_:kommunikation in 909) [ClassicSimilarity], result of:
          0.1181149 = score(doc=909,freq=16.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.8031421 = fieldWeight in 909, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=909)
      0.1 = coord(1/10)
    
    Content
    In the social sciences there are two fundamentally different views of what communication is. Following everyday understanding, and therefore dominant in the social sciences, "action-theoretical" conceptions of communication assume that it is instrumental in character. It is human beings, in their physical-psychological compactness, who exchange information by means of communication, whether in spoken or written form. On this view, those communicating are understood reciprocally as senders and receivers of information, and communication serves the more or less successful transfer of information from person to person. Paradigmatically distinct from this are "systems-theoretical" conceptions of communication, as proposed above all by the sociologist Niklas Luhmann, who died in 1998. This paradigm holds that communication is characterized by a life of its own: a recursive dynamic that limits the ability of those communicating to steer or influence it. On this conception, individual consciousness - with its own dynamics of thought - lies in the environment of communication systems and can merely irritate them by means of language, but cannot control or determine them, if only because a communication system, for instance a conversation as an "interaction system", involves at least two conscious systems, each with its own distinct dynamics of thought.
    Source
    https://www.telepolis.de/features/Automatisierte-Kommunikation-7520683.html?seite=all
  5. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    0.008877646 = product of:
      0.04438823 = sum of:
        0.03786882 = product of:
          0.11360646 = sum of:
            0.11360646 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.11360646 = score(doc=5669,freq=2.0), product of:
                0.24256827 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.028611459 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 5669) [ClassicSimilarity], result of:
              0.019558229 = score(doc=5669,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  6. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.01
    0.008474903 = product of:
      0.042374514 = sum of:
        0.033329446 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.033329446 = score(doc=40,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.35694647 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.009045068 = product of:
          0.027135205 = sum of:
            0.027135205 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.027135205 = score(doc=40,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  7. Vogt, T.: ¬Die Transformation des renommierten Informationsservices zbMATH zu einer Open Access-Plattform für die Mathematik steht vor dem Abschluss. (2020) 0.01
    0.008015121 = product of:
      0.080151215 = sum of:
        0.080151215 = product of:
          0.24045363 = sum of:
            0.24045363 = weight(_text_:c3 in 31) [ClassicSimilarity], result of:
              0.24045363 = score(doc=31,freq=2.0), product of:
                0.2789897 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.028611459 = queryNorm
                0.8618728 = fieldWeight in 31, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.0625 = fieldNorm(doc=31)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Content
    "Mit Beginn des Jahres 2021 wird der umfassende internationale Informationsservice zbMATH in eine Open Access-Plattform überführt. Dann steht dieser bislang kostenpflichtige Dienst weltweit allen Interessierten kostenfrei zur Verfügung. Die Änderung des Geschäftsmodells ermöglicht, die meisten Informationen und Daten von zbMATH für Forschungszwecke und zur Verknüpfung mit anderen nicht-kommerziellen Diensten frei zu nutzen, siehe: https://www.mathematik.de/dmv-blog/2772-transformation-von-zbmath-zu-einer-open-access-plattform-f%C3%BCr-die-mathematik-kurz-vor-dem-abschluss."
  8. Matt, A.; Schaber, E.; Violet, B.: Vielfältige Formate und dynamische Umsetzung : Mathematik-Kommunikation zu Künstlicher Intelligenz bei IMAGINARY (2023) 0.01
    0.005846389 = product of:
      0.05846389 = sum of:
        0.05846389 = weight(_text_:kommunikation in 891) [ClassicSimilarity], result of:
          0.05846389 = score(doc=891,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.39753503 = fieldWeight in 891, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0546875 = fieldNorm(doc=891)
      0.1 = coord(1/10)
    
  9. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.01
    0.0057136193 = product of:
      0.057136193 = sum of:
        0.057136193 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.057136193 = score(doc=79,freq=36.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.1 = coord(1/10)
    
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because the Semantic Web integrates data from disparate sources and thereby becomes more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps broaden the potential of data visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into semantic web technologies and, in addition, to elucidate the issues as well as the solutions regarding the Semantic Web. The chapter highlights the semantic web architecture in detail while also comparing it with the traditional search system, and it classifies the semantic web architecture into three major pillars: RDF, Ontology, and XML (a small RDF sketch follows this record). Moreover, it describes different semantic web tools used in the framework and technology and illustrates different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also illustrates the solutions.
    Theme
    Semantic Web
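    As a small illustration of the RDF pillar named in the abstract above (a sketch under the assumption of the Python rdflib library; nothing here is prescribed by the chapter itself):
    
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, RDFS
    
    EX = Namespace("http://example.org/")
    g = Graph()
    
    # RDF models data as subject-predicate-object triples
    g.add((EX.SemanticWeb, RDF.type, EX.Technology))
    g.add((EX.SemanticWeb, RDFS.label, Literal("Semantic Web")))
    g.add((EX.SemanticWeb, EX.extends, URIRef("https://www.w3.org/")))
    
    print(g.serialize(format="turtle"))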
  10. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 39) [ClassicSimilarity], result of:
          0.020200694 = score(doc=39,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=39)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 39) [ClassicSimilarity], result of:
              0.023469873 = score(doc=39,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=39)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions (sketched in outline after this record). This task ran for one year with the participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
    Date
    17.11.2020 11:29:00
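    A hedged sketch of the linked-data aggregation idea from the abstract above (the provider URLs are hypothetical, rdflib is an assumed tool, and the real Europeana pipeline is far richer than a simple graph merge):
    
    from rdflib import Graph
    
    # Hypothetical provider endpoints publishing their metadata as linked data
    providers = [
        "https://provider-a.example/metadata.ttl",
        "https://provider-b.example/metadata.ttl",
    ]
    
    aggregate = Graph()
    for url in providers:
        aggregate += Graph().parse(url, format="turtle")  # harvest and merge
    
    print(len(aggregate), "triples aggregated")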
  11. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.00
    0.004082007 = product of:
      0.04082007 = sum of:
        0.04082007 = weight(_text_:web in 52) [ClassicSimilarity], result of:
          0.04082007 = score(doc=52,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43716836 = fieldWeight in 52, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=52)
      0.1 = coord(1/10)
    
    Abstract
    While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality-control, mappings - has not yet surfaced, and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, which discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.
  12. Advanced online media use (2023) 0.00
    0.0038090795 = product of:
      0.038090795 = sum of:
        0.038090795 = weight(_text_:web in 954) [ClassicSimilarity], result of:
          0.038090795 = score(doc=954,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4079388 = fieldWeight in 954, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=954)
      0.1 = coord(1/10)
    
    Content
    "1. Use a range of different media 2. Access paywalled media content 3. Use an advertising and tracking blocker 4. Use alternatives to Google Search 5. Use alternatives to YouTube 6. Use alternatives to Facebook and Twitter 7. Caution with Wikipedia 8. Web browser, email, and internet access 9. Access books and scientific papers 10. Access deleted web content"
  13. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.00
    0.0034988632 = product of:
      0.03498863 = sum of:
        0.03498863 = weight(_text_:web in 38) [ClassicSimilarity], result of:
          0.03498863 = score(doc=38,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.37471575 = fieldWeight in 38, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.1 = coord(1/10)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See: DOI: 10.1007/978-3-030-62466-8_28.
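    As an illustration of a machine-readable citation of the kind the abstract above describes (a sketch using the cito:cites property from the SPAR ontologies, on which the OpenCitations Data Model builds; rdflib and the DOIs are assumptions made purely for illustration):
    
    from rdflib import Graph, Namespace, URIRef
    
    CITO = Namespace("http://purl.org/spar/cito/")
    
    citing = URIRef("https://doi.org/10.1234/example-citing")  # placeholder DOI
    cited = URIRef("https://doi.org/10.5678/example-cited")    # placeholder DOI
    
    g = Graph()
    g.add((citing, CITO.cites, cited))  # the citation as a subject-predicate-object triple
    print(g.serialize(format="nt"))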
  14. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.00
    0.0033667826 = product of:
      0.033667825 = sum of:
        0.033667825 = weight(_text_:web in 1084) [ClassicSimilarity], result of:
          0.033667825 = score(doc=1084,freq=8.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.36057037 = fieldWeight in 1084, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1084)
      0.1 = coord(1/10)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
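    A minimal sketch of triggering 'Save Page Now' from code (assuming the public https://web.archive.org/save/ endpoint and the Python requests library; the article studies the tool rather than prescribing this usage, and the header behavior noted below is an assumption):
    
    import requests
    
    def save_page_now(url):
        # Ask the Wayback Machine to take a snapshot of the given URL
        resp = requests.get("https://web.archive.org/save/" + url, timeout=120)
        resp.raise_for_status()
        # The Content-Location header usually points at the new snapshot
        return resp.headers.get("Content-Location")
    
    print(save_page_now("https://example.org/"))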
  15. Weßels, D.: ChatGPT - ein Meilenstein der KI-Entwicklung (2022) 0.00
    0.0033407938 = product of:
      0.033407938 = sum of:
        0.033407938 = weight(_text_:kommunikation in 929) [ClassicSimilarity], result of:
          0.033407938 = score(doc=929,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.22716287 = fieldWeight in 929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.03125 = fieldNorm(doc=929)
      0.1 = coord(1/10)
    
    Content
    "Seit dem 30. November 2022 ist meine Welt - und die vieler Bildungsexpertinnen und Bildungsexperten - gefühlt eine andere Welt, die uns in eine "Neuzeit" führt, von der wir noch nicht wissen, ob wir sie lieben oder fürchten sollen. Der Ableger und Prototyp ChatGPT des derzeit (zumindest in der westlichen Welt) führenden generativen KI-Sprachmodells GPT-3 von OpenAI wurde am 30. November veröffentlicht und ist seit dieser Zeit für jeden frei zugänglich und kostenlos. Was zunächst als unspektakuläre Ankündigung von OpenAI anmutete, nämlich das seit 2020 bereits verfügbare KI-Sprachmodell GPT-3 nun in leicht modifizierter Version (GPT-3,5) als Chat-Variante für die Echtzeit-Kommunikation bereitzustellen, entpuppt sich in der Anwendung - aus Sicht der Nutzerinnen und Nutzer - als Meilenstein der KI-Entwicklung. Fakt ist, dass die Leistungsvielfalt und -stärke von ChatGPT selbst IT-Expertinnen und -Experten überrascht hat und sie zu einer Fülle von Superlativen in der Bewertung veranlasst, jedoch immer in Kombination mit Hinweisen zur fehlenden Faktentreue und Verlässlichkeit derartiger generativer KI-Modelle. Mit WebGPT von OpenAI steht aber bereits ein Forschungsprototyp bereit, der mit integrierter Internetsuchfunktion die "Halluzinationen" aktueller GPT-Varianten ausmerzen könnte. Für den Bildungssektor stellt sich die Frage, wie sich das Lehren und Lernen an Hochschulen (und nicht nur dort) verändern wird, wenn derartige KI-Werkzeuge omnipräsent sind und mit ihrer Hilfe nicht nur die Hausarbeit "per Knopfdruck" erstellt werden kann. Beeindruckend ist zudem die fachliche Bandbreite von ChatGPT, siehe den Tweet von @davidtsong, der ChatGPT dem Studierfähigkeitstest SAT unterzogen hat."
  16. Bärnreuther, K.: Informationskompetenz-Vermittlung für Schulklassen mit Wikipedia und dem Framework Informationskompetenz in der Hochschulbildung (2021) 0.00
    0.0031152412 = product of:
      0.031152412 = sum of:
        0.031152412 = product of:
          0.04672862 = sum of:
            0.023469873 = weight(_text_:29 in 299) [ClassicSimilarity], result of:
              0.023469873 = score(doc=299,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=299)
            0.023258746 = weight(_text_:22 in 299) [ClassicSimilarity], result of:
              0.023258746 = score(doc=299,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=299)
          0.6666667 = coord(2/3)
      0.1 = coord(1/10)
    
    Date
    30. 6.2021 16:29:52
    Source
    o-bib: Das offene Bibliotheksjournal. 8(2021) Nr.2, S.1-22
  17. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.00
    0.0023567479 = product of:
      0.023567477 = sum of:
        0.023567477 = weight(_text_:web in 5719) [ClassicSimilarity], result of:
          0.023567477 = score(doc=5719,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 5719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5719)
      0.1 = coord(1/10)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of the citation networks embedded in ordinary complex Boolean searches.
  18. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    0.0023567479 = product of:
      0.023567477 = sum of:
        0.023567477 = weight(_text_:web in 53) [ClassicSimilarity], result of:
          0.023567477 = score(doc=53,freq=8.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 53, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.1 = coord(1/10)
    
    Content
    # Community action on individual ontologies We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version - Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo. The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.
    # Community action on all ontologies (quality, FAIRness, conformity) Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies. 1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on Github; we believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well. 2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aids, mapping management tools and quality checks.
    # How does Archivo work? Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus. # Archivo's mission Archivo's mission is to improve the FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable, and enforces interoperability with its star rating. - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology. - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is no longer reachable, adding a layer of reliability to the decentralized web of ontologies.
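    A hedged sketch of the kind of SHACL continuous-integration check described above (pyshacl is an assumed tool, not named by the source; the shape - every owl:Class must carry at least one rdfs:label - is a made-up example, not one of Archivo's actual tests):
    
    from pyshacl import validate
    from rdflib import Graph
    
    # Made-up shape: require an rdfs:label on every owl:Class
    shapes = Graph().parse(data="""
        @prefix sh:   <http://www.w3.org/ns/shacl#> .
        @prefix owl:  <http://www.w3.org/2002/07/owl#> .
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
        @prefix ex:   <http://example.org/> .
    
        ex:ClassLabelShape a sh:NodeShape ;
            sh:targetClass owl:Class ;
            sh:property [ sh:path rdfs:label ; sh:minCount 1 ] .
    """, format="turtle")
    
    ontology = Graph().parse("ontology.ttl", format="turtle")  # the ontology under test
    
    conforms, _, report = validate(ontology, shacl_graph=shapes)
    print(conforms)   # True when every class carries a label
    print(report)     # human-readable validation report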
  19. Bredemeier, W.: Trend des Jahrzehnts 2011 - 2020 : Die Entfaltung und Degeneration des Social Web (2021) 0.00
    0.0023567479 = product of:
      0.023567477 = sum of:
        0.023567477 = weight(_text_:web in 293) [ClassicSimilarity], result of:
          0.023567477 = score(doc=293,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=293)
      0.1 = coord(1/10)
    
  20. Christensen, A.: Wissenschaftliche Literatur entdecken : was bibliothekarische Discovery-Systeme von der Konkurrenz lernen und was sie ihr zeigen können (2022) 0.00
    0.0023567479 = product of:
      0.023567477 = sum of:
        0.023567477 = weight(_text_:web in 833) [ClassicSimilarity], result of:
          0.023567477 = score(doc=833,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=833)
      0.1 = coord(1/10)
    
    Abstract
    In recent years, the range of academic search engines for searching scholarly literature in all fields has grown strongly, complementing popular commercial offerings such as Web of Science or Scopus. The article sets out the essential differences between library discovery systems and academic search engines such as Base, Dimensions, or Open Alex and discusses ways in which the two can benefit from each other. These development perspectives concern aspects such as the contextualization of knowledge, data modeling, automatic data enrichment, and the delineation of search spaces.

Languages

  • d 45
  • e 15

Types

  • a 46
  • p 1