Search (213 results, page 2 of 11)

  • Filter: year_i:[2020 TO 2030}
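  The active filter uses Lucene range syntax: the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive, so year_i:[2020 TO 2030} matches publication years 2020 through 2029. As a rough sketch, such a filter could be passed to a Solr-style backend as shown below; the endpoint, core name and query term are assumptions for illustration, not details taken from this page.

    import requests

    # Hypothetical Solr select endpoint; the actual backend of this listing is not documented here.
    SOLR_SELECT = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "seite",                    # placeholder query term (one of the matched terms below)
        "fq": "year_i:[2020 TO 2030}",   # [ = inclusive lower bound, } = exclusive upper bound
        "rows": 20,                      # 20 hits per page, as in this listing
        "start": 20,                     # page 2 of the results, i.e. skip the first 20 hits
        "wt": "json",
    }

    response = requests.get(SOLR_SELECT, params=params, timeout=10)
    print(response.json()["response"]["numFound"])  # total number of matches, e.g. 213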
  1. Schleim, S.: Wissenschaft und Religion : Konflikt oder Kooperation? (2021) 0.00
    Score 0.004529867 (ClassicSimilarity): term _text_:seite, tf 1.414 (freq 2.0), idf 5.601, fieldNorm 0.0625, queryNorm 0.0294; coord 1/2 × 1/9. A worked re-computation of this score appears after the result list.
    
    Source
    https://www.heise.de/tp/features/Wissenschaft-und-Religion-Konflikt-oder-Kooperation-6004714.html?seite=all
  2. Schleim, S.: Reduktionismus und die Erklärung von Alltagsphänomenen (2021) 0.00
    Score 0.004529867 (ClassicSimilarity): term _text_:seite, tf 1.414 (freq 2.0), idf 5.601, fieldNorm 0.0625, queryNorm 0.0294; coord 1/2 × 1/9
    
    Content
     Predecessor published as: 'Wissenschaft und Religion: Konflikt oder Kooperation?', available at: https://www.heise.de/tp/features/Wissenschaft-und-Religion-Konflikt-oder-Kooperation-6004714.html?seite=all. See also: https://heise.de/-6009962.
  3. Schleim, S.: Mensch in Körper und Gesellschaft : was heißt Freiheit? (2020) 0.00
    Score 0.004529867 (ClassicSimilarity): term _text_:seite, tf 1.414 (freq 2.0), idf 5.601, fieldNorm 0.0625, queryNorm 0.0294; coord 1/2 × 1/9
    
    Source
    https://www.heise.de/tp/features/Mensch-in-Koerper-und-Gesellschaft-Was-heisst-Freiheit-4660527.html?seite=all
  4. Lieb, W.: Willkommen im Überwachungskapitalismus (2021) 0.00
    Score 0.004529867 (ClassicSimilarity): term _text_:seite, tf 1.414 (freq 2.0), idf 5.601, fieldNorm 0.0625, queryNorm 0.0294; coord 1/2 × 1/9
    
    Source
    https://www.heise.de/tp/features/Willkommen-im-Ueberwachungskapitalismus-6139001.html?seite=all
  5. Lieb, W.: Vorsicht vor den asozialen Medien! (2021) 0.00
    Score 0.004529867 (ClassicSimilarity): term _text_:seite, tf 1.414 (freq 2.0), idf 5.601, fieldNorm 0.0625, queryNorm 0.0294; coord 1/2 × 1/9
    
    Source
    https://www.heise.de/tp/features/Vorsicht-vor-den-asozialen-Medien-6140715.html?seite=all
  6. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.00
    Score 0.004324355 (ClassicSimilarity): term _text_:3a, tf 1.414 (freq 2.0), idf 8.478, fieldNorm 0.0391, queryNorm 0.0294; coord 1/3 × 1/9
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  7. Datenschutz-Folgenabschätzung (DSFA) für die Corona-App (2020) 0.00
    Score 0.0043214173 (ClassicSimilarity): term _text_:bewertung, tf 1.414 (freq 2.0), idf 6.317, fieldNorm 0.0469, queryNorm 0.0294; coord 1/2 × 1/9
    
    Abstract
     As a constructive contribution to the discussion, FIfF presents a data-protection assessment of the planned corona tracing systems.
  8. Beck, C.: ¬Die Qualität der Fremddatenanreicherung FRED (2021) 0.00
    Score 0.0043214173 (ClassicSimilarity): term _text_:bewertung, tf 1.414 (freq 2.0), idf 6.317, fieldNorm 0.0469, queryNorm 0.0294; coord 1/2 × 1/9
    
    Abstract
     The Fremddatenanreicherung (FRED) project of the Zentralbibliothek Zürich and the university libraries of Basel and Bern has already been presented in several talks and discussed in the literature (Bucher et al. 2018), although only the project itself, statistical figures on the quantitative data enrichment, and the cooperation within the project, i.e. in implementing FRED, were described. Building on this, the present contribution attempts to examine the quality of this enrichment with external data by means of a subjective description and assessment. In closing, it also raises a few questions about the future use of FRED in the completely changed Swiss library landscape with the Swiss Library Service Platform (SLSP) from 2021 onwards. The study is based on a sample of printed books from two social-science disciplines, but constitutes only a kind of observation whose results are not representative of data enrichment with FRED. Not covered here is the enrichment of e-book data that was carried out for a time in Zurich, Basel and Bern. Nor is the quality of the intellectual subject indexing in the library networks from which FRED draws its data at issue. The focus is solely, but at least, on the results achieved with FRED in the intellectual subject-indexing environment of spring 2020.
  9. Sühl-Strohmenger, W.: Wissenschaftliche Bibliotheken als Orte des Schreibens : Infrastrukturen, Ressourcen, Services (2021) 0.00
    Score 0.0043214173 (ClassicSimilarity): term _text_:bewertung, tf 1.414 (freq 2.0), idf 6.317, fieldNorm 0.0469, queryNorm 0.0294; coord 1/2 × 1/9
    
    Abstract
     This textbook systematically demonstrates, and illustrates with various writing scenarios, the close connection between academic writing in the university library and the key qualification of information literacy. Successfully completing a student term paper, a final thesis (bachelor's, master's) or a dissertation requires sound knowledge in handling research-relevant information and command of the skills needed to search for, select, evaluate and process information. The concept of research-based learning as pursued at universities plays a role here, as do the threshold concepts of information literacy, which emphasize the dynamic connection between information practice and the research process in the disciplines. The resources and services that the university library provides to promote and support academic writing are also covered.
  10. Seidler-de Alwis, R.: Informationsrecherche (2023) 0.00
    Score 0.0043214173 (ClassicSimilarity): term _text_:bewertung, tf 1.414 (freq 2.0), idf 6.317, fieldNorm 0.0469, queryNorm 0.0294; coord 1/2 × 1/9
    
    Abstract
     When searching for information, e.g. for a presentation, a specific research question, or when familiarizing oneself with a new field of work, one can quickly be overwhelmed by the endless flood of information. A methodical and structured approach is therefore needed to obtain relevant and reliable information. The information search process starts with recognizing the information need and developing a corresponding search strategy. This means sensibly narrowing down the topic, defining search terms, and determining the type and form of the information required. In electronic information retrieval, the first step is to optimize the search input, e.g. via keyword or subject-heading searches and with the help of filters, Boolean operators, truncation characters and other search functions. For quality assurance in information retrieval, especially in the selection, use and evaluation of information resources, knowledge of sources, source selection and source evaluation are central topics, and they form the main focus of this contribution.
  11. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.00
    Score 0.004135637 (ClassicSimilarity): terms _text_:web (tf 1.414, idf 3.264) and _text_:22 (tf 1.414, idf 3.502), fieldNorm 0.0391, queryNorm 0.0294; coord 1/2 per term, 2/9 overall
    
    Abstract
     Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically superimposed and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting, etc. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. An aim of this research is characterizing such culturally relevant relationships and organizing them in a vocabulary. Use cases or examples of relationships between objects suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM and RiC-CM were collected and used as examples of, or inspiration for, culturally relevant relationships. The relationships identified were collated and compared to identify those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as an LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
  12. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.00
    Score 0.004135637 (ClassicSimilarity): terms _text_:web (tf 1.414, idf 3.264) and _text_:22 (tf 1.414, idf 3.502), fieldNorm 0.0391, queryNorm 0.0294; coord 1/2 per term, 2/9 overall
    
    Abstract
     Purpose This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden. Design/methodology/approach This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol. Findings Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study. Originality/value There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22
  13. Kang, M.: Dual paths to continuous online knowledge sharing : a repetitive behavior perspective (2020) 0.00
    Score 0.004135637 (ClassicSimilarity): terms _text_:web (tf 1.414, idf 3.264) and _text_:22 (tf 1.414, idf 3.502), fieldNorm 0.0391, queryNorm 0.0294; coord 1/2 per term, 2/9 overall
    
    Abstract
     Purpose Continuous knowledge sharing by active users, who are highly active in answering questions, is crucial to the sustenance of social question-and-answer (Q&A) sites. The purpose of this paper is to examine such knowledge sharing considering reason-based elaborate decision and habit-based automated cognitive processes. Design/methodology/approach To verify the research hypotheses, survey data on subjective intentions and web-crawled data on objective behavior are utilized. The sample size is 337, with a response rate of 27.2 percent. Negative binomial and hierarchical linear regressions are used given the skewed distribution of the dependent variable (i.e. the number of answers). Findings Both elaborate decision (linking satisfaction, intentions and continuance behavior) and automated cognitive processes (linking past and continuance behavior) are significant and substitutable. Research limitations/implications By measuring both subjective intentions and objective behavior, it verifies a detailed mechanism linking continuance intentions, past behavior and continuous knowledge sharing. The significant influence of automated cognitive processes implies that online knowledge sharing is habitual for active users. Practical implications Understanding that online knowledge sharing is habitual is imperative to maintaining continuous knowledge sharing by active users. Knowledge sharing trends should be monitored to check if the frequency of sharing decreases. Social Q&A sites should intervene to restore knowledge sharing behavior through personalized incentives. Originality/value This is the first study utilizing both subjective intentions and objective behavior data in the context of online knowledge sharing. It also introduces habit-based automated cognitive processes to this context. This approach extends the current understanding of continuous online knowledge sharing behavior.
    Date
    20. 1.2015 18:30:22
  14. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.00
    Score 0.004135637 (ClassicSimilarity): terms _text_:web (tf 1.414, idf 3.264) and _text_:22 (tf 1.414, idf 3.502), fieldNorm 0.0391, queryNorm 0.0294; coord 1/2 per term, 2/9 overall
    
    Abstract
    Purpose The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  15. Thelwall, M.; Thelwall, S.: ¬A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.00
    Score 0.004135637 (ClassicSimilarity): terms _text_:web (tf 1.414, idf 3.264) and _text_:22 (tf 1.414, idf 3.502), fieldNorm 0.0391, queryNorm 0.0294; coord 1/2 per term, 2/9 overall
    
    Abstract
    Purpose Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  16. Hoeber, O.; Harvey, M.; Dewan Sagar, S.A.; Pointon, M.: ¬The effects of simulated interruptions on mobile search tasks (2022) 0.00
    Score 0.004135637 (ClassicSimilarity): terms _text_:web (tf 1.414, idf 3.264) and _text_:22 (tf 1.414, idf 3.502), fieldNorm 0.0391, queryNorm 0.0294; coord 1/2 per term, 2/9 overall
    
    Abstract
    While it is clear that using a mobile device can interrupt real-world activities such as walking or driving, the effects of interruptions on mobile device use have been under-studied. We are particularly interested in how the ambient distraction of walking while using a mobile device, combined with the occurrence of simulated interruptions of different levels of cognitive complexity, affect web search activities. We have established an experimental design to study how the degree of cognitive complexity of simulated interruptions influences both objective and subjective search task performance. In a controlled laboratory study (n = 27), quantitative and qualitative data were collected on mobile search performance, perceptions of the interruptions, and how participants reacted to the interruptions, using a custom mobile eye-tracking app, a questionnaire, and observations. As expected, more cognitively complex interruptions resulted in increased overall task completion times and higher perceived impacts. Interestingly, the effect on the resumption lag or the actual search performance was not significant, showing the resiliency of people to resume their tasks after an interruption. Implications from this study enhance our understanding of how interruptions objectively and subjectively affect search task performance, motivating the need for providing explicit mobile search support to enable recovery from interruptions.
    Date
    3. 5.2022 13:22:33
  17. Zhang, Y.; Liu, J.; Song, S.: ¬The design and evaluation of a nudge-based interface to facilitate consumers' evaluation of online health information credibility (2023) 0.00
    Score 0.004135637 (ClassicSimilarity): terms _text_:web (tf 1.414, idf 3.264) and _text_:22 (tf 1.414, idf 3.502), fieldNorm 0.0391, queryNorm 0.0294; coord 1/2 per term, 2/9 overall
    
    Abstract
    Evaluating the quality of online health information (OHI) is a major challenge facing consumers. We designed PageGraph, an interface that displays quality indicators and associated values for a webpage, based on credibility evaluation models, the nudge theory, and existing empirical research concerning professionals' and consumers' evaluation of OHI quality. A qualitative evaluation of the interface with 16 participants revealed that PageGraph rendered the information and presentation nudges as intended. It provided the participants with easier access to quality indicators, encouraged fresh angles to assess information credibility, provided an evaluation framework, and encouraged validation of initial judgments. We then conducted a quantitative evaluation of the interface involving 60 participants using a between-subject experimental design. The control group used a regular web browser and evaluated the credibility of 12 preselected webpages, whereas the experimental group evaluated the same webpages with the assistance of PageGraph. PageGraph did not significantly influence participants' evaluation results. The results may be attributed to the insufficiency of the saliency and structure of the nudges implemented and the webpage stimuli's lack of sensitivity to the intervention. Future directions for applying nudges to support OHI evaluation were discussed.
    Date
    22. 6.2023 18:18:34
  18. Donath, A.: Nutzungsverbote für ChatGPT (2023) 0.00
    Score 0.0040742713 (ClassicSimilarity): term _text_:bewertung, tf 2.000 (freq 4.0), idf 6.317, fieldNorm 0.03125, queryNorm 0.0294; coord 1/2 × 1/9
    
    Content
     Billion-dollar valuation for ChatGPT: OpenAI, which operates the chatbot ChatGPT, is in talks about a share sale, according to a report in the Wall Street Journal. The WSJ reported that the possible share sale would raise OpenAI's valuation to 29 billion US dollars. Concerns in Brandenburg as well: Using ChatGPT, the Brandenburg SPD member of parliament Erik Stohn submitted a Kleine Anfrage (minor parliamentary question) to the Brandenburg state parliament, asking how the state government ensures that students are assessed and graded fairly when texts are machine-generated. He also asked about measures taken to ensure that machine-generated texts cannot be used fraudulently by students in the assessment of coursework.
  19. Barth, T.: Digitalisierung und Lobby : Transhumanismus I (2020) 0.00
    Score 0.0039636334 (ClassicSimilarity): term _text_:seite, tf 1.414 (freq 2.0), idf 5.601, fieldNorm 0.0547, queryNorm 0.0294; coord 1/2 × 1/9
    
    Content
     See the sequel: Barth, T.: Inverse Panopticon: Digitalisierung & Transhumanismus [Transhumanismus II]. [25 January 2020]. Available at: https://www.heise.de/tp/features/Inverse-Panopticon-Digitalisierung-Transhumanismus-4645668.html?seite=all.
  20. Lieb, W.: Krise der Medien, Krise der Demokratie? (2021) 0.00
    Score 0.0039636334 (ClassicSimilarity): term _text_:seite, tf 1.414 (freq 2.0), idf 5.601, fieldNorm 0.0547, queryNorm 0.0294; coord 1/2 × 1/9
    
    Source
    https://www.heise.de/tp/features/Krise-der-Medien-Krise-der-Demokratie-6136952.html?seite=all
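  Each relevance value above follows Lucene's classic TF-IDF similarity: the per-term score is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord factors then penalize query clauses that found no match. As a cross-check, the sketch below re-computes the 0.004529867 score of result 1 from the components reported there; it is a plain re-computation of the displayed numbers under the ClassicSimilarity formulas, not the search engine's own code.

    import math

    # Components reported for result 1 (term "seite").
    freq = 2.0
    tf = math.sqrt(freq)                       # 1.4142135
    idf = 1 + math.log(44218 / (443 + 1))      # 5.601063 (docFreq=443, maxDocs=44218)
    field_norm = 0.0625
    query_norm = 0.02940506

    query_weight = idf * query_norm            # 0.16469958
    field_weight = tf * idf * field_norm       # 0.49506867
    term_score = query_weight * field_weight   # 0.081537604

    # coord: 1 of 2 clauses matched in the inner boolean query, 1 of 9 in the outer query.
    score = term_score * (1 / 2) * (1 / 9)
    print(f"{score:.9f}")                      # ~0.004529867

  The same arithmetic, with the respective tf, idf, fieldNorm and coord values, should reproduce every other score on this page; the two-term entries (results 11-17) sum two such term scores (each already halved by the inner coord 1/2) before applying the outer coord 2/9.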

Languages

  • e 140
  • d 72
  • pt 1

Types

  • a 191
  • el 56
  • m 7
  • p 5
  • s 2
  • x 2
  • A 1
  • EL 1