Search (472 results, page 1 of 24)

  • year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.19
    0.19299357 = product of:
      0.3087897 = sum of:
        0.039276958 = product of:
          0.11783087 = sum of:
            0.11783087 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.11783087 = score(doc=1000,freq=2.0), product of:
                0.25158808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.029675366 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.017459875 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
          0.017459875 = score(doc=1000,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.11783087 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.11783087 = score(doc=1000,freq=2.0), product of:
            0.25158808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029675366 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.11783087 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.11783087 = score(doc=1000,freq=2.0), product of:
            0.25158808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029675366 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.016391123 = weight(_text_:data in 1000) [ClassicSimilarity], result of:
          0.016391123 = score(doc=1000,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.625 = coord(5/8)
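    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight × fieldWeight, where tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm; the document score is the sum of these contributions scaled by the coordination factor (here 0.3087897 × 5/8 = 0.19299357). A minimal Python sketch reproducing the first term's weight (the same decomposition applies to every explanation tree in this result list):

    ```python
    import math

    def classic_similarity_term(freq, doc_freq, max_docs, query_norm, field_norm):
        """Recompute one term's contribution in a Lucene ClassicSimilarity
        explanation tree, as printed above for weight(_text_:3a in 1000)."""
        tf = math.sqrt(freq)                             # 1.4142135 = tf(freq=2.0)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 = idf(docFreq=24)
        query_weight = idf * query_norm                  # 0.25158808 = queryWeight
        field_weight = tf * idf * field_norm             # 0.46834838 = fieldWeight
        return query_weight * field_weight               # 0.11783087 = score

    print(classic_similarity_term(freq=2.0, doc_freq=24, max_docs=44218,
                                  query_norm=0.029675366, field_norm=0.0390625))
    ```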
    
    Abstract
    The construction of a thematically ordered thesaurus is presented, based on the subject headings of the Gemeinsame Normdatei (GND) and using the DDC notations contained in it. The top ordering level of this thesaurus is formed by the DDC subject groups of the Deutsche Nationalbibliothek (DNB). The thesaurus is built rule-based, using Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scientific publications by means of a computational-linguistic extractor that processes digital full texts: the extractor identifies keywords by comparing character strings against the labels in the thesaurus, ranks the hits by their relevance in the text, and returns the assigned subject groups in rank order. The underlying assumption is that the sought subject group is returned among the top ranks. The performance of the method is validated in a three-stage procedure. First, a gold standard is compiled from documents retrievable in the DNB's online catalogue, based on metadata and the findings of a brief inspection. The documents are distributed over 14 of the subject groups, with a lot size of 50 documents each. All documents are indexed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
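    The matching-and-ranking step described in this abstract can be pictured with a small sketch. The thesaurus labels, the frequency-based ranking, and the label-to-subject-group mapping below are illustrative assumptions, not the thesis's actual rules:

    ```python
    import re
    from collections import Counter

    # Hypothetical mini-thesaurus: label -> DDC subject group (DNB Sachgruppe)
    THESAURUS = {
        "thesaurus": "020 Library and information science",
        "normdatei": "020 Library and information science",
        "sparql": "004 Computer science",
    }

    def rank_subject_groups(fulltext):
        """Match thesaurus labels against a full text by string comparison,
        weight each hit by its frequency, and return subject groups in rank order."""
        tokens = re.findall(r"\w+", fulltext.lower())
        hits = Counter()
        for label, group in THESAURUS.items():
            freq = tokens.count(label)
            if freq:
                hits[group] += freq
        return [group for group, _ in hits.most_common()]

    # Validation assumption from the abstract: the sought group is among the top ranks.
    print(rank_subject_groups("Ein Thesaurus auf Basis der Normdatei, abgefragt per SPARQL."))
    ```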
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.12
    0.12372241 = product of:
      0.32992643 = sum of:
        0.04713235 = product of:
          0.14139704 = sum of:
            0.14139704 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.14139704 = score(doc=862,freq=2.0), product of:
                0.25158808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.029675366 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.14139704 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.14139704 = score(doc=862,freq=2.0), product of:
            0.25158808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029675366 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.14139704 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.14139704 = score(doc=862,freq=2.0), product of:
            0.25158808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029675366 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.375 = coord(3/8)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.05
    0.048186667 = product of:
      0.12849778 = sum of:
        0.025746442 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.025746442 = score(doc=79,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.059260778 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.059260778 = score(doc=79,freq=36.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.043490566 = weight(_text_:data in 79) [ClassicSimilarity], result of:
          0.043490566 = score(doc=79,freq=22.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.46347913 = fieldWeight in 79, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.375 = coord(3/8)
    
    Abstract
    With the tremendous growth of data volume and the data produced every second on millions of devices across the globe, there is a pressing need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and thereby becomes more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web broadens the potential of data visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into Semantic Web technologies and, in addition, to elucidate the issues as well as the solutions regarding the Semantic Web. The chapter highlights the Semantic Web architecture in detail while also comparing it with the traditional search system. It classifies the Semantic Web architecture into three major pillars, i.e., RDF, Ontology, and XML. Moreover, it describes different Semantic Web tools used in the framework and technology, and it attempts to illustrate different approaches of Semantic Web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
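    As a concrete illustration of the RDF pillar named in this abstract, here is a minimal sketch using the rdflib library; the namespace and resource names are invented, and the closing XML serialization ties together two of the three pillars:

    ```python
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")  # hypothetical namespace

    g = Graph()
    g.add((EX.SemanticWeb, RDF.type, EX.Technology))            # a typed resource
    g.add((EX.SemanticWeb, RDFS.label, Literal("Semantic Web")))
    g.add((EX.SemanticWeb, EX.extends, EX.WorldWideWeb))        # a machine-readable relation

    # RDF statements can be serialized as XML, another of the pillars named above
    print(g.serialize(format="xml"))
    ```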
    Series
    Lecture notes on data engineering and communications technologies book series; vol.32
    Source
    Data visualization and knowledge engineering. Eds. J. Hemanth, et al
    Theme
    Semantic Web
  4. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.04
    0.044521045 = product of:
      0.11872279 = sum of:
        0.054616455 = weight(_text_:wide in 1094) [ClassicSimilarity], result of:
          0.054616455 = score(doc=1094,freq=4.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.4153836 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.03628967 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
          0.03628967 = score(doc=1094,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.37471575 = fieldWeight in 1094, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.027816659 = weight(_text_:data in 1094) [ClassicSimilarity], result of:
          0.027816659 = score(doc=1094,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.29644224 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
      0.375 = coord(3/8)
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
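    A minimal rdflib sketch of the constructs described above (concept scheme, preferred and alternate labels, broader/top-concept relations); the URIs and concepts are invented for the example:

    ```python
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/vocab/")  # hypothetical vocabulary URIs

    g = Graph()
    g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
    g.add((EX.animals, RDF.type, SKOS.Concept))
    g.add((EX.animals, SKOS.prefLabel, Literal("animals", lang="en")))
    g.add((EX.animals, SKOS.prefLabel, Literal("Tiere", lang="de")))  # labels in any language
    g.add((EX.animals, SKOS.topConceptOf, EX.scheme))                 # broadest conceptual level
    g.add((EX.cats, RDF.type, SKOS.Concept))
    g.add((EX.cats, SKOS.prefLabel, Literal("cats", lang="en")))
    g.add((EX.cats, SKOS.altLabel, Literal("felines", lang="en")))    # alternate label
    g.add((EX.cats, SKOS.broader, EX.animals))                        # hierarchy relation
    g.add((EX.cats, SKOS.inScheme, EX.scheme))

    print(g.serialize(format="turtle"))
    ```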
  5. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.04
    0.041854393 = product of:
      0.11161172 = sum of:
        0.038619664 = weight(_text_:wide in 1161) [ClassicSimilarity], result of:
          0.038619664 = score(doc=1161,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.29372054 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.020951848 = weight(_text_:web in 1161) [ClassicSimilarity], result of:
          0.020951848 = score(doc=1161,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.21634221 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.052040204 = weight(_text_:data in 1161) [ClassicSimilarity], result of:
          0.052040204 = score(doc=1161,freq=14.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.55459267 = fieldWeight in 1161, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
      0.375 = coord(3/8)
    
    Abstract
    Linked Open Data (LOD) is a set of principles for publishing structured, connected data available for reuse under an open license. The objective of this paper is to analyze the publishing of bibliographic data as LOD, yielding theoretical-methodological recommendations for the publication of these data, in an approach based on the World Wide Web Consortium's ten best practices for publishing LOD. The starting point was a systematic literature review, in which initiatives that publish bibliographic data as LOD were identified. An empirical study of these institutions was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
  6. Mandl, T.: Text Mining und Data Mining (2023) 0.04
    0.036425237 = product of:
      0.14570095 = sum of:
        0.05620984 = weight(_text_:data in 774) [ClassicSimilarity], result of:
          0.05620984 = score(doc=774,freq=12.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.59902847 = fieldWeight in 774, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=774)
        0.08949111 = product of:
          0.17898221 = sum of:
            0.17898221 = weight(_text_:mining in 774) [ClassicSimilarity], result of:
              0.17898221 = score(doc=774,freq=12.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                1.0689225 = fieldWeight in 774, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=774)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Text and data mining are a bundle of technologies closely connected to the fields of statistics, machine learning, and pattern recognition. The usual definitions include a variety of different methods without drawing an exact boundary. Data mining denotes the search for patterns, regularities, or anomalies in strongly structured and, above all, numerical data. "Any algorithm that enumerates patterns from, or fits models to, data is a data mining algorithm." Numerical data and database contents are referred to as structured data, whereas text documents in natural language are considered unstructured data.
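    Under the definition quoted above ("any algorithm that ... fits models to data"), even fitting a small clustering model to structured numerical data counts as data mining. A minimal scikit-learn sketch on invented toy data:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Structured, numerical data: rows are records, columns are attributes
    X = np.array([[1.0, 2.0], [1.2, 1.9], [8.0, 8.1], [7.9, 8.3]])

    # Fitting a model to the data = data mining in the quoted sense
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(model.labels_)           # e.g. [0 0 1 1]: two regularities (clusters) found
    print(model.cluster_centers_)  # the patterns the algorithm enumerated
    ```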
    Theme
    Data Mining
  7. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.03
    0.032478906 = product of:
      0.086610414 = sum of:
        0.017459875 = weight(_text_:web in 106) [ClassicSimilarity], result of:
          0.017459875 = score(doc=106,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 106, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=106)
        0.059099037 = weight(_text_:data in 106) [ClassicSimilarity], result of:
          0.059099037 = score(doc=106,freq=26.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.6298187 = fieldWeight in 106, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=106)
        0.010051507 = product of:
          0.020103013 = sum of:
            0.020103013 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.020103013 = score(doc=106,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Purpose: The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach: This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings: Vocabularies are the cornerstone for accurately building an understanding of the meaning of data. Vocabularies provide a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage in KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value: This paper first describes the composition of vocabularies, linked data and KGs. More importantly, it innovatively analyzes and summarizes the interrelatedness of these factors, which arises from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  8. Al-Khatib, K.; Ghosal, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.03
    0.02810967 = product of:
      0.11243868 = sum of:
        0.022947572 = weight(_text_:data in 568) [ClassicSimilarity], result of:
          0.022947572 = score(doc=568,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=568)
        0.08949111 = product of:
          0.17898221 = sum of:
            0.17898221 = weight(_text_:mining in 568) [ClassicSimilarity], result of:
              0.17898221 = score(doc=568,freq=12.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                1.0689225 = fieldWeight in 568, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=568)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Argument mining targets structures in natural language related to interpretation and persuasion. Most scholarly discourse involves interpreting experimental evidence and attempting to persuade other scientists to adopt the same conclusions, which could benefit from argument mining techniques. However, while various argument mining studies have addressed student essays and news articles, those that target scientific discourse are still scarce. This paper surveys existing work in argument mining of scholarly discourse, and provides an overview of current models, data, tasks, and applications. We identify a number of key challenges confronting argument mining in the scientific domain, and suggest some possible solutions and future directions.
  9. Heesen, H.; Jüngels, L.: ¬Der Regierungsentwurf der Text und Data Mining-Schranken (§§ 44b, 60d UrhG-E) : ein Überblick zu den geplanten Regelungen für Kultur- und Wissenschaftseinrichtungen (2021) 0.03
    0.025492357 = product of:
      0.10196943 = sum of:
        0.039338693 = weight(_text_:data in 190) [ClassicSimilarity], result of:
          0.039338693 = score(doc=190,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.4192326 = fieldWeight in 190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=190)
        0.062630735 = product of:
          0.12526147 = sum of:
            0.12526147 = weight(_text_:mining in 190) [ClassicSimilarity], result of:
              0.12526147 = score(doc=190,freq=2.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.74808997 = fieldWeight in 190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=190)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
  10. Vogt, T.: ¬Die Transformation des renommierten Informationsservices zbMATH zu einer Open Access-Plattform für die Mathematik steht vor dem Abschluss. (2020) 0.02
    0.024939483 = product of:
      0.19951586 = sum of:
        0.19951586 = product of:
          0.49878964 = sum of:
            0.24939482 = weight(_text_:c3 in 31) [ClassicSimilarity], result of:
              0.24939482 = score(doc=31,freq=2.0), product of:
                0.28936383 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.029675366 = queryNorm
                0.8618728 = fieldWeight in 31, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.0625 = fieldNorm(doc=31)
            0.24939482 = weight(_text_:c3 in 31) [ClassicSimilarity], result of:
              0.24939482 = score(doc=31,freq=2.0), product of:
                0.28936383 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.029675366 = queryNorm
                0.8618728 = fieldWeight in 31, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.0625 = fieldNorm(doc=31)
          0.4 = coord(2/5)
      0.125 = coord(1/8)
    
    Content
    "Mit Beginn des Jahres 2021 wird der umfassende internationale Informationsservice zbMATH in eine Open Access-Plattform überführt. Dann steht dieser bislang kostenpflichtige Dienst weltweit allen Interessierten kostenfrei zur Verfügung. Die Änderung des Geschäftsmodells ermöglicht, die meisten Informationen und Daten von zbMATH für Forschungszwecke und zur Verknüpfung mit anderen nicht-kommerziellen Diensten frei zu nutzen, siehe: https://www.mathematik.de/dmv-blog/2772-transformation-von-zbmath-zu-einer-open-access-plattform-f%C3%BCr-die-mathematik-kurz-vor-dem-abschluss."
  11. Wu, D.; Xu, H.; Sun, Y.; Lv, S.: What should we teach? : A human-centered data science graduate curriculum model design for iField schools (2023) 0.02
    0.024336748 = product of:
      0.09734699 = sum of:
        0.045513712 = weight(_text_:wide in 961) [ClassicSimilarity], result of:
          0.045513712 = score(doc=961,freq=4.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.34615302 = fieldWeight in 961, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=961)
        0.05183328 = weight(_text_:data in 961) [ClassicSimilarity], result of:
          0.05183328 = score(doc=961,freq=20.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.5523875 = fieldWeight in 961, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=961)
      0.25 = coord(2/8)
    
    Abstract
    The information schools, also referred to as iField schools, are leaders in data science education. This study aims to develop a data science graduate curriculum model from an information science perspective to support iField schools in developing data science graduate education. In June 2020, information about 96 data science graduate programs from iField schools worldwide was collected and analyzed using a mixed research method based on inductive content analysis. This yielded a wide range of data science competencies and skills and 12 knowledge topics covered by the curricula. The humanistic model is then taken as the theoretical and methodological basis for constructing the course model, and the 12 course knowledge topics are reorganized into 4 course modules: (a) data-driven methods and techniques; (b) domain knowledge; (c) legal, moral, and ethical aspects of data; and (d) shaping and developing personal traits. From these, a human-centered data science graduate curriculum model is formed. The study closes by discussing the model's broad application prospects.
    Footnote
    Contribution to a special issue on "Data Science in the iField".
  12. Wiegmann, S.: Hättest du die Titanic überlebt? : Eine kurze Einführung in das Data Mining mit freier Software (2023) 0.02
    0.02285352 = product of:
      0.09141408 = sum of:
        0.039746363 = weight(_text_:data in 876) [ClassicSimilarity], result of:
          0.039746363 = score(doc=876,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.42357713 = fieldWeight in 876, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=876)
        0.051667716 = product of:
          0.10333543 = sum of:
            0.10333543 = weight(_text_:mining in 876) [ClassicSimilarity], result of:
              0.10333543 = score(doc=876,freq=4.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.61714274 = fieldWeight in 876, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=876)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    On April 10, 1912, Elisabeth Walton Allen boarded the "Titanic" to bring her belongings to England. One night she was woken by her distraught aunt, whose cabin was under water. What were Elisabeth's chances, and would one have survived the disaster oneself? The Titanic oracle is an algorithm-based app that makes such predictions; it was created as part of the course "Data Science" at the Department of Information of HAW Hamburg. This article shows step by step how the app was developed using free software. Code and data are provided for reuse.
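    The article's step-by-step build is not reproduced here, but a minimal sketch of this kind of free-software survival predictor might look as follows; the feature selection and model choice are assumptions, with the Titanic passenger data fetched from OpenML:

    ```python
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Titanic passenger data (freely available via OpenML)
    X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
    X = X[["pclass", "sex", "age"]].copy()
    X["sex"] = (X["sex"] == "female").astype(int)   # encode sex numerically
    X["age"] = X["age"].fillna(X["age"].median())   # fill missing ages

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print(model.score(X_test, y_test))  # rough accuracy of the survival "oracle"
    ```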
    Theme
    Data Mining
  13. Lowe, D.B.; Dollinger, I.; Koster, T.; Herbert, B.E.: Text mining for type of research classification (2021) 0.02
    0.02242316 = product of:
      0.08969264 = sum of:
        0.019669347 = weight(_text_:data in 720) [ClassicSimilarity], result of:
          0.019669347 = score(doc=720,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=720)
        0.07002329 = product of:
          0.14004658 = sum of:
            0.14004658 = weight(_text_:mining in 720) [ClassicSimilarity], result of:
              0.14004658 = score(doc=720,freq=10.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.83639 = fieldWeight in 720, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=720)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This project brought together undergraduate students in Computer Science with librarians to mine abstracts of articles from the Texas A&M University Libraries' institutional repository, OAKTrust, in order to probe the creation of new metadata to improve discovery and use. The mining task consisted simply of classifying the articles into two categories of research type: basic research ("for understanding," "curiosity-based," or "knowledge-based") and applied research ("use-based"). These categories are fundamental especially for funders but are also important to researchers. The mining-to-classification steps took several iterations, but ultimately we achieved good results with the toolkit BERT (Bidirectional Encoder Representations from Transformers). The project and its workflows represent a preview of what may lie ahead in the future of crafting metadata using text mining techniques to enhance discoverability.
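    As an illustration of the basic-versus-applied classification task only (the project fine-tuned BERT; the zero-shot model below is a stand-in), a short sketch with the Hugging Face transformers library:

    ```python
    from transformers import pipeline

    # Zero-shot stand-in for the fine-tuned BERT classifier described above
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    abstract = ("We investigate the fundamental mechanisms of protein folding "
                "to better understand cellular processes.")
    result = classifier(abstract, candidate_labels=["basic research", "applied research"])
    print(result["labels"][0])  # highest-scoring research type
    ```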
    Theme
    Data Mining
  14. Jones, K.M.L.; Rubel, A.; LeClere, E.: ¬A matter of trust : higher education institutions as information fiduciaries in an age of educational data mining and learning analytics (2020) 0.02
    0.022141669 = product of:
      0.088566676 = sum of:
        0.04336684 = weight(_text_:data in 5968) [ClassicSimilarity], result of:
          0.04336684 = score(doc=5968,freq=14.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.46216056 = fieldWeight in 5968, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5968)
        0.04519984 = product of:
          0.09039968 = sum of:
            0.09039968 = weight(_text_:mining in 5968) [ClassicSimilarity], result of:
              0.09039968 = score(doc=5968,freq=6.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.5398875 = fieldWeight in 5968, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5968)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Higher education institutions are mining and analyzing student data to effect educational, political, and managerial outcomes. Done under the banner of "learning analytics," this work can, and often does, surface sensitive data and information about, inter alia, a student's demographics, academic performance, offline and online movements, physical fitness, mental wellbeing, and social network. With these data, institutions and third parties are able to describe student life, predict future behaviors, and intervene to address academic or other barriers to student success (however defined). Learning analytics, consequently, raise serious issues concerning student privacy, autonomy, and the appropriate flow of student data. We argue that issues around privacy lead to valid questions about the degree to which students should trust their institution to use learning analytics data and other artifacts (algorithms, predictive scores) with their interests in mind. We argue that higher education institutions are paradigms of information fiduciaries. As such, colleges and universities have a special responsibility to their students. In this article, we use the information fiduciary concept to analyze cases when learning analytics violate an institution's responsibility to its students.
    Theme
    Data Mining
  15. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.02
    0.022082468 = product of:
      0.088329874 = sum of:
        0.03628967 = weight(_text_:web in 38) [ClassicSimilarity], result of:
          0.03628967 = score(doc=38,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.37471575 = fieldWeight in 38, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
        0.052040204 = weight(_text_:data in 38) [ClassicSimilarity], result of:
          0.052040204 = score(doc=38,freq=14.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.55459267 = fieldWeight in 38, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.25 = coord(2/8)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
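    A sketch of the kind of citation statement OCDM covers, expressed with rdflib. The SPAR ontology terms used here (cito:cites, fabio:JournalArticle) and the identifiers are an illustrative assumption, not a normative excerpt from the model:

    ```python
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    CITO = Namespace("http://purl.org/spar/cito/")    # Citation Typing Ontology
    FABIO = Namespace("http://purl.org/spar/fabio/")  # bibliographic entity types
    BR = Namespace("https://w3id.org/oc/corpus/br/")  # illustrative identifiers

    g = Graph()
    g.add((BR["1"], RDF.type, FABIO.JournalArticle))  # a bibliographic resource
    g.add((BR["2"], RDF.type, FABIO.JournalArticle))
    g.add((BR["1"], CITO.cites, BR["2"]))             # a citation between the two

    print(g.serialize(format="turtle"))
    ```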
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. Cf.: DOI: 10.1007/978-3-030-62466-8_28.
  16. Borgman, C.L.; Wofford, M.F.; Golshan, M.S.; Darch, P.T.: Collaborative qualitative research at scale : reflections on 20 years of acquiring global data and making data global (2021) 0.02
    0.021856526 = product of:
      0.0874261 = sum of:
        0.061329965 = weight(_text_:data in 239) [ClassicSimilarity], result of:
          0.061329965 = score(doc=239,freq=28.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.65359366 = fieldWeight in 239, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=239)
        0.02609614 = product of:
          0.05219228 = sum of:
            0.05219228 = weight(_text_:mining in 239) [ClassicSimilarity], result of:
              0.05219228 = score(doc=239,freq=2.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.31170416 = fieldWeight in 239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=239)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    A 5-year project to study scientific data uses in geography, starting in 1999, evolved into 20 years of research on data practices in sensor networks, environmental sciences, biology, seismology, undersea science, biomedicine, astronomy, and other fields. By emulating the "team science" approaches of the scientists studied, the UCLA Center for Knowledge Infrastructures accumulated a comprehensive collection of qualitative data about how scientists generate, manage, use, and reuse data across domains. Building upon Paul N. Edwards's model of "making global data" (collecting signals via consistent methods, technologies, and policies) to "make data global" (comparing and integrating those data), the research team has managed and exploited these data as a collaborative resource. This article reflects on the social, technical, organizational, economic, and policy challenges the team has encountered in creating new knowledge from data old and new. We reflect on continuity over generations of students and staff, transitions between grants, transfer of legacy data between software tools, research methods, and the role of professional data managers in the social sciences.
    Theme
    Data Mining
  17. Peters, I.: Folksonomies & Social Tagging (2023) 0.02
    0.021848556 = product of:
      0.08739422 = sum of:
        0.045056276 = weight(_text_:wide in 796) [ClassicSimilarity], result of:
          0.045056276 = score(doc=796,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.042337947 = weight(_text_:web in 796) [ClassicSimilarity], result of:
          0.042337947 = score(doc=796,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.43716836 = fieldWeight in 796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
      0.25 = coord(2/8)
    
    Abstract
    Research on, and use of, folksonomies and social tagging as user-centered forms of subject indexing and knowledge representation peaked in the roughly ten years from about 2005 onward. This was driven by the development and spread of the Social Web and the growing use of social media platforms (see chapter E 8, Social Media und Social Web). Both led to a rapid increase in the amount of potential information findable on or via the World Wide Web and generated strong demand for scalable methods of subject indexing.
  18. Kang, M.: Dual paths to continuous online knowledge sharing : a repetitive behavior perspective (2020) 0.02
    0.020963114 = product of:
      0.05590164 = sum of:
        0.017459875 = weight(_text_:web in 5985) [ClassicSimilarity], result of:
          0.017459875 = score(doc=5985,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 5985, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5985)
        0.028390257 = weight(_text_:data in 5985) [ClassicSimilarity], result of:
          0.028390257 = score(doc=5985,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.30255508 = fieldWeight in 5985, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5985)
        0.010051507 = product of:
          0.020103013 = sum of:
            0.020103013 = weight(_text_:22 in 5985) [ClassicSimilarity], result of:
              0.020103013 = score(doc=5985,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.19345059 = fieldWeight in 5985, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5985)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Purpose: Continuous knowledge sharing by active users, who are highly active in answering questions, is crucial to the sustenance of social question-and-answer (Q&A) sites. The purpose of this paper is to examine such knowledge sharing considering reason-based elaborate decision and habit-based automated cognitive processes. Design/methodology/approach: To verify the research hypotheses, survey data on subjective intentions and web-crawled data on objective behavior are utilized. The sample size is 337, with a response rate of 27.2 percent. Negative binomial and hierarchical linear regressions are used given the skewed distribution of the dependent variable (i.e. the number of answers). Findings: Both elaborate decision (linking satisfaction, intentions and continuance behavior) and automated cognitive processes (linking past and continuance behavior) are significant and substitutable. Research limitations/implications: By measuring both subjective intentions and objective behavior, the study verifies a detailed mechanism linking continuance intentions, past behavior and continuous knowledge sharing. The significant influence of automated cognitive processes implies that online knowledge sharing is habitual for active users. Practical implications: Understanding that online knowledge sharing is habitual is imperative to maintaining continuous knowledge sharing by active users. Knowledge sharing trends should be monitored to check whether the frequency of sharing decreases, and social Q&A sites should intervene to restore knowledge sharing behavior through personalized incentives. Originality/value: This is the first study utilizing both subjective intention and objective behavior data in the context of online knowledge sharing. It also introduces habit-based automated cognitive processes to this context. This approach extends the current understanding of continuous online knowledge sharing behavior.
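    A minimal sketch of the negative binomial setup mentioned above, fitted with statsmodels on invented data; the variables are stand-ins for continuance intention, past behavior, and the skewed answer count:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 337                                  # sample size reported in the abstract
    intention = rng.normal(size=n)           # stand-in: continuance intention
    past = rng.poisson(3, size=n)            # stand-in: past sharing behavior
    answers = rng.poisson(np.exp(0.5 * intention + 0.2 * past))  # skewed count DV

    X = sm.add_constant(np.column_stack([intention, past]))
    model = sm.GLM(answers, X, family=sm.families.NegativeBinomial()).fit()
    print(model.summary())                   # coefficients for both predictors
    ```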
    Date
    20. 1.2015 18:30:22
  19. Urs, S.R.; Minhaj, M.: Evolution of data science and its education in iSchools : an impressionistic study using curriculum analysis (2023) 0.02
    0.020816652 = product of:
      0.08326661 = sum of:
        0.046361096 = weight(_text_:data in 960) [ClassicSimilarity], result of:
          0.046361096 = score(doc=960,freq=16.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.49407038 = fieldWeight in 960, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=960)
        0.036905512 = product of:
          0.073811024 = sum of:
            0.073811024 = weight(_text_:mining in 960) [ClassicSimilarity], result of:
              0.073811024 = score(doc=960,freq=4.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.44081625 = fieldWeight in 960, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=960)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Data Science (DS) has emerged from the shadows of its parents, statistics and computer science, into an independent field since its origin nearly six decades ago. Its evolution and education have taken many sharp turns. We present an impressionistic study of the evolution of DS anchored to Kuhn's four stages of paradigm shifts. First, we construct the landscape of DS based on curriculum analysis of the 32 iSchools across the world offering graduate-level DS programs. Second, we paint the "field" as it emerges from the word frequency patterns, ranking, and clustering of course titles based on text mining. Third, we map the curriculum to the landscape of DS and project the same onto the Edison Data Science Framework (2017) and ACM Data Science Knowledge Areas (2021). Our study shows that the DS programs of iSchools align well with the field and correspond to the Knowledge Areas and skillsets. iSchools' DS curricula exhibit a bias toward "data visualization" along with machine learning, data mining, natural language processing, and artificial intelligence; go light on statistics; are slanted toward ontologies and health informatics; and give surprisingly minimal thrust to eScience/research data management, which we believe would add a distinctive iSchool flavor to DS.
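    The word-frequency and clustering step over course titles can be sketched as follows; the titles below are invented stand-ins for the 32 iSchool programs analyzed:

    ```python
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical course titles standing in for the analyzed DS curricula
    titles = [
        "Introduction to Data Visualization",
        "Machine Learning for Information Professionals",
        "Data Mining and Knowledge Discovery",
        "Natural Language Processing",
        "Research Data Management",
        "Health Informatics and Ontologies",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(titles)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    for label, title in sorted(zip(labels, titles)):
        print(label, title)
    ```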
    Footnote
    Contribution to a special issue on "Data Science in the iField".
  20. Huber, W.: Menschen, Götter und Maschinen : eine Ethik der Digitalisierung (2022) 0.02
    0.019810217 = product of:
      0.052827243 = sum of:
        0.025746442 = weight(_text_:wide in 752) [ClassicSimilarity], result of:
          0.025746442 = score(doc=752,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.1958137 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.0139679 = weight(_text_:web in 752) [ClassicSimilarity], result of:
          0.0139679 = score(doc=752,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.14422815 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.013112898 = weight(_text_:data in 752) [ClassicSimilarity], result of:
          0.013112898 = score(doc=752,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.1397442 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
      0.375 = coord(3/8)
    
    Content
    Foreword -- 1. The digital age -- A turning point -- The dominance of the printed book is coming to an end -- When does the digital age begin? -- 2. Between euphoria and apocalypse -- Digitalization. Simply. Do it -- Euphoria -- Apocalypse -- Ethics of responsibility -- The human being as the subject of ethics -- Responsibility as a principle -- 3. Digitalized everyday life in a globalized world -- From the World Wide Web to the Internet of Things -- Mobile internet and digital education -- Digital platforms and their strategies -- Big data and informational self-determination -- 4. Crossing boundaries -- The erosion of the private -- The deformation of the public -- The lowering of inhibition thresholds -- The disappearance of reality -- Truth in the infosphere -- 5. The future of work -- Industrial revolutions -- Work 4.0 -- Ethics 4.0 -- 6. Digital intelligence -- Can computers write poetry? -- Stronger than humans? -- Machine learning -- A lasting difference -- Ethical principles for dealing with digital intelligence -- Medicine as an example -- 7. Human dignity in the digital age -- Affronts or revolutions -- Transhumanism and posthumanism -- Is there empathy without humans? -- Who is autonomous: human or machine? -- A humanism of responsibility -- 8. The future of Homo sapiens -- The deification of the human being -- Homo deus -- God and man in the digital age -- The transformation of humanity -- Literature -- Index of persons.

Languages

  • e 380
  • d 91
  • pt 2
  • m 1

Types

  • a 426
  • el 86
  • m 14
  • p 7
  • s 4
  • x 3
  • A 1
  • EL 1