Search (104 results, page 1 of 6)

  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
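The year facet above uses Lucene/Solr range syntax, in which a square bracket marks an inclusive bound and a curly bracket an exclusive one; mixed brackets are allowed, so `year_i:[2010 TO 2020}` matches 2010 through 2019:

```
year_i:[2010 TO 2020]   2010 <= year <= 2020  (both ends inclusive)
year_i:[2010 TO 2020}   2010 <= year <  2020  (upper end exclusive)
year_i:{2010 TO 2020}   2010 <  year <  2020  (both ends exclusive)
```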
  1. Schleim, S.: Körper ist Geist (2017) 0.20
    0.20040713 = product of:
      0.80162853 = sum of:
        0.23347573 = weight(_text_:materialismus in 3670) [ClassicSimilarity], result of:
          0.23347573 = score(doc=3670,freq=4.0), product of:
            0.27705562 = queryWeight, product of:
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.030822188 = queryNorm
            0.84270346 = fieldWeight in 3670, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.046875 = fieldNorm(doc=3670)
        0.5681528 = sum of:
          0.2671861 = weight(_text_:leib in 3670) [ClassicSimilarity], result of:
            0.2671861 = score(doc=3670,freq=8.0), product of:
              0.24922726 = queryWeight, product of:
                8.085969 = idf(docFreq=36, maxDocs=44218)
                0.030822188 = queryNorm
              1.0720581 = fieldWeight in 3670, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                8.085969 = idf(docFreq=36, maxDocs=44218)
                0.046875 = fieldNorm(doc=3670)
          0.21865623 = weight(_text_:seele in 3670) [ClassicSimilarity], result of:
            0.21865623 = score(doc=3670,freq=8.0), product of:
              0.22546001 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.030822188 = queryNorm
              0.96982265 = fieldWeight in 3670, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=3670)
          0.08231041 = weight(_text_:problem in 3670) [ClassicSimilarity], result of:
            0.08231041 = score(doc=3670,freq=10.0), product of:
              0.13082431 = queryWeight, product of:
                4.244485 = idf(docFreq=1723, maxDocs=44218)
                0.030822188 = queryNorm
              0.6291675 = fieldWeight in 3670, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.244485 = idf(docFreq=1723, maxDocs=44218)
                0.046875 = fieldNorm(doc=3670)
      0.25 = coord(2/8)
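Each score breakdown on this page follows Lucene's ClassicSimilarity (TF-IDF): a term weight is queryWeight × fieldWeight with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), and the per-document score multiplies the summed term weights by the coord factor (0.25 = coord(2/8) above). A minimal sketch reproducing the `_text_:materialismus` leg of result 1:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term weight from a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                               # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                    # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm               # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# weight(_text_:materialismus in 3670): freq=4, docFreq=14
score = term_score(freq=4.0, doc_freq=14, max_docs=44218,
                   query_norm=0.030822188, field_norm=0.046875)
print(score)  # ~0.2334757, matching the 0.23347573 above up to rounding
```

Lucene computes the intermediates in 32-bit floats, so the last digits of a double-precision reimplementation differ slightly from the printed values.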
    
    Abstract
    In reference to: Lehmann, K.: Denken mit Leib und Seele. At: https://www.heise.de/tp/features/Denken-mit-Leib-und-Seele-3593478.html. In his article of May 27, Konrad Lehmann pointed out the problems of explaining consciousness scientifically. He criticized the unfulfilled promise of materialism and physicalism and discussed alternative approaches such as panpsychism, idealism, and practical materialism. In my view, statements about being (that is, ontological statements) must be separated more clearly from statements about our knowledge (that is, epistemic statements) than Lehmann has done. Moreover, a problem should be defined more clearly before it is discussed. Following a reply to Lehmann's article, I would like to put my own thoughts on the mind-body problem up for discussion. In doing so, I also address claims about the epistemic value of neuroscience for understanding consciousness. My proposal is to abandon the traditional mind-body problem, since it is a problem of our thinking rather than a problem of the world. Reply to this: Lehmann, K.: Geist? Welcher Geist?. [June 4, 2017]. At: https://www.heise.de/tp/features/Geist-Welcher-Geist-3733426.html.
  2. Koch, M.: Bewusstsein ohne Gehirn : kann man den Geist eines Menschen auf eine Maschine übertragen? (2018) 0.08
    0.08175993 = product of:
      0.32703972 = sum of:
        0.16509227 = weight(_text_:materialismus in 4436) [ClassicSimilarity], result of:
          0.16509227 = score(doc=4436,freq=2.0), product of:
            0.27705562 = queryWeight, product of:
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.030822188 = queryNorm
            0.59588134 = fieldWeight in 4436, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.046875 = fieldNorm(doc=4436)
        0.16194746 = product of:
          0.24292117 = sum of:
            0.13359305 = weight(_text_:leib in 4436) [ClassicSimilarity], result of:
              0.13359305 = score(doc=4436,freq=2.0), product of:
                0.24922726 = queryWeight, product of:
                  8.085969 = idf(docFreq=36, maxDocs=44218)
                  0.030822188 = queryNorm
                0.53602904 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.085969 = idf(docFreq=36, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
            0.10932811 = weight(_text_:seele in 4436) [ClassicSimilarity], result of:
              0.10932811 = score(doc=4436,freq=2.0), product of:
                0.22546001 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.030822188 = queryNorm
                0.48491132 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.6666667 = coord(2/3)
      0.25 = coord(2/8)
    
    Abstract
    The question of how body and soul, or matter and mind, relate to each other has occupied scientists and philosophers for a long time. A much-quoted answer was given in the mid-19th century by the German-Swiss naturalist Carl Vogt. In his "Physiologische Briefe" he declared "that all those faculties which we comprehend under the name of mental activities are merely functions of the brain substance, or, to express myself somewhat crudely, that thoughts stand in roughly the same relation to the brain as bile to the liver or urine to the kidneys." One who firmly contradicted Vogt on this point was Vladimir I. Lenin. In his book "Materialismus und Empiriokritizismus" he labeled Vogt's view with the rather unflattering adjective "vulgar materialist": sensations, and the thoughts derived from them, are not material but subjective images of the objective world, according to Lenin. One idea, however, he retained: he described mind and consciousness as the "highest products of matter organized in a particular way," namely of the brain.
  3. Lehmann, K.: Geist? Welcher Geist? (2017) 0.06
    0.058277395 = product of:
      0.46621916 = sum of:
        0.46621916 = sum of:
          0.22265509 = weight(_text_:leib in 2390) [ClassicSimilarity], result of:
            0.22265509 = score(doc=2390,freq=2.0), product of:
              0.24922726 = queryWeight, product of:
                8.085969 = idf(docFreq=36, maxDocs=44218)
                0.030822188 = queryNorm
              0.8933818 = fieldWeight in 2390, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.085969 = idf(docFreq=36, maxDocs=44218)
                0.078125 = fieldNorm(doc=2390)
          0.18221353 = weight(_text_:seele in 2390) [ClassicSimilarity], result of:
            0.18221353 = score(doc=2390,freq=2.0), product of:
              0.22546001 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.030822188 = queryNorm
              0.8081856 = fieldWeight in 2390, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.078125 = fieldNorm(doc=2390)
          0.06135055 = weight(_text_:problem in 2390) [ClassicSimilarity], result of:
            0.06135055 = score(doc=2390,freq=2.0), product of:
              0.13082431 = queryWeight, product of:
                4.244485 = idf(docFreq=1723, maxDocs=44218)
                0.030822188 = queryNorm
              0.46895373 = fieldWeight in 2390, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.244485 = idf(docFreq=1723, maxDocs=44218)
                0.078125 = fieldNorm(doc=2390)
      0.125 = coord(1/8)
    
    Abstract
    Reply to the reply by Stephan Schleim: in order to be able to talk about the mind-body problem, one must first agree on what one is actually talking about. Response to: Schleim, S.: Körper ist Geist. [31.05.2017]. At: https://www.heise.de/tp/features/Koerper-ist-Geist-3729372.html.
  4. Lehmann, K.: Denken mit Leib und Seele (2017) 0.05
    0.047544096 = product of:
      0.38035277 = sum of:
        0.38035277 = sum of:
          0.18892913 = weight(_text_:leib in 3668) [ClassicSimilarity], result of:
            0.18892913 = score(doc=3668,freq=4.0), product of:
              0.24922726 = queryWeight, product of:
                8.085969 = idf(docFreq=36, maxDocs=44218)
                0.030822188 = queryNorm
              0.7580596 = fieldWeight in 3668, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                8.085969 = idf(docFreq=36, maxDocs=44218)
                0.046875 = fieldNorm(doc=3668)
          0.1546133 = weight(_text_:seele in 3668) [ClassicSimilarity], result of:
            0.1546133 = score(doc=3668,freq=4.0), product of:
              0.22546001 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.030822188 = queryNorm
              0.6857682 = fieldWeight in 3668, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=3668)
          0.03681033 = weight(_text_:problem in 3668) [ClassicSimilarity], result of:
            0.03681033 = score(doc=3668,freq=2.0), product of:
              0.13082431 = queryWeight, product of:
                4.244485 = idf(docFreq=1723, maxDocs=44218)
                0.030822188 = queryNorm
              0.28137225 = fieldWeight in 3668, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.244485 = idf(docFreq=1723, maxDocs=44218)
                0.046875 = fieldNorm(doc=3668)
      0.125 = coord(1/8)
    
    Abstract
    In 1999, at a conference of neuroscientists and philosophers of mind, I heard a talk by the well-known neurophilosopher Patricia Churchland. In the subsequent discussion, an audience member asked her the Big Question: how, then, are brain and consciousness connected? Churchland answered (at least that is how my creative memory has stored it): well, that is what we were here for, to learn that from the neurobiologists. It was philosophy's declaration of bankruptcy: consciousness as a biological problem. At the time, however, the capitulation of the philosophers of mind was understandable, for the neurosciences were demonstrating almost daily the tight coupling of brain function and experiential content, the bizarre deficits of neurological patients with clearly defined lesions, and biochemical, drug-induced alterations of consciousness. Response: Schleim, S.: Körper ist Geist. [31.05.2017]. At: https://www.heise.de/tp/features/Koerper-ist-Geist-3729372.html, and a further rejoinder by Lehmann at: https://www.heise.de/tp/features/Geist-Welcher-Geist-3733426.html.
    Source
    https://www.heise.de/tp/features/Denken-mit-Leib-und-Seele-3593478.html
  5. Horgan, J.: Hängt das Universum von uns ab? (2017) 0.02
    0.020636534 = product of:
      0.16509227 = sum of:
        0.16509227 = weight(_text_:materialismus in 4366) [ClassicSimilarity], result of:
          0.16509227 = score(doc=4366,freq=2.0), product of:
            0.27705562 = queryWeight, product of:
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.030822188 = queryNorm
            0.59588134 = fieldWeight in 4366, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.988837 = idf(docFreq=14, maxDocs=44218)
              0.046875 = fieldNorm(doc=4366)
      0.125 = coord(1/8)
    
    Content
    "Der kühle, harte Skeptiker in mir lehnt jedoch den Neo-Geozentrismus als genau die Art von geistig verschwommenem Mystizismus ab, von dem uns die Wissenschaft befreit hat. Der Neo-Geozentrismus verkörpert die Projektion unserer Ängste und Hoffnungen, unsere Sehnsucht nach Bedeutung. Seine wachsende Popularität ist vielleicht auch ein Symptom für die durch soziale Medien verursachte Selbstverliebtheit unserer Zeit. Aber nicht allein der Neo-Geozentrismus geht mir auf die Nerven. Ich lehne auch militanten Materialismus und Atheismus ab, die unser Verlangen nach transzendenter Bedeutung herabsetzen und das außergewöhnlich Unwahrscheinliche unserer Existenz nicht wahrzunehmen scheinen. Denn letztlich: Ohne einen bewussten Geist, der nachdenkt, könnte unser Universum ebenso gut nicht existieren. Wofür also spreche ich mich aus? Für die einfache Anerkennung der Tatsache, dass keine Theorie oder Theologie dem Mysterium unserer Existenz gerecht werden kann. Ein solcher moderater Agnostizismus ist, so scheint es mir, das, was ein Homo sapiens wählen würde."
  6. Widmann, V.: ¬Die Antiquiertheit der Seele (2019) 0.01
    0.0131501295 = product of:
      0.105201036 = sum of:
        0.105201036 = product of:
          0.3156031 = sum of:
            0.3156031 = weight(_text_:seele in 5235) [ClassicSimilarity], result of:
              0.3156031 = score(doc=5235,freq=6.0), product of:
                0.22546001 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.030822188 = queryNorm
                1.3998185 = fieldWeight in 5235, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5235)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    The transhumanists are on the march. This avowedly technophile movement is causing a stir with outrageous theses and provocative slogans. Now comes the attack on the human soul.
    Source
    https://www.heise.de/tp/features/Die-Antiquiertheit-der-Seele-4408867.html?view=print
  7. Rötzer, F.: Psychologen für die Künstliche Intelligenz (2018) 0.01
    0.006874024 = product of:
      0.027496096 = sum of:
        0.016360147 = product of:
          0.04908044 = sum of:
            0.04908044 = weight(_text_:problem in 4427) [ClassicSimilarity], result of:
              0.04908044 = score(doc=4427,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.375163 = fieldWeight in 4427, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4427)
          0.33333334 = coord(1/3)
        0.01113595 = product of:
          0.03340785 = sum of:
            0.03340785 = weight(_text_:22 in 4427) [ClassicSimilarity], result of:
              0.03340785 = score(doc=4427,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.30952093 = fieldWeight in 4427, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4427)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    Artificial intelligence can learn from experience and from specifications. The problem, however, is that while machine learning can be steered in a particular direction to solve particular problems, the programmer at times no longer knows what is going on inside the "artificial brain", i.e. the neural network. With growing complexity and learning experience it becomes a black box for humans: one can only see what data goes in and what comes out.
    Date
    22. 1.2018 11:08:27
  8. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.01
    0.0061833817 = product of:
      0.024733527 = sum of:
        0.01771038 = product of:
          0.05313114 = sum of:
            0.05313114 = weight(_text_:problem in 3373) [ClassicSimilarity], result of:
              0.05313114 = score(doc=3373,freq=6.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.4061259 = fieldWeight in 3373, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.33333334 = coord(1/3)
        0.007023146 = product of:
          0.021069437 = sum of:
            0.021069437 = weight(_text_:29 in 3373) [ClassicSimilarity], result of:
              0.021069437 = score(doc=3373,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.19432661 = fieldWeight in 3373, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900 MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, address the technical challenges, analysis, solutions, and decisions, and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
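The conversion work described above starts with pulling component descriptions out of EAD XML; a minimal sketch of that step with the standard library (the fragment below is hand-made for illustration and far tidier than the idiosyncratic real-world encoding the paper describes):

```python
import xml.etree.ElementTree as ET

# A tiny, invented EAD-like fragment; real finding aids are much messier,
# which is exactly the migration problem the paper is about.
EAD = """<ead><archdesc><dsc>
  <c01><did><unittitle>Correspondence</unittitle><unitdate>1901-1925</unitdate></did></c01>
  <c01><did><unittitle>Diaries</unittitle></did></c01>
</dsc></archdesc></ead>"""

def components(ead_xml):
    """Yield one dict per top-level component, tolerating missing fields."""
    root = ET.fromstring(ead_xml)
    for c in root.iter("c01"):
        yield {
            "title": c.findtext("./did/unittitle"),
            "date": c.findtext("./did/unitdate"),  # None when absent: flag for cleanup
        }

print(list(components(EAD)))
```

In a real migration, the `None` cases are where the problem discovery the authors mention begins.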
    Date
    31. 1.2017 13:29:56
  9. Teal, W.: Alma enumerator : automating repetitive cataloging tasks with Python (2018) 0.01
    0.006036883 = product of:
      0.024147533 = sum of:
        0.014315128 = product of:
          0.042945385 = sum of:
            0.042945385 = weight(_text_:problem in 5348) [ClassicSimilarity], result of:
              0.042945385 = score(doc=5348,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.3282676 = fieldWeight in 5348, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5348)
          0.33333334 = coord(1/3)
        0.009832405 = product of:
          0.029497212 = sum of:
            0.029497212 = weight(_text_:29 in 5348) [ClassicSimilarity], result of:
              0.029497212 = score(doc=5348,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.27205724 = fieldWeight in 5348, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5348)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    In June 2016, the Wartburg College library migrated to a new integrated library system, Alma. In the process, we lost the enumeration and chronology data for roughly 79,000 print serial item records. Re-entering all this data by hand seemed an unthinkable task. Fortunately, the information was recorded as free text in each item's description field. By using Python, Alma's API and much trial and error, the Wartburg College library was able to parse the serial item descriptions into enumeration and chronology data that was uploaded back into Alma. This paper discusses the design and feasibility considerations addressed in trying to solve this problem, the complications encountered during development, and the highlights and shortcomings of the collection of Python scripts that became Alma Enumerator.
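The core parsing step the abstract describes might look roughly like this sketch; the description formats and output field names here are hypothetical, not taken from the paper or from Alma's API:

```python
import re

# Hypothetical free-text formats such as "v.12 no.3 (Mar 1998)" or "v.7 (1993)".
PATTERN = re.compile(
    r"v\.(?P<vol>\d+)"                # enumeration: volume
    r"(?:\s+no\.(?P<issue>\d+))?"     # enumeration: optional issue
    r"(?:\s+\((?P<chron>[^)]+)\))?"   # chronology: free-text date in parentheses
)

def parse_description(description):
    """Split a free-text serial item description into enumeration/chronology parts."""
    m = PATTERN.search(description)
    if not m:
        return None  # no match: flag the record for manual review
    return {
        "enumeration_a": m.group("vol"),
        "enumeration_b": m.group("issue"),
        "chronology_i": m.group("chron"),
    }

print(parse_description("v.12 no.3 (Mar 1998)"))
```

The "much trial and error" in the abstract corresponds to growing a pattern like this until the share of `None` results is small enough to fix by hand.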
    Date
    10.11.2018 16:29:37
  10. Franke, F.: ¬Das Framework for Information Literacy : neue Impulse für die Förderung von Informationskompetenz in Deutschland?! (2017) 0.01
    0.0057373247 = product of:
      0.045898598 = sum of:
        0.045898598 = product of:
          0.068847895 = sum of:
            0.043792006 = weight(_text_:29 in 2248) [ClassicSimilarity], result of:
              0.043792006 = score(doc=2248,freq=6.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.40390027 = fieldWeight in 2248, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2248)
            0.025055885 = weight(_text_:22 in 2248) [ClassicSimilarity], result of:
              0.025055885 = score(doc=2248,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23214069 = fieldWeight in 2248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2248)
          0.6666667 = coord(2/3)
      0.125 = coord(1/8)
    
    Content
    https://www.o-bib.de/article/view/2017H4S22-29. DOI: https://doi.org/10.5282/o-bib/2017H4S22-29.
    Source
    o-bib: Das offene Bibliotheksjournal. 4(2017) Nr.4, S.22-29
  11. Landwehr, A.: China schafft digitales Punktesystem für den "besseren" Menschen (2018) 0.01
    0.0055932454 = product of:
      0.044745963 = sum of:
        0.044745963 = product of:
          0.06711894 = sum of:
            0.033711098 = weight(_text_:29 in 4314) [ClassicSimilarity], result of:
              0.033711098 = score(doc=4314,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.31092256 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
            0.03340785 = weight(_text_:22 in 4314) [ClassicSimilarity], result of:
              0.03340785 = score(doc=4314,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.30952093 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
          0.6666667 = coord(2/3)
      0.125 = coord(1/8)
    
    Date
    22. 6.2018 14:29:46
  12. Rötzer, F.: Bindestriche in Titeln von Artikeln schaden der wissenschaftlichen Reputation (2019) 0.01
    0.0051744715 = product of:
      0.020697886 = sum of:
        0.012270111 = product of:
          0.03681033 = sum of:
            0.03681033 = weight(_text_:problem in 5697) [ClassicSimilarity], result of:
              0.03681033 = score(doc=5697,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.28137225 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5697)
          0.33333334 = coord(1/3)
        0.008427775 = product of:
          0.025283325 = sum of:
            0.025283325 = weight(_text_:29 in 5697) [ClassicSimilarity], result of:
              0.025283325 = score(doc=5697,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23319192 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5697)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Content
    "Aber warum werden Titel mit Bindestrichen weniger häufig zitiert? Die Wissenschaftler vermuten, dass Autoren, wenn sie einen Artikel zitieren, möglicherweise übersehen, Bindestriche anzugeben. Dann kann in den Datenbanken keine Verlinkung mit dem Artikel mit Bindestrichen im Titel hergestellt werden, weswegen der Zitationsindex falsch wird. Das Problem scheint sich bei mehreren Bindestrichen zu verstärken, die die Irrtumshäufigkeit der Menschen erhöhen. Dass die Länge der Titel etwas mit der Zitationshäufigkeit zu tun hat, bestreiten die Wissenschaftler. Längere Titel würden einfach mit höherer Wahrscheinlichkeit mehr Bindestriche enthalten - und deswegen weniger häufig wegen der Bindestrichfehler zitiert werden. Und Artikel mit Bindestrichen sollen auch den JIF von Wissenschaftsjournalen senken."
    Date
    29. 6.2019 17:46:17
  13. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.00
    0.0049467054 = product of:
      0.019786822 = sum of:
        0.014168305 = product of:
          0.042504914 = sum of:
            0.042504914 = weight(_text_:problem in 4639) [ClassicSimilarity], result of:
              0.042504914 = score(doc=4639,freq=6.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.32490072 = fieldWeight in 4639, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.33333334 = coord(1/3)
        0.0056185164 = product of:
          0.016855549 = sum of:
            0.016855549 = weight(_text_:29 in 4639) [ClassicSimilarity], result of:
              0.016855549 = score(doc=4639,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.15546128 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
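Converting a vocabulary to a Semantic Web representation, as the thesis discusses, typically means mapping each term to a SKOS concept. A minimal stdlib-only sketch emitting N-Triples; the base URI and example term are invented, and the thesis's actual conversion methods are richer than this:

```python
# Map a flat vocabulary term to SKOS triples in N-Triples syntax.
SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
BASE = "http://example.org/vocab/"  # invented namespace for illustration

def term_to_ntriples(term_id, label, broader=None):
    """Emit skos:Concept, skos:prefLabel, and optional skos:broader triples."""
    uri = f"<{BASE}{term_id}>"
    triples = [
        f"{uri} <{RDF_TYPE}> <{SKOS}Concept> .",
        f'{uri} <{SKOS}prefLabel> "{label}"@en .',
    ]
    if broader:
        triples.append(f"{uri} <{SKOS}broader> <{BASE}{broader}> .")
    return "\n".join(triples)

print(term_to_ntriples("mind-body-problem", "mind-body problem",
                       broader="philosophy-of-mind"))
```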
    Date
    29. 7.2011 14:44:56
  14. Taglinger, H.: Roboter sind auch nur Menschen (2018) 0.00
    0.0046386477 = product of:
      0.03710918 = sum of:
        0.03710918 = product of:
          0.111327544 = sum of:
            0.111327544 = weight(_text_:leib in 4069) [ClassicSimilarity], result of:
              0.111327544 = score(doc=4069,freq=2.0), product of:
                0.24922726 = queryWeight, product of:
                  8.085969 = idf(docFreq=36, maxDocs=44218)
                  0.030822188 = queryNorm
                0.4466909 = fieldWeight in 4069, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.085969 = idf(docFreq=36, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4069)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Content
    "Doesn't this sound familiar from the debate about human rights for animals? Some vehemently demand what seems settled for humans: the inviolability of life and limb, above all for creatures capable of feeling pain. The bioethicist Peter Singer takes the difficult line of argument that some animals could hold their own against mentally disabled humans. Others, such as Peter Kunzmann, say that such an initiative could hollow out fundamental human rights, and that the modern understanding of human rights, such as the right to asylum, refers to life in a culture that humans have created for themselves. Well, nobody introduces voting rights for daisies either. Fair enough. The whole thing gets tricky when one starts transferring this debate to robots and AI-driven bots. By a common definition, living beings are characterized by being "(...) capable of, among other things, metabolism, reproduction, irritability, growth and evolution." Granted, the first two can still be denied to bots, yet one suddenly finds oneself in the middle of a debate about whether bots should also have the right to free speech. Unfortunately, the Constitution of the United States of America did not, at the time, concern itself with a precise definition of the target group of such an amendment. It says nothing about humans, only about "the people"." Cf.: http://www.heise.de/-4237434.
  15. Freyberg, L.: ¬Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.00
    0.003437012 = product of:
      0.013748048 = sum of:
        0.008180073 = product of:
          0.02454022 = sum of:
            0.02454022 = weight(_text_:problem in 3335) [ClassicSimilarity], result of:
              0.02454022 = score(doc=3335,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.1875815 = fieldWeight in 3335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3335)
          0.33333334 = coord(1/3)
        0.005567975 = product of:
          0.016703924 = sum of:
            0.016703924 = weight(_text_:22 in 3335) [ClassicSimilarity], result of:
              0.016703924 = score(doc=3335,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.15476047 = fieldWeight in 3335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3335)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    It is time again to update the concept of "information", or at least to report on its status quo. Information is the central object of information science and one of the most important research subjects of library and information science. Surprisingly, however, a continuous discourse comparable to the critical examination and associated updating of concepts in the humanities does not take place constantly, at least not in the German-speaking world.1 In the sense of basic theoretical research, and in order to develop a shared conceptual matrix, this would certainly be desirable. Already last year, the journal "Cybernetics and Human Knowing", edited by Søren Brier (see "The foundation of LIS in information science and semiotics"2 as well as "Semiotics in Information Science. An Interview with Søren Brier on the application of semiotic theories and the epistemological problem of a transdisciplinary Information Science"3), published eight readable statements on the concept of information by renowned philosophers and library and information scientists. Unfortunately, the journal "Cybernetics & Human Knowing" is difficult to access in Germany, since it is not an open-access journal and is subscribed to by only eight German libraries.4 Given this poor availability, it seems sensible to offer a detailed review of these eight short articles here.
  16. Ishikawa, S.: ¬A final solution to the mind-body problem by quantum language (2017) 0.00
    0.0034296005 = product of:
      0.027436804 = sum of:
        0.027436804 = product of:
          0.08231041 = sum of:
            0.08231041 = weight(_text_:problem in 3666) [ClassicSimilarity], result of:
              0.08231041 = score(doc=3666,freq=10.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.6291675 = fieldWeight in 3666, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3666)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Recently we proposed "quantum language", which was characterized not only as the metaphysical and linguistic turn of quantum mechanics but also as the linguistic turn of Descartes = Kant epistemology. Further, we believe that quantum language is the only scientifically successful theory in dualistic idealism. If this turn is regarded as progress in the history of Western philosophy (i.e., if "philosophical progress" is defined as "approaching quantum language"), we should study the linguistic mind-body problem rather than the epistemological mind-body problem. In this paper, we show that solving the mind-body problem and proposing the "measurement axiom" in quantum language are equivalent. Since our approach always remains within dualistic idealism, we believe that our linguistic answer is the only true solution to the mind-body problem.
  17. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.00
    0.0024607205 = product of:
      0.019685764 = sum of:
        0.019685764 = product of:
          0.059057288 = sum of:
            0.059057288 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.059057288 = score(doc=3582,freq=4.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  18. Herb, U.: Überwachungskapitalismus und Wissenschaftssteuerung (2019) 0.00
    0.0024328893 = product of:
      0.019463114 = sum of:
        0.019463114 = product of:
          0.05838934 = sum of:
            0.05838934 = weight(_text_:29 in 5624) [ClassicSimilarity], result of:
              0.05838934 = score(doc=5624,freq=6.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.5385337 = fieldWeight in 5624, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5624)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    29. 6.2019 17:46:17
    4. 8.2019 19:52:29
    Issue
    [29. Juli 2019].
  19. Halpin, H.; Hayes, P.J.: When owl:sameAs isn't the same : an analysis of identity links on the Semantic Web (2010) 0.00
    0.0021690698 = product of:
      0.017352559 = sum of:
        0.017352559 = product of:
          0.052057672 = sum of:
            0.052057672 = weight(_text_:problem in 4834) [ClassicSimilarity], result of:
              0.052057672 = score(doc=4834,freq=4.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.39792046 = fieldWeight in 4834, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4834)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link', a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
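    The inference concern raised in the abstract can be made concrete with a small sketch: owl:sameAs is symmetric and transitive, so a single sloppy link merges otherwise distinct resources into one identity class. The URIs and the `closure` helper below are illustrative assumptions, not taken from the paper:

    ```python
    # Hypothetical owl:sameAs assertions gathered from several data sets.
    links = [
        ("dbpedia:Berlin", "geonames:2950159"),       # the city, in two data sets
        ("geonames:2950159", "ex:Berlin_the_novel"),  # a sloppy identity link
    ]

    def closure(pairs):
        """Identity classes implied by owl:sameAs semantics (symmetric and
        transitive), computed with a simple union-find."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x

        for a, b in pairs:
            parent[find(a)] = find(b)

        classes = {}
        for x in list(parent):
            classes.setdefault(find(x), set()).add(x)
        return list(classes.values())

    # One bad link collapses all three resources into a single identity class.
    print(closure(links))
    ```

    This is exactly why the authors argue for weaker identity links (e.g. scoped to named graphs): under full owl:sameAs semantics the novel inherits every property of the city.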
  20. Faßnacht, M.: "Druckt die GND Nummer in der Publikation ab!" : Vereinfachung der Normdatenverwendung in Bibliotheken und Datenbanken (2014) 0.00
    0.0021690698 = product of:
      0.017352559 = sum of:
        0.017352559 = product of:
          0.052057672 = sum of:
            0.052057672 = weight(_text_:problem in 2393) [ClassicSimilarity], result of:
              0.052057672 = score(doc=2393,freq=4.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.39792046 = fieldWeight in 2393, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2393)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Authority files solve the problem of variant name spellings, translations and duplicate names. National or language-area-bound authority data are by now linked to one another via concordances in the "Virtual International Authority File" (VIAF) project. Alongside these library-created authority data, identification systems are also emerging in other contexts which are meant to eliminate the ambiguity of personal names in very large data sets. One labour-intensive problem, however, remains unsolved: librarians still have to laboriously research the identity of authors. How simple it would be if the GND or LoC number were printed next to the author's name! A pilot project at the UB Tübingen demonstrates how, through cooperation between editor, publisher and library, the authors' GND numbers can be printed in the publication.
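    A printed GND number becomes machine-actionable once it is expanded to its identifier URI. The sketch below uses the d-nb.info URI pattern published by the Deutsche Nationalbibliothek; the helper name and the sample ID (commonly cited as the GND record for Goethe) are illustrative assumptions:

    ```python
    def gnd_uri(gnd_id: str) -> str:
        """Expand a printed GND number into the resolvable identifier URI
        used by the Deutsche Nationalbibliothek: https://d-nb.info/gnd/<id>."""
        return f"https://d-nb.info/gnd/{gnd_id}"

    # e.g. the GND record commonly cited for Johann Wolfgang von Goethe
    print(gnd_uri("118540238"))  # → https://d-nb.info/gnd/118540238
    ```

    With the number printed alongside the author's name, a cataloguer can form this URI directly instead of researching the author's identity by hand.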

Languages

  • d 66
  • e 37
  • a 1
  • More… Less…