Search (8 results, page 1 of 1)

  • classification_ss:"54.72 (Künstliche Intelligenz)"
  • year_i:[2010 TO 2020}
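
  The field suffixes (_ss, _i) and the ClassicSimilarity explain trees below point to a Solr/Lucene backend. As a minimal sketch of how this filtered search could be reproduced (the endpoint URL is an assumption, and the original free-text query is not shown on this page, so a placeholder is used):

    import requests

    # Assumed Solr endpoint; host and collection name are not given on this page.
    SOLR_SELECT = "http://localhost:8983/solr/catalog/select"

    free_text_query = "*:*"  # placeholder: the page does not show the original query terms

    params = {
        "q": free_text_query,
        "fq": [  # the two active filters listed above
            'classification_ss:"54.72 (Künstliche Intelligenz)"',
            "year_i:[2010 TO 2020}",  # inclusive lower bound, exclusive upper bound
        ],
        "rows": 10,
        "wt": "json",
        "debugQuery": "true",  # returns per-document explain trees like the ones below
    }

    response = requests.get(SOLR_SELECT, params=params)
    response.raise_for_status()
    print(response.json()["response"]["numFound"], "results")
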
  1. Lenzen, M.: Künstliche Intelligenz : was sie kann & was uns erwartet (2018) 0.06
    0.059796274 = product of:
      0.11959255 = sum of:
        0.026693465 = weight(_text_:von in 4295) [ClassicSimilarity], result of:
          0.026693465 = score(doc=4295,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.2084335 = fieldWeight in 4295, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4295)
        0.092899084 = product of:
          0.13934863 = sum of:
            0.10683053 = weight(_text_:z in 4295) [ClassicSimilarity], result of:
              0.10683053 = score(doc=4295,freq=4.0), product of:
                0.2562021 = queryWeight, product of:
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.04800207 = queryNorm
                0.41697758 = fieldWeight in 4295, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4295)
            0.032518093 = weight(_text_:22 in 4295) [ClassicSimilarity], result of:
              0.032518093 = score(doc=4295,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19345059 = fieldWeight in 4295, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4295)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
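
    The indented block above (and those in the following records) is a Lucene explain tree for ClassicSimilarity, i.e. TF-IDF scoring: each matching term contributes queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = tf(freq) x idf x fieldNorm, where tf(freq) = sqrt(freq); the per-term contributions are then summed and scaled by the coord factors. A small sketch, using only the numbers printed in the tree for result 1, reproduces the first contribution:

      import math

      def classic_term_score(freq, idf, query_norm, field_norm):
          """One term's contribution under Lucene's ClassicSimilarity (TF-IDF)."""
          tf = math.sqrt(freq)                  # 2.0 = tf(freq=4.0) in the tree above
          query_weight = idf * query_norm       # idf x queryNorm
          field_weight = tf * idf * field_norm  # tf x idf x fieldNorm
          return query_weight * field_weight

      # Values copied from the explain tree of result 1 (doc 4295), term _text_:von;
      # the idf itself is 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 8341) ~ 2.6679487
      w_von = classic_term_score(freq=4.0, idf=2.6679487,
                                 query_norm=0.04800207, field_norm=0.0390625)
      print(w_von)  # ~0.026693465, matching the weight(_text_:von in 4295) line

    The remaining lines combine these contributions: 0.026693465 + 0.092899084 = 0.11959255, and the outer coord(2/4) = 0.5 scales this to the displayed total of 0.059796274.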
    
    Abstract
    Artificial intelligence (AI) stands for machines that can do what humans can: hear and see, speak, learn, solve problems. In some areas they are by now not only faster but also better than humans. How do these clever machines work? Do they threaten us, do they even make us superfluous? The journalist and AI expert Manuela Lenzen explains clearly what artificial intelligence can do and what lies ahead for us. Artificial intelligence is the new magic word of digital capitalism. Intelligent computer systems make medical diagnoses and give legal advice. They manage stock trading and will soon be driving our cars. They paint, write poetry, interpret and compose. Ever smarter robots work on the assembly lines, greet us at the hotel, guide us through the museum, or fry burgers and chop the salad to go with them. Yet alongside the utopia of a brave new intelligent world of technology, frightening visions have long since emerged: of artificial intelligences that monitor our every move, take over our jobs and escape our control. Manuela Lenzen shows which hopes and fears are realistic and which belong in science fiction. She describes what a good life with artificial intelligence could look like - and shows that we can learn a great deal about ourselves from clever machines.
    Classification
    Z 010
    Date
    18.06.2018 19:22:02
    KAB
    Z 010
  2. Tegmark, M.: Leben 3.0 : Mensch sein im Zeitalter Künstlicher Intelligenz (2017) 0.02
    0.021806274 = product of:
      0.043612547 = sum of:
        0.018685425 = weight(_text_:von in 4558) [ClassicSimilarity], result of:
          0.018685425 = score(doc=4558,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.14590344 = fieldWeight in 4558, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4558)
        0.024927124 = product of:
          0.07478137 = sum of:
            0.07478137 = weight(_text_:z in 4558) [ClassicSimilarity], result of:
              0.07478137 = score(doc=4558,freq=4.0), product of:
                0.2562021 = queryWeight, product of:
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2918843 = fieldWeight in 4558, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4558)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Classification
    Z 010
    Footnote
    What stands out positively when reading the book: the author does not pass judgment on these ethical questions but puts them openly up for discussion. His tone is matter-of-fact, sober and free of any alarmism. Unlike the often apodictically arguing Kurzweil, Tegmark presents his future scenarios not as inevitabilities but as a space of possibilities that can still be shaped. At the current stage of development, humankind can still set the course - it only has to decide. One can also read this as a veiled criticism of programmers, who often hide behind the analytical formula of "problem solving" without developing sustainable models of society. AI will change social hierarchies, perhaps even restructure them - Tegmark leaves no doubt about that. His book is a lucid, if not always comforting, survey. Only the closing chapter on theories of consciousness and a few allusions to conference participants who are probably familiar only to insiders could have been dispensed with. Otherwise it is a thoroughly successful work that serves the reader as a compass through the twists and turns of artificial intelligence.
    Issue
    Translated by Hubert Mania.
    KAB
    Z 010
  3. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.01
    0.007208732 = product of:
      0.028834928 = sum of:
        0.028834928 = product of:
          0.04325239 = sum of:
            0.004230681 = weight(_text_:a in 3197) [ClassicSimilarity], result of:
              0.004230681 = score(doc=3197,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.07643694 = fieldWeight in 3197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3197)
            0.039021708 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
              0.039021708 = score(doc=3197,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23214069 = fieldWeight in 3197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3197)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. The title consists of twelve chapters: an introduction to subject headings and thesauri; Automatic indexing versus manual indexing; Techniques applied in automatic indexing of text material; Automatic indexing of images; The black art of indexing moving images; Automatic indexing of music; Taxonomies and ontologies; Metadata formats and indexing; Tagging; Topic maps; Indexing the web; and The Semantic Web.
    Date
    24.08.2016 14:03:22
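
    Record 3 surveys automatic and manual indexing of text. As a minimal, generic illustration (not taken from the book), the data structure at the heart of automatic text indexing, and the source of the docFreq/termFreq statistics in the score explanations above, is an inverted index mapping each term to the documents that contain it:

      from collections import defaultdict

      def build_inverted_index(docs):
          """Map each token to the set of document ids that contain it (toy tokenizer)."""
          index = defaultdict(set)
          for doc_id, text in docs.items():
              for token in text.lower().split():
                  index[token.strip(".,;:!?")].add(doc_id)
          return index

      docs = {
          "d1": "Indexing: from thesauri to the Semantic Web",
          "d2": "Practical ontologies for information professionals",
      }
      index = build_inverted_index(docs)
      print(sorted(index["indexing"]))  # ['d1']
      print(len(index["the"]))          # 1 = document frequency of "the" in this toy corpus
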
  4. Bostrom, N.: Superintelligenz : Szenarien einer kommenden Revolution (2016) 0.01
    0.0066733663 = product of:
      0.026693465 = sum of:
        0.026693465 = weight(_text_:von in 4318) [ClassicSimilarity], result of:
          0.026693465 = score(doc=4318,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.2084335 = fieldWeight in 4318, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4318)
      0.25 = coord(1/4)
    
    Abstract
    What happens if one day we succeed in developing a machine that surpasses human intelligence in virtually every domain? One thing is clear: such a superintelligence would be enormously powerful and would confront us with huge problems of control and governance. More than that: the future of the human species would probably lie in its hands, just as the future of the gorillas today depends on us. Nick Bostrom takes us on a fascinating journey into the world of oracles and genies, of supercomputers and brain simulations, but above all into the laboratories around the world in which artificial intelligence is currently being developed at a feverish pace. He sketches possible scenarios for how the birth of superintelligence might come about and examines the consequences of this revolution in detail. They will be global and will profoundly change our economic, social and political life. We must act, and act collectively, before the genie is out of the bottle - that is, now! That is the eminently political message of this book, which is as gripping as it is important.
    Issue
    Translated from the English by Jan-Erik Strasser.
  5. Stuart, D.: Practical ontologies for information professionals (2016) 0.00
    6.8209076E-4 = product of:
      0.002728363 = sum of:
        0.002728363 = product of:
          0.008185089 = sum of:
            0.008185089 = weight(_text_:a in 5152) [ClassicSimilarity], result of:
              0.008185089 = score(doc=5152,freq=22.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.14788237 = fieldWeight in 5152, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5152)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Practical Ontologies for Information Professionals provides an accessible introduction to, and exploration of, ontologies and demonstrates their value to information professionals. More data and information are being created than ever before. Ontologies, formal representations of knowledge with rich semantic relationships, have become increasingly important in the context of today's information overload and data deluge. The publishing and sharing of explicit explanations for a wide variety of conceptualizations, in a machine-readable format, has the power both to improve information retrieval and to discover new knowledge. Information professionals are key contributors to the development of new, and increasingly useful, ontologies. Practical Ontologies for Information Professionals provides an accessible introduction to: defining the concept of ontologies and why they are increasingly important to information professionals; ontologies and the semantic web; existing ontologies, such as RDF, RDFS, SKOS, and OWL2; adopting and building ontologies, showing how to avoid repetition of work and how to build a simple ontology; interrogating ontologies for reuse; and the future of ontologies and the role of the information professional in their development and use. This book will be useful reading for information professionals in libraries and other cultural heritage institutions who work with digitization projects, cataloguing and classification, and information retrieval. It will also be useful to LIS students who are new to the field.
    Content
    Chapter 1. What is an ontology?: Introduction; The data deluge and information overload; Defining terms; Knowledge organization systems and ontologies; Ontologies, metadata and linked data; What can an ontology do?; Ontologies and information professionals; Alternatives to ontologies; The aims of this book; The structure of this book.
    Chapter 2. Ontologies and the semantic web: Introduction; The semantic web and linked data; Resource Description Framework (RDF); Classes, subclasses and properties; The semantic web stack; Embedded RDF; Alternative semantic visions; Libraries and the semantic web; Other cultural heritage institutions and the semantic web; Other organizations and the semantic web; Conclusion.
    Chapter 3. Existing ontologies: Introduction; Ontology documentation; Ontologies for representing ontologies; Ontologies for libraries; Upper ontologies; Cultural heritage data models; Ontologies for the web; Conclusion.
    Chapter 4. Adopting ontologies: Introduction; Reusing ontologies: application profiles and data models; Identifying ontologies; The ideal ontology discovery tool; Selection criteria; Conclusion.
    Chapter 5. Building ontologies: Introduction; Approaches to building an ontology; The twelve steps; Ontology development example: Bibliometric Metrics Ontology element set; Conclusion.
    Chapter 6. Interrogating ontologies: Introduction; Interrogating ontologies for reuse; Interrogating a knowledge base; Understanding ontology use; Conclusion.
    Chapter 7. The future of ontologies and the information professional: Introduction; The future of ontologies for knowledge discovery; The future role of library and information professionals; The practical development of ontologies.
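
    Chapter 5 of record 5 walks through building a simple ontology element set. A minimal sketch of what such an element set can look like, expressed in RDFS with the rdflib library (the library choice, namespace and class names are illustrative assumptions, not the book's own example):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/bibliometrics#")  # hypothetical namespace for illustration

      g = Graph()
      g.bind("ex", EX)

      # Two classes and one property relating them, in the spirit of a small element set
      g.add((EX.Publication, RDF.type, RDFS.Class))
      g.add((EX.Metric, RDF.type, RDFS.Class))
      g.add((EX.hasMetric, RDF.type, RDF.Property))
      g.add((EX.hasMetric, RDFS.domain, EX.Publication))
      g.add((EX.hasMetric, RDFS.range, EX.Metric))
      g.add((EX.Publication, RDFS.label, Literal("Publication", lang="en")))

      print(g.serialize(format="turtle"))  # the ontology as Turtle, ready for reuse
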
  6. Helbig, H.: Knowledge representation and the semantics of natural language (2014) 0.00
    6.569507E-4 = product of:
      0.0026278028 = sum of:
        0.0026278028 = product of:
          0.007883408 = sum of:
            0.007883408 = weight(_text_:a in 2396) [ClassicSimilarity], result of:
              0.007883408 = score(doc=2396,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.14243183 = fieldWeight in 2396, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2396)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Natural language is not only the most important means of communication between human beings, it has also been used over historical periods for the preservation of cultural achievements and their transmission from one generation to the next. During the last few decades, the flood of digitized information has grown tremendously. This tendency will continue with the globalisation of information societies and with the growing importance of national and international computer networks. This is one reason why the theoretical understanding and the automated treatment of communication processes based on natural language have such a decisive social and economic impact. In this context, the semantic representation of knowledge originally formulated in natural language plays a central part, because it connects all components of natural language processing systems, be they the automatic understanding of natural language (analysis), rational reasoning over knowledge bases, or the generation of natural language expressions from formal representations. This book presents a method for the semantic representation of natural language expressions (texts, sentences, phrases, etc.) which can be used as a universal knowledge representation paradigm in the human sciences, like linguistics, cognitive psychology, or philosophy of language, as well as in computational linguistics and in artificial intelligence. It is also an attempt to close the gap between these disciplines, which to a large extent are still working separately.
  7. Grigonyte, G.: Building and evaluating domain ontologies : NLP contributions (2010) 0.00
    4.1131617E-4 = product of:
      0.0016452647 = sum of:
        0.0016452647 = product of:
          0.004935794 = sum of:
            0.004935794 = weight(_text_:a in 481) [ClassicSimilarity], result of:
              0.004935794 = score(doc=481,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.089176424 = fieldWeight in 481, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=481)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    An ontology is a knowledge representation structure made up of concepts and their interrelations. It represents a shared understanding of some domain. The building of an ontology can be addressed from the perspective of natural language processing. This thesis discusses the validity and theoretical background of knowledge acquisition from natural language. It also presents the theoretical and experimental framework for NLP-driven ontology building and evaluation tasks.
  8. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.00
    4.0709745E-4 = product of:
      0.0016283898 = sum of:
        0.0016283898 = product of:
          0.004885169 = sum of:
            0.004885169 = weight(_text_:a in 5329) [ClassicSimilarity], result of:
              0.004885169 = score(doc=5329,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.088261776 = fieldWeight in 5329, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5329)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This book describes efficient and effective techniques for harnessing the power of Linked Data by tackling the various aspects of managing its growing volume: storing, querying, reasoning, provenance management and benchmarking. To this end, Chapter 1 introduces the main concepts of the Semantic Web and Linked Data and provides a roadmap for the book. Next, Chapter 2 briefly presents the basic concepts underpinning Linked Data technologies that are discussed in the book. Chapter 3 then offers an overview of various techniques and systems for centrally querying RDF datasets, and Chapter 4 outlines various techniques and systems for efficiently querying large RDF datasets in distributed environments. Subsequently, Chapter 5 explores how streaming requirements are addressed in current, state-of-the-art RDF stream data processing. Chapter 6 covers performance and scaling issues of distributed RDF reasoning systems, while Chapter 7 details benchmarks for RDF query engines and instance matching systems. Chapter 8 addresses the provenance management for Linked Data and presents the different provenance models developed. Lastly, Chapter 9 offers a brief summary, highlighting and providing insights into some of the open challenges and research directions. Providing an updated overview of methods, technologies and systems related to Linked Data, this book is mainly intended for students and researchers who are interested in the Linked Data domain. It enables students to gain an understanding of the foundations and underpinning technologies and standards for Linked Data, while researchers benefit from the in-depth coverage of the emerging and ongoing advances in Linked Data storing, querying, reasoning, and provenance management systems. Further, it serves as a starting point to tackle the next research challenges in the domain of Linked Data management.
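
    Record 8 deals with storing and querying Linked Data. A minimal sketch of centrally querying a small RDF dataset with SPARQL, again using rdflib as an assumed tool and made-up sample data (not taken from the book):

      from rdflib import Graph

      # A tiny Linked Data snippet in Turtle, for illustration only
      turtle_data = """
      @prefix dcterms: <http://purl.org/dc/terms/> .
      <http://example.org/book/1> dcterms:title "Linked data: storing, querying, and reasoning" ;
                                  dcterms:creator "Sakr, S." .
      """

      g = Graph()
      g.parse(data=turtle_data, format="turtle")

      query = """
      PREFIX dcterms: <http://purl.org/dc/terms/>
      SELECT ?title ?creator WHERE {
          ?book dcterms:title ?title ;
                dcterms:creator ?creator .
      }
      """

      for row in g.query(query):
          print(row.title, "-", row.creator)
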
