Search (14 results, page 1 of 1)

  • classification_ss:"ST 300"
  1. New directions in cognitive information retrieval (2005) 0.03
    0.028229088 = product of:
      0.056458175 = sum of:
        0.036153924 = weight(_text_:retrieval in 338) [ClassicSimilarity], result of:
          0.036153924 = score(doc=338,freq=24.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.28943354 = fieldWeight in 338, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=338)
        0.010694833 = weight(_text_:use in 338) [ClassicSimilarity], result of:
          0.010694833 = score(doc=338,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.08457905 = fieldWeight in 338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.01953125 = fieldNorm(doc=338)
        0.004831008 = weight(_text_:of in 338) [ClassicSimilarity], result of:
          0.004831008 = score(doc=338,freq=6.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.07481265 = fieldWeight in 338, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=338)
        0.00477841 = product of:
          0.00955682 = sum of:
            0.00955682 = weight(_text_:on in 338) [ClassicSimilarity], result of:
              0.00955682 = score(doc=338,freq=6.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.10522352 = fieldWeight in 338, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=338)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
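    The breakdown above is Lucene/Solr's ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm, and the clause sums are scaled by the coord() factors. As a minimal plain-Python sketch (no Lucene API involved; the numbers are copied from the tree above for document 338), the 0.03 score of this first hit can be reproduced like this:

        import math

        def term_score(freq, idf, query_norm, field_norm):
            # ClassicSimilarity per-term score, as in the explain tree:
            # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
            query_weight = idf * query_norm
            field_weight = math.sqrt(freq) * idf * field_norm   # tf = sqrt(termFreq)
            return query_weight * field_weight

        query_norm = 0.041294612
        field_norm = 0.01953125
        retrieval = term_score(24.0, 3.024915, query_norm, field_norm)
        use       = term_score(2.0, 3.0620887, query_norm, field_norm)
        of        = term_score(6.0, 1.5637573, query_norm, field_norm)
        on        = term_score(6.0, 2.199415, query_norm, field_norm) * 0.5   # inner coord(1/2)

        total = (retrieval + use + of + on) * 4 / 8   # outer coord(4/8)
        print(round(total, 9))   # ~0.028229088, displayed as 0.03 in the result list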
    
    Footnote
    Review in: Mitt. VÖB 59(2006) H.3, S.95-98 (O. Oberhauser): "This collection, edited by A. Spink & C. Cole, appeared shortly before their second book, which was reviewed in the previous issue of the Mitteilungen der VÖB. It addresses information scientists, librarians, social scientists and computer scientists with an interest in human-computer interaction, and offers an insight into current research on cognitively oriented information retrieval. This line of research, which starts from an analysis of users' information problems and of their cognitive behaviour when using information systems, stands in a certain contrast to the traditionally dominant IR paradigm, which concentrates on optimizing IR systems and their efficiency. "Cognitive information retrieval", or CIR (naturally this topic, too, does not get by without yet another acronym), is an interdisciplinary field of research that draws on information science, computer science, the humanities, cognitive science, human-computer interaction and other information-related areas.
    CIR Concepts - Interactive information retrieval: Bringing the user to a selection state, by Charles Cole et al. (Montréal), concentrates on the cognitive side of users as they interact with, and react to, the stimuli emitted by the IR system; "selection" here refers to the choices the system demands of users, which contribute to changing their knowledge structures. - Cognitive overlaps along the polyrepresentation continuum, by Birger Larsen and Peter Ingwersen (Copenhagen), describes a methodological approach based on Ingwersen's principle of polyrepresentation that gives the IR system a broader picture of the user and of the documents than purely query-based systems can. - Integrating approaches to relevance, by Ian Ruthven (Glasgow), analyses the concept of relevance and proposes a multidimensional view in place of the one-dimensional relevance concept currently used in IR systems. - New cognitive directions, by Nigel Ford (Sheffield), introduces new terminology: instead of information need and information behaviour, Ford proposes the alternatives knowledge need and knowledge behaviour.
    CIR Processes - A multitasking framework for cognitive information retrieval, by Amanda Spink and Charles Cole (Australia/Canada), regards the simultaneous handling of several tasks (topics) during an information search as the normal case - in contrast to traditional approaches - and analyses the user behaviour this entails. - Explanation in information seeking and retrieval, by Pertti Vakkari and Kalervo Järvelin (Tampere), argues on the basis of two empirical studies for a task-oriented approach in IR research, not least as a bridge between disciplines that do not communicate with one another sufficiently (information science, computer science, various social sciences). - Towards an alternative information retrieval system for children, by Jamshid Beheshti et al. (Montréal), reports on the state of IR research for children and proposes using a metaphor from social constructivism (learning as social negotiation) as a design principle for IR systems in this area. CIR Techniques - Implicit feedback: using behavior to infer relevance, by Diane Kelly (North Carolina), takes a critical look at techniques for analysing the relevance feedback - explicit and implicit - given by users of IR systems. - Educational knowledge domain visualizations, by Peter Hook and Katy Börner (Indiana), describes various visualization techniques for representing knowledge domains that are meant to support "novices" in using subject-specific IR systems. - Learning and training to search, by Wendy Lucas and Heikki Topi (Massachusetts), analyses, in the broader context of information-seeking research, techniques for training users of IR systems.
    All contributions are of a high standard and make for demanding reading. Put generally, they ask how the broader context of the why and how of human information seeking connects with the technical and other constraints that shape the interaction between users and systems. This is, of course, neither a handbook nor a textbook, so one cannot - in fairness - expect a systematic treatment of the subject area or a didactic structure. In any case, the book offers a good and varied introduction to, and overview of, this interesting field of research. Specialised libraries and larger general libraries should therefore certainly add it to their collections. The review of the editor duo Spink-Cole's second book, cited above, already contained a critical remark about its subject index. The present volume demands even stronger nerves, for the page-filler labelled "Index" here simply defies description, all the more so as we find ourselves in an information science context. What is one actually supposed to do with entries such as "information", especially when more than 150 different page numbers are listed for it? The same goes for other general terms such as "knowledge", "model", "tasks", "use", "users" - all furnished with an enormous number of page references and therefore worthless! This index, of so little use to the reader, is probably to be blamed on the publisher, even if the editors themselves may have been its creators. That aside, this is once again a solidly produced volume, which, however, because of its high price will appeal mainly to institutional buyers."
    Further review in: JASIST 58(2007) no.5, S.758-760 (A. Gruzd): "Despite the minor drawbacks described, the book is a great source for researchers in the IR&S fields in general and in the CIR field in particular. Furthermore, different chapters of this book also might be of interest to members from other communities. For instance, librarians responsible for library instruction might find the chapter on search training by Lucas and Topi helpful in their work. Cognitive psychologists would probably be intrigued by Spink and Cole's view on multitasking. IR interface designers will likely find the chapter on KDV by Hook and Börner very beneficial. And students taking IR-related courses might find the thorough literature reviews by Ruthven and Kelly particularly useful when beginning their own research."
    LCSH
    Information retrieval ; Human / computer interaction
    RSWK
    Kognition / Informationsverarbeitung / Information Retrieval / Aufsatzsammlung
    Series
    The information retrieval series, vol. 19
    Subject
    Kognition / Informationsverarbeitung / Information Retrieval / Aufsatzsammlung
    Information retrieval ; Human / computer interaction
  2. Stuart, D.: Practical ontologies for information professionals (2016) 0.02
    0.024756368 = product of:
      0.06601698 = sum of:
        0.029222867 = weight(_text_:retrieval in 5152) [ClassicSimilarity], result of:
          0.029222867 = score(doc=5152,freq=8.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.23394634 = fieldWeight in 5152, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5152)
        0.021174688 = weight(_text_:use in 5152) [ClassicSimilarity], result of:
          0.021174688 = score(doc=5152,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.16745798 = fieldWeight in 5152, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5152)
        0.015619429 = weight(_text_:of in 5152) [ClassicSimilarity], result of:
          0.015619429 = score(doc=5152,freq=32.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.24188137 = fieldWeight in 5152, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5152)
      0.375 = coord(3/8)
    
    Abstract
    Practical Ontologies for Information Professionals provides an accessible introduction to and exploration of ontologies and demonstrates their value to information professionals. More data and information are being created than ever before. Ontologies, formal representations of knowledge with rich semantic relationships, have become increasingly important in the context of today's information overload and data deluge. Publishing and sharing explicit explanations of a wide variety of conceptualizations, in a machine-readable format, has the power both to improve information retrieval and to support the discovery of new knowledge. Information professionals are key contributors to the development of new, and increasingly useful, ontologies. Practical Ontologies for Information Professionals provides an accessible introduction to the following: defining the concept of ontologies and why they are increasingly important to information professionals; ontologies and the semantic web; existing ontologies, such as RDF, RDFS, SKOS, and OWL2; adopting and building ontologies, showing how to avoid duplication of work and how to build a simple ontology; interrogating ontologies for reuse; and the future of ontologies and the role of the information professional in their development and use. This book will be useful reading for information professionals in libraries and other cultural heritage institutions who work with digitalization projects, cataloguing and classification, and information retrieval. It will also be useful to LIS students who are new to the field.
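    The abstract's "build a simple ontology" theme can be illustrated with a minimal sketch. This is not taken from the book; it assumes the Python rdflib library and a hypothetical example.org namespace, and simply declares a tiny RDFS class hierarchy with a SKOS label before serializing it as Turtle:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS, SKOS

        EX = Namespace("http://example.org/onto/")   # hypothetical namespace

        g = Graph()
        g.bind("ex", EX)

        # A minimal class hierarchy plus a human-readable SKOS label.
        g.add((EX.Publication, RDF.type, RDFS.Class))
        g.add((EX.Journal, RDF.type, RDFS.Class))
        g.add((EX.Journal, RDFS.subClassOf, EX.Publication))
        g.add((EX.Journal, SKOS.prefLabel, Literal("Journal", lang="en")))

        print(g.serialize(format="turtle"))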
    Content
    CHAPTER 1 What is an ontology?; Introduction; The data deluge and information overload; Defining terms; Knowledge organization systems and ontologies; Ontologies, metadata and linked data; What can an ontology do?; Ontologies and information professionals; Alternatives to ontologies; The aims of this book; The structure of this book. CHAPTER 2 Ontologies and the semantic web; Introduction; The semantic web and linked data; Resource Description Framework (RDF); Classes, subclasses and properties; The semantic web stack; Embedded RDF; Alternative semantic visions; Libraries and the semantic web; Other cultural heritage institutions and the semantic web; Other organizations and the semantic web; Conclusion. CHAPTER 3 Existing ontologies; Introduction; Ontology documentation; Ontologies for representing ontologies; Ontologies for libraries; Upper ontologies; Cultural heritage data models; Ontologies for the web; Conclusion. CHAPTER 4 Adopting ontologies; Introduction; Reusing ontologies: application profiles and data models; Identifying ontologies; The ideal ontology discovery tool; Selection criteria; Conclusion. CHAPTER 5 Building ontologies; Introduction; Approaches to building an ontology; The twelve steps; Ontology development example: Bibliometric Metrics Ontology element set; Conclusion. CHAPTER 6 Interrogating ontologies; Introduction; Interrogating ontologies for reuse; Interrogating a knowledge base; Understanding ontology use; Conclusion. CHAPTER 7 The future of ontologies and the information professional; Introduction; The future of ontologies for knowledge discovery; The future role of library and information professionals; The practical development of ontologies
    LCSH
    Ontologies (Information retrieval)
    Subject
    Ontologies (Information retrieval)
  3. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.02
    0.017151248 = product of:
      0.045736663 = sum of:
        0.023615643 = weight(_text_:retrieval in 168) [ClassicSimilarity], result of:
          0.023615643 = score(doc=168,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.18905719 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.010931322 = weight(_text_:of in 168) [ClassicSimilarity], result of:
          0.010931322 = score(doc=168,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.16928169 = fieldWeight in 168, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.0111897 = product of:
          0.0223794 = sum of:
            0.0223794 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
              0.0223794 = score(doc=168,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.15476047 = fieldWeight in 168, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=168)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
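    To make "finding correspondences between semantically related entities" concrete, here is a deliberately simple, hedged sketch of element-level label matching - only one of the many technique families the book surveys - with hypothetical entity identifiers and an arbitrary similarity threshold:

        from difflib import SequenceMatcher

        def match_by_label(onto_a, onto_b, threshold=0.8):
            # Propose equivalence correspondences between entities of two
            # ontologies by comparing their labels (string similarity only).
            correspondences = []
            for id_a, label_a in onto_a.items():
                for id_b, label_b in onto_b.items():
                    sim = SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()
                    if sim >= threshold:
                        correspondences.append((id_a, id_b, "=", round(sim, 2)))
            return correspondences

        # Hypothetical toy ontologies: entity identifier -> label.
        onto_a = {"a:Paper": "Scientific Article", "a:Creator": "Author"}
        onto_b = {"b:Article": "Scientific article", "b:Author": "Author"}
        print(match_by_label(onto_a, onto_b))
        # [('a:Paper', 'b:Article', '=', 1.0), ('a:Creator', 'b:Author', '=', 1.0)]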
    Date
    20. 6.2012 19:08:22
    LCSH
    Ontologies (Information retrieval)
    Subject
    Ontologies (Information retrieval)
  4. Handbook of metadata, semantics and ontologies (2014) 0.01
    0.0128055895 = product of:
      0.03414824 = sum of:
        0.01711173 = weight(_text_:use in 5134) [ClassicSimilarity], result of:
          0.01711173 = score(doc=5134,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.13532647 = fieldWeight in 5134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
        0.012622404 = weight(_text_:of in 5134) [ClassicSimilarity], result of:
          0.012622404 = score(doc=5134,freq=16.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.19546966 = fieldWeight in 5134, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
        0.004414106 = product of:
          0.008828212 = sum of:
            0.008828212 = weight(_text_:on in 5134) [ClassicSimilarity], result of:
              0.008828212 = score(doc=5134,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.097201325 = fieldWeight in 5134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5134)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and their interrelations, and large databases with biological information use complex and detailed metadata schemas for more precise and informed search strategies. There is a wide diversity in the languages and idioms used for providing meta-descriptions, from simple structured text in metadata schemas to formal annotations using ontologies, and the technologies for storing, sharing and exploiting meta-descriptions are also diverse and evolve rapidly. In addition, there is a proliferation of schemas and standards related to metadata, resulting in a complex and moving technological landscape - hence, the need for specialized knowledge and skills in this area. The Handbook of Metadata, Semantics and Ontologies is intended as an authoritative reference for students, practitioners and researchers, serving as a roadmap for the variety of metadata schemas and ontologies available in a number of key domain areas, including culture, biology, education, healthcare, engineering and library science.
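    As a small, hedged illustration of the abstract's point that "Web-based social networking requires metadata describing people and their interrelations", the sketch below assumes the Python rdflib library, the FOAF vocabulary and hypothetical example.org URIs; it is not drawn from the handbook itself:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import FOAF, RDF

        EX = Namespace("http://example.org/people/")   # hypothetical URIs

        g = Graph()
        g.bind("foaf", FOAF)

        # Ontology-based metadata about two people and their relationship.
        g.add((EX.alice, RDF.type, FOAF.Person))
        g.add((EX.alice, FOAF.name, Literal("Alice Example")))
        g.add((EX.bob, RDF.type, FOAF.Person))
        g.add((EX.bob, FOAF.name, Literal("Bob Example")))
        g.add((EX.alice, FOAF.knows, EX.bob))

        print(g.serialize(format="turtle"))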
  5. Fensel, D.: Ontologies : a silver bullet for knowledge management and electronic commerce (2001) 0.01
    0.01049829 = product of:
      0.04199316 = sum of:
        0.029519552 = weight(_text_:retrieval in 163) [ClassicSimilarity], result of:
          0.029519552 = score(doc=163,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.23632148 = fieldWeight in 163, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=163)
        0.012473608 = weight(_text_:of in 163) [ClassicSimilarity], result of:
          0.012473608 = score(doc=163,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.19316542 = fieldWeight in 163, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=163)
      0.25 = coord(2/8)
    
    Abstract
    Ontologies have been developed and investigated for quite a while now in artificial intelligence and natural language processing to facilitate knowledge sharing and reuse. More recently, the notion of ontologies has attracted attention from fields such as intelligent information integration, cooperative information systems, information retrieval, electronic commerce, and knowledge management. The author systematically introduces the notion of ontologies to the non-expert reader and demonstrates in detail how to apply this conceptual framework for improved intranet retrieval of corporate information and knowledge and for enhanced Internet-based electronic commerce. In the second part of the book, the author presents a more technical view on emerging Web standards, such as XML, RDF, XSL-T, or XQL, allowing for structural and semantic modeling and description of data and information.
  6. Dreyfus, H.L.: ¬Die Grenzen künstlicher Intelligenz : was Computer nicht können (1985) 0.01
    0.009854372 = product of:
      0.039417487 = sum of:
        0.007889003 = weight(_text_:of in 4332) [ClassicSimilarity], result of:
          0.007889003 = score(doc=4332,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.12216854 = fieldWeight in 4332, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4332)
        0.031528484 = product of:
          0.06305697 = sum of:
            0.06305697 = weight(_text_:computers in 4332) [ClassicSimilarity], result of:
              0.06305697 = score(doc=4332,freq=2.0), product of:
                0.21710795 = queryWeight, product of:
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.041294612 = queryNorm
                0.29044062 = fieldWeight in 4332, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4332)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Content
    See also the viewpoints in: Collins, H.M.: A review of Hubert Dreyfus' What computers still can't do, in: Artificial intelligence 80(1996) no.1, S.99-191.
    Footnote
    Main title and subtitle (HST and ZST) are given in reverse order in some catalogues (cf. the design of the cover and title page). Original title: What computers can't do: the limits of artificial intelligence.
  7. Schank, R.C.; Childers, P.G.: ¬Die Zukunft der künstlichen Intelligenz : Chancen und Risiken (1986) 0.00
    0.004729273 = product of:
      0.037834182 = sum of:
        0.037834182 = product of:
          0.075668365 = sum of:
            0.075668365 = weight(_text_:computers in 3708) [ClassicSimilarity], result of:
              0.075668365 = score(doc=3708,freq=2.0), product of:
                0.21710795 = queryWeight, product of:
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.041294612 = queryNorm
                0.34852874 = fieldWeight in 3708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3708)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    This book is about the nature of human intelligence and about what it would mean to possess 'machine intelligence'. Constructing the 'thinking machine' as an aid to humans has always been the goal and dream of brilliant thinkers and inventors. Only in our time, with the support of the computer, are we approaching the realization of these indeed ambitious plans. The public has discovered artificial intelligence but is not quite sure what it actually is. After all, we must consider the question of what computers can do in parallel with the question of WHAT HUMANS CAN DO. To this day, therefore, even experts find it extraordinarily difficult to teach computing machines to think.
  8. Penrose, R.: Computerdenken : Des Kaisers neue Kleider oder Die Debatte um Künstliche Intelligenz, Bewußtsein und die Gesetze der Physik (1991) 0.00
    0.004458801 = product of:
      0.035670407 = sum of:
        0.035670407 = product of:
          0.071340814 = sum of:
            0.071340814 = weight(_text_:computers in 4451) [ClassicSimilarity], result of:
              0.071340814 = score(doc=4451,freq=4.0), product of:
                0.21710795 = queryWeight, product of:
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.041294612 = queryNorm
                0.32859606 = fieldWeight in 4451, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4451)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    LCSH
    Computers
    Subject
    Computers
  9. Lenzen, M.: Künstliche Intelligenz : was sie kann & was uns erwartet (2018) 0.00
    0.0017483906 = product of:
      0.013987125 = sum of:
        0.013987125 = product of:
          0.02797425 = sum of:
            0.02797425 = weight(_text_:22 in 4295) [ClassicSimilarity], result of:
              0.02797425 = score(doc=4295,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.19345059 = fieldWeight in 4295, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4295)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    18. 6.2018 19:22:02
  10. Moravec, H.P.: Mind children : der Wettlauf zwischen menschlicher und künstlicher Intelligenz (1990) 0.00
    0.0011833503 = product of:
      0.009466803 = sum of:
        0.009466803 = weight(_text_:of in 5338) [ClassicSimilarity], result of:
          0.009466803 = score(doc=5338,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.14660224 = fieldWeight in 5338, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5338)
      0.125 = coord(1/8)
    
    Abstract
    Arguing that within the next fifty years machines will equal humans not only in reasoning power but also in their ability to perceive, interact with, and change their environment, the author describes the tremendous technological advances possible in the field of robotics.
    Content
    "A dizzying display of intellect and wild imaginings by Moravec, a world-class roboticist who has himself developed clever beasts . . . Undeniably, Moravec comes across as a highly knowledgeable and creative talent-which is just what the field needs" - Kirkus Reviews.
  11. McCorduck, P.: Denkmaschinen : die Geschichte der künstlichen Intelligenz (1987) 0.00
    0.0011156735 = product of:
      0.008925388 = sum of:
        0.008925388 = weight(_text_:of in 4334) [ClassicSimilarity], result of:
          0.008925388 = score(doc=4334,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.13821793 = fieldWeight in 4334, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4334)
      0.125 = coord(1/8)
    
    Footnote
    Original title: Machines who think: a personal inquiry into the history and prospects of artificial intelligence
  12. Allman, W.F.: Menschliches Denken - Künstliche Intelligenz : von der Gehirnforschung zur nächsten Computer-Generation (1990) 0.00
    6.9729594E-4 = product of:
      0.0055783675 = sum of:
        0.0055783675 = weight(_text_:of in 3948) [ClassicSimilarity], result of:
          0.0055783675 = score(doc=3948,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.086386204 = fieldWeight in 3948, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3948)
      0.125 = coord(1/8)
    
    Footnote
    Original title: Apprentices of wonder
  13. Penrose, R.: Schatten des Geistes : Wege zu einer neuen Physik des Bewußtseins (1995) 0.00
    6.9729594E-4 = product of:
      0.0055783675 = sum of:
        0.0055783675 = weight(_text_:of in 4450) [ClassicSimilarity], result of:
          0.0055783675 = score(doc=4450,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.086386204 = fieldWeight in 4450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4450)
      0.125 = coord(1/8)
    
    Footnote
    Original title: Shadows of the mind. Review in: Spektrum der Wissenschaft 1996, H.8, S.118-119 (I. Diener)
  14. Tegmark, M.: Leben 3.0 : Mensch sein im Zeitalter Künstlicher Intelligenz (2017) 0.00
    4.8810715E-4 = product of:
      0.0039048572 = sum of:
        0.0039048572 = weight(_text_:of in 4558) [ClassicSimilarity], result of:
          0.0039048572 = score(doc=4558,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.060470343 = fieldWeight in 4558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4558)
      0.125 = coord(1/8)
    
    Abstract
    Artificial intelligence is our inescapable future. Will it plunge us into ruin, or will it contribute to the further development of Homo sapiens? The Massachusetts Institute of Technology, a forge of Nobel laureates, is the most important technological think tank in the USA. There, Professor Max Tegmark works with the world's leading developers of artificial intelligence, who grant him exclusive insights into their labs. The conclusions he draws from this are breathtaking and deeply disturbing at the same time. Is the era of humans drawing to a close? Drawing on the latest research, the physics professor Max Tegmark shows what awaits humanity. A selection of possible scenarios: - Conqueror: artificial intelligence seizes power and disposes of humanity with methods we do not even understand. - The enslaved god: humans take control of a superintelligent artificial intelligence and use it to produce advanced technologies. - Reversal: technological progress is radically halted and we return to a pre-technological society in the style of the Amish. - Self-destruction: superintelligence is never reached because humanity destroys itself beforehand, by nuclear war or other means. - Egalitarian utopia: there is neither superintelligence nor property; humans and cybernetic organisms coexist peacefully. Max Tegmark offers intelligent and well-founded future scenarios based on his exclusive insights into current research on artificial intelligence.
