Search (11 results, page 1 of 1)

  • classification_ss:"54.72 / Künstliche Intelligenz"
  1. Hofstadter, D.R.; Fluid Analogies Group: ¬Die FARGonauten : über Analogie und Kreativität (1996) 0.02
    0.02438689 = product of:
      0.2438689 = sum of:
        0.2438689 = weight(_text_:erlebnisbericht in 1665) [ClassicSimilarity], result of:
          0.2438689 = score(doc=1665,freq=4.0), product of:
            0.30944613 = queryWeight, product of:
              10.087449 = idf(docFreq=4, maxDocs=44218)
              0.03067635 = queryNorm
            0.78808194 = fieldWeight in 1665, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              10.087449 = idf(docFreq=4, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1665)
      0.1 = coord(1/10)
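
    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) scoring: tf is the square root of the term frequency, idf is derived from docFreq and maxDocs, fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and the coord factor scales the result by the fraction of query clauses that matched. A minimal Python sketch that reproduces the numbers for this hit (the constants are copied from the output above; the variable names are ours, not Lucene API calls):

    import math

    # Values copied from the explain output for "erlebnisbericht" in doc 1665.
    freq = 4.0            # termFreq within the field
    doc_freq = 4          # docFreq from the idf line
    max_docs = 44218      # maxDocs from the idf line
    query_norm = 0.03067635
    field_norm = 0.0390625

    tf = math.sqrt(freq)                             # 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~10.087449

    query_weight = idf * query_norm                  # ~0.30944613
    field_weight = tf * idf * field_norm             # ~0.78808194

    clause_score = query_weight * field_weight       # ~0.2438689
    final_score = clause_score * (1 / 10)            # coord(1/10)
    print(final_score)  # ~0.024387, matching the 0.02438689 above up to rounding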
    
    RSWK
    Künstliche Intelligenz / Forschung / Geschichte 1977-1992 / Erlebnisbericht
    Subject
    Künstliche Intelligenz / Forschung / Geschichte 1977-1992 / Erlebnisbericht
  2. Hofstadter, D.R.: I am a strange loop (2007) 0.02
    0.016912512 = product of:
      0.08456256 = sum of:
        0.038281452 = product of:
          0.057422176 = sum of:
            0.036270276 = weight(_text_:seele in 666) [ClassicSimilarity], result of:
              0.036270276 = score(doc=666,freq=2.0), product of:
                0.22439323 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.03067635 = queryNorm
                0.16163711 = fieldWeight in 666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.015625 = fieldNorm(doc=666)
            0.021151898 = weight(_text_:problem in 666) [ClassicSimilarity], result of:
              0.021151898 = score(doc=666,freq=6.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.16245036 = fieldWeight in 666, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.015625 = fieldNorm(doc=666)
          0.6666667 = coord(2/3)
        0.04628111 = weight(_text_:neurowissenschaftler in 666) [ClassicSimilarity], result of:
          0.04628111 = score(doc=666,freq=2.0), product of:
            0.25347564 = queryWeight, product of:
              8.2629 = idf(docFreq=30, maxDocs=44218)
              0.03067635 = queryNorm
            0.18258603 = fieldWeight in 666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.2629 = idf(docFreq=30, maxDocs=44218)
              0.015625 = fieldNorm(doc=666)
      0.2 = coord(2/10)
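
    Here two of the ten query clauses match, and one of them is itself a nested disjunction in which two of three sub-clauses match, so the scores are combined with nested coord factors before the outer coord(2/10) is applied. A short sketch of that aggregation, using the clause scores from the tree above:

    # Clause scores copied from the explain output for doc 666.
    seele = 0.036270276
    problem = 0.021151898
    neurowissenschaftler = 0.04628111

    inner = (seele + problem) * (2 / 3)                # coord(2/3) -> ~0.038281452
    total = (inner + neurowissenschaftler) * (2 / 10)  # coord(2/10)
    print(total)  # ~0.016912512, the 0.02 shown in the hit list after rounding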
    
    Footnote
    Rez. in Spektrum der Wissenschaft 2007, H.9, S.93-94 (M.Gardner): "Our brain contains a few hundred billion neurons, with ten thousand times as many connections between them. By what incredible magic does this tangle of fibres become aware of itself, capable of feeling love and hate, of writing novels and symphonies, of feeling pleasure and pain, and of choosing, of its own free will, between good and evil? The Australian philosopher David Chalmers has called the explanation of consciousness "the hard problem". The easy problem is understanding unconscious processes such as breathing, digesting, walking, perceiving and a thousand other things. The hard problem is the one philosophers, psychologists and neuroscientists are currently cutting their teeth on, producing thousands of books in the process. A recent one comes from Douglas R. Hofstadter, professor of cognitive science at Indiana University in Bloomington, best known for his prize-winning book "Gödel, Escher, Bach". His new work, as brilliant and provocative as its predecessors, is a colourful mixture of speculation and stories from his own life. An entire chapter is devoted to a personal tragedy that Hofstadter is still trying to come to terms with: in December 1993 his wife Carol died suddenly of a brain tumour at the age of 42. He can find no comfort in the idea of a life after death; all that remains to him is the certainty that Carol will live on in the memories of those who knew and loved her - at least for a while.
    The marble provides the book's main theme. The soul, the I, is an illusion. It is a "strange loop", which is in turn generated by countless loops at a lower level. That is how the lump of matter inside our skull comes not only to observe itself but also to be aware of doing so. Strange - more precisely, self-referential - loops have always fascinated Hofstadter. He sees them everywhere. They are the heart of Gödel's famous incompleteness theorem. They lurk in Russell and Whitehead's "Principia Mathematica", always ready to undermine the foundations of mathematics. Their shortest form is logical paradoxes such as "This sentence is false", or the card with "The sentence on the other side is true" on one side and "The sentence on the other side is false" on the other. In chapter 21 he introduces a disturbing thought experiment that has also been the subject of numerous science fiction stories: as in "Star Trek", a man is beamed to a distant planet and back by a machine that scans him molecule by molecule and transmits the information to the destination, where it is used to produce an exact copy of that person. If the original is destroyed in the process, no philosophical problem arises. But if it survives - or if two copies are made from the same information - the result is a pair of identical twins with identical memories. Is the beamed person the same as the original, or someone else?
  3. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part 2. (2010) 0.01
    0.0052496144 = product of:
      0.052496143 = sum of:
        0.052496143 = product of:
          0.15748842 = sum of:
            0.15748842 = weight(_text_:2010 in 4706) [ClassicSimilarity], result of:
              0.15748842 = score(doc=4706,freq=33.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                1.0733243 = fieldWeight in 4706, product of:
                  5.7445626 = tf(freq=33.0), with freq of:
                    33.0 = termFreq=33.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4706)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
    RSWK
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Subject
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Year
    2010
  4. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. (2010) 0.00
    0.004199691 = product of:
      0.04199691 = sum of:
        0.04199691 = product of:
          0.12599073 = sum of:
            0.12599073 = weight(_text_:2010 in 4707) [ClassicSimilarity], result of:
              0.12599073 = score(doc=4707,freq=33.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.85865945 = fieldWeight in 4707, product of:
                  5.7445626 = tf(freq=33.0), with freq of:
                    33.0 = termFreq=33.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4707)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
    RSWK
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Subject
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Year
    2010
  5. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.00
    0.0022174693 = product of:
      0.011087346 = sum of:
        0.0050883554 = product of:
          0.015265065 = sum of:
            0.015265065 = weight(_text_:problem in 150) [ClassicSimilarity], result of:
              0.015265065 = score(doc=150,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.11723843 = fieldWeight in 150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.33333334 = coord(1/3)
        0.0059989905 = product of:
          0.01799697 = sum of:
            0.01799697 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.01799697 = score(doc=150,freq=6.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches.
    The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided."
  6. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.00
    0.0014392043 = product of:
      0.0143920425 = sum of:
        0.0143920425 = product of:
          0.043176126 = sum of:
            0.043176126 = weight(_text_:problem in 1529) [ClassicSimilarity], result of:
              0.043176126 = score(doc=1529,freq=4.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.33160037 = fieldWeight in 1529, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1529)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem - and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships that this book explores lie at the heart of the construction of large systems, examining a number of the current large AI systems from the viewpoint of representation and language to prove his point.
  7. Bechtolsheim, M. von: Agentensysteme : verteiltes Problemlösen mit Expertensystemen (1992) 0.00
    0.0012212053 = product of:
      0.012212053 = sum of:
        0.012212053 = product of:
          0.03663616 = sum of:
            0.03663616 = weight(_text_:problem in 3388) [ClassicSimilarity], result of:
              0.03663616 = score(doc=3388,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.28137225 = fieldWeight in 3388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3388)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    Problem solving in companies and organizations takes place in many locations and in networked form. Conventional expert systems take little account of this fact. Business administration in particular therefore depends on the agent-system approach, in which problem solutions are modelled so that they come as close as possible to real-world conditions.
  8. Social information retrieval systems : emerging technologies and applications for searching the Web effectively (2008) 0.00
    0.0010338926 = product of:
      0.010338926 = sum of:
        0.010338926 = product of:
          0.031016776 = sum of:
            0.031016776 = weight(_text_:2010 in 4127) [ClassicSimilarity], result of:
              0.031016776 = score(doc=4127,freq=2.0), product of:
                0.14672957 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03067635 = queryNorm
                0.21138735 = fieldWeight in 4127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4127)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Footnote
    Rez. in: JASIST 61(2010) no.12, S.2587-2588 (Gobinda Chowdhury)
  9. Survey of text mining : clustering, classification, and retrieval (2004) 0.00
    0.0010176711 = product of:
      0.010176711 = sum of:
        0.010176711 = product of:
          0.03053013 = sum of:
            0.03053013 = weight(_text_:problem in 804) [ClassicSimilarity], result of:
              0.03053013 = score(doc=804,freq=2.0), product of:
                0.1302053 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03067635 = queryNorm
                0.23447686 = fieldWeight in 804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
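    As a loose illustration of the vector space models the abstract mentions (an assumption-level sketch, not code from the book): documents are represented as term-frequency vectors and compared by cosine similarity.

    import math
    from collections import Counter

    def cosine_similarity(text_a: str, text_b: str) -> float:
        """Cosine similarity between raw term-frequency vectors of two texts."""
        a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    print(cosine_similarity("clustering of text documents",
                            "classification and clustering of documents"))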
  10. Handbuch der Künstlichen Intelligenz (2003) 0.00
    9.697851E-4 = product of:
      0.009697851 = sum of:
        0.009697851 = product of:
          0.029093552 = sum of:
            0.029093552 = weight(_text_:22 in 2916) [ClassicSimilarity], result of:
              0.029093552 = score(doc=2916,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.2708308 = fieldWeight in 2916, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2916)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    21. 3.2008 19:10:22
  11. Information visualization in data mining and knowledge discovery (2002) 0.00
    2.7708148E-4 = product of:
      0.0027708148 = sum of:
        0.0027708148 = product of:
          0.008312444 = sum of:
            0.008312444 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.008312444 = score(doc=1789,freq=2.0), product of:
                0.10742335 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03067635 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    23. 3.2008 19:10:22

Types

  • m 11
  • s 7
