Search (2 results, page 1 of 1)

  • author_ss:"Harnett, K."
  • year_i:[2010 TO 2020}
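  The two active facet filters above use Solr query syntax: author_ss and year_i are field names (string and integer fields, by the usual _ss/_i suffix convention), and [2010 TO 2020} is a range with an inclusive lower and exclusive upper bound, i.e. 2010-2019. As a minimal sketch, such filters are sent as fq parameters alongside the main query; the host and core name below are placeholders, since the page does not show them, and q is left as a match-all stand-in for the actual six-clause query:

    import requests

    # Hypothetical Solr endpoint; the real core name is not shown on this page.
    SOLR_URL = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "*:*",  # placeholder; the page's real query had six clauses
        # fq filters restrict the result set but do not affect relevance scores
        "fq": ['author_ss:"Harnett, K."', "year_i:[2010 TO 2020}"],
        "wt": "json",
    }
    results = requests.get(SOLR_URL, params=params).json()
    print(results["response"]["numFound"])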
  1. Pearl, J.; Harnett, K.: Die Zukunft der KI (2018) 0.01
    0.009227715 = product of:
      0.027683146 = sum of:
        0.010929906 = weight(_text_:in in 4525) [ClassicSimilarity], result of:
          0.010929906 = score(doc=4525,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18406484 = fieldWeight in 4525, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4525)
        0.01675324 = weight(_text_:und in 4525) [ClassicSimilarity], result of:
          0.01675324 = score(doc=4525,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17315367 = fieldWeight in 4525, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4525)
      0.33333334 = coord(2/6)
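    Read bottom-up, this tree is Lucene's ClassicSimilarity (TF-IDF) arithmetic: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm, and coord(2/6) scales for two of six query clauses matching. A worked reconstruction in LaTeX, with all constants copied from the tree:

    \mathrm{score}(q,d) = \mathrm{coord}(q,d)\sum_{t\in q}\underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\text{queryWeight}}\cdot\underbrace{\sqrt{\mathrm{freq}(t,d)}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\text{fieldWeight}}

    \mathrm{idf}(t) = 1+\ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t)+1},\qquad \mathrm{idf}(\text{in}) = 1+\ln\frac{44218}{30842}\approx 1.3603

    \mathrm{score} = \tfrac{2}{6}\,(0.059380736\cdot 0.18406484 + 0.09675359\cdot 0.17315367)\approx 0.0092277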
    
    Abstract
    Judea Pearl is a pioneer in the field of artificial intelligence. The computer scientist believes that his much-noted field of research is in fact stuck: it has hardly advanced in decades. The way out, according to him, is to teach the systems to ask why.
    Content
    Interesting questions and answers: Do you think there will be robots with free will in the future? Absolutely. We have to understand how to program them and what we might gain from it. The feeling of free will does seem to be evolutionarily desirable. In what way? Humans have the feeling of free will; evolution equipped us with it. Evidently it serves some purpose that pays off. How will we notice that robots have free will? As soon as they start communicating with one another counterfactually, for instance with statements such as "You should have done it better." If a soccer team of robots begins to communicate like that, then they have a sense of free will. "You should have done it" implies that the other party could have decided freely. The first signs will thus appear in communication, the next in better soccer. So how will we know whether a machine is capable of being evil? When we notice that a robot consistently ignores some software components while obeying others. Then a robot, too, is capable of evil.
  2. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.00
    0.0014573209 = product of:
      0.008743925 = sum of:
        0.008743925 = weight(_text_:in in 4449) [ClassicSimilarity], result of:
          0.008743925 = score(doc=4449,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14725187 = fieldWeight in 4449, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4449)
      0.16666667 = coord(1/6)
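    Here only the _text_:in clause matches, hence coord(1/6) and the much lower score. A small Python sketch that re-derives both explain trees from their constants (a re-computation for illustration, not code from the retrieval system itself):

    import math

    def classic_similarity(terms, coord, query_norm, max_docs=44218):
        """Recompute a Lucene ClassicSimilarity score from explain-tree constants.

        terms: one (freq, docFreq, fieldNorm) tuple per matching query term.
        """
        score = 0.0
        for freq, doc_freq, field_norm in terms:
            idf = 1.0 + math.log(max_docs / (doc_freq + 1))
            query_weight = idf * query_norm                     # idf(t) * queryNorm
            field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
            score += query_weight * field_weight
        return coord * score

    QUERY_NORM = 0.043654136  # shared queryNorm in both trees

    # Doc 4525: _text_:in and _text_:und match -> coord(2/6)
    print(classic_similarity([(12.0, 30841, 0.0390625),
                              (4.0, 13101, 0.0390625)], 2 / 6, QUERY_NORM))
    # -> ~0.009227715

    # Doc 4449: only _text_:in matches -> coord(1/6)
    print(classic_similarity([(12.0, 30841, 0.03125)], 1 / 6, QUERY_NORM))
    # -> ~0.0014573209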
    
    Abstract
    In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision - and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living-room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene - an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance. A minimal sketch of this setup follows the record below.
    Source
    https://www.quantamagazine.org/machine-learning-confronts-the-elephant-in-the-room-20180920/
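    The abstract describes the experiment concretely enough to sketch a reproduction. The study's exact detector and images are not named here, so the following assumes torchvision's pretrained Faster R-CNN as the detection model, and room.jpg / elephant.png are hypothetical local files standing in for the living-room scene and the pasted elephant cutout:

    import torch
    from PIL import Image
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    preprocess = weights.transforms()
    categories = weights.meta["categories"]

    def detect(img):
        """Return (label, confidence) pairs for one PIL image."""
        with torch.no_grad():
            out = model([preprocess(img)])[0]
        return [(categories[int(l)], round(float(s), 2))
                for l, s in zip(out["labels"], out["scores"]) if s > 0.5]

    room = Image.open("room.jpg").convert("RGB")           # living-room scene
    baseline = detect(room)

    # Paste the anomalous object into the same scene and re-run the detector.
    elephant = Image.open("elephant.png").convert("RGBA")  # cutout with alpha
    prank = room.copy()
    prank.paste(elephant, (40, 60), mask=elephant)         # arbitrary position

    # Comparing the two label sets exposes the instability the article describes:
    # previously stable detections can shift or vanish once the elephant appears.
    print(baseline)
    print(detect(prank))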
