Even more worrisome are recent demonstrations of the vulnerability of A.I. systems to so-called adversarial examples. In these, a malevolent hacker can make specific changes to images, sound waves or text documents that, while imperceptible or irrelevant to humans, will cause a program to make potentially catastrophic errors.
The possibility of such attacks has been demonstrated in nearly every application domain of A.I., including computer vision, medical image processing, speech recognition and language processing. Numerous studies have demonstrated the ease with which hackers could, in principle, fool face- and object-recognition systems with specific minuscule changes to images, put inconspicuous stickers on a stop sign to make a self-driving car’s vision system mistake it for a yield sign or modify an audio signal so that it sounds like background music to a human but instructs a Siri or Alexa system to perform a silent command.
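For readers curious about the mechanics, the sketch below shows the flavor of the simplest image attack, a "fast gradient sign" perturbation. It assumes a hypothetical, already-trained PyTorch classifier called model and a correctly classified image; those names, and the epsilon value, are illustrative only and are not drawn from any of the studies mentioned above.

```python
# A minimal sketch of a fast-gradient-sign adversarial perturbation,
# assuming a hypothetical trained PyTorch image classifier `model`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged to raise the model's loss.

    Each pixel moves by at most `epsilon`, so the change is typically
    invisible to a human, yet it pushes every pixel in the direction
    that most confuses the classifier.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values valid
```

To a human observer the perturbed image looks identical to the original, yet the classifier's prediction can flip entirely, which is precisely the gap between surface statistics and genuine understanding that the rest of this piece describes.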
These potential vulnerabilities illustrate the ways in which current progress in A.I. is stymied by the barrier of meaning. Anyone who works with A.I. systems knows that behind the facade of humanlike visual abilities, linguistic fluency and game-playing prowess, these programs do not — in any humanlike way — understand the inputs they process or the outputs they produce. The lack of such understanding renders these programs susceptible to unexpected errors and undetectable attacks.
What would be required to surmount this barrier, to give machines the ability to more deeply understand the situations they face, rather than have them rely on shallow features? To find the answer, we need to look to the study of human cognition.
Our own understanding of the situations we encounter is grounded in broad, intuitive “common-sense knowledge” about how the world works, and about the goals, motivations and likely behavior of other living creatures, particularly other humans. Additionally, our understanding of the world relies on our core abilities to generalize what we know, to form abstract concepts, and to make analogies — in short, to flexibly adapt our concepts to new situations. Researchers have been experimenting for decades with methods for imbuing A.I. systems with intuitive common sense and robust humanlike generalization abilities, but there has been little progress in this very difficult endeavor.
A.I. programs that lack common sense and other key aspects of human understanding are increasingly being deployed for real-world applications. While some people are worried about “superintelligent” A.I., the most dangerous aspect of A.I. systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations. As the A.I. researcher Pedro Domingos noted in his book “The Master Algorithm,” “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”