If Stephen Hawking, Bill Gates and Elon Musk all agree that artificial intelligence (AI) is a threat to the human race, then maybe you should worry too.
Hawking, a theoretical physicist, believes:
“The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”
Musk, the entrepreneur behind PayPal, Tesla and SpaceX, states that creating AI is like “summoning the demon”:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”
Gates, a computer programmer and tech inventor, says:
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super-intelligent. That should be positive if we manage it well…A few decades after that though the intelligence is strong enough to be a concern.”
End of the human race? Existential threat and concern? The potential future risks of AI, or a machine’s ability to simulate human intelligence, are scary.
A number of science fiction movies dramatize these concerns and the dangerous implications of trying to create a conscious computer system. Both The Terminator and I, Robot depict the ultimate humans-versus-robots scenario, in which robots develop minds of their own and slip beyond human control. While it is unknown whether this could ever become a reality, it is still worth worrying about. After all, “Sci-fi writers are better at predicting the future than experts,” Professor Robinson told my class. Take Skype, for example: it is essentially a real, modified version of the communication system used in Star Trek.
We live in an era where technology is advancing at such a rapid pace that there is no telling what is and is not possible in the future of robotics. While robotics technology has brought real benefits, such as Touch Bionics’ prosthetic hands and iRobot’s Roomba, I for one never want to live in a world with human-like robots walking around, especially the terrifying human-like robots invented by Japanese engineers.
More recently, I watched the movie Transcendence, starring Johnny Depp. If you have no idea what the movie is about, watch the trailer here. Overall, I wasn’t very impressed with the film, but its story raised some questions worth considering.
- Will computers ever have the ability to feel a full range of human emotion? Because emotions are shaped by subjective human perception and past experience, I believe this is impossible. While it might be possible to get a robot to express an emotion, there is no way a machine could truly understand that feeling on a deeper level.
- Could it ever be possible to upload human consciousness to a computer? If brain waves could be coded, then maybe. But I am highly skeptical.
- Is artificial intelligence a moral science? While AI may be created for moral reasons, to improve quality of life, Hawking, Gates and Musk offer enough insight to make me believe it is not. It opens the door to creating a computer with the ability to reprogram itself to “think” and “act” of its own accord. And because robots are (I believe) incapable of understanding emotion, how should we expect this form of technological intelligence to understand morals?