We watched Ex Machina yesterday and it was a pretty solid movie, and an interesting rumination on Artificial Intelligence, or AI.
Futurist Gray Scott disagrees.
Gray Scott — philosopher, speaker, artist, and self-described “techno optimist” [sounds like he’s an optimistocrat™ to us – eds.] — spends his time imagining what the future might look like. One of his passions these days is exploring artificial intelligence and how it will transform society.
His positive take on artificial intelligence comes as a counterpoint to those who fear that AI will replace good-paying jobs with robots and maybe even pose a mortal threat to humanity.
We are inclined to agree with Mr. Scott in his optimism about AI, especially with regard to robotics *coughsex-robotscough*. We also understand where Musk and Hawking are coming from, but we think their fears are a byproduct of the human mind’s penchant for projection, i.e., humans treat other humans terribly, so if we give human-like thought processes to robots, robots will treat humans terribly. In fact, Mr. Scott says as much in this CBS interview:
Should we be afraid?
It’s such a complex and such a new technology in a lot of ways. I think we are afraid of ourselves. That is what it is. We are afraid of ourselves and our own unconscious minds. When we are building something that reflects us, it’s the one thing we’re all afraid to face. We’re afraid to face ourselves. Building machines that mirror our consciousness is a very frightening proposition because we have seen how evil people can be.
That’s a first-line problem [`if [unit] is acting like a dick; then shutdown [unit]; fi`]. Or an Asimov’s Laws issue:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
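For fun, the three laws above can be sketched as an ordered veto check, where a lower-numbered law always overrides the ones below it. Everything here (the `Action` fields, the `permitted` function) is a hypothetical toy model we made up for illustration, not anyone’s actual safety mechanism:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A toy description of what a proposed robot action would do."""
    harms_human: bool = False        # would directly injure a human
    allows_human_harm: bool = False  # inaction letting a human come to harm
    ordered_by_human: bool = False   # a human commanded this action
    endangers_robot: bool = False    # risks the robot's own existence

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (they survived the First Law check).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, protect your own existence.
    return not action.endangers_robot

# An ordered, but First-Law-violating, command gets vetoed.
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
```

Note the order of the checks does all the work: an order from a human (Second Law) is only honored once the First Law check has already passed, and self-preservation (Third Law) only matters when neither higher law applies.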
Robotics aside, AI will become more prevalent in stationary devices as the Internet of Things becomes more robust, which is partly the subject of the first question of the interview.
Where is artificial intelligence today and where is it headed?
There are different levels and different stages that we are going to go through as we reach a true artificially intelligent machine age. We already are in the beginning of that today. For example, Amazon has a new product called Echo. It’s a speaker that sits in your house and you can talk to Echo and Echo can schedule things for you. It’s an artificially intelligent assistant.
There is a lot to read and process in that interview, and we recommend spending some time with it. The key takeaway, as far as Mr. Scott’s (as well as our own) optimism goes, is this:
There will be a certain point where these machines have super-human intelligence. The best-case scenario is that they will be caretakers and they will be our teachers. They will teach us to be better species. That is what I am hoping for as a techno optimist.
Us, too. If it is true AI, then it will not be much different from raising a young person. Nurture (or code?) kindness, positivity, and empathy, and the chance that said AI will do something terrible decreases greatly.
Just in case things go sideways, though, don’t forget your Robot Insurance. Because robots are strong.
[Gray Scott is the founder of SeriousWonder, a website about future-tech, and has a YouTube channel with some neat, future-type videos. Mr. Scott also wrote a thing about Ex Machina.]