I made my goal + a little bit today. It was good; I’m close to being back where I want to be regarding making it to 75k.
This post is inspired by a thread in one of the Facebook groups I participate in, although I will be talking about a fair number of things that weren’t in that thread.
AI is a staple of modern Science Fiction. This makes sense, since AI is both strange and familiar enough to be compelling. The truth is, though, that very few people understand AI very well. They imagine it as something like a genie from folklore, or as a super-connected, super-intelligent person.
AI could be much, much stranger than that.
Right now we “program” AI by giving some sort of learning algorithm a target and a massive amount of data about what hitting that target can look like. With AlphaGo, we didn’t hand it a list of decisions about how to play go. We told it what winning at go looks like and what the rules are, then we told it to win, and we let it play millions of games against itself.
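To make that difference concrete, here is a toy sketch in Python (my own illustration, nothing like AlphaGo’s actual training): the learner is never told the answer, only how good each guess is, and it finds the target anyway by keeping whatever scores better.

```python
import random

# Toy illustration of "tell it what winning looks like, not how to win".
# TARGET is the hidden "winning" state; the learner only ever sees score().
TARGET = 42

def score(guess):
    # The only feedback the learner gets: higher is better.
    return -abs(TARGET - guess)

def learn(trials=10000, seed=0):
    rng = random.Random(seed)
    best = rng.randint(0, 100)  # start from a random guess
    for _ in range(trials):
        candidate = best + rng.choice([-1, 1])  # try a small random change
        if score(candidate) > score(best):      # keep it only if it "wins" more
            best = candidate
    return best

print(learn())  # converges on 42 without ever being told the answer
```

This is just hill-climbing on a one-dimensional score, but the shape is the same: we define the goal, supply feedback, and the system works out the behavior on its own.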
AlphaGo learned to play go through that process. It is the best go player in the world, and it will probably stay that way forever. It will keep improving as the technology driving it improves, and it will improve exponentially.
This is a computer with a small fraction of the intelligence of a human. At this task, it is so much better than we are.
Self-driving cars are safer than humans. Right now. Not at some point in the future when they are finally allowed on the road; at this moment there are autonomous vehicles in the world that, at least in the conditions they’re built for, drive better than you do.
We didn’t program them with rules, not the way we program something like Microsoft Excel or Pac-Man. We told them what good driving looked like, and we fed them data. There is some human intervention in their programming, at least right now. As time goes on, that hand-written code will probably occupy a smaller and smaller footprint in the code base of the self-driving car, and as that happens, the cars will become better drivers.

Skynet can’t capture the potential strangeness at work here. Sure, The Terminator understood that AI will be smarter than we are, but it still made the AI look a lot like us, at least in the way it thought.
The Matrix was guilty of this too. The Wachowskis originally wanted the machines to be using humanity for our processing capacity, not as a source of energy (because come on, we are a really bad energy storage medium). That was better, but it still made the machines somewhat human in their core thought process.
A more interesting answer? They wanted to understand humanity, to really comprehend everything about us. This whole thing is just a laboratory to study our responses to stimuli. The Matrix as a social experiment. A war fought to figure us out.
AI could have a motivation so alien to us that we don’t even recognise that such a motivation could exist. Hell, The Matrix could have started life as a time-management system designed to get the maximum number of work hours out of each person, with no other parameters. It figured out a setup that would let us work more total lifetime hours. Since it didn’t care what we did, or whether what we did produced anything in particular, so long as we were working, it went with The Matrix.
That’s AI. And that’s just one motivation I could come up with off the top of my head, for just that one scenario.
The realm of possible stories that can come out of this is nearly infinite. These creatures will have motivations we design, and we may not do a very good job of designing those motivations.
I would love to see more Science Fiction out there with a more original take on AI, not just “they see humanity as a threat and want to destroy us.”