When I was a kid I always thought artificial intelligence would be different.
There was something impossible about it, something that stuck it firmly in the “fiction” part of “science fiction”, something that seemed like it should never exist, like time travel. And even time travel doesn’t have the sort of impossibility about it that artificial intelligence did. We might still stumble on time travel.
But here’s the thing: artificial intelligence wasn’t supposed to be about “stumbling.” It was just supposed to be really, really, really hard, like really impossibly hard.
And that’s totally not the way it turned out. The way it really has turned out is disappointing.
Don’t get me wrong: AI is exciting in its many applications and implications. But there’s a piece of it that’s disappointing. I’d like to talk about that.
So the first thing I want to talk about is something that I don’t really hear people saying. Let me just say this: “We figured out AI.”
I mean, we nailed it, in a way. Despite what I’m saying above about a part of it being disappointing, we have figured out artificial intelligence and that’s just crazy. And I don’t hear people saying that.
People say: “I’ll have the beef and mushroom pizza.” And people say: “We can eliminate X percent of human fatalities on roads with AI cars.” And I hear people say: “We are no longer masters of the technology we have built and we’re all going to die.” But what I really don’t hear is anybody looking up in wonder and just going: “I can’t believe that after all this time we actually did this thing and we figured out AI.”
I am—make no mistake about it—awed in the face of what we have achieved and are achieving today with AI. And that’s part 1 (of two parts) of what I want to talk about today. Part 1 is that we figured out AI and it’s amazing, and part 2 is that the way we figured out AI is disappointing. I think that sounds kind of confusing, put that way. Part 1: I’m amazed we nailed it. Part 2: I’m disappointed in the way we nailed it. So let me explain that.
Ok, so really I already did part 1. Part 1 is that it’s amazing that we figured out AI. It’s amazing because when you watch a video of a Tesla driving by itself across a city and not hitting bikers, that’s amazing. That’s as good as we can do, with our “natural intelligence” (NI).
It’s amazing when Google Photos recognizes that a baby picture from 30 years ago is the same as the adult person in that other picture, and it shows you that. That’s maybe better than we could do with “natural intelligence.”
It’s amazing when you try one of those online machine learning courses and in just two hours of work and 100 lines of code the exercise has you train a “program” on your personal mid-range laptop that can do handwriting-recognition. If our little laptops can do that, well then obviously big parallel computers can do voice recognition and generate natural speech and write music and trade stocks et cetera et cetera faster and better than we can? Of course they can, and that’s amazing.
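For a sense of scale, here’s roughly what that two-hour exercise boils down to. This is a minimal sketch, assuming scikit-learn is installed, and using its small bundled 8×8 digit images as a stand-in for a full handwriting dataset like MNIST (the actual course exercise may use a neural network instead):

```python
# Train a handwritten-digit classifier on a mid-range laptop.
# Uses scikit-learn's bundled 8x8 digit images (1,797 samples, digits 0-9).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()

# Hold out a quarter of the images to check the model on unseen handwriting.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A plain logistic-regression "program" is enough at this scale.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")  # typically well above 0.9
```

That’s it: a dozen meaningful lines, a few seconds of training, and the laptop reads handwriting. It doesn’t even need a neural network at this toy scale.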
I don’t know… am I the only one who remembers when we couldn’t do all that stuff?
I don’t think I am. But I feel like there are people who just aren’t interested and they still think we can’t do all that stuff, and there are other people who are interested so they’re all hyped that we can do all that stuff. But I just don’t feel like I meet many “transition people”: those people who are looking at how we just got from A (where we couldn’t make robots do anything at all) to B (where we’ve got these superpower robots that basically can go ahead and start patrolling the streets and predicting crime before it happens like in that Tom Cruise movie).
Do you remember how in that Tom Cruise movie (I’ve obviously forgotten its name and it’s really obvious, something Limit Folder or something), do you remember how the actual brain behind the whole machinery doing all that predicting was three girls in bathtubs with brain pipes?
I.e., there was no actual robot built of metal there: at the time of that Tom Cruise movie (I actually want not to remember the title now) nobody who was imagining future science-fiction scenarios even believed it would be possible to build a machine out of real metal that could do the kind of predictive calculations needed to catch criminals and arrest them before they did their evil deeds. And that movie came out when, 2002? They thought that that kind of job would have to be done—there was no other way—by a human brain.
We’re in part 2. The segue to part 2 was “a human brain.” In part 1, I wanted to talk about how it was amazing that we figured out AI. Part 2 was going to be about how there’s something disappointing in the way we figured out AI. And the link between part 1 and part 2 is the human brain.
The disappointing thing is: I always expected that when we figured out AI, it would be because we had first figured out the human brain and THEN implemented what we’d figured out in computers. And it’s disappointing because we figured out the part that we were supposed to do second but we totally skipped the part we were supposed to do first!
No, seriously, this really is disappointing. Maybe I’m being idealistic: obviously figuring out the human brain is a monstrous thankless effort that would have taken us thousands of years at least, and I should be happy that we found a pragmatic way to leap over it and deliver our goals without splashing around in the intellectual kiddie pool of useless neurotheory. But it’s still disappointing that we skipped past the part where we figure out the human brain.
I’m a bit of an idealist, so I’ll need to wrench myself through that porthole of victory, but I can do that. What I want to observe, though, is that the gap between what many of us (e.g. Tom Cruise) thought this revolution would look like and what it actually looks like contains a huge part that we ended up just skipping, and we’re accelerating on all these new AI projects without paying heed to what we thought we were going to be working on.
Some people are paying some attention. Elon Musk’s comparatively obscure project Neuralink is an effort to link machines with brains, which means that we will need to understand at least how to interface with a brain.
Have you thought about what it means to interface with a brain? I started wondering how we could just augment human vision, for example. And it’s a lot of work. I’m writing a blog entry about it which I will post soon. And the gist of it is that you really need to do a lot of complicated and unpredictable guesswork to figure out the part of the human brain that would let you interface with human vision.
To figure out human vision to the point that we can splice it off and generate it ourselves, we would need thousands and thousands of hours of laboratory work, with cables sticking out of subjects’ heads, and attempts to create reference images that we could then compare to what our chips were generating. And that’s just a tiny piece of how the brain works. Nevertheless, we have figured out how “vision” works (just not human vision). We have figured out “some vision.” And now we’re building tons of applications upon it.
This is a good thing. It’s pragmatic. It’s brilliant. The ideas that led us to this point are genius. But it’s not deep. It has potential way beyond most things that I can think of humanity having invented, but it’s not what I expected.
So some people do realize that we skipped over a phase and they’re not getting sucked into the hype (although it’s well-deserved hype) of 2018-era AI and they’re going back and studying harder problems.
And those people call those harder problems “Deep AI.” Or they call them “artificial general intelligence.” That’s the project of understanding the brain and only then modeling it in code, so that we can watch it work and pause it and fiddle with little pieces and fix it. That’s very different from the models that we have programmed machines to teach machines to build. Since we programmed machines to teach machines, we unfortunately have no way of knowing how the models they built actually work.
What we have actually achieved, amazing and frankly captivating as it is, is “Cheap AI.”
I do recommend that we jump on this ship, because what Cheap AI has to offer is really, really mind-blowing. The way plastic is mind-blowing to people who previously had to smith all their cauldrons. But we should bear in mind how cheap the thing we’re working with is.