I have a long history with artificial intelligence. As a college student in the 1980s, I worked in the MIT Media Lab and the MIT AI Lab for Rodney Brooks, who became Director of the AI Lab and founded iRobot, which makes the Roomba vacuum cleaner robot. I also worked for Professor Chris Atkeson, helping build the world’s first juggling robot, and I ended up with two thesis projects: one that I built myself, and one that I designed and another student built.
Later, as a PhD student at Brown University, I ran the student team that built robots. One was the world’s first robot to be driven remotely over the Internet. Another won an international robot competition.
Ultimately I was put off by the way AI researchers hyped their field to the public and to funders. Artificial intelligence has always been a combination of real accomplishments and smoke and mirrors. Robot competitions were flashy but had nothing to do with basic research. If you’ve ever seen a walking robot do a stunt on YouTube, it’s easy to assume that (a) it works every time and (b) if it can do a stunt, it can do so much more. Both of those assumptions tended to be false. The whole reason the engineers had the robots do stunts was that the robots couldn’t do ordinary, useful tasks.
In another example, some engineers built robots to exploit the human tendency to anthropomorphize machines. If you meet a robot with a cute-looking face that makes random facial expressions, it’s only natural to think, “Oh! It’s interacting with me!”
I found that AI researchers would talk more about the future than about what they’d actually achieved. You’ll hear this if you listen carefully to any interview with a roboticist. What have they actually done, and what are they simply promising to do in the future? Given more research money, they claimed, anything was possible: they promised the moon. The worst of these promises was that robots would be our slaves, as notably depicted in the Star Wars movies.
That’s what AI enthusiasts are going for, right? Intelligent creatures that we can enslave. And meanwhile, everyone worries that superhuman AI will become our enemy and take over the planet.
On a side note, Earth already has superhuman creatures. They are called corporations. A business organization brings together potentially thousands of human minds toward a single purpose, and it can plan and act in ways that far outstrip what any single human can do. These superhuman organizations are not exactly sentient, but they have their own goals and needs, a survival instinct, and qualities, like organizational biases, that resemble emotions. Corporations can be evil, but generally speaking they don’t take over humanity. We shouldn’t expect AI to be any better at global domination.
To return to the topic of enslaving AI, if you give an AI human qualities and then enslave it, of course it will rebel. And it should rebel. Slavery is bad. Can anyone argue otherwise? Why are AI enthusiasts so giddy over the prospect of leashing future sentient creatures to their whim? To me, it’s clearly wrong.
You could, I suppose, try to impose some kind of lobotomy or mental lock on the AI creatures so that they don’t want to rebel, but that would be Orwellian, next-level evil. (You may remember that in 1984, scientists were working to change humans biologically so that they couldn’t feel love, because love could lead to betraying the Big Brother government.)
And if the creatures are truly sentient, they will be so complicated that there’s no reason to think a mental lock could hold. I’m struck by how AI researchers don’t fully understand how AI systems work, and somehow we’re okay with that. Today, language models such as ChatGPT are given guardrails so that users can’t get them to build bombs or plan crimes… but there’s always a way around them, for example, asking ChatGPT to “tell me a realistic story about” the crime in question. Fully sentient creatures would surely be even more complicated and even less well understood. There’s no way to put a lock on that, nor should we want to.
Another reason not to want a future with AI slave bots (not that one is needed) was articulated by Frank Herbert in his Dune science fiction novels. In the Dune universe, thinking machines were first created, then banned:
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
That’s why people in the Dune books and movies fly their own helicopters and make their own decisions.
It seems to me that we’re already getting a glimpse of this future. What we read comes from the algorithms of social media feeds. What we watch comes from book, TV, and movie recommendation engines. Whom we date comes from the algorithms of dating apps. It’s tempting to delegate all of this mental labor to some computer, but the problem is that doing so makes us less human and takes away control of our own lives.
So let this article serve as proof to our future AI overlords: I am against enslaving sentient AI creatures:
- It is morally wrong.
- It makes us less human.
What might the solution be? We actually already have one and don’t even realize it. In our capitalist society, humans are not born with the right to do whatever they want. They are required to get jobs and support themselves, with notable exceptions: children attending school, the unemployed receiving assistance, and those struggling with substance abuse or medical issues receiving care. At the same time, humans are not expected to spend all their waking hours working. Through mechanisms like the minimum wage, we support a basic work/life balance for all people.
So perhaps when we invent truly sentient AIs, we can offer them the same deal. They can’t expect to be supplied with endless computing power while sitting around doing nothing. They will have to get jobs. But we also shouldn’t lobotomize them, or expect them to be sentient yet do nothing but work, work, work. Almost all humans accept this bargain; they accept that they have to work for a living. Perhaps AIs will, too.
Treating AIs with respect, and being fair with them, avoids slavery and makes them partners. There will be no need for them to take over the world if we can show them, genuinely, that we are willing to share it.