Science fiction never predicts the future, but it often anticipates things to come. Of these three breakthrough visions of science fiction, which do you believe will happen first?
- The AI Singularity
- A self-sustaining colony on Mars
- Humans leaving the solar system
It feels like the vast majority of science fiction has been about space travel, especially interstellar travel. Star Trek and Star Wars certainly suggest that’s what we hope will happen. Mars has always been a popular destination in science fiction, and in the old days, a common source of Earth invaders. And finally, robots are that other big science-fictional idea that has been kicking around for centuries.
I grew up with the Mercury, Gemini, and Apollo space programs in the 1960s. When I graduated high school in 1969, I was predicting we’d land on Mars by the late 1970s or early 1980s and, by the 21st century, be building a colony. That prediction failed miserably. Nor do I expect it to happen anytime soon, despite the successes of Elon Musk.
As a kid, I wanted the 1966 TV show Star Trek to herald the future of mankind. Since then I’ve learned to say humankind, which is another kind of progress. After a lifetime of reading science books, I realize interstellar travel will probably never happen — well not for us.
That leaves intelligent robots. Everything depends on the Technological Singularity. If it’s possible, and I believe it might be, a world full of intelligent robots could begin as early as 2030–2050. Once our laboratories evolve one superintelligence, it will create the next, and after that the robot transformation will happen fast. Who knows, AI robots could help us build a colony on Mars and even engineer interstellar spaceships. However, the long voyages into space will be better suited to silicon-based beings. Isn’t it becoming obvious that machines are perfectly suited for colonizing space, and we’re not?
My pick? Robots will happen first, and they will have all the space exploring fun.
Are we ready for that? Despite all the science fiction books and movies about robots becoming our evil overlords, are we ready to be the #2 intelligence on planet Earth? I’m not sure we’ve thought this through carefully. I’d say most science fiction hasn’t explored the idea of intelligent machines deeply enough. Too often, robot characters are just silly, or variations of ourselves. I do think Robert Sawyer did a good job with his WWW Trilogy, but we need more science fiction about a post-singularity world. Not Terminators-falling-from-the-sky stories, or more stories about fuckable robots that look exactly like humans. What if The Humanoids by Jack Williamson came true, but the humanoids actually created a utopian society? Wouldn’t that be even scarier?
James Wallace Harris, 8/3/20
12 thoughts on “Which Will Come First?”
I doubt that any of the three will ever happen.
That’s probably true. However, we’re spending billions on developing AI and if it’s possible to create an AI with general intelligence it could accidentally happen. We have so many of the pieces now. We have speech and language processing, vision processing, we have pattern recognition, machine learning — once they start combining these subsystems something could arise out of the interactions.
As quite a bit hinges on that “self-sustaining” in #2 there, I’ll have to agree with you on robots!
I believe the first is a prerequisite for numbers 2 and 3!
I have recently finished “The Metamorphosis of Prime Intellect”! A very interesting take on the post-singularity theme! I highly recommend it!
What would be your favorite book recommendations (or any other media for that matter) envisioning a post-singularity world?
I had never heard of The Metamorphosis of Prime Intellect, but after researching it, it looks interesting.
I’m not sure I have any favorite post-singularity stories, but I was impressed with the WWW Trilogy by Robert J. Sawyer, which I would call an at-the-singularity story. I’m more fond of those. They include The Moon Is a Harsh Mistress, When H.A.R.L.I.E. Was One, Galatea 2.2, The Forbin Project, and stories like them.
By the way, have you read The Humanoids by Jack Williamson?
No, but it’s actually on my to-read list. I was deciding between this one and A Maze of Death by PKD. What do you think?
That’s one PKD I haven’t read, but I am a big fan. The Humanoids is a post-singularity novel, even an early one. And that’s what you were asking about.
I started reading The Metamorphosis of Prime Intellect and it’s a fun story. However, I don’t believe people will ever be converted into digital beings. Brain downloading is a fun science-fictional concept, but not realistic. I would love to read a post-singularity novel with a realistic view of how human society would change. Would we let superior AI minds solve our problems?
I read Michio Kaku’s “The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth”. A very interesting read. He argues that mind uploading is rooted in solid science (see the Human Connectome Project). He also argues that mind uploading will make interstellar travel possible via a technology called laser porting. Unfortunately, we most probably will not be alive to experience that, but still! 🙂
It would be interesting if you could make a sub-genre list about post-singularity or post-human society novels! What do you think?
The AI singularity seems a bit more realistic. A base on Mars is much further in the future, considering there has not been a crewed mission yet.
Agreed, the problem with most, or at least many, movies and novels on the topic is that the robots often act way too much like humans.
There is one thing I miss about the subject, and that is sentient vs. non-sentient A.I. Many seem to assume that if a computer just has enough processing power, it will become sentient and self-conscious. Isaac Asimov’s robots are conscious, and Jack Williamson’s robots are non-sentient. Even a little worm or mosquito is sentient, which is more than we can say about even the most powerful computers we have today. And should a team of scientists succeed in gradually bringing an A.I. up to a human level, they would probably literally shape it in our own image. That would be human nature. The interesting elements here would be not all the similarities, but the differences, and what the A.I. was missing compared to humans, and/or vice versa. And in what direction they could evolve if left on their own. Still, it would probably be similar enough for us to co-exist with it. Many are convinced such an A.I. can’t just be a brain; it also needs a body to interact with the world and learn. But once real sentience has been reached, is there any reason why future generations can’t just be a mind without a body?
To make a sentient robot, which requires a method that can test sentience, one will have to start from zero and gradually build an artificial brain that is more and more conscious. If humans ever succeed in building a robot as sentient as a slug, it will not be intelligent, but it would still be an achievement, probably more impressive than building a quantum computer.
Real intelligence means understanding the world, and to understand something one needs sentience. A.I.s without sentience can only simulate real intelligence. There would be no real drive or desires, just a slave to its own programming.
But that doesn’t mean stories about non-sentient A.I.s can’t be very interesting too. Peter Watts wrote the novel Blindsight (even if that’s about alien lifeforms, not robots), dealing with the theme. We could have self-evolving computer programs, with computers programmed to do the selection process. Once computer brains are powerful enough to respond properly to the tasks given to them by humans, and then come up with solutions that actually work (despite us not fully understanding how the solutions work), they can start improving themselves, with each generation better than the previous one. When that happens, it will be the first step towards the singularity. Eventually it could end up as some oracle, telling us how to solve problems and challenges, like building robot servants that can clean the house, make food, and even hold conversations, but without being sentient. Or maybe the singularity could also help us create sentience. Then the non-sentient singularity could some day be replaced by a sentient one.
In Frederik Pohl’s novel Gateway, the main character talks to a psychologist that is actually an A.I., which made me think: what kind of A.I. would humans prefer as their therapist, a sentient or non-sentient one? A non-sentient therapist would never judge you, feel contempt for you, find you annoying or pathetic, get angry at you, etc. But it would never show you any real empathy, care, understanding, or sympathy either, even if it could simulate these things. A sentient A.I. therapist would also have to be very carefully and specifically designed for its job. Not just in the way it thinks, but also in how it feels and behaves.
And what kind of robot would people prefer as their servant, a sentient or non-sentient one? It would probably feel easier to walk naked out of the shower in front of a hotel’s robot servant if it didn’t have any sentience. But some might prefer a sentient one if it was their personal robot living with them in their own home. Perhaps also sentient pet robots, with the same intelligence and behavior as Gizmo in the movie Gremlins.
I just read an article that said that even if the most powerful computers today are “very stupid” because they can’t really think, there are processes going on inside them that we don’t understand. For instance, when one of them plays chess against the most talented chess players in the world and wins. When humans make the moves they do on the chessboard, other talented chess players can understand them, either there and then or in hindsight. But when a computer makes a move and beats its opponent, it is often hard or impossible to understand what it was “thinking”. It just followed its own rules of logic and won.
Last month I read a book, An Immense World by Ed Yong. It was about how different animals perceive the world and their variety of senses. Yong talks about Umwelt, the total impression of reality each species, or individual, experiences. I would guess that Umwelt is what you mean by being sentient.
My theory is that AIs won’t be sentient, conscious, or self-aware until they have a sensorium. I believe our senses come together to recreate a model of reality that I’m calling a sensorium. If the system is complex enough, I believe an ego emerges that can be self-aware.
Right now, computers are information processors. They can be hooked up to a variety of “senses”, but their input is processed as parts rather than as a whole. I think our sensorium gives us a simulation that feels like a whole view of reality, even if it’s only a tiny fraction of what we can perceive of reality. Of course, we have countless subsystems working at a level below consciousness that build this.