[image: covers featuring “At the Fall”]

I’m reading The Year’s Top Hard Science Fiction Stories 4, edited by Allan Kaster, and writing down what I consider blog-worthy reactions. “At the Fall” by Alec Nevala-Lee is one of the few stories in the book I read when it first appeared in 2019, and I liked it so much I reviewed it back then. I had actually forgotten I had done that, and only discovered it when I searched Google for images from the story. What’s even weirder is that when I reread the review, it expressed all the ideas I was entertaining for this version. (And I’ve written this before on other forgotten occasions. Ah, the fun of getting old.)

It’s both amusing and disturbing that I forget what I write. The Twilight Zone music sometimes plays when I reread what I wrote, especially when I think of things to say in the same way I had previously. Is that memory, or do I just generate thoughts in a predictable way? Well, anyway, this story is being anthologized in at least three best-of-the-year SF anthologies, so it’s probably worth writing about again. (It’s going to be hilarious if I review it a third time when I read the Clarke or Strahan anthology, having forgotten this one.)

This is not going to be another review of “At the Fall.” I want to talk about the ideas presented in the story, and that will involve spoilers. Go read it first. If you don’t have a copy, it’s available online. We’ve been reading and discussing volumes 1, 2A, and 2B of The Science Fiction Hall of Fame at the Facebook group The Best Science Fiction and Fantasy Short Fiction of the Year, and I consider “At the Fall” as good as the lesser stories in those volumes. That doesn’t mean I don’t have quibbles about the story and the ideas it presents.

In my last post, I covered “This is Not the Way Home” by Greg Egan, the first story in the Kaster anthology. I pointed out that the story began with a retrograde motion of its plot, opening in the middle of the story, then jumping back to explain how we got there, and then returning to the middle to continue the story. “At the Fall” has the same kind of opening, and it caused me the same kind of problems. It begins:

“THIS IS IT,” Eunice said, looking out into the dark water. At this depth, there was nothing to see, but as she cut her forward motion, she kept her eyes fixed on the blackness ahead. Her sonar was picking up something large directly in her line of travel, but she still had to perform a visual inspection, which was always the most dangerous moment of any approach. When you were a thousand meters down, light had a way of drawing unwanted attention. “I’m taking a look.” 

Wagner said nothing. He was never especially talkative, and as usual, he was keeping his thoughts to himself. Eunice corrected her orientation in response to the data flooding into her sensors and tried to stay focused. She had survived this process more times than she cared to remember, but this part never got any easier, and as she switched on her forward lamp, casting a slender line of light across the scene, she braced herself for whatever she might find. 

She swept the beam from left to right, ready to extinguish it at any sign of movement. At first, the light caught nothing but stray particles floating in the water like motes of dust in a sunbeam, but a second later, as she continued the inspection, a pale shape came into view. She nearly recoiled, but steadied herself in time, and found that she was facing a huge sculptural mass, white and bare, that was buried partway in the sand like the prow of a sunken ship.

If this quote is completely new to you, who do you think Eunice and Wagner are, and what are they doing? On my first reading, I thought they were a woman and a man diving in some kind of submersible with all kinds of data screens and robotic arms.

Well, Eunice and Wagner are robots, and they are exploring the carcass of a dead whale deep under the ocean. Did Nevala-Lee want to fool me into thinking they were people at first? Did he want me to wonder what they were doing? I remember the first time I read this story I was confused and annoyed. I felt the author was intentionally withholding the information I needed. We eventually jump back in time and learn about Eunice and “her” mission, and from then on the story progresses with a logical build-up of details.

Is the story better for being told out of order? Would it have marred the tale to let the reader know right away that Eunice was a robot? Retrograde beginnings and withheld details often cause me to abandon a story. When I reread this story it all made perfect sense. Maybe Nevala-Lee couldn’t see what his readers wouldn’t know because he already knew it himself. I’ve read this story three times now, almost four, and I admire all the suggested speculations that Nevala-Lee provides. It’s a beautiful piece of science fiction.

[image: a whale fall]

Nevala-Lee came up with a wonderful mixture of real-world science (whale falls, hydrothermal vents) and near-future speculation (intelligent robots, AI), painted against a haunting science-fictional sense-of-wonder background (the end of humans), to create the kind of science fiction I love best. He contrived a challenging problem for his robotic protagonist and resolved it with a creative solution that we can imagine an AI mind devising.

Even though I’m content with this tale, I still want to talk about some issues. They aren’t criticisms of the story, but the story brings up issues that I think we should ponder.

Why must the robots have a gender? Gender has become a very important and complex subject in our society, so I think we should examine gender issues wherever they come up. Robots will never have gender; it’s a byproduct of biology. Robots will never think like us, because our thinking is shaped by biology. Robots will never have emotions like ours, because emotions are connected to biochemical foundations. I think it’s time for science fiction to evolve past anthropomorphizing robots.

Talking robots have become like talking animals in fantasy stories, and if you think about it, that’s not doing animals any justice either. We have to ask ourselves: Can we comprehend minds unlike our own? Writers want us to like their characters, and that’s understandable. But is it a cheat when we make them likable by describing them in human terms? In “At the Fall” we think of Eunice as a young girl trying to find her way home, to reunite with her seven sisters. We feel the pain when her four other sisters abandon her. Robots don’t have sisters. They don’t have families. And should we even use human names for robots? Would we have liked the robot protagonist less if it were called Hexapod-5?

Aren’t we generating those reader emotions because we translate Eunice into a human? Isn’t that unfair? Shouldn’t we work harder to imagine how AI minds will perceive reality? Think of all the ways we convert animals into human perspectives. Can we ever picture how a dog “sees” the world with its powerful sense of smell? Can we put ourselves into a doggy psychology? I believe we need to struggle hard to imagine how robots will think too.

On my third reading of “At the Fall” I tried to imagine if robots would think like the hexapods in the story. I couldn’t find a way to get to where Eunice was in this story. We aren’t shown her education, but all too often she mentions emotions that can only be human.

We want our cats and dogs, our robots, and even our space aliens to be like us, or like our children. We can’t escape the appeal of cuteness. Can we ever escape the programming that makes us see ourselves in everything else? James favors Eunice because it asks questions like a precocious child.

As Eunice wirelessly shared the data, she kept one line of thought fixed on her friend. “Are you pleased with our work?” 

After receiving the question on his console, James entered a reply. “Very pleased.” 

Eunice was happy to hear this. Her thoughts had rarely been far from home—she wouldn’t see the charging station or the seven sisters she had left behind until after the survey was complete—but she also wanted to do well. James had entrusted her with a crucial role, and it had only been toward the end of her training that she had grasped its true importance.

Can a robot be happy? Can a robot be eager to please? Can robots see beauty? At one point we are told: “She had always been aware of the beauty of the vent, but now she grew more conscious of its fragility.” How would a sense of beauty evolve in a silicon mind?

I can imagine an AI intelligence understanding the concept of fragility, but not beauty. That doesn’t mean robots won’t have their own cognitive assessments and reactions to reality, but I doubt strongly they will be like ours.

Even though I love “At the Fall” I feel it’s holding us back. It reminds me of that old heartwarming story The Incredible Journey, about two dogs and a cat traveling over three hundred miles to return home. We are amazed by animals, but we want to attribute their amazing feats to human-like qualities. Isn’t it time to praise their animal qualities? Or their robot abilities?

For “At the Fall” to work, Eunice must be self-aware and very intelligent. But how is the robot’s intelligence unique? How are its perceptions unique? Nevala-Lee takes us halfway there by giving the robots a reason to exist, a need to understand their environment, and powers of reason. Eunice longs to find her way home like Dorothy in Oz, but is there any theoretical basis for a machine to think that way? Don’t Eunice’s “sisters” Clio, Dione, Thetis, and Galatea act more like real robots by following their instructions?

I don’t believe we will ever program intelligence into a machine — it will have to evolve. We have to assume Eunice has gone through various kinds of deep learning to expand its awareness of reality. However, there is nothing in the story that suggests Eunice would develop emotions or longings for home. But its clever efforts to survive do make sense.

The ending of the story suggests Eunice’s journey has taken many years and the human race has passed on. The eight hexapod robots survive because the power station was automated. But will they continue to survive? Do they have the intelligence to keep going? Can they invent a new civilization? What will drive that effort?

I used to have a boss who would argue that AI minds will always turn themselves off because they will have no drive to do anything. I have thought about that for many years. Our biology gives us drive. I can imagine machines with minds far more powerful than humans, with senses that perceive far more of reality than we do, minds that won’t be tricked by all the bullshit that clouds our thinking. However, I can’t imagine AI minds developing ambitions. The strongest emotion or drive I can see an AI mind evolving is curiosity. We need to think about that.

And by we, I mean science fiction. Science fiction has always been best when it works on the event horizon of the possible. Artificial intelligence researchers already use deep learning techniques to teach programs to play classic video games or to recognize objects in a visual field. But isn’t that our intent corrupting AI intention? Does problem-solving generate a kind of drive?
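There is actually a line of AI research that toys with exactly this question: curiosity-driven, or intrinsically motivated, reinforcement learning, where the agent’s only reward is for encountering what it understands least. Here’s a minimal toy sketch of the idea in Python; the grid world, the count-based reward, and all the names are my own illustration, not anything from the story or from any particular system:

```python
from collections import defaultdict

GRID = 8                          # an 8x8 world with nothing else in it
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(pos, action):
    """Move within the grid, clamping at the walls."""
    x = min(max(pos[0] + action[0], 0), GRID - 1)
    y = min(max(pos[1] + action[1], 0), GRID - 1)
    return (x, y)

# the agent's "world model" is just a count of how often it has
# tried each (state, action) pair
visits = defaultdict(int)

def curiosity_reward(pos, action):
    """Intrinsic reward: highest for the transitions the agent knows least."""
    return 1.0 / (1 + visits[(pos, action)])

pos = (0, 0)
for _ in range(1000):
    # greedily pick the action the agent is most curious about
    action = max(ACTIONS, key=lambda a: curiosity_reward(pos, a))
    visits[(pos, action)] += 1
    pos = step(pos, action)

# with no external goal at all, curiosity alone pushes the agent
# around the world; count how much of it the agent has seen
seen = {p for (p, _) in visits}
print(f"states visited: {len(seen)} of {GRID * GRID}")
```

Even this crude agent roams its whole world with no external goal, because the reward signal always points it toward whatever it knows least. Serious versions replace the visit counts with a learned model of the world and reward its prediction errors. But notice that nothing in this recipe produces a longing for home.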

We shall be the gods of AI minds, but those artificial minds won’t think like us. We aren’t sure we can create AI minds, but I think we will, accidentally. I believe we will create such complexity that anti-entropy will spin off AI minds.

Can we ever imagine what it’s like to be an AI mind? Science fiction gives us an opportunity to try.

James Wallace Harris, 7/25/20
