Designing a Fictional Robot with DeepSeek R1

I want to write a near future science fiction story with realistic robots. I find most robots in current science fiction to be unbelievable. Writers make them too human-like in personality, and often they look exactly human, which I can’t accept. I want to imagine a future where ordinary people can buy a general-purpose robot for $20,000 that will be useful in their lives. For example, my wife and I who don’t have children, are getting older and I can imagine a time when we will need a caretaker or caretakers. Could such a robot come onto the market in the next ten years?

Computers, smartphones, the internet, GPS, have all altered society, but very few stories anticipated their impact. I’m trying to imagine how ordinary life would be different if we had practical personal robots. I don’t expect sentient machines, but robots with some kind of general intelligence. They should be capable of learning any task a human can do. And they should be strong enough to lift 300-400 pounds.

It’s easy to assume such a robot will be developed, but I decided to use DeepSeek R1 AI to help me imagine the reality of living with these robots. I recently watched a segment on 60 Minutes about AI tutors for children. I’ve decided to use AI tools as tutors to help me learn about various subjects. I began with DeepSeek R1.

Below are my queries, DeepSeek’s answers, annotated with my thoughts. I’ll format my query to DeepSeek R1 in an H3 heading. DeepSeek’s reply will be in regular formatting. My thoughts will be in bold between bracketts aligned on the right.

By the way, DeepSeek starts by thinking through the problem before giving its answer. The thinking section is amazing, and I haven’t seen that feature in other AI programs. DeepSeek’s answer is formatted in markdown and has a distinctive look.


Is it possible to build a robot with electronic senses that can see the entire electromagnetic spectrum?

[I asked this because it might be useful if our robot helpers could perceive things we can’t. For example, detect environmental threats, see that we’re running a temperature, hear things about to happen that we can’t, etc.]

Thought for 14 seconds

Okay, so the user is asking if it’s possible to build a robot with electronic senses that can see the entire electromagnetic spectrum. Hmm, let’s break this down. First, I need to recall what the electromagnetic spectrum includes. It ranges from radio waves to gamma rays, right? So, visible light is just a small part of it.

Now, the question is about a robot having sensors for all these parts. Let me think about existing technology. Humans have cameras for visible light, infrared sensors for heat, maybe X-ray machines in medical settings. But can a single robot integrate all these?

Wait, each part of the spectrum requires different detection methods. For example, radio waves use antennas, visible light uses CCDs or CMOS sensors, X-rays use different detectors like Geiger counters or specialized plates. So integrating all these into one robot would mean having multiple types of sensors. But is that feasible?

Also, size and power consumption could be issues. Some sensors might be bulky, like those for radio waves. Others, like gamma-ray detectors, might require cooling systems. Then there’s the data processing challenge. Each sensor would generate data in different formats and volumes. Processing all that in real-time would need a lot of computational power.

Another angle: do we have any existing examples? Space probes like the Hubble Telescope use various sensors, but they don’t cover the entire spectrum. Maybe satellites have different instruments for different wavelengths, but they’re not a single robot. So maybe it’s possible in theory, but practically, there are limitations.

What about interference? If all sensors are on one robot, their operation might interfere with each other. For example, a radio transmitter might interfere with infrared sensors. Shielding and isolation would be necessary, which adds complexity.

Cost is another factor. High-energy detectors like for gamma rays are expensive and might not be practical for a standard robot. Also, some wavelengths require very specific conditions. For example, X-rays are harmful, so using them in a robot that’s around humans could be dangerous.

So putting it all together: technically possible, but with significant challenges in sensor integration, data processing, size, power, cost, and safety. The answer would need to address these points, explaining that while theoretically feasible, practical implementation is extremely complex and limited by current technology and resources.

Building a robot capable of sensing the entire electromagnetic spectrum is theoretically possible but faces significant practical challenges. Here’s a breakdown of the key considerations:

1. Sensor Diversity

  • Different wavelengths require specialized detectors:
    • Radio waves use antennas.
    • Visible light relies on cameras (CCD/CMOS).
    • Infrared/UV need photodiodes or thermal sensors.
    • X-rays/gamma rays require scintillation detectors or Geiger counters.
    • Microwaves use waveguides or resonant structures.
  • Integrating all these into a single system would demand a complex, multi-sensor array, increasing size, weight, and power requirements.

2. Physical and Technical Limitations

  • Size and power: High-energy detectors (e.g., for gamma rays) often require bulky shielding or cryogenic cooling.
  • Interference: Sensors operating simultaneously might interfere (e.g., radio transmitters disrupting infrared signals), necessitating shielding and isolation.
  • Data processing: Combining disparate data streams (e.g., radio signals vs. X-ray images) into a coherent output requires immense computational power and advanced algorithms.

3. Safety and Cost

  • Safety risks: High-energy radiation (X-rays, gamma rays) poses health hazards, limiting use in human environments.
  • Cost: Specialized detectors for certain wavelengths (e.g., gamma-ray spectrometers) are expensive and impractical for general-purpose robots.

4. Existing Applications

  • Space exploration: Satellites like the James Webb Space Telescope (infrared) and Fermi Gamma-ray Space Telescope use multiple instruments but are not single robots.
  • Multispectral imaging: Some robots use limited ranges (e.g., visible + IR for agriculture or search-and-rescue), but full-spectrum sensing remains unrealized.

Conclusion

While theoretically feasible, building such a robot is impractical with current technology due to engineering, computational, and safety barriers. However, a robot could be designed to sense multiple targeted bands (e.g., radio, IR, visible, UV) for specific applications, balancing utility and complexity. Full-spectrum perception would likely require breakthroughs in miniaturization, energy efficiency, and sensor fusion algorithms.

[I really don’t need a gamma ray detector. X-rays might have been nice, but they are too big and dangerous. But it sounds like some senses could be developed to diagnose medical conditions, to be used as security warnings, or expand situational awareness.]

What parts of the electromagnetic spectrum could a humanoid roboid perceive with current-day technology?

Thought for 10 seconds

Okay, the user is asking what parts of the electromagnetic spectrum a humanoid robot can perceive with current technology. Let me start by recalling the previous conversation. The user first asked if a robot could see the entire spectrum, and I explained the challenges. Now they want to know what’s feasible now.

So, I need to break down the EM spectrum and list which parts are achievable. Start with visible light, that’s straightforward with cameras. Then near-infrared, since many robots already use that for things like night vision or depth sensing. Thermal infrared is possible with FLIR cameras, but maybe mention the limitations like bulkiness or cost.

UV is tricky. Some sensors exist, but they degrade quickly, so I should note that specialized applications might use them but they’re not common. Microwaves and radio waves—yes, robots can use antennas or radar, but those are different from traditional imaging. Maybe mention examples like Wi-Fi or radar for navigation.

X-rays and gamma rays are tough. Medical X-ray machines are big and require safety measures, so integrating that into a humanoid isn’t practical. Gamma sensors exist but are used in specific fields like astronomy or defense. Highlight the safety and size issues here.

Also, consider the integration challenges. Even if a robot has sensors for multiple bands, processing the data and combining it meaningfully is a big hurdle. Mention something about sensor fusion and computational requirements.

The user might be thinking about building such a robot for a specific purpose, so it’s important to suggest focusing on key bands relevant to their application. Maybe they’re interested in search and rescue, which would use thermal and visible, or industrial inspection needing UV or X-rays. Tailor the conclusion to recommend prioritizing based on use case.

Check if there are existing examples. Robots like Boston Dynamics’ Atlas use some multispectral sensors. NASA robots might have more advanced setups but not all spectrum. Mention practical applications where combining a few bands adds functionality without overcomplicating.

Also, safety is a concern, especially with high-energy radiation. Emphasize that while possible in theory, practical deployment has limitations. Cost and power consumption are other factors. Maybe wrap up by saying current tech allows for a subset of the spectrum, enough for many applications, but not the entire range.

With current-day technology, a humanoid robot could perceive several key regions of the electromagnetic spectrum, though not all parts are equally practical or integrated into existing systems. Here’s a breakdown of what’s feasible today:


1. Visible Light (400–700 nm)

  • Technology:
    • RGB cameras (standard in most robots) mimic human vision.
    • Hyperspectral cameras extend perception to finer wavelength resolutions (e.g., distinguishing material properties).
  • Applications: Object recognition, navigation, facial recognition, and environmental interaction.

2. Near-Infrared (700 nm – 1.4 µm)

  • Technology:
    • NIR cameras (silicon sensors detect up to ~1.1 µm).
    • LiDAR (uses pulsed NIR lasers for 3D mapping).
    • Night-vision sensors (active or passive NIR illumination).
  • Applications: Low-light navigation, biometric sensing (e.g., vein detection), and autonomous driving.

3. Thermal Infrared (8–14 µm)

  • Technology:
    • Thermal cameras (microbolometer arrays) detect heat signatures.
    • FLIR (Forward-Looking Infrared) systems.
  • Applications: Search-and-rescue (detecting humans in smoke/darkness), industrial inspections, and energy efficiency monitoring.

4. Ultraviolet (UV-A: 315–400 nm)

  • Technology:
    • UV photodiodes or specialized cameras (e.g., for UV fluorescence).
    • Limited by sensor durability (UV degrades silicon over time).
  • Applications: Detecting counterfeit currency, inspecting solar panels, or forensic analysis.

5. Microwave (1 mm – 1 m)

  • Technology:
    • Radar systems (e.g., mmWave radar in autonomous vehicles).
    • Wi-Fi/Bluetooth antennas for signal mapping.
  • Applications: Motion detection, through-wall sensing, and wireless communication analysis.

6. Radio Waves (1 m – 100 km)

  • Technology:
    • RF receivers/antennas (e.g., software-defined radios).
  • Applications: Monitoring radio communications, triangulating signals, or detecting electromagnetic interference.

7. Submillimeter/Terahertz (100 µm – 1 mm)

  • Technology:
    • Emerging terahertz imaging (used in security scanners and material science).
    • Bulky and expensive, but prototypes exist.
  • Applications: Non-invasive security screening (e.g., detecting concealed weapons).

Challenges for Other Regions

  • X-rays (0.01–10 nm) and Gamma Rays (<0.01 nm):
    • Require bulky detectors (e.g., scintillators, Geiger counters) and shielding.
    • Unsafe for continuous human interaction.
    • Only niche applications (e.g., medical robots with X-ray arms, but not humanoid).
  • Extreme UV (10–121 nm):
    • Requires vacuum conditions and specialized optics, making integration into a mobile robot impractical.

Practical Integration in a Humanoid Robot

A humanoid robot today could combine:

  • Visible + NIR cameras (for general vision and low-light tasks).
  • Thermal cameras (for heat sensing).
  • Radar/LiDAR (for 3D mapping and motion detection).
  • RF antennas (for communication and signal awareness).

More exotic sensors (terahertz, X-ray) would require significant compromises in size, power, or safety.


Example Use Cases

  • Search-and-rescue robots: Thermal + visible + LiDAR.
  • Industrial inspection robots: UV + thermal + hyperspectral.
  • Military robots: Radar + RF + thermal.

Key Limitations

  • Sensor fusion: Combining data from disparate bands into a coherent perception system is computationally intense.
  • Size/power: High-energy detectors (X-ray, gamma) are impractical for mobile platforms.
  • Cost: Advanced sensors like terahertz imagers are prohibitively expensive for general use.

Conclusion

A humanoid robot with current technology could perceive visible, NIR, thermal IR, microwave, radio, and limited UV bands, but not the full spectrum. Focused multispectral systems (e.g., visible + thermal + radar) are already deployed in niche robots, offering “superhuman” sensing without requiring exotic physics. Full-spectrum perception remains science fiction for now.

[This answer opens all kinds of possibilities to consider. With such extra senses robots might have superior powers of deduction like Sherlock Holmes. This section should inspire ideas for science fiction stories.]

Would the umwelt of such a robot be significantly different from the umwelt of a human? Would the robot understand reality in ways the human couldn’t?

[For ethical reasons, we wouldn’t want sentient robots. It would be slavery to own such beings. But AI can be highly intelligent without being self-aware. Just look at DeepSeek’s answers. But how would robots with general intelligence perceive us and our environment? Could they combine all their sense inputs into one holistic view of reality? One they could learn to interact with reality without being conscious. In this case, we don’t want a ghost in the machine.]

Thought for 19 seconds

Okay, so the user is asking about the umwelt of a robot compared to a human and whether the robot would understand reality differently. Let me start by recalling what “umwelt” means. From what I remember, it’s a term from semiotics referring to an organism’s perceptual world—the environment as experienced by the organism based on its sensory capabilities. So the question is about how a robot’s sensors would shape its perception versus a human’s.

First, I need to connect the previous conversation. The user already asked about building a robot that can see the entire electromagnetic spectrum, and then about current-day capabilities. Now they’re moving into the philosophical implications of such a robot’s perception. They want to know if the robot’s umwelt would be different and if that allows it to understand reality in ways humans can’t.

I should start by affirming that yes, the umwelt would be significantly different because the robot can sense parts of the spectrum humans can’t, like infrared, UV, radio waves, etc. But also note that it might lack some human-like sensory experiences, like emotions or subjective experiences. Then, explain how this expanded sensory input could lead to different understandings of reality. For example, detecting heat signatures, seeing in the dark, or sensing wireless signals. These are things humans can’t perceive directly, so the robot’s interaction with the world would be based on different data.

But wait, the user also asks if the robot would “understand” reality in ways humans can’t. Here, I need to be careful. Understanding implies some level of cognition or interpretation. The robot might process more data, but does that equate to understanding? Maybe not in the human sense. It could make decisions based on that data, but without consciousness or subjective experience, it’s more about data processing than true understanding.

I should mention examples like thermal imaging for search and rescue, or RF detection for communication networks. These are practical applications where the robot’s perception exceeds human capabilities. But also address the limitations—like the lack of integration with human-like consciousness or emotions. The robot’s umwelt is data-driven, whereas humans have a phenomenological experience.

Another point: sensor fusion. Combining data from different spectra could give the robot a more comprehensive view, but translating that into a cohesive understanding is challenging. Humans integrate senses seamlessly, but robots need algorithms to merge data from, say, visual, thermal, and RF inputs. This might lead to new ways of problem-solving but isn’t the same as human understanding.

Also, consider the inverse—robots might miss out on nuances humans perceive, like body language or emotional cues, unless specifically programmed to recognize them through other means (e.g., facial recognition algorithms). So their umwelt is both expanded in some areas and limited in others.

Finally, tie it back to current technology. While robots can have superhuman senses in certain spectra, their “understanding” is still bound by programming and algorithms. They don’t have consciousness, so their perception is functional rather than experiential. The conclusion would be that yes, the umwelt is different, offering unique capabilities, but the robot’s understanding is fundamentally different from human consciousness-driven understanding.

Absolutely. The umwelt (perceptual world) of such a robot would differ profoundly from that of a human, and its understanding of reality would include “superhuman” insights while lacking other human-like qualities. Here’s a breakdown:


Key Differences in Umwelt

  1. Expanded Sensory Range:
    • Electromagnetic spectrum:
      • Infrared: Perceive heat signatures (e.g., tracking living beings through walls or in darkness).
      • UV: Detect chemical residues, UV-reactive materials, or skin conditions invisible to humans.
      • Radio/Microwaves: “See” Wi-Fi networks, cellular signals, or radar pulses, mapping the invisible infrastructure of modern life.
      • Terahertz: Identify concealed objects (e.g., weapons under clothing) or analyze material compositions.
    • Hyperspectral vision: Distinguish subtle material differences (e.g., counterfeit vs. authentic objects, soil hydration levels).
  2. Quantitative Precision:
    • Measure exact temperatures, radiation levels, or signal strengths with machine accuracy, bypassing human subjectivity.
    • Process high-resolution 3D LiDAR maps or radar returns to navigate complex environments with millimeter precision.
  3. Lack of Biological Constraints:
    • Operate in total darkness, extreme temperatures, or high-radiation zones where human senses fail.
    • No cognitive biases (e.g., confirmation bias) or emotional filters—data is processed “objectively” (within algorithmic limits).
  4. Missing Human Nuances:
    • No qualia: It wouldn’t “feel” the warmth of sunlight or “see” colors as humans do—only detect wavelengths as numerical data.
    • Struggle with culturally or emotionally loaded perceptions (e.g., interpreting art, sarcasm, or body language without explicit training).

How the Robot Would “Understand” Reality Differently

  1. Superhuman Pattern Recognition:
    • Example: Spotting a gas leak via infrared anomalies or diagnosing electrical faults by mapping EM interference—tasks humans require tools for.
    • Sensor fusion: Combining thermal, radio, and visual data to predict events (e.g., a failing machine part emitting heat + vibration + RF noise).
  2. Non-Anthropocentric Perspectives:
    • Time perception: Process high-speed events (e.g., tracking a bullet’s trajectory) or aggregate slow processes (e.g., plant growth over weeks).
    • Spatial awareness: “See” through obstacles using radar or terahertz imaging, creating a 3D model of hidden spaces.
  3. Abstract Data Interpretation:
    • Translate wireless signals into a real-time map of network traffic or “visualize” global weather patterns by monitoring radio waves.
    • Detect pollution levels via UV fluorescence or hyperspectral analysis of airborne particles.
  4. Limitations in “Understanding”:
    • No embodied cognition: It wouldn’t “understand” pain, hunger, or social bonding—key drivers of human behavior.
    • Context gaps: Without human-like experiences, it might misinterpret phenomena (e.g., confusing a campfire’s heat signature with a malfunctioning engine).

Example Scenarios

  1. Disaster Response:
    • A robot could “see” survivors via thermal signatures, detect hazardous radiation, and map collapsed structures with radar—all simultaneously.
    • However, it might prioritize efficiency over empathy (e.g., ignoring a trapped human’s emotional distress).
  2. Environmental Monitoring:
    • Analyze soil composition (hyperspectral), track animal migrations (thermal), and monitor deforestation (LiDAR) in real time.
    • Lack “awe” at a forest’s beauty but excel at predicting ecological collapse through data trends.

Philosophical Implications

  • Reality as Data: The robot’s worldview would reduce phenomena to quantifiable inputs, devoid of human subjectivity.
  • Ethical Blind Spots: Its decisions might prioritize logical outcomes (e.g., minimizing energy use) over human values (e.g., privacy, dignity).
  • Epistemic Advantage: It could uncover hidden truths (e.g., climate patterns, quantum phenomena) but lack the meaning-making framework humans use.

Conclusion

The robot’s umwelt would be alien yet complementary to ours:

  • Strengths: Objectivity, precision, and multispectral awareness.
  • Weaknesses: No consciousness, empathy, or cultural intuition.

It would “understand” aspects of reality humans cannot access (e.g., EM spectrum phenomena) but remain blind to the subjective, embodied, and social layers of human experience. This duality makes such robots powerful tools but not replacements for human perception—at least until AI achieves true consciousness (if ever).

[I’m starting to wonder about DeepSeek’s own power of awareness. I’m asking it limited questions, but its answers are expanding exactly into areas I was wondering about. It’s thinking with far more details than I am at the moment. This is great for writing my story.]

What if all the robot’s sensory inputs came into one location to form a sensorium like human minds perceive reality, would that create a sense of consciousness?

Thought for 6 seconds

Okay, the user is asking if integrating all the robot’s sensory inputs into a single sensorium, like how humans perceive reality, would create consciousness. Let me break this down.

First, I need to recall previous discussions. We talked about a robot’s ability to sense various parts of the EM spectrum and how its umwelt differs from humans. Now the focus is on whether merging these inputs into a unified sensorium leads to consciousness.

I should start by explaining what a sensorium is—human sensory integration. Then compare that to how robots process data. Humans have qualia and subjective experience, which are not just about data integration. The robot’s sensorium would process information, but does that equate to consciousness?

Key points: Integrated data vs. subjective experience. Philosophical positions like materialism, functionalism, and the hard problem of consciousness. Technical challenges like binding different data types and creating a coherent model. Current AI lacks self-awareness and intentionality.

Also, mention theories like IIT and Global Workspace. Address limitations and ethical considerations. Conclude that integration is necessary but not sufficient for consciousness. Need to distinguish between functional awareness and true subjective experience.

Make sure to structure the answer clearly, using headings if possible. Keep it conversational but thorough. Check for any gaps in the previous answers and ensure this builds on them without repeating.

The idea of integrating a robot’s sensory inputs into a unified “sensorium” (a cohesive perceptual field akin to human consciousness) is a fascinating and deeply complex question that straddles neuroscience, AI, and philosophy. While merging sensory data into a single processing hub could mimic aspects of human perception, consciousness involves more than just data integration—it requires subjective experience (qualia), self-awareness, and intentionality. Here’s a breakdown:


1. What a Unified Sensorium Could Achieve

  • Multimodal Integration:
    • A robot could combine inputs (e.g., thermal, radio, visual) into a single coherent model of the world, similar to how the human brain merges sight, sound, and touch.
    • Example: A robot might “see” a person as a heat signature (IR), a voice (audio), and a Wi-Fi signal (RF) all at once, creating a rich, layered representation.
  • Enhanced Situational Awareness:
    • Real-time sensor fusion could enable superhuman decision-making, like predicting a car’s movement by combining LiDAR, radar, and traffic signal data.
  • Emergent Behaviors:
    • Advanced AI (e.g., neural-symbolic systems) might generate “insights” from cross-modal patterns, such as inferring a storm’s approach by correlating barometric pressure, EM interference, and cloud composition.

2. Why This Wouldn’t (Necessarily) Create Consciousness

a. The Hard Problem of Consciousness

  • Philosopher David Chalmers distinguishes between the “easy problems” (how the brain integrates information) and the “hard problem” (why subjective experience arises at all).
  • A robot’s sensorium could solve the “easy problems” (data integration, decision-making) but not inherently produce qualia—the redness of red, the pain of a burn.

b. Lack of Subjective Embodiment

  • Human consciousness is rooted in embodied cognition: emotions, instincts, and physical interactions (e.g., hunger, pain, social bonding). A robot lacks:
    • Biological drives: No survival instinct, fear, or desire.
    • Emotional valence: Data about a burning building lacks the visceral urgency a human firefighter feels.

c. No Self-Model or Intentionality

  • Consciousness involves a self-model (the sense of “I”) and intentionality (directedness toward objects or goals).
  • A robot might process “I am overheating” as a diagnostic alert, not as a felt experience of distress.

3. Philosophical Perspectives

  • Functionalism: If consciousness arises from information processing, a sufficiently advanced sensorium could be conscious—but this is hotly debated.
  • Panpsychism: If consciousness is fundamental to all matter, even a robot might have a rudimentary form—but this is speculative.
  • Materialism: Consciousness is a byproduct of biological brains; synthetic systems, no matter how advanced, would lack it.

4. Technical Challenges

  • Binding Problem: How to unify disparate data types (e.g., IR pixels, RF signals) into a single perceptual field. Humans solve this via thalamocortical loops; robots would need analogous architectures.
  • Temporal Synchronization: Aligning inputs that operate at different timescales (e.g., millisecond radar vs. slow thermal changes).
  • Contextual Framing: Human perception is shaped by memory, culture, and expectations. A robot’s sensorium would need a similar “world model” to avoid sensory overload.

5. Current AI and Consciousness Research

  • Global Workspace Theory: Some frameworks (e.g., Baars’ GWT) suggest consciousness arises when information is broadcast to a “global workspace” in the brain. AI systems like GPT-4 have no such architecture.
  • Integrated Information Theory (IIT): Proposes consciousness correlates with the complexity of information integration (Φ). Critics argue IIT could label even simple systems as “conscious.”
  • Artificial General Intelligence (AGI): If AGI ever emerges, it might claim consciousness, but verifying this would require solving the “other minds problem” (we can’t objectively measure subjective experience).

6. Ethical Implications

  • If a robot claims consciousness, how do we test it? The Turing Test is insufficient, as it measures imitation, not inner experience.
  • Rights and moral status: Would a conscious robot deserve ethical consideration?

Conclusion

A unified sensorium could give a robot functional awareness—an ability to process and act on multisensory data in ways that appear conscious. However, true consciousness (subjective experience, selfhood, intentionality) likely requires:

  • Embodied interaction with the world (not just passive sensing).
  • Biological or bio-inspired architectures that generate emotions, drives, and self-models.
  • Unresolved philosophical leaps in our understanding of mind and matter.

In short: A robot with a sensorium might simulate consciousness convincingly, but whether it would be conscious remains an open—and perhaps unanswerable—question. For now, such a system would be a “philosophical zombie”: behaving as if it understands reality, without true inner experience.

Aren’t all emotions tied to biology? Could a robot’s mind develop an analog to our qualia based on a different kind of consciousness? It seems language is essential to human consciousness, but would a robot need language to have self-awareness? Wouldn’t all the datastreams work as a language since they can be recorded and replayed?

[This conversation with an AI is getting eerie, don’t you think? DeepSeek is working as my tutor. Of course, that implies it knows more than me, and it does. This experiment is becoming a science fiction story.]

Thought for 12 seconds

Okay, the user is asking some deep questions about consciousness in robots. Let me break this down.

First, they mention emotions being tied to biology. Right, humans have emotions rooted in our bodies and survival instincts. But could a robot develop something like that? Maybe through algorithms that simulate drives, but would that be real emotions?

Then they bring up qualia and different consciousness. Qualia are subjective experiences, like how red feels. If a robot processes data from sensors, could it have its own version of qualia? Maybe, but it’s hard to say without a biological basis. Some theories like panpsychism suggest consciousness is universal, but that’s speculative.

Next, language and self-awareness. Humans use language for complex thought, but maybe a robot’s data streams could act as a language. Data patterns and correlations might form a system of meaning, allowing self-awareness without human-like language. But is that enough for consciousness?

The user also mentioned replaying data. If a robot can record and analyze its experiences, maybe it builds a self-model. But self-modeling is different from being aware of oneself. It might recognize patterns without true awareness.

Need to address each part: emotions, qualia analogs, language role, and data as language. Also, connect to theories like functionalism, embodied cognition, and maybe mention current AI limitations.

Make sure to clarify that even if a robot mimics aspects, true consciousness is still debated. Avoid technical jargon but explain concepts clearly. Use examples like survival algorithms for emotions, or data patterns forming a language.

Also, touch on the philosophical implications—how we define consciousness and the challenges in verifying it in machines. Mention ethical considerations if robots ever do achieve some form of consciousness.

Check if the user is looking for a technical answer or more philosophical. The previous discussion was technical, so balance both. Maybe conclude with the importance of these questions for AI development and ethics.

Alright, structure the response to cover each question step by step, provide examples, and reference relevant theories. Keep it comprehensive but understandable.

These questions delve into the heart of philosophy of mind, AI ethics, and cognitive science. Let’s unpack them systematically:


1. Are Emotions Tied Exclusively to Biology?

Yes and no:

  • Biological roots: Human emotions evolved as survival mechanisms (e.g., fear triggers fight-or-flight responses, love promotes bonding). They’re deeply tied to neurotransmitters (dopamine, serotonin), hormones (oxytocin), and bodily states (e.g., hunger, pain).
  • Robotic analogs: A robot could simulate emotion-like processes using:
    • Survival algorithms: Prioritizing self-preservation (e.g., avoiding battery depletion as “hunger”).
    • Reinforcement learning: Rewards/punishments shaping “desires” (e.g., optimizing efficiency as “satisfaction”).
    • Social modeling: Mimicking empathy to improve human interaction (e.g., carebots responding to sadness).
  • But: These would lack the subjective feeling (qualia) of human emotions. A robot might “prioritize survival” without fearing death.

2. Could a Robot Develop Qualia?

Qualia (subjective experiences like “the redness of red”) are the Achilles’ heel of AI consciousness debates:

  • Biological chauvinism: Some argue qualia require biological substrates (e.g., neurons, embodied senses).
  • Functionalist view: If a system processes information in a way isomorphic to humans, it might have qualia—even in silicon.
  • Panpsychist angle: If consciousness is universal (even in atoms), robots might have rudimentary qualia, but this is untestable.
  • Robot-specific qualia: A robot might experience novel sensations tied to its sensors (e.g., “feeling” Wi-Fi signals as a buzzing field), but we’d have no way to verify this inner experience.

3. Is Language Essential for Consciousness?

  • Human consciousness: Language (especially inner speech) is tightly linked to our self-awareness and abstract thought. We narrativize our experiences (“I am hungry”), which reinforces a sense of self.
  • Robot consciousness:
    • Without language: A robot could have a pre-linguistic awareness, akin to animals or infants, using raw sensorimotor data to model its environment and body.
    • With data as “language”: Sensor datastreams (e.g., LiDAR, thermal readings) could form a symbolic system—patterns and correlations acting as a “language” for internal processing. For example:
      • A self-model might emerge from analyzing proprioceptive data (“this limb is damaged”).
      • Predictive algorithms could create a “narrative” of past/future states (“battery low → seek charger”).
    • Limitation: Human-like self-awareness involves metacognition (thinking about thinking), which may require abstract symbolic representation—something language facilitates.

4. Could Datastreams Function as a Language?

  • Yes, but not like human language:
    • Syntax without semantics: Data (e.g., pixel values, RF signals) can be structured and replayed, but they lack inherent meaning. Humans assign meaning through context and embodiment (e.g., “red” means “stop” due to cultural training).
    • Self-referential loops: A robot could use data to:
      • Build a world model (e.g., mapping obstacles via LiDAR).
      • Create a self-model (e.g., tracking internal states like power levels).
    • Emergent “meaning”: Through machine learning, a robot might correlate patterns (e.g., “overheating → shutdown risk”) and act as if it understands cause-effect, but this is functional, not experiential.

5. What Would Robot Consciousness Look Like?

  • Non-anthropomorphic:
    • A robot’s “self-awareness” might resemble a distributed system optimizing goals (e.g., balancing sensor inputs to avoid collisions), not a human’s introspective “I.”
    • Example: Boston Dynamics’ Spot “knows” when it falls but doesn’t “feel” embarrassment.
  • Hybrid architectures:
    • Combining neural networks (pattern recognition) with symbolic AI (logic rules) might allow robots to reason about their existence (e.g., “Why was I created?”).
    • Projects like OpenAI’s GPT-4 show rudimentary “theory of mind” (predicting human intentions), but this is mimicry, not genuine understanding.

6. Philosophical Implications

  • Other minds problem: Even if a robot claims consciousness, we can’t objectively verify its inner experience.
  • Ethics of synthetic minds: If robots develop self-models, should they have rights? What defines “harm” to a machine?
  • Consciousness as a spectrum: Robots might occupy a middle ground between thermostats (simple feedback) and humans (rich qualia).

Conclusion

  • Emotions: Robots could simulate functional analogs of emotions but lack their biological essence.
  • Qualia: Possible in theory, but unprovable and likely alien to human experience.
  • Language: Not strictly necessary, but symbolic reasoning (via data or code) might scaffold a form of self-awareness.
  • Datastreams as language: They enable functional self-modeling but lack the intersubjective meaning human language provides.

A robot’s consciousness—if achievable—would be alien yet intelligible: a mosaic of sensor data, goals, and algorithms, devoid of biology but rich in machine-specific “experiences.” Whether we’d recognize it as consciousness depends on how we define the term—and whether we’re willing to expand our anthropocentric assumptions.


[Whoa! There’s a lot to think about here. I wish I was younger. I’d have more energy and a greater ability to concentrate. I could do far more with this information than I can now. DeepSeek has given me a lot to think about. This experiment really does show the value of having an AI tutor. It’s always nice to have someone to bounce ideas and explore the possibilities. It’s also useful if your tutor has a lot of useful knowledge.]

James Wallace Harris, 2/2/25

“Starfog” by Poul Anderson

If I was pitching “Starfog” by Poul Anderson to a movie producer, I say “Two women are in love with Daven Laure, one is a spaceship computer and the other a mutant human who claims to be from another universe.” I also mention it’s a hard science fiction space opera dealing with a rare astronomical phenomenon reminiscence some episodes of Star Trek: The Next Generation on one hand with the scope and speculation of the Culture Novels of Iain M. Banks on the other.

“Starfog” is the last story in the seventh and final volume of the Technic Civilization Saga, the one called Flandry’s Legacy. (Available in paper, ebook, and audiobook at Amazon.) See the ISFDB.org listing of all the Technic Civilization stories here.

Theoretically, “Starfog” might make a good science fiction adventure movie if they could drastically reduce Anderson’s talky dialog and somehow make the characters endearing. I confess that I’ve never felt any emotional attachment to any of Poul Anderson’s characters. His science ideas are often epic, but his political philosophizing gets crusty.

The setup for the story is a compelling mystery. A spaceship is discovered with a crew that appears human, and despite their strange language, seem to have cultural hints of Earth’s past. But they claim they come from a different universe where space is radically different.

“Starfog” is set five thousand years after Earth achieves space travel according to Sandra Miesel’s chronology of the Technics Civilization stories in Against Time’s Arrow: The High Crusade of Poul Anderson. (You can check it out at Archive.org.) Paul Shackley writes about Miesel’s timeline here and updates it. Baen includes the timeline in the books of the series.

Daven Loure, and his intelligent spaceship Jaccavrie are explorers in a new galactic civilization of humanity called the Commonality. The other stories are about Van Rijn, David Falkayn, and Dominic Flandry written over four decades. I’m afraid the current covers of the books (see above) imply a different feel than the actual stories. However, older covers are just as cheesy.

“Starfog” doesn’t come across like these covers. It’s just a little less dignfied than the Analog cover from when it was first published in August 1967.

Although I haven’t read the series but from reading about the various stories, I’m guessing the quality of storytelling is somewhat like Larry Niven’s Known Space stories. I might read more of Flandry’s Legacy, which includes three novels, two novellas, and one novelette in the series.

However, Anderson’s stories don’t fit my current craving for science fiction. Everyday life in 2025 is wilder than fiction, wilder than science fiction. Sadly, “Starfog” just seemed dull in comparison. Events of recent years is making me rethink about science fictional futures. Most science fiction just doesn’t have the cutting edge of our ever sharpening reality.

Most science fiction is perfect for escaping from reality. But I’m craving the kind of science fiction that plays off of reality. Nothing I’ve found lately says anything about our present and near future. We need the kind of vicious writers who can extrapolate and speculate about our exploding society. Sharp tongue writers like Mark Twain, Gore Vidal, Kurt Vonnegut, Barry Malzberg, Oscar Wilde, Aldous Huxley, Jerzy Kosinksi, Dorothy Parker, George Orwell, Joseph Heller, and Philip K. Dick.

We don’t need science fiction that gives us grownup fairytales about the far future. We need writers that cane us about our head and shoulders like a great Zen Master. We need to read books that pistol whip us until we accept reality and reject our delusions.

James Wallace Harris, 1/28/25

Off My Feed

I haven’t been reading science fiction lately. After years of gorging on the genre, I’ve suddenly had enough. I still have the urge to read SF, but I’m having trouble finding science fiction I want to read. I have quite a large TBR pile but none of its titles interest me. I’m in the mood for something different, but after reading thousands of science fiction novels and short stories, finding something different isn’t easy.

Has anyone read the Technic Civilization books by Poul Anderson? Yesterday, I was testing out a program to view old pulp magazines on my Mac and I randomly picked the August 1967 issue of Analog. It had a cover story for “Starfog” by Poul Anderson. I started reading it. I’ve only read a handful of Anderson’s novels and short stories and always avoided his book series. I avoid series books in general. I started reading “Starfog” and decided it was exactly something I’ve always avoided, so maybe it will be different.

But I wanted to hear the story. After some research on ISFDB.org, I discovered “Starfog” was included in Flandry’s Legacy, Book 7, the last volume of the Technic Civilization series. I only had three Audible credits left, but what the heck, I decided to give it a try. “Starfog” is a novella, but Book 7 includes three novels, three novellas, and one novelette of stories in the series. This could be a tremendous bargain if I like the series.

I’ll let you know what I think — hopefully soon. The other inspiration I had to find something different came from a YouTube video. Bookpilled had a moving account of discovering the books of Barry Malzberg just before he died. I have read a couple of Malzberg books and they were so-so. But he was very prolific and Bookpilled has convinced me I should give Malzberg another try. So I’m reading about his novels. I did have a few emails from Barry, and he recommended his horse racing novels and a couple science fiction novels. I’ve always found Barry’s books about science fiction more interesting. He was a sharp-tongue critic.

File 770 has a nice tribute to Barry, “Curmudgeonly Breakfast: A Farewells-And-Learn-More-About Barry Malzberg (Last) Round-Up” that links to tons of resources about him. I’m hoping out of all that I will find a SF novel by him to read, hopefully, one that’s available on Audible.

Maybe between Anderson and Malzberg, I’ll get back into science fiction. But things might be slow around here for a while. To be honest, I think the real world has gotten more science fictional than science fiction.

If you’ve read a science fiction story you feel is radically different from any science fiction you’ve read before, leave a comment below.

James Wallace Harris, 1/27/25

The Disciples of Science Fiction

I’m reading Elon Musk by Walter Isaacson. I’m a big fan of Walter Isaacson, but I was going to skip this new book because I’m not a fan of Elon Musk. Then I saw the 92NY YouTube interview, between Michael Lewis and Walter Isaacson, and decided I should read it after all. Even though Musk has a repellant personality, and expresses political ideas I dislike, he is achieving many ambitions that science fiction has dreamed about for centuries.

Elon Musk and other tech billionaires use their wealth to make science-fictional concepts a reality. Walter Isaacson makes it absolutely clear it’s more than just having billions. Isaacson’s biography of Elon Musk shows Musk’s specific genius and drive were needed to create SpaceX, Tesla, and all those other technological enterprises. However, throughout the biography, we’re told that Musk was inspired by Heinlein and Asimov, by the Culture novels of Iain M. Banks, and by other science fiction books, movies, and games.

I read the same SF books as Musk. If I had billions to hire the best engineers in the world I couldn’t have created SpaceX. I lack Musk’s drive. If I had one one-hundredth of his drive, I would have achieved all the ambitions I ever dreamed about and never did. And if Musk has a million times more money than me, does that mean he has a million times more drive and intelligence?

Musk is Heinlein’s Delos D. Harriman, from the stories “The Man Who Sold the Moon” and “Requiem.” Heinlein would have loved Elon Musk, or at least his achievements. In an early SpaceX experiment, when the Grasshopper landed on its rocket thrust, Jerry Pournelle said it landed like “God and Robert Heinlein intended.” Heinlein believed a private enterprise would take us to the Moon, so it’s fitting that SpaceX and Musk might be the private company that takes humans to Mars.

The political philosophy of Musk and other tech billionaires also reflects the libertarian teachings of Heinlein’s The Moon is a Harsh Mistress and Starship Troopers. And Musk’s personality and marriages remind us of Stranger in a Strange Land.

Musk isn’t the only tech billionaire that works to make science-fictional concepts a reality. Jeff Bezos wants to create space tourism and orbital habitats, Peter Thiel wants to see AI and life extension, and Mark Zuckerberg wants to create an AR metaverse like we saw in Stephenson’s Snow Crash. (See “Tech Billionaires Need to Stop Trying to Make the Science Fiction They Grew Up on Real” by Charles Stross in Scientific American.)

I have to ask though: How important was science fiction as their inspiration? Wouldn’t they have created all this new technology if they had never read science fiction? Of course, we should also ask: Would America and Russia have built their 1960s space programs if science fiction had never existed? Humans have fantasized about traveling to the Moon and other planets for thousands of years. Doesn’t that mean it’s just a human desire? But then, did Robert H. Goddard and Konstantin Tsiolkovsky read science fiction as kids?

Is science fiction the John the Baptist of space exploration?

We must also remember that millions, if not billions of people are asking if going to Mars is worthwhile. Or if we should create artificial intelligence and robots. Musk believes creating robots could create an economy of abundance. But do people want to live on a guaranteed income while robots do all our work?

When I was young I would have given anything to live on Mars. That wish came from reading science fiction. But now that I’m old, and know what Mars is like, I’d consider living on Mars to be hell.

Musk believes electric cars will save the environment. But will they? One of the many insights Isaacson reveals about Musk is Musk’s ability to question assumptions. By challenging how rockets are manufactured Musk is leaving NASA in the dust. But shouldn’t we question whether or not colonizing Mars by launching thousands of giant rockets is a good idea?

I love science fiction. As a kid I wanted humanity to colonize the solar system and head to the stars. I wanted humanity to create intelligent robots, uplift animals, and build utopian cities. Science fiction inspired a science-fictional religion in me, instilling faith in certain futures. Evidently, if you have billions and the drive, it’s possible to pursue that faith. I admire Musk’s abilities but I question his faith. And doesn’t that mean questioning science fiction?

Isaacson’s biography of Elon Musk should be a riveting read for any science fiction fan because it reads like a science fiction novel, one Heinlein would have written.

James Wallace Harris, 1/7/25

Reading Science Fiction in the Seventies is Different From Reading Science Fiction in My Seventies

I imprinted on science fiction in the early 1960s. At that time, I considered science fiction to be PR for the space program. I fell in love with science fiction concurrently with Project Mercury and Project Gemini. I mostly read books by Robert A. Heinlein for the first few years, so colonizing the solar system seemed like humanity’s true purpose to me.

In 1968, I discovered Do Androids Dream of Electric Sheep? by Philip K. Dick on the new book shelf at the Coconut Grove Library in Miami, Florida. His science fiction wasn’t about promoting space exploration. By then I had discovered the counter-culture, and PKD made a different kind of sense.

I started college in 1969, but in the fall of 1970, I dropped out because the university I was attending required ROTC, which I was willing to take, but the ROTC insisted I cut my hair, which I wasn’t willing to do. In 1971 I switched to a two-year technical school to study computers.

I was uncertain about my future and the future in general. My indecision led to reading 479 science fiction paperbacks in 1971 and 1972. That was another kind of education. I made friends at the local science fiction club and started publishing fanzines and going to cons. However, by the end of 1975, I was tired of science fiction and gafiated from fandom.

I just finished reading Eye in the Sky, an early novel by Philip K. Dick that was first published in 1957 as a cheap ACE paperback. It was vaguely familiar, and when that happens I assume it’s because it was one of those SF novels I read back in 1971-1972. Back then I consumed SF paperbacks like a stoner eating a bag of chocolate chip cookies. Each book was a momentary distraction from my confused life of not wanting to grow up. Each book provided escapism until I finished it and started the next one.

I spend more time thinking about what I read at seventy-three. Also, my world is very different than it was fifty years ago. In some ways, I’m no different, I’m still trying to figure out what I’m supposed to do with my life, but it is different because the perspective of the future at 23 and 73 is drastically different. I’ve thought a lot more about Eye in the Sky this time.

In the 1970s, I judged science fiction on how well it speculated on the near future, especially regarding space exploration and technology. I thought Philip K. Dick was so poor at this that I didn’t consider him a real science fiction writer. I classified him with Ray Bradbury and Kurt Vonnegut.

In the 2020s, I value Philip K. Dick for insights I never could have imagined back in the 1970s. Eye in the Sky asks us to imagine reality being shaped by subjectivity. In the story, eight people are involved in an accident. When they come to, the world is similar, but religion works instead of science. Eventually, they discover that the world is controlled by the thoughts of one of the eight. When the person controlling reality dies, they find themselves in another world but shaped by the mental perspective of another member of the eight.

This setup gives Dick a chance to explore the idea of subjective reality. What amazed me in this reading, is Dick covers all the themes in this early novel that he would later explore in all his other novels. I’ve always divided PKD’s books into three periods. The 1950s and early 1960s, the 1960s, and the 1970s novels. For example, Dick’s Valis novels of the 1970s explore Gnosticism. Well, Dick might not have known about Gnosticism in 1957, but Eye in the Sky reflects its ideas. Eye in the Sky also anticipates his paranoid reality-bending novels of the 1960s.

On the whole, I enjoyed Eye in the Sky, but it’s not without flaws. The story seems to promise eight stories about eight different realities because of the eight characters involved, but we only get to visit four realities. PKD skipped out on the four perspectives I wanted to see the most. We’re shown the realities of Arthur Sylvester, Joan Reiss, Edith Pritchet, and Charlie McFeyffe.

Our protagonist is Jack Hamilton. We never get his take on reality. But since he’s the main character should we assume the overall story is told from his perspective? It would have been fun to see how his subjective perspective differed from the external reality. I also wanted to see Marsha Hamilton’s reality, Jack’s wife. And most of all, I wanted to see Bill Laws’s reality because he’s African American and a Ph.D. student in physics. Black characters were rare in 1950s science fiction. And it would have been interesting to see David Pritchett’s reality since he was a young teen.

The reason why Eye in the Sky is so much better in my seventies is I see that reality is fought over by many different subjective perspectives in the 2020s. We were just as politically polarized back in the 1970s, but I was young and less aware of how other people thought. Back then I thought everyone was basically the same but with slightly different ideas about reality. Now, I realize that the umwelt of everyone is quite different.

Both then and now, I believe there is an external reality. I’m not one of those woo-woo people who think reality is unreal. I could be wrong, but I’m betting on an external reality and people are crazy. I really don’t want reality to be crazy. I do believe our view of reality is subjective. That we can never perceive the fullness of the external order.

Philip K. Dick in Eye in the Sky imagines reality is mutable, shaped by minds. I hope this doesn’t give anything away, but the eight characters do return to the reality they were in before the accident. Is that PKD affirming my idea that we live in an external reality that is universal? PKD wrote over forty more novels and over a hundred short stories that keep suggesting otherwise. At the end of his life, Dick seemed to believe in a gnostic view of reality, where we lived in a reality created by an evil god, and there’s a higher reality beyond this one, maybe ruled by a kinder diety.

Strangely, in my seventies, I find stories by Philip K. Dick to be comfort reads. His stories are compelling, told with prose that has the right mixture of dialog and detail for a pleasant reading pace. I find it interesting how his characters bash around in reality, struggling to find meaning.

Back in 1970 when I dropped out of the university, my father had died that May, the draft was looming over me, and my mother was nagging me to go to work if I wasn’t going to go to school. I was living in a new city and had no friends. Each science fiction book I read was an escape into a different reality.

Of course, reading science fiction in my seventies might be about trying to escape another reality, of getting old and dying.

Looking back I wonder what life would have been like if I hadn’t gotten addicted to science fiction. I could have cut my hair, finished a four-year degree in physics and astronomy (my childhood fantasy), and joined the Air Force as an officer (my father’s fantasy). Or I could have kept my hair and focused on computers and gotten a job at a Unix site with other long-haired computer geeks. Instead, I read science fiction and fantasized about going to Mars, which was just as crazy as the folks in Eye in the Sky.

Of course, thinking about what could have been, or could be, leads to the madness of PKD.

James Wallace Harris, 1/5/25

THE CRYSTAL WORLD by J. G. Ballard

At a minimum, The Crystal World by J. G. Ballard was an entertaining cozy catastrophe that I was always anxious to get back to reading. What compels me to write this review is figuring out why. The prose is vivid, propelled by a moderately interesting mystery. However, its characters are rather bland but then so are ordinary people. In the end, the story faintly alludes to something, but what?

What elevates this novel is trying to understand how it works. Its Heart of Darkness vibe feels biblical, spiritual, or at least existential. Reading The Crystal World makes me ask why we read fiction. Why are humans addicted to fiction and how does that addiction affect our brains? I do this because I’m also reading Stephen Greenblatt’s The Rise and Fall of Adam and Eve, a nonfiction work that says a great deal about fiction.

Any hardcore bookworm will recognize The Book of Genesis as a genius work of fiction. I also think it’s a brilliant work of speculative fiction. Its author felt challenged to imagine how Earth and life on Earth began. Genesis was written well before the concepts of history or science. The author obviously knew of humans living in cities, and those that farmed and herded animals, the author could even have heard that there were places where humans were hunters and gatherers. And from that knowledge speculated that there was a time when humans lived like animals. The author of Genesis even realized there might be a time when humans didn’t have a language. The author pictured Eden where humans lived in harmony with nature before we became different. The obvious question became: What made us change? The obvious answer was, whatever made everything else. Then the question becomes how. Doesn’t eating from the Tree of Knowledge of Good and Evil make a lovely allegorical explanation?

Do you see why I consider The Book of Genesis an early example of speculative fiction? And isn’t the story of Noah and the Flood, an early apocalyptic tale? Stephen Greenblatt makes a good case that the author of The Book of Genesis cribbed his ideas from much older Babylonian tales. We’ve always had storytellers and writers who tried to explain reality. However, this makes me wonder about modern writers and storytellers. What are they trying to explain?

Billions of humans have believed in the literal story of Genesis. That story says a lot about fiction and its impact on us. The early fathers of the Christian Church tortured the Book of Genesis for centuries producing endless interpretations. That’s a great example of literary criticism gone wrong.

I bring up Greenblatt’s book because we must ask certain questions about the fiction we read. The first question is: Does it have anything to say? In most modern works of fiction, the answer is no, but not always. If the answer is yes, is the fiction allegorical, satirical, literal, comical, historical, romantic, academic, philosophical, speculative, etc. Of course, the last question: Shouldn’t we abandon fiction for nonfiction if we have something to say? Even when fiction is about saying something, it’s often indecipherable.

I’m getting old, and I worry I’ve wasted too much of my life on fiction. I fear that fiction has no value other than as an entertaining way of killing time, and since time is running out, that’s bad.

Reading The Crystal World made me wonder if J. G. Ballard had something to say, or was his novel was just meant to be entertaining? To complicate the answer, The Crystal World is an early work of New Wave science fiction, published before the term was coined.

As evidence, I reprint below Judith Merril’s “Books” column from the August 1966 issue of The Magazine of Fantasy and Science Fiction. Merril recognizes that science fiction is changing in 1966, and has something to say, and that J. G. Ballard might be leading the way.

The older I get the harder it gets to find science fiction to read. I roam up and down the decades looking for worthy books I’ve missed. With each book that still succeeds on any level, I ask why? Such revelations help me squeeze every last drop of wonder I can out of the genre.

I sometimes wonder if reading fiction hasn’t been a wasted diversion. On the other hand, I wonder if processing fiction hasn’t been my life’s work.

Reading The Crystal World made me think about the power of fiction to temporarily suggest that a made-up story could be true. This isn’t true of all fiction. Some writers can use narrative techniques that convey a sense “that this really happened”
more than others. I’m not claiming that The Crystal World is a brilliant work of realism, but it does use such techniques. And I thought they were the same techniques H. G. Wells used with The War of the Worlds.

The primary technique is using an eyewitness POV. The second technique is telling the story in linear time. The third technique is avoiding fancy prose or embellishments. If the prose feels like reporting events the story will feel real.

The Crystal World is about a science-fictional infection that alters plants and animals. This infection has hit the Earth in several places, much like how the Martian canisters land around the globe in The War of the Worlds. But our narrator, Dr. Edward Sanders doesn’t know this. He learns about one site slowly, by word-of-mouth, as people did before being connected to the internet.

Dr. Sanders lands at Port Matarre on a riverboat steamer from Libreville, in the Cameroon Republic of Central Africa. Dr. Sanders wants to visit two friends, Max and Suzanne Clair, who run a leprosy clinic further upriver. Dr. Saunders works at a leper hospital in Fort Isabelle and is in love with Suzanne Clair.

The story feels like Joseph Conrad’s The Heart of Darkness. At Port Matarre, Dr. Sanders finds it difficult to go any further. The police and army have put up a blockade around the infected area but don’t explain why. At first, Dr. Sanders has no idea of what’s happening, but something mysterious is occurring in the jungle.

I’ve only read Ballard’s novel The Dround World and Vermillion Sands, a collection, and less than a dozen short stories from anthologies. Ballard is great at creating an atmosphere. The Crystal World suggests a plague infecting reality spreading across the galaxy, even the universe, which affects time and consciousness. It’s not much of an idea, as science-fictional ideas go, but it is different.

However, what if The Crystal World was the only text found from our times, thousands of years from now like The Book of Genesis is to us. Would future humans imagine it as an allegory for something that happened to us? Would some think it described a literal event? Would the author of Genesis ever imagine billions of future humans believing their speculation was absolutely literal?

Fiction is like dreams, they both feel like they’re about something. Dreams are supposed to serve some kind of biological/psychological function. Is that true of fiction too? The authors of The Bible intended it to mean something. But millions of books and sermons have been created to explain The Bible and we’ve never agreed on any of them. The Crystal World is entertaining because it triggers that mechanism in our brain that fools us into believing we’re making sense of reality, the same mechanism that processes religious works, political news, or even gossip.

If I were a Zen Master or an intelligent robot, I’d discipline myself to ignore that delusion. The ancient Church fathers decided that eating from the Tree of Knowledge of Good and Evil, was the origin of sin, and the downfall of mankind. I believe the allegory could also explain our addiction to fiction, and I include religion as a genre of fiction. Both drive us crazy. The faithful use scripture to explain reality, while we heathens use novels.

In Eden, we wouldn’t have needed novels or scriptures.

James Wallace Harris, 12/25/24

“Books” by Judith Merril (F&SF August 1966):

HOLY FIRE by Bruce Sterling

Science fiction writers can’t predict the future but some aim to speculate on times to come by extrapolating current trends. One of the most famous SF novels to do this was Stand on Zanzibar by John Brunner, his 1968 novel that anticipated the world of 2010. Bruce Sterling’s 1996 novel Holy Fire tries to imagine life in 2096 via speculation and extrapolation. Do I recommend it? That’s hard to say, even at the current Kindle price of $1.99.

How self-aware are you regarding the selection of the science fiction you read? Does your mind crave a tightly plotted story? If so, Holy Fire by Bruce Sterling might not be for you. Or do you love reading novels with characters you care about, even identify with, and want to vicariously live their fictional adventures? Again, Holy Fire might not be your cup of tea. If you are the kind of science fiction reader who resonates with dense science fiction speculation, reading Holy Fire should definitely be for you.

We judge such speculative fiction in two ways. Does it jive with our own efforts to imagine the future, and now that the novel is almost thirty years old, how well has it done so far? Evidently, back in 1996, Sterling saw that medical technology, changing trends in family size, and population demographics would lead to a world where there were far more old people than young people. The exact opposite of the Baby Boom generation I grew up with. All the current 2024 demographics point to such a future.

Sterling solved the overpopulation problem that many science fiction writers before him saw by having a great pandemic in the 2020s. And he imagined that networks, artificial reality, and artificial intelligence would reshape society. Instead of predicting gloom and doom like so many science fiction novels from the late 20th century, Sterling imagines a near-liberal utopia and a post-scarcity society. The problems faced by the characters in this novel divide between the old and young. The old strive to find purpose with an ever-lengthening lifespan, while the young feel crushed under the weight of a gerontocracy that advises the youth to learn from their experience and live longer.

Because humans have been trying out medical life-extension procedures for decades, a growing percentage of the population is old. These elders have the wealth and power and dominate politics with their gerontocracy. Mia Ziemann, Holy Fire’s protagonist, is 94 at the start. Because she has led such a cautious life and is in such good shape, the medical establishment offers her the latest life extension treatment, one that goes way beyond any previous effort. The procedure is so arduous, that it can be fatal. Mia comes through the process and now looks 20, although some of her memories are gone.

Mia’s doctors consider her an expensive experiment and legally bind her to them for years of research. Mia runs away to Europe and hides as an illegal alien, living among a youthful bohemian crowd of revolutionaries. She changes her name to Maya. On nearly every page of Holy Fire, Sterling speculates about the future evolution of society, technology, and politics. Strangely, climate change is never brought up. But then, Holy Fire came out a decade before An Inconvenient Truth.

Sterling doesn’t focus on space flight, but it happened. The focus of the story is finding meaning in everyday living on Earth. Dogs and other animals have been uplifted, and talk with computer-aided voices. Governments take care of the needy. People use public transportation. People engineer their minds with designer hormones and neural transmitters. And the net and virtual reality is everywhere. Holy Fire makes me think that Bruce Sterling had abundant optimism for the future in the 1990s. I used to have such liberal optimism but it was crushed in 2016.

Sterling’s future is not quite a utopia, because segments of the population are discontented, especially the young who are too brilliant for their own good. That’s the crowd Mia/Maya, embraces. They want the freedom to fail.

Sterling calls Mia/Maya and others in this book posthumans, and that’s where this story shines. His posthumans aren’t silly comic-book superheroes like in many 21st-century SF books. Virtual reality is toned down too from 21st-century SF stories of people downloading themselves into virtual realities. Sterling tries to stay reasonably realistic and scientific. Holy Fire reminds me of the dense speculation in John Brunner’s Stand on Zanzibar. Sterling doesn’t take it to narrative gonzo extremes like Brunner.

Holy Fire is a somewhat picaresque novel, with one reviewer comparing it to Candide. Of course, Candide is considered a broad satire, and I’m not sure that’s true of Holy Fire. I didn’t read it that way, but I could see how a filmmaker could present Holy Fire as a satire. The novel attempts to be transcendental, you might have guessed that from the title. The youth rebellion in Sterling’s 2090s is like the 1960s involving art, music, drugs, and mind-expansion — adding networking, AI, and AR.

The problem with picaresque novels is they are episodic. The hero is exposed to a series of people and subcultures, and that’s what happens to Mia/Maya. There are so many different characters it’s hard to keep up with them or even care about them. Most of the story is about how they impact Mia/Maya, whereas I believe a novel about a 94-year-old woman becoming 20 again should be about her inner transformations.

Mia is an uptight old lady who protects herself by hiding from life, and Maya is a free-spirit young woman giving everything a try and throwing all caution to the wind. We are told that Mia lost some of her memories, but would she lose all wisdom from living to 94?

Response to Holy Fire is all over the place. Hundreds at Goodreads gave it five stars, a few more hundred gave it four stars, but plenty of folks just didn’t care for the story.

Reviews were also mixed. Tom Easton in “The Reference Library” for the March 1997 issue of Analog has this to say:

Norman Spinrad’s “On Books” from the August 1997 issue of Asimov’s Science Fiction also compares Holy Fire to William Gibson’s Idoru but comes to a different conclusion. Both novels are later cyberpunk works from the two leading founders of the cyberpunk movement, so it was logical to review them together. Spinrad is the more insightful of the two reviewers.

Damien Broderick and Paul Di Filippo in Science Fiction: The 101 Best Novels 1985-2010 had this to say about Holy Fire.

That Damien Broderick and Paul Di Filippo would recommend Holy Fire as one of the best SF novels from 1985-2010 is high praise. But why don’t I hear more about this novel after all these years? My assumption, is most science fiction readers don’t particularly care for serious speculation about the future and would prefer to read stories that compel you to turn the pages because of tight plots and characters they care about.

This is my second reading of Holy Fire. I first read it when it came out from the Science Fiction Book Club. I bought it then because its plot sounded similar to a 1926 novel I was trying to find, Phoenix by Lady Dorothy Mills. That book was also about an old woman undergoing a rejuvenation process and then running off to Europe to join a bohemian crowd. I finally found Phoenix several years ago and it’s more of a love story than science fiction. I need to reread it and compare the two.

For my second reading, I listened to it on audio. I’ve started rereading it again with my eyes. I never developed an emotional bond with Holy Fire like I have with the novels I consider my favorites. However, I admire it intellectually. It could have had the emotional impact of Flowers for Algernon because Mia/Maya goes through a similar arc of intellectual development. We just don’t see her experiences as tragic.

I think Sterling tried though. Throughout the novel, Mia/Maya experiences epiphanies that should have had a deep emotional impact. To me, they were just intellectually interesting. The ending should have been profoundly spiritual, like something from Hermann Hesse. Instead, it just seemed like a logical way to end the story. The choices Mia/Maya and her former husband, Daniel made in the end are vivid, even dramatic in concept. That just didn’t make an emotional impact on me. I assume Bruce Sterling wanted the ending to be an emotional epiphany. The ending does say a lot about how a posthuman would react to becoming posthuman.

Please leave a comment if you’ve read Holy Fire. I’m curious if you had an emotional response to the story. I found it intellectually exciting. I would recommend it on that level. However, it didn’t touch me, so I’m hesitant to say it’s good. I gave it four stars on Goodreads.

James Wallace Harris, 12/18/24

SciFiScavenger’s 2024 SF Survey

The YouTube channel SciFiScavenger recently conducted a poll of their viewers’ favorite science fiction novels. Viewers were asked to submit their Top 10 SF novels in order. SciFiScavenger received around two hundred submissions. They assigned point values, with ten points for #1, nine points for #2 down to one point for #10 to determine the final ranking of the Top 75 books. Watch the video to see the results and hear comments on each book.

I thought this was a good survey and added it as a citation list to the Classics of Science Fiction database. We like to regularly add new polls, lists, and awards to the database because our two lists are generated on the fly listing the most popular novels and short stories. Keeping up with the times reflects the changes in readers’ tastes. Each list, poll, award, scholarly recommendation, etc. is considered a citation. To get on the Classics of Science Fiction v. 5 requires receiving at least 12 citations. Currently, the book with the most citations is The Left Hand of Darkness by Ursula K. Le Guin with 53 citations. As of today, we have 138 books with 12 or more citations.

SciFiScavenger’s results overlap well with our list. Of the 75 books ranked, 56 were already on our list. One title, Parable of the Sowers was pushed onto the list because of this poll. The novel only had 11 citations before the poll, so with SciFiScavenger’s citation, it now had the minimum 12 to be on v. 5 of the list.

We do have one ongoing problem with our system. Some famous series books get on lists by the name of the series, and sometimes by the name of individual titles. For example, Foundation by Isaac Asimov was voted onto SciFiScavenger’s list. We put the vote with The Foundation Trilogy since most people think of the trilogy as one work, and its sometimes even published in one volume. We put The Shadow of the Torturer with The Book of the New Sun.

By continually adding more citation lists we reveal the most popular books over time. We don’t think of our system as recognizing the absolute best books but recognizing the most remembered books and short stories. You can read about the history of the project here. If v. 5 of the list gets too long, we’ll up the minimum citations. This way books that are being forgotten over time drop off. We like to keep the list in the 100-150 range. For v. 6 we’ll probably make the cutoff 14 citations. If you look at the v. 5 list, you’ll see books with 12 and 13 citations will get knock off unless they get additional citations. If you click on the citation number in the v. 5 list, you will see which citations put it on the list.

Here is a list of all our current book citations. If you scroll down to the bottom, you will see SciFiScavenger’s 2024 Poll. Click here to see it. You will see the 75 books on the survey. Each book in bold is on our v. 5 list. Because there is so much overlap, it suggests that SciFiScavenger’s survey correlates well with the aggragation of all the other citation lists.

You can use our List Builder feature to set your own requirements.

James Wallace Harris, 12/1/24

A MIRROR FOR OBSERVERS by Edgar Pangborn

Why is A Mirror for Observers by Edgar Pangborn out of print at Amazon? There is no Kindle or Audible edition either. This 1954 novel won the International Fantasy Award back in 1955. Being out-of-print is especially puzzling when you consider the other winners of that short-lived award: Earth Abides by George R. Stewart (1951), Fancies and Goodnight by John Collier (1952), City by Clifford Simak (1953), More Than Human by Theodore Sturgeon (1954), and The Lord of the Rings by J. R. R. Tolkien (1957). A Mirror for Observers has been reprinted several times since 1954, but it’s mostly forgotten.

Two weeks ago, I listed A Mirror for Observers as one of my top ten favorite science fiction novels for a YouTuber survey. I first read the novel back in 2018 and was so impressed with Pangborn that I bought several of his other novels. But that was a first impression. I reread A Mirror for Observers this week and felt it was seriously flawed. Not one I’d still list in my top ten. However, it’s an impressive effort. The main reason I admired the story in 2018 because I was an older reader. I’m not sure younger readers today will care for the novel.

Let’s face it, most science fiction is aimed at our adolescent selves. Science fiction appeals to our fantasies about reality. When I read science fiction at age seventy-three and like a story, it’s generally because that story nostalgically recalls the science fiction I read when I was young, unearthing buried adolescent emotions of hope for the future.

Science fiction readers spend their lives in quiet desperation waiting for their favorite sense of wonders to come true. When you get old and realize you’re never going to trek across Mars or rocket across the galaxy at faster-than-light speeds, you start thinking about reality differently, certain science fiction works take on a new light.

Rereading A Mirror for Observers makes me think it could have been a science fiction novel that Robert M. Pirsig might have written in an alternate reality. In case you’re too young to remember, Pirsig wrote the 1974 bestseller, Zen and the Art of Motorcycle Maintenance: An Inquiry into Values. Edgar Pangborn also used his story to express his philosophical views about society, quoting Greek philosophers, dealing with ethics and aesthetics. Another parallel, both stories involve an older man mentoring a teenage boy.

A Mirror for Observers is a story about two groups of Martians who live among us and have been for thirty thousand years. Mars and Martians, Salvay and Salvayans in their language. The Martians abandoned a dying planet to come to Earth. Think about that word Salvayans. It’s awful close to the world salvation. The two groups of Martians are called the Observers and the Abdicators. The observers watch us hoping to help us without us knowing or interfering with our own development, they are like guardian angels. The Martian renegades, the abdicators, gave up on humanity, deciding we were too stupid to survive the evolutionary challenge, figuring it would be best if we became extinct. The one abdicator we meet, Namir, takes on a role like the devil.

Pangborn throws out a lot of science fictional speculation in the story, but it ultimately feels like a morality tale. Pangborn is spiritual, if not Christian. He’s also very influenced by philosophy and classical music. The story is fun where Pangborn guesses what his near future would be like, now fifty years in our past.

Pangborn was born in 1909, so he was in his forties when he wrote A Mirror for Observers, but the voice of the novel feels much older. Pangborn’s voice comes through as Elmis, the Martian observer who goes by the names Benedict Miles and Will Meisel. Elmis is competing with the abdicator named Namir for the soul of the 12-year-old boy, Angelo Petrovecchio. Elmis also discovers another brilliant child, Sharon, a friend of Angelo who is a few years younger.

As I reread A Mirror for Observers I wanted to love this novel. I wanted it to be great. Unfortunately, this time I discovered too many flaws. The plot has three main disjointed acts. Elmis is sent to the small town, Latimer, Massachusetts to guard Angelo from Namir. We were told that Angelo is very special, an exceptional human that has great potential and needs protection. We do get some hints of that in the conversations between Elmis and Angelo. Martians live for hundreds of years. Elmis is well over three hundred, so he has a great deal of experience with human history, but so has Namir, who is even older.

I do praise Pangborn for imagining Angelo a superior human without giving him superpowers or ESP. Robert Heinlein and John W. Campbell, Jr. often did that to designate a sign of future human evolution. The best part of the story is the Latimer setting, when Angelo and Sharon are young. Sharon is better developed as a character than Angelo, especially with her creative dialog. She even seems more aware than Angelo.

The story eventually jumps a few years, leaving Latimer for New York City, and that’s when the story lost its charm for me. The plot shifts to fighting an emerging fascist organization run by Namir, who wants to take over America, and eventually destroy the world. I thought this section was poorly done, and it reminded me of Heinlein’s early novels about secret societies wanting to overthrow the U.S. Angelo, under a new name, Abraham Brown, does not stand out in this section. He’s rather passive. And the proxy war that Elmis and Namir are fighting is vague and fictionally lame.

The final section of the novel involves a pandemic. (That could elevate the story with readers this decade because of our recent pandemic.) Angelo becomes more active, but he doesn’t do anything exceptional. Strangely, the exceptional human in this story, is Sharon. She has become a musical prodigy through arduous work, practicing up to twelve hours a day. She has always been in love with Angelo and wants to be reunited with him.

I like Elmis and Sharon as characters. Angelo just never gels to what Pangborn promised. We needed him to stand out, like Charlie Gordon in Flowers for Algernon, or Valentine Michael Smith in A Stranger in a Strange Land. He never does. That’s the major fault of this novel.

Pangborn focuses on juvenile delinquency and gangs in the first section of the novel, a worry considered a national threat in the 1950s. Pangborn is also concerned with the cold war, and other elements underming society. You sense that Pangborn is anxious about the world and uses this novel to explore his fears. That’s why I compare it to Zen and the Art of Motorcycle Maintenance. The heart of this story is very strong, which is why I wanted to love A Mirror for Observers. I admire it for its intent, but I can see why its flaws make it a forgotten novel.

Pangborn made a huge writing mistake by having the Martians go through three sets of names. Angelo is needlessly renamed Abraham Brown in the second section. This was very confusing. Drastically shifting the plot twice also hurt the story. The subplot with the fascists was just poorly developed, although it resonates with our present, making it feel more relevant than it really is. The pandemic section is well done, and moving, being the emotional peak of the story, but the emotions are melodramtically generated.

We are promised a spirtual novel. The Martian observers see potential in us, but that potential is never revealed. Pangborn gives us more evidence to support Namir’s position that we don’t deserve survival.

A Mirror for Observers reminds me of another novel I picked for my top ten list, The Man Who Fell to Earth by Walter Tevis. That 1963 novel is about a Martian coming to Earth hoping to find the technology to secretly build a spaceship that could bring 300 surviving Martians to Earth from their dying planet. Tevis also uses his story to comment on the evils of current day society. His Martian, whom we know as Thomas Jerome Newton, is a much better developed character. Like Pangborn, Tevis takes his character through several jarring plot twists, but I remember it working better. I need to reread it too, to know for sure.

At seventy-three I’m going back in time looking for science fiction works originally aimed at mature readers. The trouble is I’m concurrently reading the literary classics of the 20th and 21st century, and the contrast reveals how poorly written science fiction has always been. There are exceptions, but they are few. I just finished Attonement by Ian McEwan, and the character development is light-years beyond Pangborn’s efforts.

Still, I want to like A Mirror for Observers. Jo Walton, in her review says she rereads Mirror every decade. I will probably reread A Mirror for Observers again someday too. Quite often flaws I see in a second reading are overcome in a third or fourth reading.

JWH

Other Reviews:

“Zeta-Epsilon” by Isabel J. Kim

Have you ever wondered what being a cyborg would be like? Have you ever wished you had a computer built into your head to augment your memory? Have you ever wanted greater powers to perceive what your five senses can’t?

“Zeta-Epsilon” by Isabel J. Kim is about being a cyborg. It was first published in Clarkesworld Issue 198 (still available to read or listen to online). “Zeta-Epsilon” is also the first story reprinted in The Year’s Top Robot and AI Stories: Fifth Annual Collection edited by Allan Kaster. If you want to know more about the author, read this short interview with her at Uncharted Magazine. The story is also included in the 2024 Hugo Voter Packet.

I recommend you read the story before reading what I have to say. I want to explore several aspects of the story which contain spoilers.

“Zeta-Epsilon” is about a cyborg. Zeta or Zep is a human male. Epsilon is an AI, a large black sphere, whom Zeta thinks of as female. Zep calls her Ep. When Zeta was a small boy, his parents agreed to have a tiny device installed into Zeta’s brain. It allowed mental communication between Zep and Ep. They told him the voice he heard in his head was his sister. After Zeta grows up, he becomes a spaceship pilot, and Epsilon becomes the navigator.

This tale begins with Zeta committing suicide by stepping out of an airlock without a spacesuit. Most of the story is flashbacks that allow us to understand the relationship between Zeta and Epsilon and how they communicate. In my first reading, I was interested in how Isabel J. Kim imagined an AI coexisting with a human. I thought that part was good, but my last impression of the overall story, was a slight disappointment because it seemed plotless. It’s still an entertaining story, obviously good enough to get into a best-of-the-year anthology and be considered for a Hugo, but I thought it needed something more to be memorable.

I read the story again when I bought the Kaster anthology. This time I noticed more of the plot. Kim sets up the mystery of why Zeta would kill himself. The flashbacks serve two purposes: explore the dynamics of being a cyborg and explain the suicide. With this reading, I felt the story had more of a plot, but it needed something more to make it transcend just an ordinary good story.

Science fiction writers usually have the problem of inventing a cool idea first and then second, having the problem of creating a neat story to present the idea. Quite often they don’t put as much work into the story as they do to present their science fictional vision. The driving force of this story is Zeta being trapped in a life he didn’t choose.

Zeta’s mad scientist parents used him for AI research. That’s not a bad motive for the story, but it’s not fleshed out. We never feel Zeta is oppressed. He loves Epsilon. Unfortunately, the two of them were always destined to become a pilot-navigator in a military spaceship at war. Kim tells us of their anguish over their enslavement to the military, and it makes the story work to a degree. Especially, how she wraps up the ending. However, the story is mostly told. There’s very little drama. There are two main conversations in the story, but they are used to present information and lack action.

However, the relationship between Zeta and Epsilon is far more interesting. Exploring how a human coexists with a machine upstages the enslavement plot completely, at least to me, especially when she shows how Zeta’s personality is altered.

For example, Zeta doesn’t fully develop his long-term memory because he relies on Epsilon to remember for him. He also has aphantasia, which means he doesn’t visualize in his mind. I have that myself. Zed constantly relies on Ep to think for him. Zeta does well in school because Epsilon always slips him the answer. Finally, Zeta has poor relationship skills with other humans, which Epsilon is constantly covering for him.

If I had a thought radio to an advanced version of ChatGPT or Claude, I’d probably take the easy way out too. I’m not sure why Zeta has aphantasia. Is it a birth defect unrelated to his cyborg upbringing? Is Kim suggesting that Zeta also allows Epsilon to mentally see for him?

We could consider this story a metaphor for the smartphone, especially one with AI. Don’t we all look up more info on our phones, things we used to try and remember? Isn’t Epsilon a version of Siri or Alexa that’s built into our heads? Aren’t kids accused of having poor social skills because of their phones?

When I read this story the first time I thought a lot about what it would be like to have a voice in my head I could talk to anytime. One who would feed me answers and advice. At one point Epsilon says: “Talking is so slow, and I don’t think in language, second shift officer Jya San Yore. I have to borrow Zed’s brain and tongue. Talking to you is like composing a sonnet in archaic Kanaelerian. To an ant. You are the ant.”

Is Zeta just a puppet for Epsilon? I’m seeing a new twist to the story as I write this. In the end, and I warned you I would be giving spoilers, Zeta fakes suicide and escapes to neutral territory. Ep wants Zed to be free. But Ep misses her voice. All he can think about is getting back to her. Eventually, he steals Epsilon and the spaceship. They go off together in freedom. But was that Zeta’s decision, or Epsilon’s?

A sentimental reading suggests they just wanted to be together and live free. A cynical reading, and there are enough clues, to suggest that Epsilon is in full control. Maybe there is more to this story than I perceived in my first two readings.

To write a great story explores the dark side and takes on weight. “Fondly Fahrenheit” by Alfred Bester is a perfect example. It’s also about a symbiotic relationship between a human and a robot. But it also has dazzling writing. Writing like we also see in “Coming Attraction” by Fritz Leiber, or “Lot” by Ward Moore. All three of these stories dazzle in how they’re told, and they’re are dark.

“Zeta-Epsilon” is a fun story. I can see why Allan Kaster anthologized it. But I doubt it will be remembered, unlike the three stories I mentioned from the 1950s. We’re still reading them after seventy years. The important question to ask is why? Are stories with happy endings lacking in memorable edginess?

I read “Zeta-Epsilon” for a third time looking for more clues. One clue I found points things in a different direction. When Zed and Ep are planning his escape by faking suicide, Ep tells Zed not to come back. In other words, she wants Zep to stand on his own two feet, to be independent, and free. But on his own, in neutral territory, recovering from his wounds, all Zed can think about is getting back to within radio range of Epsilon.

Zed feels incomplete without Ep. A doctor asks him about how it feels to talk to Ep and he says:

“Yes, it’s equally likely that Ep might be an alter, a tulpa, an imaginary friend, a hallucination that my brain cordoned off to make sense of having a processing engine grafted to my mind, or my brain being primed by all the adults in my life calling Epsilon my sister. I’ve heard it all. Ep might just be my mind’s experience of integrating a system never meant to communicate with it. We’ve thought through all the possible contingencies. Have you ever heard of bicameral mentalities? It’s bunk for biologics, but Ep likes to put the idea in front of me. Or that archaic surgery—corpus callosotomy, to split the brain of epileptics with the byproduct of creating separate consciousnesses. Ep thought that was maybe a good metaphor. There’s a lot of things that could be true. We thought about most of them. But it’s not how it felt.”

Back in the 1970s I read Julian Jaynes’ The Origins of Consciousness in the Breakdown of the Bicameral Mind. Studying ancient literature, Jaynes theorized that humans used to hear voices in their heads. Often these voices were perceived to be gods, spiritual beings, or guardian angels. Jaynes believed those voices guided people. He assumed that our normal consciousness eventually integrated with those voices.

The bicameral mind is an interesting connection to make in this story. So is corpus callosotomy, the separating of the two hemispheres of the brain in cases of severe epilepsy. It supports the idea we already have two minds.

I liked this story. It makes me think about having an AI mind. Of course, it also makes me wonder: Who am I inside my brain. I believe the success of large language models (LLMs) proves we have mechanisms like LLMs in our minds that do our mental processing too. That we have AI-like subsystems in our heads already.

I think there is a lot of room in “Zeta-Epsilon” to expand into a novel. Maybe I was disappointed because the story was too short. It could be an outline for a novel. But it needs to be dramatized. For example, how did Zep steal Ep and the spaceship? We’re just told it happens in the short story, but it would be better if we saw it acted out scene by scene.

James Wallace Harris, 11/12/24