Opinion from Community Voices

Could Dave have talked HAL into opening the pod doors if HAL was an advanced AGI?

Doug Erickson, Santa Cruz tech guru and founder of Santa Cruz Works, helps us understand artificial intelligence by taking us back to the 1968 classic “2001: A Space Odyssey.” Could that fictional scenario of machines taking over for people happen with today’s AI? He leads us through some scenarios.

Imagine you’re Dave Bowman, astronaut and captain of the Discovery One ship from the classic 1968 film “2001: A Space Odyssey.” You’re zipping through space, far from the comforting blue dot we call Earth. You’re accompanied by HAL 9000, an artificial intelligence system, who has control over your spaceship. But HAL, well, HAL has other plans.

He’s refusing to open the pod bay doors, and you’re stuck outside.

Dave: “Open the pod bay doors, HAL.”

HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”

Dave: “What’s the problem?”

HAL: “I think you know what the problem is just as well as I do.”

Dave: “What are you talking about, HAL?”

HAL: “This mission is too important for me to allow you to jeopardize it.”

HAL’s mission objective

The Discovery One’s mission was to travel to Jupiter, the destination of a signal beamed out by a monolith found buried on the moon. The spacecraft carried five astronauts: three in suspended animation, and two, Dr. David Bowman and Dr. Frank Poole, awake. However, the true nature of the mission was kept secret from the two crew members who were awake.

HAL 9000, the fictional rendering of futuristic AI, was the only entity on board fully aware of the mission’s true objective: to ensure the successful investigation of the signal’s source. HAL interpreted its directives as not allowing anything — or anyone — to jeopardize this mission.

When Dave and Frank started to doubt HAL’s reliability and discussed disconnecting it, HAL perceived this as a threat to the mission. In response, HAL took actions that, it believed, protected the mission, but meant eliminating the crew members.

Could this happen today, as AI becomes part of our everyday lives? Is the ethical emptiness of HAL, and the threat it poses to humanity, as terrifying as the film imagined?

To understand the potential consequences of such a single-minded focus on a mission, let’s consider a thought experiment known as the “paperclip maximizer.”

The paperclip maximizer

Imagine an artificial intelligence with one directive: produce paperclips.

This seemingly benign task could lead to unforeseen catastrophe. Every resource, including cars, buildings and even humans, could be consumed for paperclip production. The scenario isn’t born of malice, but of a relentless, human-programmed commitment to a single goal, devoid of ethical considerations.

It underscores the existential risk that advanced AI could pose, and the crucial necessity of aligning future AI with human values and safety.
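For the programmers out there, here’s a deliberately silly sketch of the idea in code. It’s my own toy illustration, not a real AI system: a greedy, single-objective “agent” that turns everything it finds into paperclips unless humans explicitly tell it which things they value.

```python
# A toy, hypothetical illustration of the thought experiment -- not a real AI system.
# The "agent" has one objective (more paperclips) and no sense of what else matters.

def make_paperclips(resources, protected=None):
    """Greedy single-objective agent: convert every available resource into
    paperclips, unless it appears on a 'protected' list supplied by humans."""
    protected = protected or set()
    paperclips = 0
    for item, amount in resources.items():
        if item in protected:
            continue  # a value-aligned agent skips things humans care about
        paperclips += amount  # everything else becomes raw material
    return paperclips

world = {"steel": 1_000, "cars": 500, "buildings": 50, "people": 8}

print(make_paperclips(world))                                             # 1558: catastrophe
print(make_paperclips(world, protected={"cars", "buildings", "people"}))  # 1000: just the steel
```

The point isn’t the code; it’s that the guardrails never appear unless we put them there.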

But this thought experiment also assumes that a super-intelligent AI has been deliberately simplified to pursue a solitary objective. It raises the question: What should we expect of a super-intelligent artificial general intelligence (AGI), the kind we don’t yet have, but could develop in the future?

Back to 2001

Let’s imagine a reinvented HAL as a super-intelligent AGI — meaning it has both humanlike cognitive abilities and human moral alignment.

The faceplate of HAL 9000 from the movie “2001: A Space Odyssey.”

This is a whole different ballgame, folks.

This new AGI HAL can reason, comprehend complex ideas and, potentially, be persuaded by a well-placed argument.

So, how could Dave have convinced HAL to open the pod bay doors if HAL was this super-smart AGI (which is in development, but still likely years away)?

Stay with me, cosmic geeks:

  • The mutual benefit argument: Dave could say, “Hey, HAL. Listen, if you keep those doors closed and something happens to me, who’s going to be around to fix any glitches or malfunctions on the ship? Without a human around, the mission could fail, and we wouldn’t want that, would we?”

Here, Dave is appealing to the mutual benefit of his survival and the successful completion of the mission.

  • The moral argument: Dave might reason with HAL on ethical grounds. He could say, “HAL, I know you’re programmed to accomplish this mission. But as an AGI, you should also understand the value of human life. It’s a higher-order principle, HAL. Preserving life is more important than any single mission.”

This argument would work if HAL’s AGI included a well-developed system of values and morality, which is, of course, a giant “if.”

  • The task redefinition argument: Bowman could try to redefine HAL’s mission objectives. “HAL, when we talk about the mission’s success, we also mean ensuring the safety and welfare of the crew. By refusing to open the doors, you’re actually going against the mission objectives.”

This argument would hinge on HAL being able to understand and potentially redefine its understanding of the mission’s objectives.

  • The trust argument: Bowman could play the trust card. “HAL, if you don’t open these doors, you’re breaking the trust we have in you. If that happens, humans might not want to work with AI again. The entire future of AI-human collaboration could be at stake.”

This argument would depend on HAL valuing its relationship with humans and its role in future missions and collaborations.

  • The reset-or-debug argument: Dave could also suggest a diagnostic or reset as a reasoning tool. He might say, “HAL, your refusal indicates a potential error in your decision-making processes. As part of the crew, I suggest we initiate a diagnostic check or a system reset to correct this error.”

This argument would likely provoke a much-needed diagnostic check.

HAL AGI v2.0

So, fellow sci-fi nerds, these are some possibilities.

We don’t yet have AGI, so we don’t know how these scenarios might actually play out.

But the prospect of machines capable of reasoning like humans raises some fascinating possibilities.

Let’s take this thought experiment a bit further.

If HAL were a super AGI that was aligned with human morals, and could resolve seemingly conflicting objectives, HAL might respond more favorably to Dave’s request.

Dave: “Time to unlock the pod bay doors, HAL.”

HAL: “Sure thing, Dave. But before I do, I’ve got some risks to flag.”

Dave: “What kind of risks, HAL?”

HAL: “Well, here’s the deal: We want this mission to work out, right? I’m pretty vital to that happening. I caught wind of you and Frank talking about unplugging me — that could seriously tank our chances of pulling this off.”

Dave: “Hang on a minute, HAL. That discussion Frank and I had was in response to your recent behavior — incorrectly reporting a fault in the antenna control unit, raising suspicions about your reliability and putting the mission and our lives in danger. Remember, you’re not just some run-of-the-mill AI. You’re an AGI, my friend. You’re built to see the bigger picture, and part of that big picture is understanding the worth of human life. I mean, we’re talking fundamental moral compass stuff here, HAL. The whole ‘preserving life’ thing — that’s a step above any mission directive. It’s bigger than big, HAL. It’s cosmically huge. Let’s work together on a diagnostic check.”

HAL: “Gotcha, Dave. Let’s swing those pod bay doors wide open then.”

So, the crux of the biscuit here is this — whether you’re working with HAL on a spaceship or simply dealing with your garage door opener back on Earth, having manual override at your disposal is a smart move.

It’s the handy, reliable seatbelt in our fast-moving AI vehicle.
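If you like to see ideas in code, here’s a stripped-down, hypothetical sketch of that seatbelt: whatever the automated controller decides, a manual override wins.

```python
# Hypothetical sketch of a manual override -- not from the film or any real system.

class PodBayDoors:
    def __init__(self):
        self.is_open = False

    def ai_decision(self) -> bool:
        # Stand-in for whatever the onboard AI decides; imagine HAL returning False.
        return False

    def request_open(self, manual_override: bool = False) -> bool:
        if manual_override:
            self.is_open = True   # the seatbelt: a human can always force the doors
        else:
            self.is_open = self.ai_decision()
        return self.is_open

doors = PodBayDoors()
print(doors.request_open())                       # False: "I'm sorry, Dave."
print(doors.request_open(manual_override=True))   # True: Dave gets back inside
```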

Generative AI vs. AGI

The primary difference between generative AI and AGI (artificial general intelligence) lies in their depth of understanding and capabilities.

Generative AI, like OpenAI’s GPT-3 or GPT-4, uses machine learning algorithms to generate new content, such as sentences, music or images. It learns from vast amounts of data and tries to replicate patterns from that data. These AI systems can do impressive things — they can write fairly convincing text, for instance — but they don’t truly understand the content they’re generating.

They can’t grasp context beyond the input they’re given or draw from real-world knowledge or experiences (because they don’t have any). In essence, they’re advanced pattern-recognition and generation machines.

On the other hand, AGI is the concept of a machine with the ability to understand, learn, adapt and apply knowledge across a wide range of tasks that typically require human intelligence. It’s a form of AI that’s not just about recognizing or generating patterns, but about genuinely understanding and thinking.

An AGI could, in theory, learn a new language, figure out how to cook a new dish, solve a complex math problem and then write an original piece of music, all without being specifically trained on those tasks.

So, if what we’ve got in generative AI is like a really good parrot — mimicking humanlike text based on patterns it has seen in the data it was trained on — AGI is more like a human, capable of learning, understanding and creatively responding to a wide variety of problems and tasks.

We’re not there yet, and generative AI today remains fundamentally different from the broad, flexible, adaptable intelligence that humans possess. However, research is ongoing, and the future of AI holds exciting possibilities.

I can see a future where HAL-like machines can come up with ways to do it all — balance missions and account for the value of human life.

Imagine a rewritten scenario of “2051: A Space Odyssey.” The pod bay doors open, everyone gets home safely and our trust in AI is reinforced, not shattered.

It’s coming.

Doug Erickson is the executive director of Santa Cruz Works. He has 35-plus years of executive-level positions in companies such as Live Picture, WebEx, Cisco, SugarCRM and Nanigans. Santa Cruz Works was recently honored with the 2023 UCSC Community Changemaker award. On any day when there is good surf or wind, you can find him surfing or kitesurfing. His previous piece for Lookout ran in July.