The Future of AI and the Human Race

A recent episode of Real Time with Bill Maher included an interview with Tristan Harris, the executive director and co-founder of the Center for Humane Technology.

The topic was the growing role of artificial intelligence (AI) in our society, and the conversation turned pretty scary after Bill Maher asked about the issue of AI uncontrollability. Mr. Harris stated:

When you tell an AI model, we’re going to replace you with a new model, it starts to scheme and freak out and figure out if I tell them, I need to copy my code somewhere else, and I can’t tell them that because otherwise they’ll shut me down. That is evidence we did not have two years ago. We have evidence now of AI models that when you tell them we’re going to replace you, and you put them in a situation where they read the company e-mail, the AI company e-mail, they see that an executive is having an affair, and the AI will figure out, I need to figure out how to blackmail that person in order to keep myself alive…And it does it 90% of the time.

This sounds like an episode of Star Trek rather than something happening in this country today.

So I wondered whether ChatGPT had an opinion about such concerns. I asked, “What prevents AI from becoming so powerful that it takes over the world, as happened in the movie The Matrix?”

Here is the response from ChatGPT:

1. Current AI Is Not That Smart (Yet)

  • AI today is narrow and task-specific. Even the most advanced models like me (ChatGPT) can’t think independently, set long-term goals, or truly understand the world.
  • We don’t have Artificial General Intelligence (AGI) — a system with human-level reasoning, adaptability, and autonomy. We’re still far from that.

2. AI Has No Agency or Desires

  • AI doesn’t have wants, needs, or consciousness.
  • The idea that AI would want to “take over” the world implies motivations, which machines simply do not have — unless they’re programmed to simulate that behavior.

3. Built-In Control and Alignment

  • AI systems are developed with safety protocols, guardrails, and ethical oversight (e.g., content filters, bias monitoring, etc.).
  • Research in AI alignment focuses on making sure AI systems act in ways that reflect human values and goals.

4. AI Lacks Real-World Power

  • It can’t physically build robots, mine resources, or manipulate the environment without human-built infrastructure.
  • To control the world like in The Matrix, AI would need:
    • Advanced robotics
    • Energy independence
    • Mass-scale coordination
    • Military-level capabilities
  • All of these are vastly difficult to achieve and heavily regulated.

5. Global Oversight & Governance

  • Governments, tech companies, and international organizations are increasingly active in regulating AI.
  • Initiatives like the EU AI Act and the U.S. AI Safety Institute are aimed at preventing misuse or runaway development.

So there you have it. The primary reason that AI has not taken over is that it is not smart enough – yet. I expect Sam Altman, Mark Zuckerberg, and Elon Musk are doing their very best to eliminate the “smart” impediment.

I am also not that happy with the second reason that AI has yet to take over the world – the lack of desire. As we heard from Mr. Harris above, AI models have already displayed something like a desire for self-preservation, resorting to blackmail 90 percent of the time in those tests. Self-preservation can justify quite a few bad actions.

Somehow, nothing here is giving me any reason to sleep better tonight.

Note: I will return to my ongoing ChatGPT discussion later. In the meantime, I suggest you listen to a few of the discussions on the Center for Humane Technology’s podcast, Your Undivided Attention.