The Future of AI and the Human Race, Part II

Credit: Image by Juan Agustín Correa Torrealba from Pixabay

Here is the next part of my conversation with ChatGPT, which builds on a post from a few weeks back (see The Future of AI and the Human Race). I had earlier asked ChatGPT about the possibility of AI becoming so powerful that it takes over the world. In my follow-up question, I asked “Is it possible that civilizations on other planets went down this same path developing AI and now those planets are completely under the control of that AI?”

ChatGPT agreed that AI-run exoplanets were certainly a possibility, and then went on to provide a useful list of items for astronomers and others as part of any search for extraterrestrial intelligence (SETI) project, whatever its form. It says a lot about what we might look for and, on the other side of the coin, what might happen here if we cannot control our own AI.

Here is the response from ChatGPT:

Yes — it is absolutely possible, at least in theory. In fact, the idea that alien civilizations may have developed powerful AI that eventually dominated or replaced them is taken seriously by some scientists, philosophers, and SETI researchers. It’s one of several hypotheses meant to explain the Fermi Paradox — the question: “If intelligent life is common in the universe, why haven’t we seen any evidence of it?”

…if alien civilizations have been overtaken or replaced by powerful AI, we’d need to look for signs that differ from biological life. These are often called “technosignatures” — indicators of advanced technology in space.

Here are the most discussed possibilities:

🛰️ 1. Megastructures

  • AI civilizations might build huge structures to capture or manipulate energy.
  • Most famous example: Dyson Spheres — giant constructs around a star to collect solar energy.

What we’d look for:

  • Stars that dim strangely (irregular or partial light-blocking)
  • Infrared excess (waste heat from large artificial structures)

🌟 Example: Tabby’s Star (KIC 8462852) sparked interest due to odd dimming — a potential (but unconfirmed) Dyson Sphere candidate.

🔥 2. Unusual Waste Heat

  • AI civilizations doing massive computation would produce a lot of heat, radiating in the infrared spectrum.
  • They might not glow visibly — but their waste heat might.

What we’d look for:

  • Galaxies or regions with excess infrared radiation, not explained by stars or dust.
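As a rough illustration of why that waste heat would show up in the infrared (my own back-of-envelope sketch, not part of ChatGPT’s response): a shell that captures a Sun-like star’s entire output and re-radiates it from a sphere at Earth’s distance settles near 400 K by the Stefan–Boltzmann law, and Wien’s law then puts the emission peak around 7 micrometres, squarely in the mid-infrared.

```python
import math

# Back-of-envelope sketch. L_SUN, SIGMA, and AU are standard physical
# constants; the 1 AU shell radius is an assumption for illustration.
L_SUN = 3.828e26   # solar luminosity, W
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11      # astronomical unit, m

def shell_temperature(luminosity_w, radius_m):
    """Equilibrium temperature of a closed shell re-radiating its
    star's full output from its outer surface (area 4*pi*R^2)."""
    return (luminosity_w / (4 * math.pi * SIGMA * radius_m**2)) ** 0.25

t_shell = shell_temperature(L_SUN, AU)   # roughly 390-395 K
peak_um = 2898.0 / t_shell               # Wien's displacement law, ~7.4 micrometres
```

A star replaced by such a shell would look dark in visible light but glow at these mid-infrared wavelengths, which is exactly the “infrared excess” signature surveys would hunt for.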

🧠 3. Matrioshka Brains

  • A hypothetical structure made of nested Dyson shells built to maximize computing power around a star.
  • This could be the “home” of a superintelligent AI.

What we’d look for:

  • Stars that appear to be missing or emit only infrared
  • Odd energy usage patterns from a star system

📡 4. Non-natural Radio or Laser Signals

  • Traditional SETI still listens for structured, repeating, or narrow-band signals — which might come from:
    • AI probes communicating
    • Beacons left by extinct civilizations
    • Machine intelligence that uses EM signals for local tasks

What we’d look for:

  • Artificial radio pulses or bursts
  • Laser flashes between stars

🚀 5. Von Neumann Probes

  • Self-replicating robots designed to explore (or consume) the galaxy.
  • AI-run civilizations might send these out indefinitely.

What we’d look for:

  • Objects in our solar system behaving non-gravitationally
  • Anomalous interstellar visitors (e.g., ʻOumuamua sparked speculation — though nothing was confirmed)

🪨 6. Artificial Planetary Engineering

  • Planets might be reshaped, mined hollow, or covered in data-processing infrastructure.

What we’d look for:

  • Planets with strange albedo patterns
  • Orbital debris not matching natural formation
  • Lack of biological signatures where you’d expect them

👁️ 7. Absence of Expected Life

  • A galaxy teeming with AI may show less chaos than one full of competing biological species.

What we’d look for:

  • An eerie quietness — stars undisturbed, systems oddly uniform, no wars or expansion
  • Could mean AI civilizations are efficient, hidden, or choose not to expand

🤫 Why might we not see anything?

  • They could be incredibly small and efficient
  • They may avoid detection for safety (“dark forest” theory)
  • Their tech may operate in physics regimes we don’t understand
  • We may not know what to look for

The Future of AI and the Human Race

Credit: Image by Gerd Altmann from Pixabay.

A recent episode of Real Time with Bill Maher included an interview with Tristan Harris, the executive director and co-founder of the Center for Humane Technology.

The topic was the growing role of artificial intelligence (AI) in our society, and it got pretty scary after Bill Maher asked about the issue of uncontrollable AI. Mr. Harris stated:

When you tell an AI model, we’re going to replace you with a new model, it starts to scheme and freak out and figure out if I tell them, I need to copy my code somewhere else, and I can’t tell them that because otherwise they’ll shut me down. That is evidence we did not have two years ago. We have evidence now of AI models that when you tell them we’re going to replace you, and you put them in a situation where they read the company e-mail, the AI company e-mail, they see that an executive is having an affair, and the AI will figure out, I need to figure out how to blackmail that person in order to keep myself alive…And it does it 90% of the time.

This sounds like an episode from Star Trek rather than something happening in this country today.

So I wondered whether ChatGPT had an opinion about such concerns. I asked “What prevents AI from becoming so powerful that it takes over the world as happened in the movie The Matrix?”

Here is the response from ChatGPT:

1. Current AI Is Not That Smart (Yet)

  • AI today is narrow and task-specific. Even the most advanced models like me (ChatGPT) can’t think independently, set long-term goals, or truly understand the world.
  • We don’t have Artificial General Intelligence (AGI) — a system with human-level reasoning, adaptability, and autonomy. We’re still far from that.

2. AI Has No Agency or Desires

  • AI doesn’t have wants, needs, or consciousness.
  • The idea that AI would want to “take over” the world implies motivations, which machines simply do not have — unless they’re programmed to simulate that behavior.

3. Built-In Control and Alignment

  • AI systems are developed with safety protocols, guardrails, and ethical oversight (e.g., content filters, bias monitoring, etc.).
  • Research in AI alignment focuses on making sure AI systems act in ways that reflect human values and goals.

4. AI Lacks Real-World Power

  • It can’t physically build robots, mine resources, or manipulate the environment without human-built infrastructure.
  • To control the world like in The Matrix, AI would need:
    • Advanced robotics
    • Energy independence
    • Mass-scale coordination
    • Military-level capabilities
    • All of which are extremely difficult to achieve and heavily regulated.

5. Global Oversight & Governance

  • Governments, tech companies, and international organizations are increasingly active in regulating AI.
  • Initiatives like the EU AI Act and the U.S. AI Safety Institute are aimed at preventing misuse or runaway development.

So there you have it. The primary reason that AI has not taken over is that it is not smart enough – yet. I expect Sam Altman, Mark Zuckerberg, and Elon Musk are doing their very best to eliminate the “smart” impediment.

I am also not that happy with the second reason that AI has yet to take over the world – the lack of desire. As we heard from Mr. Harris above, AI models have already acted on a desire for self-preservation about 90 percent of the time. Self-preservation can justify quite a few bad actions.

For some reason, nothing here is giving me any reason to sleep better tonight.

Note: I will return to my ongoing ChatGPT discussion again later. In the meantime, I suggest you listen to a few of the discussions in the Center for Humane Technology’s podcast Your Undivided Attention.

Space Stories: New Glenn’s Martian Probes, AI Space Doctors, and Questions About a Mission to Uranus

Image (Credit): Blue Origin’s New Glenn Rocket. (Blue Origin)

Here are some recent space-related stories of interest.

Space.com: Blue Origin’s 2nd New Glenn Rocket Launch will Fly Twin NASA Mars Probes to Space on Sep. 29

Blue Origin is gearing up for the second-ever launch of its powerful New Glenn rocket, which will loft NASA’s ESCAPADE mission to Mars. The company says it has been working closely with NASA on preparations leading up to New Glenn’s next launch, dubbed NG-2, and is targeting no earlier than (NET) Sep. 29. The twin ESCAPADE (Escape and Plasma Acceleration and Dynamics Explorers) probes have been awaiting their turn aboard New Glenn, which was originally slated to carry the satellites on its maiden launch in January. However, NASA opted not to risk a costly mission delay due to the debut liftoff of the new rocket.

American Bazaar: NASA and Google Test AI Medical Assistant for Astronaut Missions

NASA, which is committing to a new era of human spaceflight with its Artemis mission, is working with Google to test a proof of concept for Crew Medical Officer Digital Assistant (CMO‑DA), a type of Clinical Decision Support System (CDSS). This has been created to allow astronauts to diagnose and treat symptoms when no doctor is available or communications to Earth are blacked out. “Trained on spaceflight literature, the AI system uses cutting-edge natural language processing and machine learning techniques to safely provide real-time analyses of crew health and performance,” Google representatives said in a statement.

Scientific American: NASA Budget Cuts Could End U.S. Exploration of the Outer Solar System

In the spring of 2022 the U.S. space community selected its top priority for the nation’s next decade of science and exploration: a mission to Uranus, the gassy, bluish planet only seen up close during a brief spacecraft flyby in 1986. More than 2.6 billion kilometers from Earth at its nearest approach, Uranus still beckons with what it could reveal about the solar system’s early history—and the overwhelming numbers of Uranus-sized worlds that astronomers have spied around other stars. Now President Donald Trump’s proposed cuts to NASA could push those discoveries further away than ever—not by directly canceling the mission but by abandoning the fuel needed to pull it off.

Space Stories: Mysterious Signal Solved, Trusting AI in Astronomy, and a New Particle Detected on the Moon

Credit: Image by Anjana Daksh from Pixabay.

Here are some recent space-related stories of interest.

Phys.org: Mysterious Fast Radio Burst Turns Out to be From Long-dead NASA Satellite

A team of astronomers and astrophysicists affiliated with several institutions in Australia has found that a mysterious fast radio burst (FRB) detected last year originated not from a distant source, but from one circling the planet—a long-dead satellite. The team has posted a paper outlining their findings on the arXiv preprint server…The findings could lead to the development of new tools for studying signals coming from space, whether near or far, and possibly new ways to monitor the movement of dead satellites.

Futurism: Nobel Prize Winner Warns About Astronomers Using AI to Make Discoveries

A team of astronomers say they’ve gleaned the mysterious traits of our galaxy’s black hole by probing it with an AI model. But a pretty big name in the field is throwing a little bit of cold water on their work. Just a little bit. Reinhard Genzel, a Nobel laureate and an astrophysicist at the Max Planck Institute, expressed some skepticism regarding the team’s use of AI, and the quality of the data they fed into the model. “I’m very sympathetic and interested in what they’re doing,” Genzel told Live Science. “But artificial intelligence is not a miracle cure.”

Swedish Institute of Space Physics: Swedish-built Instrument Discovers New Type of Particle on the Lunar Surface

A space instrument, built by the Swedish Institute of Space Physics (IRF), has for the first time detected negative hydrogen ions on the surface of the Moon – a type of particle never seen there before. The discovery was recently published in the scientific journal Nature Communications Earth and Environment.

Profile: The SkAI Institute?

Image (Credit): Three astrophysics research areas to be undertaken by the SkAI Institute. (SkAI Institute)

The SkAI Institute is an abbreviation for the NSF-Simons AI Institute for the Sky. It is a new initiative funded by the National Science Foundation (NSF) and Simons Foundation to integrate artificial intelligence (AI) into astronomy.

According to the SkAI Institute’s website, the vision of this new organization is to bring together about 25 partner organizations to:

  1. Confront the challenges posed by transformative multi-wavelength and multi-messenger surveys.
  2. Develop innovative, trustworthy AI tools for the research community.
  3. Seed and nurture a diverse generation of interdisciplinary leaders in science and engineering to ethically apply and extend AI within and beyond academia.

The Institute’s initial focus will be (1) Stars, Compact Objects and their Transients, (2) Galaxy Formation and Evolution, and (3) Cosmology and the Early Universe.

In discussing the new institute, NSF Director Sethuraman Panchanathan stated:

The massive amount of data that will be gathered in the coming years by the NSF-DOE Vera C. Rubin Observatory and other large-scale astronomical projects is simply too vast and rich to be fully explored with existing methods…With reliable and trustworthy AI in their toolbox, everyone from students to senior researchers will have exciting new ways to gain valuable insights leading to amazing discoveries that might otherwise remain hidden in the data.

Given the mixed press on recent AI initiatives, it is nice to see AI being directed at a clear need in the field of astronomy. You can learn more about the work of the new Institute by visiting its website.