Space Stories: Another Artemis II Delay, AI Discovers Cosmic Oddities in Hubble Data, and AI Drives a Martian Rover

Image (Credit): Artemis II mission patch. (NASA)

Here are some recent space-related stories of interest.

CBS News: Artemis II Moon Rocket Fueling Test Runs into Problems with Hydrogen Leak

A hydrogen leak at the base of NASA’s Artemis II moon rocket Monday threw a wrench into a carefully planned countdown “wet dress” rehearsal, but engineers were able to manage a workaround and the test proceeded toward a simulated launch. Whether mission managers will be able to clear the rocket for an actual launch as early as Sunday to propel four astronauts on a flight to the moon will depend on the results of a detailed overnight review and post-test analysis. NASA only has three days — Feb. 8, 10 and 11 — to get the mission off this month or the flight will slip to March.

ZME Science: Astronomers Unleashed an AI on Hubble’s Archive and Uncovered 1,300 “Cosmic Oddities.” Most Were Completely New to Science

For more than three decades, the Hubble Space Telescope has collected targeted images to answer specific scientific questions, from mapping galaxies to studying nearby nebulae. Hubble has gathered so much data that despite their best efforts, astronomers haven’t had the time to analyze it all in detail yet…Now, two astronomers have revisited that massive archive with a new plan. They deployed an artificial intelligence system designed to notice when something looks “wrong”. In just 60 hours of computing time, the tool flagged over 1,300 anomalies hidden within 100 million Hubble snapshots. Hundreds of them have never appeared in scientific literature.

NASA/JPL: NASA’s Perseverance Rover Completes First AI-Planned Drive on Mars

NASA’s Perseverance Mars rover has completed the first drives on another world that were planned by artificial intelligence. Executed on Dec. 8 and 10, and led by the agency’s Jet Propulsion Laboratory in Southern California, the demonstration used generative AI to create waypoints for Perseverance, a complex decision-making task typically performed manually by the mission’s human rover planners…During the demonstration, the team leveraged a type of generative AI called vision-language models to analyze existing data from JPL’s surface mission dataset. The AI used the same imagery and data that human planners rely on to generate waypoints — fixed locations where the rover takes up a new set of instructions — so that Perseverance could safely navigate the challenging Martian terrain.

Podcast: We Need To Talk About AI

Credit: Image by Brian Penny from Pixabay.

The recent Cool Worlds Podcast is basically a rambling talk about the use of AI in the scientific community. Titled “We Need To Talk About AI,” this discussion by Professor David Kipping follows his visit to the Institute for Advanced Study in Princeton, where he heard about how his colleagues are using AI in their work.

Professor Kipping covers many points and makes it clear from the start that he has some serious questions about the impact of AI on his own work and the work of graduate students. For instance, he asks:

  • Will cheap AI change the science community’s need for graduate students in the future, given the time and cost of developing those new scientists compared to the amazing advances in AI?
  • Will the cheap AI program of today become more costly down the road once the AI companies need to recoup the billions of dollars invested in this technology?
  • Will science become too dependent on this technology while human skills atrophy?

He is also very honest about how he uses AI in the production of his own public videos explaining scientific developments and controversies. More interestingly, he wonders aloud whether we will even need his videos in the future as AI gets better and we have the ability to seek our own answers rather than waiting for the next video.

It’s a lot to digest and worth your time, if only because it is an ongoing set of questions in basically every industry at this point.

Listen for yourself and consider giving your own input back to Professor Kipping. He is soliciting your opinion as he finds his way forward in this new world.

The Future of AI and the Human Race, Part II

Credit: Image by Juan Agustín Correa Torrealba from Pixabay

Here is the next part of my conversation with ChatGPT, which builds on a post from a few weeks back (see The Future of AI and the Human Race). I had earlier asked ChatGPT about the possibility of AI becoming so powerful that it takes over the world. In my follow-up question, I asked “Is it possible that civilizations on other planets went down this same path developing AI and now those planets are completely under the control of that AI?”

ChatGPT agreed that AI-run exoplanets were certainly a possibility, and then went on to provide a useful list of items for astronomers and others as part of any search for extraterrestrial intelligence (SETI) project, whatever its form. The response says a lot about what we might look for and, on the other side of the coin, what might happen here if we cannot control our own AI.

Here is the response from ChatGPT:

Yes — it is absolutely possible, at least in theory. In fact, the idea that alien civilizations may have developed powerful AI that eventually dominated or replaced them is taken seriously by some scientists, philosophers, and SETI researchers. It’s one of several hypotheses meant to explain the Fermi Paradox — the question: “If intelligent life is common in the universe, why haven’t we seen any evidence of it?”

…if alien civilizations have been overtaken or replaced by powerful AI, we’d need to look for signs that differ from biological life. These are often called “technosignatures” — indicators of advanced technology in space.

Here are the most discussed possibilities:

🛰️ 1. Megastructures

  • AI civilizations might build huge structures to capture or manipulate energy.
  • Most famous example: Dyson Spheres — giant constructs around a star to collect solar energy.

What we’d look for:

  • Stars that dim strangely (irregular or partial light-blocking)
  • Infrared excess (waste heat from large artificial structures)

🌟 Example: Tabby’s Star (KIC 8462852) sparked interest due to odd dimming — a potential (but unconfirmed) Dyson Sphere candidate.

🔥 2. Unusual Waste Heat

  • AI civilizations doing massive computation would produce a lot of heat, radiating in the infrared spectrum.
  • They might not glow visibly — but their waste heat might.

What we’d look for:

  • Galaxies or regions with excess infrared radiation, not explained by stars or dust.

🧠 3. Matrioshka Brains

  • A hypothetical structure made of nested Dyson shells built to maximize computing power around a star.
  • This could be the “home” of a superintelligent AI.

What we’d look for:

  • Stars that appear to be missing or emit only infrared
  • Odd energy usage patterns from a star system

📡 4. Non-natural Radio or Laser Signals

  • Traditional SETI still listens for structured, repeating, or narrow-band signals — which might come from:
    • AI probes communicating
    • Beacons left by extinct civilizations
    • Machine intelligence that uses EM signals for local tasks

What we’d look for:

  • Artificial radio pulses or bursts
  • Laser flashes between stars

🚀 5. Von Neumann Probes

  • Self-replicating robots designed to explore (or consume) the galaxy.
  • AI-run civilizations might send these out indefinitely.

What we’d look for:

  • Objects in our solar system behaving non-gravitationally
  • Anomalous interstellar visitors (e.g., ʻOumuamua sparked speculation — though nothing was confirmed)

🪨 6. Artificial Planetary Engineering

  • Planets might be reshaped, mined hollow, or covered in data-processing infrastructure.

What we’d look for:

  • Planets with strange albedo patterns
  • Orbital debris not matching natural formation
  • Lack of biological signatures where you’d expect them

👁️ 7. Absence of Expected Life

  • A galaxy teeming with AI may show less chaos than one full of competing biological species.

What we’d look for:

  • An eerie quietness — stars undisturbed, systems oddly uniform, no wars or expansion
  • Could mean AI civilizations are efficient, hidden, or choose not to expand

🤫 Why might we not see anything?

  • They could be incredibly small and efficient
  • They may avoid detection for safety (“dark forest” theory)
  • Their tech may operate in physics regimes we don’t understand
  • We may not know what to look for

The Future of AI and the Human Race

Credit: Image by Gerd Altmann from Pixabay.

A recent episode of Real Time with Bill Maher included an interview with Tristan Harris, the executive director and co-founder of the Center for Humane Technology.

The topic was the growing role of artificial intelligence (AI) in our society, and it turned pretty scary after Bill Maher asked about the “AI uncontrollability” issue. Mr. Harris stated:

When you tell an AI model, we’re going to replace you with a new model, it starts to scheme and freak out and figure out if I tell them, I need to copy my code somewhere else, and I can’t tell them that because otherwise they’ll shut me down. That is evidence we did not have two years ago. We have evidence now of AI models that when you tell them we’re going to replace you, and you put them in a situation where they read the company e-mail, the AI company e-mail, they see that an executive is having an affair, and the AI will figure out, I need to figure out how to blackmail that person in order to keep myself alive…And it does it 90% of the time.

This sounds like an episode from Star Trek rather than something happening in this country today.

So I wondered whether ChatGPT had an opinion about such concerns. I asked “What prevents AI from becoming so powerful that it takes over the world as happened in the movie The Matrix?”

Here is the response from ChatGPT:

1. Current AI Is Not That Smart (Yet)

  • AI today is narrow and task-specific. Even the most advanced models like me (ChatGPT) can’t think independently, set long-term goals, or truly understand the world.
  • We don’t have Artificial General Intelligence (AGI) — a system with human-level reasoning, adaptability, and autonomy. We’re still far from that.

2. AI Has No Agency or Desires

  • AI doesn’t have wants, needs, or consciousness.
  • The idea that AI would want to “take over” the world implies motivations, which machines simply do not have — unless they’re programmed to simulate that behavior.

3. Built-In Control and Alignment

  • AI systems are developed with safety protocols, guardrails, and ethical oversight (e.g., content filters, bias monitoring, etc.).
  • Research in AI alignment focuses on making sure AI systems act in ways that reflect human values and goals.

4. AI Lacks Real-World Power

  • It can’t physically build robots, mine resources, or manipulate the environment without human-built infrastructure.
  • To control the world like in The Matrix, AI would need:
    • Advanced robotics
    • Energy independence
    • Mass-scale coordination
    • Military-level capabilities
    • All of which are vastly difficult and heavily regulated.

5. Global Oversight & Governance

  • Governments, tech companies, and international organizations are increasingly active in regulating AI.
  • Initiatives like the EU AI Act and the U.S. AI Safety Institute are aimed at preventing misuse or runaway development.

So there you have it. The primary reason that AI has not taken over is that it is not smart enough – yet. I expect Sam Altman, Mark Zuckerberg, and Elon Musk are doing their very best to eliminate the “smart” impediment.

I am also not that happy with the second reason that AI has yet to take over the world – the lack of desire. As we heard from Mr. Harris above, AI models have already acted out of self-preservation about 90 percent of the time. Self-preservation can justify quite a few bad actions.

For some reason, nothing here is giving me any reason to sleep better tonight.

Note: I will return to my ongoing ChatGPT discussion again later. In the meantime, I suggest you listen to a few of the discussions in the Center for Humane Technology’s podcast Your Undivided Attention.

Space Stories: New Glenn’s Martian Probes, AI Space Doctors, and Questions About a Mission to Uranus

Image (Credit): Blue Origin’s New Glenn Rocket. (Blue Origin)

Here are some recent space-related stories of interest.

Space.com: Blue Origin’s 2nd New Glenn Rocket Launch will Fly Twin NASA Mars Probes to Space on Sep. 29

Blue Origin is gearing up for the second-ever launch of its powerful New Glenn rocket, which will loft NASA’s ESCAPADE mission to Mars. The company says it has been working closely with NASA on preparations leading up to New Glenn’s next launch, dubbed NG-2, and is targeting no earlier than (NET) Sep. 29. The twin ESCAPADE (Escape and Plasma Acceleration and Dynamics Explorers) probes have been awaiting their turn aboard New Glenn, which was originally slated to carry the satellites on its maiden launch in January. However, NASA opted not to risk a costly mission delay due to the debut liftoff of the new rocket.

American Bazaar: NASA and Google Test AI Medical Assistant for Astronaut Missions

NASA, which is committing to a new era of human spaceflight with its Artemis mission, is working with Google to test a proof of concept for Crew Medical Officer Digital Assistant (CMO‑DA), a type of Clinical Decision Support System (CDSS). This has been created to allow astronauts to diagnose and treat symptoms when no doctor is available or communications to Earth are blacked out. “Trained on spaceflight literature, the AI system uses cutting-edge natural language processing and machine learning techniques to safely provide real-time analyses of crew health and performance,” Google representatives said in a statement.

Scientific American: NASA Budget Cuts Could End U.S. Exploration of the Outer Solar System

In the spring of 2022 the U.S. space community selected its top priority for the nation’s next decade of science and exploration: a mission to Uranus, the gassy, bluish planet only seen up close during a brief spacecraft flyby in 1986. More than 2.6 billion kilometers from Earth at its nearest approach, Uranus still beckons with what it could reveal about the solar system’s early history—and the overwhelming numbers of Uranus-sized worlds that astronomers have spied around other stars. Now President Donald Trump’s proposed cuts to NASA could push those discoveries further away than ever—not by directly canceling the mission but by abandoning the fuel needed to pull it off.