AI on the Battlefield: Balancing Innovation and Human Control

Bill Church

December 30, 2024

As if the Bulletin of the Atomic Scientists' Doomsday Clock weren't enough to worry about, we now have another measure of technological risk to monitor. In a recent BBC interview, Geoffrey Hinton, often called the "godfather of AI," raised his estimate of the chance that AI poses an existential risk to humanity from 10% to "10-20%" over the next three decades. Like the Manhattan Project scientists before him, Hinton has become a Cassandra for his own creation, warning of the catastrophic potential of the technology he helped develop. And nowhere are these concerns more concrete than in military applications, where decisions literally mean life or death.

The Double-Edged Sword of Military AI

Recent events highlight both the potential and the pitfalls of military technology. In December 2024, two separate incidents underscored the critical importance of accurate target identification and decision-making under pressure. First, a U.S. Navy guided-missile cruiser mistakenly shot down a friendly F/A-18 fighter jet over the Red Sea in what was described as "an apparent case of friendly fire." Both aviators survived, but the incident demonstrated how even advanced military systems can misidentify targets.

Just days later, an Azerbaijan Airlines passenger jet crashed in Kazakhstan, with evidence strongly suggesting it was struck by an anti-aircraft missile. While investigations are ongoing, analysis of the wreckage showed puncture holes consistent with a surface-to-air missile strike, and the plane's erratic final flight path indicated the crew was "fighting for control." This incident, which resulted in 38 fatalities, occurred in a complex environment where Ukrainian drone activity and GPS jamming were reported.

These incidents cut both ways for AI: properly implemented, it could help prevent such accidents through enhanced target identification and threat assessment; improperly implemented, it could make them more frequent by automating responses in already complex situations.

Learning from History: The Human Element in Critical Systems

The U.S. military's historical decision to maintain human crews in nuclear missile silos, despite the technical feasibility of automation, offers valuable lessons for AI implementation. This decision wasn't merely practical or technical – it was philosophical, recognizing that some decisions are too consequential to delegate to machines. The two-person rule for nuclear launches embeds human judgment and moral agency into the system's very architecture.
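To make that architectural point concrete, here is a minimal sketch in Python of the two-person rule as a software gate. Everything here is hypothetical and purely illustrative (real launch-control systems are classified and largely hardware-based); the point is simply that no single actor, human or machine, can trigger the action alone.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Authorization:
    operator_id: str  # hypothetical identifier for a human officer
    order_code: str   # the order this authorization applies to

def two_person_authorized(order_code: str,
                          auths: list[Authorization]) -> bool:
    """Allow the action only if two *distinct* humans authorized the
    same order: no single actor, human or machine, can act alone."""
    operators = {a.operator_id for a in auths if a.order_code == order_code}
    return len(operators) >= 2

# One authorization is never sufficient:
assert not two_person_authorized("X1", [Authorization("alpha", "X1")])
# Two distinct operators endorsing the same order are required:
assert two_person_authorized("X1", [Authorization("alpha", "X1"),
                                    Authorization("bravo", "X1")])
```

Notice that the safe outcome is the default: absent a second, independent human judgment, nothing happens.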

A Framework for Responsible AI Integration

Rather than pursuing full autonomy, military AI should augment human capabilities in four key areas:

  1. Communications Resilience — AI-driven adaptive routing to overcome jamming; enhanced encryption and security monitoring; pattern recognition for electronic warfare defense.
  2. Battlefield Awareness — Multi-source sensor fusion; real-time threat and friendly force identification; dynamic situation mapping (a sketch of the fusion idea follows this list).
  3. Deconfliction — Real-time tracking of friendly assets; predictive conflict zone identification; clear operational space visualization.
  4. Force Protection — Advanced threat detection systems; improved defensive targeting accuracy; rapid response coordination.
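As a rough illustration of the second area, the sketch below fuses confidence scores from multiple hypothetical sensors and, critically, refuses to classify a track when the fused evidence is ambiguous, deferring to a human instead. The sensor readings, labels, and threshold are all invented for illustration.

```python
# Hypothetical sensor readings: each maps a classification label to a
# confidence score in [0, 1]. All names and thresholds are invented.
Reading = dict[str, float]

def fuse(readings: list[Reading]) -> dict[str, float]:
    """Average per-label confidence across sensors (a deliberately
    simple fusion rule; real systems use far richer models)."""
    labels = {label for r in readings for label in r}
    return {label: sum(r.get(label, 0.0) for r in readings) / len(readings)
            for label in labels}

def classify(readings: list[Reading], threshold: float = 0.85) -> str:
    """Return a label only when fused confidence clears the threshold;
    otherwise defer the decision to a human operator."""
    fused = fuse(readings)
    label, score = max(fused.items(), key=lambda kv: kv[1])
    return label if score >= threshold else "REFER_TO_HUMAN"

# Two sensors agree -> a confident friendly identification:
print(classify([{"friendly": 0.90}, {"friendly": 0.95}]))  # friendly
# The sensors conflict -> the system must not guess:
print(classify([{"friendly": 0.90}, {"hostile": 0.80}]))   # REFER_TO_HUMAN
```

The design choice worth noting is the failure mode: when sensors disagree, the output is a referral to a human, not a guess.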

The Human-AI Partnership

Maintaining "human-in-the-loop" or "human-on-the-loop" control systems is key to successful military AI implementation. This approach:

  • Preserves human judgment for critical decisions
  • Leverages AI's data processing capabilities
  • Maintains moral accountability
  • Provides resilience against technical failures
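A minimal sketch of the distinction, with hypothetical function names and an invented veto window: human-in-the-loop means nothing happens without affirmative approval, while human-on-the-loop means a recommended action proceeds unless a supervising human vetoes it in time.

```python
import time
from typing import Callable

def in_the_loop(recommendation: str,
                approve: Callable[[str], bool]) -> bool:
    """Human-in-the-loop: act only with affirmative human consent.
    Silence or an absent operator means the action does NOT occur."""
    return approve(recommendation)

def on_the_loop(recommendation: str,
                vetoed: Callable[[str], bool],
                window_s: float = 5.0) -> bool:
    """Human-on-the-loop: the action proceeds unless a supervising
    human vetoes it within the window. Silence means it occurs."""
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if vetoed(recommendation):
            return False
        time.sleep(0.1)  # poll for a human veto
    return True

# In-the-loop fails closed: with no approval, nothing happens.
assert in_the_loop("engage track 42", approve=lambda r: False) is False
```

The essential difference is the default: in-the-loop fails closed, on-the-loop fails open, which is why the latter is generally defensible only for time-critical defensive functions where waiting is itself a risk.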

Preventing an AI Arms Race

Perhaps the greatest challenge lies in preventing an automated arms race. As nations develop military AI capabilities, there's a risk that competitive pressure could push toward increasing autonomy, potentially leading to scenarios where AI systems engage each other at superhuman speeds with humans caught in the crossfire.

Encouragingly, steps are already being taken to prevent unrestricted AI weaponization. At a November 2024 meeting, U.S. President Biden and Chinese President Xi Jinping affirmed that human beings, not artificial intelligence, should maintain control over the decision to use nuclear weapons. This first-of-its-kind statement between the world's leading military powers demonstrates that international cooperation on AI limits is possible, even between strategic competitors.

Building on this precedent, we need:

  • Expanded international frameworks limiting AI autonomy in weapons systems
  • Verification protocols for human control requirements
  • Clear definitions of offensive versus defensive AI capabilities
  • Technical safeguards preventing fully autonomous operation
  • Regular bilateral and multilateral dialogues on military AI development

Looking Forward

As military technology advances, we must remember that AI's role should be to enhance human decision-making, not replace it. By learning from historical precedents like manned missile silos and recent incidents like the Red Sea friendly-fire case, we can develop AI systems that make our forces more effective while maintaining essential human control over lethal decisions.

The future of military AI doesn't have to be the dystopian scenario that keeps pioneers like Geoffrey Hinton up at night. By thoughtfully implementing AI as a tool for human augmentation rather than replacement, we can harness its benefits while mitigating its risks. The key is recognizing that sometimes the most advanced solution isn't full automation but a careful balance of human judgment and artificial intelligence.

Bill Church

Vice President, Engineering & Services

LinkedIn