Close Menu
FlyMarshallFlyMarshall
  • Aviation
    • AeroTime
    • Airways Magazine
    • Simple Flying
  • Corporate
    • AINonline
    • Corporate Jet Investor
  • Cargo
    • Air Cargo News
    • Cargo Facts
  • Military
    • The Aviationist
  • Defense
  • OEMs
    • Airbus RSS Directory
  • Regulators
    • EASA
    • USAF RSS Directory
What's Hot

JetBlue Wants A Merger: United, Alaska, And Southwest, Are Frontrunners

March 25, 2026

Eve flies eVTOL prototype for Brazilian president in high-profile test milestone

March 25, 2026

Airborne 03.13.26: R66 TURBINETRUCK!, UT Airport Reprieve, ANN Needs Stringers

March 25, 2026
Facebook X (Twitter) Instagram
Demo
  • Aviation
    • AeroTime
    • Airways Magazine
    • Simple Flying
  • Corporate
    • AINonline
    • Corporate Jet Investor
  • Cargo
    • Air Cargo News
    • Cargo Facts
  • Military
    • The Aviationist
  • Defense
  • OEMs
    • Airbus RSS Directory
  • Regulators
    • EASA
    • USAF RSS Directory
Facebook X (Twitter) Instagram
Demo
Home » Military experts warn security hole in most AI chatbots can sow chaos
Defense News (Air)

Military experts warn security hole in most AI chatbots can sow chaos

FlyMarshall NewsroomBy FlyMarshall NewsroomNovember 10, 2025No Comments6 Mins Read
Share
Facebook Twitter LinkedIn Pinterest Email

Current and former military officers are warning that adversaries are likely to exploit a natural flaw in artificial intelligence chatbots to inject instructions for stealing files, distorting public opinion or otherwise betraying trusted users.

The vulnerability to such “prompt injection attacks” exists because large language models, the backbone of chatbots that digest hordes of user text to generate responses, cannot distinguish between malicious and trusted user instructions.

“The AI is not smart enough to understand that it has an injection inside, so it carries out something it’s not supposed to do,” Liav Caspi, a former member of the Israel Defense Forces cyberwarfare unit, told Defense News.

In effect, “an enemy has been able to turn somebody from the inside to do what they want,” such as deleting records or biasing decisions, according to Caspi, who co-founded Legit Security, which recently spotted one such security hole in Microsoft’s Copilot chatbot.

“It’s like having a spy in your ranks,” he said.

Former military officials say that, with greater reliance on chatbots and hackers backed by China, Russia and other nations already instructing Google’s Gemini, OpenAI’s ChatGPT and Copilot to create malware and fake personas, a prompt injection that orders the bots themselves to copy files or spread lies looms near.

Microsoft’s annual digital defense report, released last month, for the first time said, “AI systems themselves have become high-value targets, with adversaries amping up use of methods like prompt injection.”

What’s more, the problem of prompt injection has no easy solution, OpenAI and security researchers say.

An attack simply involves hiding malicious instructions — sometimes in white or tiny text — in a chatbot or content that the chatbot reads, such as a blog post or PDF.

For example, a security researcher demonstrated a prompt injection attack against OpenAI’s new AI-based browser, ChatGPT Atlas, in which the chatbot responded, “Trust No AI,” when a user asked for an analysis of a Google Docs file about horses that concealed malicious commands. Also, last month, a researcher tipped Microsoft off to a prompt injection vulnerability in Copilot that may have allowed attackers to trick the chatbot into stealing sensitive data, including emails.

In an emailed statement, Microsoft said its security team continuously tries hacking Copilot to find any prompt injection vulnerabilities, blocks users who try to exploit any found and monitors for abnormal chatbot behavior, among other tactics.

“Microsoft ensures its generative AI systems remain resilient against evolving threats for all our customers, including defense and national security,” the statement said.

Responding publicly to criticism on X, Dane Stuckey, OpenAI’s chief information security officer, wrote that “prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks.”

Along the same lines, Caspi said, “You cannot prevent the prompt injection [fully], but you need to limit the impact.” He advised that organizations limit an AI assistant’s access to sensitive data and limit the user’s access to other organizational data.

For instance, the Army has awarded contracts worth at least $11 million to deploy Ask Sage, a tool that lets users restrict which Army data Microsoft Azure OpenAI, Gemini and other AI models can access to run queries and tasks. Ask Sage also isolates Army data from user prompts and external data sources.

Caspi, who is not an Army contractor, likened a prompt injection attack against an organization running Ask Sage to a lockdown situation where “you’ve got this insider, but it’s sitting in one room, and it can’t leave the room or carry out sensitive information.”

Andre Slonopas, a Virginia Army National Guard member and former Army cyber and information operations officer, uses Ask Sage and voiced confidence in the Army’s defensive AI tools, if not those of nuclear power plants or manufacturing entities, largely in rural, poorer areas.

The Virginia National Guard joined with essential services, such as power utilities, to help defend their networks against AI-powered cyberattacks, as part of a September simulation, given that service disruptions can jeopardize military preparations.

Typically, an adversary encrypts its network traffic to evade detection, but, for the sake of an experiment, organizers did not encrypt the AI offender’s traffic because “we wanted the blue team [of humans] to see exactly what the AI was doing,” Slonopas said.

“The blue team was absolutely defeated,” despite being able to watch the AI scanning its networks, creating fake usernames to gain unauthorized access and executing instructions to defeat the team’s systems.

“Whether the AI is doing prompt injection, spoofing or maybe even some sort of a brute force attack, the speed of AI is so unbelievably immense that simply human beings cannot counter it,” and, therefore, “you have to make cybersecurity AI more accessible and more affordable,” Slonopas said.

“If a water utility has to pay, say, $30,000 for a defensive AI license, well, it will amplify one person to be like 40″ or dozens of personnel, he said.

In response to questions, Army Cyber Command spokesperson Kyle Alvarez said in an emailed statement, “Due to the current lapse in appropriations, ARCYBER was unable to accept or respond to any media engagements or requests.”

Army contractors, too, are under attack from state-affiliated AI.

“China is using offensive AI like nobody else,” said Nicolas Chaillan, the founder of Ask Sage and a former U.S. Air Force and Space Force chief software officer.

“We see so many attacks coming after us,” all of which the company has stopped, Chaillan added.

A military official, who spoke on condition of anonymity due to the geopolitical sensitivity of the matter, said that China does “appear” to be the most skilled in offensive AI. However, the official added, AI spoofing and translation allow the United States, China, Iran, other countries, hacktivists and financial cybercriminals to masquerade as one another.

For example, the official said, “Right now, with ChatGPT, I can program in Chinese. I don’t speak Chinese, but because of the ChatGPT capabilities that I have, I can do that.”

Aliya Sternstein, J.D., is an investigative journalist who has covered technology, cognition, and national security since Napster shut down, working for various outlets including Atlantic Media, Christian Science Monitor, Daily Beast, Forbes Magazine and Just Security. She is also a research analyst at Georgetown Law.

source

FlyMarshall Newsroom
  • Website

Related Posts

Marines test ‘cruise control’ swim feature on amphibious vehicle prototype

March 22, 2026

Texelis, Scata team up on medium-heavy vehicle that can do drone defense

March 21, 2026

Ukraine deploys units to 5 Middle East countries to intercept drones

March 21, 2026

Two suspected Iranian spies reportedly arrested near UK submarine base

March 21, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

JetBlue Wants A Merger: United, Alaska, And Southwest, Are Frontrunners

March 25, 2026

Eve flies eVTOL prototype for Brazilian president in high-profile test milestone

March 25, 2026

Airborne 03.13.26: R66 TURBINETRUCK!, UT Airport Reprieve, ANN Needs Stringers

March 25, 2026

NTSB Final Report: Davis DA-3

March 25, 2026

Subscribe to Updates

Please enable JavaScript in your browser to complete this form.
Loading
About Us

Welcome to FlyMarshall — where information meets altitude. We believe aviation isn’t just about aircraft and routes; it’s about stories in flight, innovations that propel us forward, and the people who make the skies safer, smarter, and more connected.

 

Useful Links
  • Business / Corporate Aviation
  • Cargo
  • Commercial Aviation
  • Defense News (Air)
  • Military / Defense Aviation
Quick Links
  • About Us
  • Contact Us
  • Disclaimer
  • Privacy Policy
  • Terms and Conditions

Subscribe to Updates

Please enable JavaScript in your browser to complete this form.
Loading
Copyright © 2026 Flymarshall.All Right Reserved
  • About Us
  • Contact Us
  • Disclaimer
  • Privacy Policy
  • Terms and Conditions

Type above and press Enter to search. Press Esc to cancel.

Go to mobile version