The era of remote-controlled drones is already becoming a relic of the past. If you've been watching the footage coming out of Ukraine or the high-tension exchanges between Israel and Iran, you've seen the transition happening in real time. We're moving from "man-in-the-loop" systems to "man-on-the-loop" or even "man-out-of-the-loop" environments. This isn't just about faster processors. It's about agentic AI: software that doesn't just follow a script but actually pursues a goal, makes decisions, and adapts when the plan falls apart.
Traditional automation is a toaster; you press a button, it toasts. Agentic AI is more like a chef who realizes the bread is moldy and decides to bake a fresh loaf instead. In a combat zone, that translates to a drone swarm that loses its GPS connection but decides to use visual landmarks to find its target anyway. No human command required. No radio signal to jam.
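If you want to see the difference in code, here's a minimal sketch in Python. Everything in it (the `NavState` fields, the idea of a terrain-matching camera mode) is invented for illustration, not pulled from any fielded system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class NavSource(Enum):
    GPS = auto()
    VISUAL = auto()     # terrain/landmark matching from the camera
    INERTIAL = auto()   # dead reckoning from onboard sensors

@dataclass
class NavState:
    gps_locked: bool
    landmarks_visible: bool

def choose_nav_source(state: NavState) -> NavSource:
    """The goal (keep navigating) survives the loss of any single
    input. A scripted system would simply abort when its expected
    signal disappeared; an agent degrades gracefully instead."""
    if state.gps_locked:
        return NavSource.GPS
    if state.landmarks_visible:
        return NavSource.VISUAL
    return NavSource.INERTIAL
```

The point isn't the three-branch if-chain. It's that the fallback order serves the goal, where a scripted system would just assume the radio link exists and fail when it doesn't.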
The Shift From Scripts to Agency
Most people confuse basic algorithms with true agency. In the early days of the Ukraine conflict, we saw FPV (first-person view) drones that required a skilled pilot to steer them into a tank. It was basically a high-stakes video game. But electronic warfare changed the math. When Russian or Ukrainian jamming units flood the airwaves with noise, that radio link snaps. A standard drone just drops out of the sky or drifts aimlessly.
Agentic systems don't care about your jammer. They possess "onboard intelligence." This means the mission parameters—say, "find and neutralize mobile rocket launchers within this five-kilometer grid"—are uploaded before takeoff. Once the drone enters the contested airspace, it uses computer vision to identify shapes, heat signatures, and movement patterns. It’s not waiting for a "fire" command. It’s evaluating the target against its rules of engagement and pulling the trigger itself.
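To give a rough sense of what "uploaded before takeoff" means in software, here's a hedged sketch. Every name in it (`MissionParameters`, the `Detection` fields) is a hypothetical stand-in, not any real system's API; the property worth noticing is that the gate defaults to "hold" unless every pre-loaded constraint passes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MissionParameters:
    """Uploaded before takeoff; immutable once airborne."""
    grid: tuple                  # (lat_min, lon_min, lat_max, lon_max)
    allowed_classes: frozenset   # e.g. frozenset({"mobile_rocket_launcher"})
    min_confidence: float        # below this, always hold

@dataclass
class Detection:
    """What an onboard vision model might emit (stand-in fields)."""
    cls: str
    confidence: float
    position: tuple              # (lat, lon)

def inside_grid(position, grid) -> bool:
    lat, lon = position
    lat_min, lon_min, lat_max, lon_max = grid
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def evaluate(d: Detection, p: MissionParameters) -> str:
    """Conservative gate: every pre-loaded constraint must pass,
    otherwise the default answer is 'hold'."""
    if d.cls not in p.allowed_classes:
        return "hold"
    if d.confidence < p.min_confidence:
        return "hold"
    if not inside_grid(d.position, p.grid):
        return "hold"
    return "engage"
```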
This changes the tempo of war. Human reaction time is a bottleneck. An AI agent can process a sensor sweep and coordinate a multi-vector attack in milliseconds. While a human commander is still briefing their team, the agentic swarm has already closed its decision loop.
Lessons From the Middle East and Eastern Europe
The conflict in Ukraine has turned into a massive, open-air laboratory for this tech. We've seen the rise of "Saker Scout" drones and similar systems that can categorize up to 64 different types of military objects autonomously. They don't need a pilot to tell a T-72 tank from a civilian tractor. They know.
Over in the Middle East, the dynamic is different but equally telling. Iran's drone program has focused on quantity and long-range precision, but the integration of more advanced logic is what worries global defense analysts. During the massive missile and drone exchange between Iran and Israel in April 2024, the sheer volume of incoming threats was designed to overwhelm air defenses. Imagine if those drones weren't just flying in a straight line but were talking to each other. Agentic AI allows a swarm to perform "saturation attacks" in which individual units sacrifice themselves to open a path for others, reacting to interceptor launches as they happen.
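Strip away the hardware and the coordination trick is a familiar distributed-systems pattern: shared state plus a deterministic re-assignment rule. Here's a toy sketch; `retask` and its inputs are invented for illustration, and real swarm logic would be far messier:

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def retask(alive_units: dict, assignments: dict) -> dict:
    """alive_units: unit_id -> (x, y) of drones still responding.
    assignments: unit_id -> (x, y) waypoint, including entries for
    units that have gone silent. Every survivor runs this same pure
    function on the shared picture, so the swarm converges on the
    same plan with no uplink and no leader."""
    if not alive_units:
        return {}
    new_plan = {uid: wp for uid, wp in assignments.items() if uid in alive_units}
    orphaned = [wp for uid, wp in assignments.items() if uid not in alive_units]
    for wp in orphaned:
        # Hand each orphaned waypoint to the nearest surviving unit.
        nearest = min(alive_units, key=lambda uid: distance(alive_units[uid], wp))
        new_plan[nearest] = wp  # simplistic: overwrites that unit's prior task
    return new_plan
```

Because every survivor computes the same answer from the same shared picture, nobody has to phone home for new orders.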
It’s a terrifying leap in efficiency. In the past, you’d need a sophisticated command and control center to pull that off. Now, you just need a few ruggedized chips and a well-trained model.
The Black Box Problem and the Ethics of Autonomy
I hear a lot of people talk about "killer robots" as if it’s a sci-fi movie. It’s much more clinical and boring than that, which makes it scarier. The real danger isn't a rogue AI hating humanity. It’s an AI agent following a poorly defined goal with ruthless logic.
If you tell an agentic system to "clear the ridge of all enemy combatants," and it encounters a group of wounded soldiers or people surrendering, how does it interpret that? Current computer vision is great at identifying a tank. It’s not great at identifying intent. It doesn't understand the nuance of a white flag in a rainstorm or the difference between a soldier with a rifle and a civilian with a shovel.
There’s also the "black box" issue. We often don’t know exactly why a neural network makes a specific decision. In a courtroom or a war crimes tribunal, "the algorithm did it" isn't a valid defense. Yet, the military pressure to adopt this tech is immense. If your enemy uses AI that reacts in 0.1 seconds and you’re still using a human who takes 2 seconds, you lose. Every single time.
Why Electronic Warfare Is Losing Its Edge
For decades, the gold standard of defense was jamming. If you could cut the enemy's "eyes and ears" by blocking their radio and GPS, you won. Agentic AI makes that strategy obsolete.
Because these agents are autonomous, they don't need a constant data link. They can operate in "radio silence," making them nearly impossible to detect through traditional electronic signals intelligence. They use "inertial navigation" (integrating accelerometer and gyro readings to track their own motion) and "visual odometry" (matching camera frames against the terrain below to fix their position), just like a human pilot reading the landscape. A rough sketch of that fallback follows the list below.
- Edge Computing: All the processing happens on the drone, not in a cloud server.
- Resilience: You can’t "hack" a drone that isn't listening to any outside signals.
- Cost-Effectiveness: You can build a thousand smart drones for the price of one fighter jet.
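Here's the navigation fallback promised above, boiled down to its core arithmetic. It's a deliberately simplified 2D model; the function names and the blending weight are made up for the example, and real systems run full Kalman filters, but the drift-then-correct structure is the same:

```python
import math

def inertial_update(pos, heading, speed, yaw_rate, accel, dt):
    """One dead-reckoning step: integrate gyro and accelerometer
    readings to advance the position estimate with no GPS at all.
    Drift accumulates, which is why real systems periodically
    correct the estimate with visual fixes."""
    heading += yaw_rate * dt                 # gyro: rotation rate -> heading
    speed += accel * dt                      # accelerometer: along-track accel
    x, y = pos
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return (x, y), heading, speed

def fuse_visual_fix(estimate, visual_fix, weight=0.3):
    """Blend a camera-derived position (visual odometry / terrain
    matching) into the drifting inertial estimate."""
    (ex, ey), (vx, vy) = estimate, visual_fix
    return (ex + weight * (vx - ex), ey + weight * (vy - ey))
```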
This democratization of lethality is the real story. You don’t need a billion-dollar defense budget to deploy agentic systems anymore. You just need some high-end consumer chips and the right software engineers.
The Logistics of the New Battlefield
War isn't just about shooting; it's about moving stuff. Agentic AI is quietly overhauling the "boring" parts of conflict too. Autonomous supply convoys that can navigate minefields without a driver are already being tested. These systems use the same "agency" to find the most efficient path, avoid obstacles, and regroup if an explosion blocks the road.
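The "find a path, hit an obstacle, re-plan" loop is the same one taught in any robotics course. A compact sketch using breadth-first search over an occupancy grid, with `sense_blockage` standing in for whatever obstacle detection the vehicle actually carries:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a 2D occupancy grid: 0 = clear, 1 = blocked.
    Returns a list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def drive(grid, start, goal, sense_blockage):
    """Follow the plan, re-planning whenever the sensor callback
    reports a new blockage (say, a cratered road segment)."""
    pos = start
    path = shortest_path(grid, pos, goal)
    while path and pos != goal:
        nxt = path[1]
        if sense_blockage(nxt):          # hypothetical onboard sensor check
            grid[nxt[0]][nxt[1]] = 1     # mark the cell blocked, re-plan
            path = shortest_path(grid, pos, goal)
            continue
        pos, path = nxt, path[1:]
    return pos == goal
```

On a real vehicle the grid comes from mapped routes and the re-plan weighs mine risk, fuel, and convoy spacing, but the control flow is this simple loop.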
In a high-intensity conflict like what we see in Ukraine, logistics is where armies go to die. If you can automate the "last mile" of delivery using agentic ground vehicles, you save lives and keep the front line fueled. It’s less flashy than a swarm of exploding drones, but it’s probably more impactful for the long-term outcome of a war.
What Happens When Both Sides Go Agentic
We’re heading toward a "flash war" scenario. This is a concept borrowed from the stock market’s "flash crashes," where automated trading algorithms get into a feedback loop and tank the market in seconds.
In a military context, if two opposing agentic systems start reacting to each other's moves at machine speed, the escalation could happen so fast that human leaders won't even know a war has started until it’s basically over. There’s no time for diplomacy or "hotline" calls when the AI has already decided that a pre-emptive strike is the only logical response to an enemy's repositioning.
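The feedback loop itself takes a few lines of arithmetic to demonstrate. In this toy model, `gain` is an invented parameter standing for how aggressively each side's automation responds to the other; anything above 1.0 diverges within a handful of machine-speed ticks:

```python
def flash_escalation(steps=10, gain=1.5):
    """Two automated postures reacting to each other at machine
    speed. Each side sets its readiness proportional to the
    other's last move; any gain > 1 runs away in a few ticks,
    the same dynamic behind market flash crashes."""
    a, b = 1.0, 1.0
    for t in range(steps):
        a, b = gain * b, gain * a   # each reacts to the other's last posture
        print(f"t={t}: side_a={a:.1f}, side_b={b:.1f}")

flash_escalation()
```

Ten iterations at millisecond tempo is ten milliseconds. That's the "war is over before the phone rings" problem in miniature.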
We need to stop thinking about these as just "smarter tools." They are "delegated actors." When you give a machine the power to act on your behalf in a lethal environment, you're not just using a weapon; you're starting a process you might not be able to stop.
Practical Realities for Defense Strategy
If you're looking at this from a policy or defense perspective, the "wait and see" approach is a death sentence. The tech is already out there. The focus has to shift from trying to ban these systems—which won't work because they're too easy to build—to creating "guardrail architectures."
This means building AI that has "hard-coded" ethical constraints or "circuit breakers" that require human intervention for certain classes of targets. It also means investing heavily in "Counter-AI" systems. If the enemy has agentic drones, you need agentic interceptors. It’s an arms race where the "arms" are lines of code and training data sets.
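Here's what a "circuit breaker" might look like at the code level; this is a sketch of the design property, not a real architecture, and the protected-class list and threshold are placeholders. What matters is that the gate fails safe: ambiguity or a lost human link can only de-escalate, never authorize.

```python
from enum import Enum, auto

class Verdict(Enum):
    PROCEED = auto()
    HUMAN_REVIEW = auto()
    ABORT = auto()

# Classes that always trip the breaker, no matter how confident
# the model is. Hard-coded: not learnable, not tunable in the field.
PROTECTED_CLASSES = frozenset({"person", "ambulance", "surrender_signal", "unknown"})

def circuit_breaker(target_class: str, confidence: float,
                    human_link_alive: bool) -> Verdict:
    """Guardrail gate evaluated before any action. Fails safe:
    a protected class or low confidence routes to a human, and a
    lost human link means abort, never autonomous escalation."""
    if target_class in PROTECTED_CLASSES:
        return Verdict.HUMAN_REVIEW if human_link_alive else Verdict.ABORT
    if confidence < 0.95:
        # Below threshold, treat as unknown: same protected path.
        return Verdict.HUMAN_REVIEW if human_link_alive else Verdict.ABORT
    return Verdict.PROCEED
```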
Start by auditing your existing sensor networks. Most current systems are designed to feed data to a human. For an agentic world, your sensors need to feed data directly into a local processing mesh. You need to reduce the distance between "seeing" and "acting" to near zero. If you don't, you're just a stationary target for someone else's agent.
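As a minimal illustration of a "local processing mesh," consider this sketch (all names invented): detections land in an in-memory queue consumed on the same device, so the hop from "seeing" to "acting" is a queue read rather than a round-trip to a human console:

```python
import queue
import threading
import time

detections = queue.Queue()   # local, in-memory: no cloud round-trip

def sensor_loop():
    """Stand-in for a sensor driver: pushes detections straight into
    the local mesh instead of serializing them for a human console."""
    for i in range(3):
        detections.put({"id": i, "t_seen": time.monotonic()})
        time.sleep(0.01)

def processor_loop():
    """On-board consumer: the 'seeing to acting' hop is one queue get."""
    for _ in range(3):
        d = detections.get()
        latency_ms = (time.monotonic() - d["t_seen"]) * 1000
        print(f"detection {d['id']} handled locally in {latency_ms:.2f} ms")

threading.Thread(target=sensor_loop).start()
processor_loop()
```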