Since the US-Israel attack on Iran, Tehran has followed a simple strategy: target US-funded infrastructure and cripple the world's oil supply. And do it with cheap, single-use autonomous drones.

Three Amazon data centres in the United Arab Emirates (UAE) and Bahrain were damaged earlier this month, while 17 submarine cables passing through the Strait of Hormuz, which the Islamic Revolutionary Guard Corps has declared closed, were destroyed. At the same time, oil infrastructure has been attacked across the Middle East, leaving many countries, including India, scrambling for oil and gas.
What enables Iran to do all of this at scale and at low cost is its arsenal of autonomous drones: the infamous Shahed-136.
The low cost and precision targeting of these suicide drones make them easy to use at scale. A Shahed-136 costs about $20,000, while a Patriot missile, which the US uses to defend these assets, costs $4 million, according to the Council on Foreign Relations. Even when an Iranian drone is successfully intercepted, it is a financial drain on its enemy. A recent piece in The New York Times called it “using a bazooka to kill a fly”.
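The arithmetic behind that asymmetry, using the article's own figures, is stark. A minimal sketch (the variable names are mine, the dollar amounts are the ones cited above):

```python
# Cost asymmetry of drone vs interceptor, per the figures cited above.
shahed_cost = 20_000        # approximate cost of one Shahed-136, in USD
patriot_cost = 4_000_000    # approximate cost of one Patriot missile, in USD

# How many drones the attacker can build for the price of one interceptor
ratio = patriot_cost // shahed_cost
print(ratio)  # 200 -- every successful intercept costs the defender 200x
```

In other words, the attacker can afford to lose 199 drones for every one that gets through and still come out ahead on cost.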
It is for this reason that the Pentagon recently approached Ukraine for help in tackling Iranian drones. Ukraine, after four years of war with Russia, has plenty of experience dealing with them.
Ukraine was the testing ground
Before the Russia-Ukraine war, autonomous drones were prototypes, tested but never used in war. Throughout its aggression, Russia has primarily depended on autonomous drones – Shahed or similar models – to constantly bombard Ukraine. In a recent tweet, Ukrainian president Volodymyr Zelenskyy claimed that Russia has used more than 57,000 Shahed-type attack drones against Ukraine over the last four years.
This cheap bombardment of its country has pushed Ukraine to quickly ramp up its own drone production and to develop cheaper technology to intercept autonomous drones. Now, the US and West Asia want this cheap technology.
“Requests have come to us to share our experience in Shaheds with United States, Europeans and the Middle East,” said Zelenskyy in another tweet earlier this month. He has reason to gloat. Thanks to the war, Ukraine has 450 drone producers. In 2025 alone, the country produced more than 4.5 million First-Person View (FPV) drones costing $400-$500 and 100,000 interceptor drones costing $1,500-$4,000 apiece.
This has upended the economics of war. Countries such as the US are quickly ramping up their production of expendable drones, aiming to make them cheaper. It's a booming industry: according to Markets and Markets, the military drone market was valued at $15.23 billion in 2024 and is expected to reach $22.81 billion by 2030. Open to selling its technology in international markets, Ukraine is now building interceptor drones that can launch in swarms and hunt down incoming targets without human intervention – sort of like security guards in the sky, countering killer robots that arrive in swarms.
Tech that’s cheap and AI-run
Throughout history, new technologies have changed warfare. Guns made swords obsolete. Cannons made forts defenceless, and anti-ship missiles did the same to battleships. In recent years, cheap, AI-guided weapons have changed warfare dynamics. The reason is AI integration and cost.
Lethal Autonomous Weapon Systems (LAWS), as these drones are called, are dramatically different from automated drones or their older cousins, Unmanned Aerial Vehicles (UAVs). Even the Shahed is only a semi-autonomous system – a one-way attack drone designed to follow a pre-programmed flight path and strike fixed ground targets. Such drones are cheap and fly under radar detection, but they have limited intelligence.
Thanks to dramatic advances in AI, the next generation of drones will make decisions in real time using AI agents. They will combine inputs from their multiple ‘eyes’ – high-resolution cameras, infrared sensors and LIDAR. With this data, a drone can adapt its flight path, avoid obstacles, track targets and deploy bombs without any human intervention. Some drones – like the Israeli Harop and the Turkish Kargu-2 – can even remain airborne above designated locations for extended periods, engaging only when pre-set conditions are met.
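The "engage when pre-set conditions are met" logic can be sketched as a toy loop. This is purely illustrative, not any real weapon's software; the sensor names, weights, range and threshold below are invented for the example:

```python
# Illustrative sketch of loiter-and-engage logic: fuse readings from
# several "eyes" into one confidence score, then engage only when a
# pre-set threshold is cleared. All numbers here are hypothetical.

def fuse_readings(camera_conf, infrared_conf, lidar_range_m):
    """Combine hypothetical sensor confidences into one target score."""
    in_range = lidar_range_m < 500          # assumed engagement range (m)
    score = 0.6 * camera_conf + 0.4 * infrared_conf
    return score if in_range else 0.0       # out of range: never engage

def should_engage(score, threshold=0.8):
    """Engage only when fused confidence clears the pre-set threshold."""
    return score >= threshold

# A drone loitering over a location, evaluating each pass autonomously
readings = [
    (0.5, 0.4, 900),   # target far away: ignored
    (0.7, 0.6, 450),   # in range, but confidence too low
    (0.9, 0.8, 300),   # pre-set conditions met
]
decisions = [should_engage(fuse_readings(c, i, r)) for c, i, r in readings]
print(decisions)  # [False, False, True]
```

The point of the sketch is the absence of any human in the loop: once the weights and threshold are set, the machine decides on its own, pass after pass.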
Drones have also learnt to work together. Drone swarms can mimic birds or insects to map large areas, enter a building or coordinate kamikaze attacks.
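The bird-mimicry is a family of "flocking" rules in which each drone reacts only to its neighbours, with no central controller. A minimal sketch of one such rule (cohesion: drift toward the group's centre) – a toy model, not real swarm software:

```python
# Toy flocking rule: each drone nudges its position a fraction of the
# way toward the swarm's centroid each step, so the group holds together
# without any central controller. Purely illustrative.

def step(positions, cohesion=0.1):
    """Move every drone 10% of the way toward the swarm centroid."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return [(x + cohesion * (cx - x), y + cohesion * (cy - y))
            for x, y in positions]

swarm = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]
for _ in range(30):
    swarm = step(swarm)
# After repeated steps the drones cluster near the centroid (5, 3.33),
# each having "decided" its own moves from local information alone.
```

Real swarms layer separation, alignment and task allocation on top of this, but the emergent-coordination principle is the same.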
The Chinese military showcased a 200-drone swarm that could self-heal, adapt and coordinate multi-axis strikes. Perhaps that’s the reason countries are ramping up drone defence too. Earlier this March, an Indian startup successfully tested an autonomous drone swarm interceptor called YAMA.
“Drones are the biggest battlefield innovation in a generation,” said US Defence Secretary Pete Hegseth last year in a memo. The other technology making drones cheaper is 3D printing. The UK and US armies have demonstrated how they can 3D print first-person view (FPV) attack drones in the field, bringing the cost of these drones down to $400-$500 apiece.
The low cost of an autonomous drone changes two things. Because these drones are cheap to make, anyone – not only governments – can build and rapidly deploy them. And countries might now tumble into an era of forever wars: since the drones cost so little, countries, factions and terrorists can stay at war for long periods without losing much money or many personnel.
How much autonomy is too much?
Last week, videos revealed that a US Tomahawk missile had hit a military base – and a primary school near it – in southern Iran, killing 168 people, including around 110 children.
The intelligence for the strike was built by AI on top of old Iranian maps. Social media is ablaze with demands that a human be held accountable. So far, no one has stepped forward.
This emerging technology offers many new opportunities. “It increases your chances of victory, possibly save human lives, and since these AI systems can analyse quickly, take quicker decisions,” says Robert Sparrow, a professor at Monash University, Australia, who studies robotics and warfare ethics. “The dangers are denying human dignity by treating the enemy like vermin, when we send machines to kill them.”
AI-run weapons are also at the crux of the recent argument between Anthropic and the Pentagon. Some claim it was Anthropic’s Claude AI that made the mistake in the Tomahawk strike, identifying the school as part of the base the US military was trying to target – thanks to flawed data and old maps.
LLMs like Claude are prone to hallucinations, insufficient data and prompt-injection attacks. Yet militaries depend on AI intelligence because they need to act fast, especially during war. Though Anthropic has built AI safety into its contracts and insists that AI agents shouldn’t be allowed to program autonomous weapons that kill without human involvement, that is unlikely to hold. Laws in multiple countries state that AI-powered drones should engage only with a human green light, but as the Ukraine-Russia war has shown us, policies fly out of the window during conflict. Ukraine used LAWS the minute Russia started using radio jamming to block remote human intervention.
The questions we have to ask, as a society and as human beings, are: are there decisions we don’t want AI to take? And how do we stop AI from starting a war by mistake?
(An author and columnist, Shweta Taneja tracks the evolving relationship between science, technology and modern society)