Israel’s military campaign in Gaza – which began after the deadly October 7, 2023, Hamas attack in which over 1,100 people were killed – has entered its 10th month, and over 40,000 people have died.
As violence rages in the Gaza Strip, Israel has opened a different front in the West Bank, the other Palestinian territory. Its large-scale military operation there has entered its second day, and at least 16 people have died.
As Israel’s war drags into its 10th month, the focus is back on the AI tools that have been used extensively in its bombing campaign in the Gaza Strip.
‘Gospel’, ‘Alchemist’, ‘The Death of Wisdom’ and ‘Lavender’ are not the titles of novels but the names of Artificial Intelligence (AI) tools used to process vast amounts of data, identify suspects with links to Hamas and the Palestinian Islamic Jihad, and select them for strikes.
A detailed investigation by +972 Magazine and Local Call reveals disturbing details of Israel’s bombing campaign, especially how heavily the Israel Defence Forces (IDF) relied on an AI tool to choose bombing targets.
‘Lavender’ and its use case
Lavender, developed by Israel’s elite intelligence division, Unit 8200, operates as an AI-powered database designed to identify potential targets linked to Hamas and Palestinian Islamic Jihad (PIJ). Lavender uses machine learning algorithms and processes vast amounts of data to pinpoint individuals deemed “junior” militants within these armed groups.
Lavender initially identified as many as 37,000 Palestinian men as associated with Hamas or PIJ. The use of AI to identify targets marks a significant change for the Israeli intelligence apparatus, including Mossad and Shin Bet, which previously relied on more labour-intensive human decision-making.
According to the report, soldiers often took as little as 20 seconds to decide whether to bomb a target Lavender flagged – usually just long enough to confirm the target was male. They frequently acted on the machine’s output unquestioningly, despite the program’s error margin of up to 10 per cent, meaning roughly one in ten of its identifications could be wrong.
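To put that error margin in perspective, here is a back-of-the-envelope calculation. It uses only the 37,000 and 10 per cent figures from the report; the arithmetic itself is ours, not the system’s.

```python
# Illustrative arithmetic only: the figures come from the +972 Magazine
# report; this is not based on any access to the actual system.
flagged_targets = 37_000   # people Lavender initially marked
error_rate = 0.10          # reported error margin of up to 10 per cent

# Upper bound on misidentifications if the error margin were fully realised
potential_misidentified = int(flagged_targets * error_rate)
print(potential_misidentified)  # → 3700
```

In other words, at the reported margin, as many as 3,700 of the people initially flagged could have been misidentified.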
According to the report, the program often targeted individuals with minimal or no affiliation with Hamas.
‘Gospel’ – Israel’s other AI arm
“Systems such as Gospel are being used to allow automatic tools to produce targets at a fast pace, and work by improving accurate and high-quality intelligence material according to the requirement,” the IDF said.
“With the help of artificial intelligence, and through the rapid and automatic extraction of updated intelligence – it produces a recommendation for the researcher, with the goal being that there will be a complete match between the machine’s recommendation and the identification performed by a person,” the IDF added.
The AI platforms crunch data to select targets for air strikes. Ensuing raids can then be rapidly assembled with another artificial intelligence model called Fire Factory, Bloomberg reported. Fire Factory calculates munition loads, prioritizes and assigns thousands of targets to aircraft and drones, and proposes a schedule, the report added.
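The prioritise-and-assign step attributed to Fire Factory can be illustrated with a toy greedy scheduler. Everything here – the target IDs, priorities, aircraft names and the greedy logic – is invented for illustration; nothing about the real system’s method is public.

```python
# Toy greedy scheduler in the spirit of the role Bloomberg attributes to
# Fire Factory: assign prioritised targets to platforms with spare
# munition capacity. All names, numbers and logic are hypothetical.
targets = [("t1", 9), ("t2", 5), ("t3", 7)]   # (target id, priority)
aircraft = {"jet_a": 2, "drone_b": 1}         # munitions remaining

assignments = {}
# Highest-priority targets are matched first
for tid, _prio in sorted(targets, key=lambda t: -t[1]):
    for ac, capacity in aircraft.items():
        if capacity > 0:
            assignments[tid] = ac
            aircraft[ac] -= 1
            break

print(assignments)  # → {'t1': 'jet_a', 't3': 'jet_a', 't2': 'drone_b'}
```

A real planner would also weigh munition types, routes and timing; the sketch only shows why automating the matching step lets raids be “rapidly assembled”.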
The report by +972 Magazine mentions the book ‘The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World’. The author ‘Brigadier General YS’, who is reportedly the commander of the 8200 Intelligence Unit of Israel, makes a case for the use of AI in “deep defence” and gives scenarios that could threaten Israel in the future.
In the chapter “Deep Defense: New Potentials”, the author says, “Deep defence is the ability of national establishments to use The Human-Machine Team concept to address security challenges to expose issues in new ways that were heretofore impossible.”
According to the book, the Human-Machine Team should be able to identify tens of thousands of targets before a battle begins, and thousands more every day once it is under way. The author argues that such tools let a military strike the right targets at the right time with less collateral damage.
What About AI in the Russia-Ukraine War?
AI begets AI. Automated tools such as unmanned FPV drones and robots have reduced the human risk for warring nations but increased their dependency on technology. That may look like a win-win, yet the ethical and legal concerns of using AI always follow close behind its benefits.
The Russia-Ukraine war is a testing lab for the tools of future combat. Drone attacks have proliferated to other conflicts and regions, notably among non-state actors such as the Houthi rebels and Hezbollah in their fight against Israel.
The use of AI in conflict, however, goes well beyond automated drones.
AI is primarily used to analyse geospatial intelligence by processing satellite images and to decode open-source intelligence such as videos and photos available online. Surveillance drone footage, on-ground human intelligence (HUMINT), satellite images and open-source data are combined and processed by AI tools to produce the assessments on which missions are based. This is data analytics on the battlefield.
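As a rough illustration of such multi-source fusion, here is a toy sketch that weights and combines confidence scores from different intelligence sources. The source names, weights and data are all invented; no real military system is described.

```python
# Toy multi-source intelligence fusion: combine per-source confidence
# scores into one ranking. Everything here is hypothetical.
SOURCE_WEIGHTS = {
    "satellite": 0.30,  # geospatial imagery analysis
    "drone": 0.30,      # surveillance footage
    "humint": 0.25,     # on-ground human reports
    "osint": 0.15,      # open-source photos and videos
}

def fuse(scores: dict) -> float:
    """Weighted average of the available sources' scores (0.0–1.0)."""
    total_w = sum(SOURCE_WEIGHTS[s] for s in scores)
    return sum(SOURCE_WEIGHTS[s] * v for s, v in scores.items()) / total_w

sightings = {
    "site_a": {"satellite": 0.9, "drone": 0.8, "osint": 0.4},
    "site_b": {"humint": 0.6, "osint": 0.7},
}
ranked = sorted(sightings, key=lambda k: fuse(sightings[k]), reverse=True)
print(ranked)  # → ['site_a', 'site_b']
```

The point is only the shape of the pipeline: heterogeneous feeds are normalised into scores, weighted, and reduced to a single ranked output that humans then act on.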
According to a report by National Defense magazine, Ukraine reportedly used Clearview AI, a software tool from a US-based firm, for facial recognition to identify dead soldiers and Russian assailants and combat misinformation. US firms like Primer have deployed tools to decode Russian encrypted messages delivered through radio.
Meanwhile, Ukraine is developing AI-enabled drones to counter radio jamming. Cheap FPV drones, widely used for months, have seen their hit rates drop because of radio-signal jamming, a form of electronic warfare at which Russia excels.
“We are already working with the concept that soon, there will be no connection on the front line” between pilot and UAV, Reuters reported, quoting Max Makarchuk, the AI lead for Brave1, a defence tech accelerator set up by the Ukrainian government.
Radio jamming severs the operator’s link to the munition (the drone) by forming an invisible protective layer around the target, causing the drone to miss or crash. Automating the final leg of the drone’s flight, so it can home in on the target without a link to its pilot, can restore its effectiveness.
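That fallback logic can be sketched as a simple state machine. This is a hypothetical illustration of the idea of autonomous terminal guidance, not Brave1’s actual design.

```python
# Toy control loop: the drone follows operator commands while the radio
# link is alive, and switches to onboard terminal guidance once jamming
# cuts the link. Entirely hypothetical; for illustration only.
from dataclasses import dataclass

@dataclass
class Drone:
    mode: str = "piloted"

    def step(self, link_alive: bool, target_locked: bool) -> str:
        if not link_alive and target_locked:
            # Link jammed near the target: finish the attack autonomously.
            self.mode = "autonomous_terminal"
        elif not link_alive:
            self.mode = "hold"  # loiter until link or target lock returns
        else:
            self.mode = "piloted"
        return self.mode

d = Drone()
print(d.step(link_alive=True, target_locked=False))  # → piloted
print(d.step(link_alive=False, target_locked=True))  # → autonomous_terminal
```

The design choice mirrors what Makarchuk describes: the pilot matters only until the last few hundred metres, so only the terminal phase needs to survive a dead link.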
Meanwhile, Russia is focusing on developing AI systems to counter the West and fight Ukraine on the battlefield. On paper, Russia is far ahead of Ukraine in military prowess, yet its army has suffered huge losses on the battlefield.
Moscow’s focus areas include AI-enabled decision-making for command, control and communication; smarter weapons, which it calls the “intellectualization of weapons”; more unmanned aerial and ground vehicles; and AI-enabled guidance systems for missiles.
ZALA Aero Group, maker of the Russian kamikaze drone KUB-LA, claims it can select and engage targets using AI. The Lancet-3 loitering munition is highly autonomous: its sensors enable it to locate and destroy a target without human guidance, and it can even return to the operator if no target is found.
In May, a Russian S-350 Vityaz surface-to-air missile system reportedly shot down an aircraft in autonomous mode, in what was claimed as the first AI-enabled missile kill: the system detected, tracked and destroyed a Ukrainian air target without human assistance. The claim remains contested.
Heavy investment on both sides of the border reaffirms the central role of AI in war – and suggests future wars could be co-commanded by humans and machines.