
In the lead-up to the ongoing Gaza war, Israel quietly transformed its intelligence operations by integrating advanced artificial intelligence tools into its military strategy. This effort, referred to as an "AI factory," has drawn both attention and criticism, particularly for its application during the Gaza conflict. The resulting systems have become a defining feature of Israel's military approach, sparking debate about their ethical implications and their potential to reshape modern warfare.
Years before the October 7, 2023, Hamas attack that killed roughly 1,200 people, Israel's military and intelligence units began integrating AI into their operations. This transformation was driven by the need for faster decision-making and more precise targeting in one of the world's most volatile regions. The Israel Defense Forces (IDF) repurposed its intelligence division into a testing ground for AI technologies, allowing algorithms to process massive amounts of data, identify patterns, and suggest targets with unprecedented speed.
The system known internally as Habsora, Hebrew for "the Gospel," became one of the most significant tools in this AI arsenal. Designed to analyze vast datasets, it could pinpoint potential targets in real time with little or no input from human analysts. As the system evolved, however, questions arose about whether human oversight was sufficient to ensure ethical decision-making.
Following the devastating Hamas attack, Israel launched a military campaign targeting Gaza with relentless airstrikes. Initially, the IDF relied on its meticulously curated database, which included details of Hamas’s operational infrastructure, from tunnels to weapons storage sites and command centers. This database, built over years of intelligence gathering, allowed for targeted strikes in the early days of the war.
However, as the conflict continued, the IDF's "target bank" was rapidly depleted, and the intensity of the campaign demanded new targets at an accelerated pace. It was at this point that the Habsora system came fully into play. Leveraging advanced machine learning and data analytics, it generated hundreds of new targets within hours, allowing the IDF to sustain the campaign's momentum even as traditional methods of intelligence gathering struggled to keep pace.
Habsora's deployment underscores the growing reliance on AI to augment and, in some cases, replace human decision-making in warfare. The system could rapidly cross-reference data from a range of sources, including surveillance drones, signal intercepts, and ground reports, to identify potential threats. Yet, the absence of comprehensive human review has raised alarms about the accuracy and ethical implications of these decisions.
The use of AI-driven systems like Habsora has sparked significant debate within Israel’s military leadership. While proponents argue that these tools are essential for maintaining operational superiority, critics within the IDF have raised concerns about the potential for collateral damage and the dehumanization of conflict.
A central question is whether humans remain "in the loop" during targeting decisions. Traditionally, military strikes involve layers of review by intelligence analysts and commanders to minimize civilian harm. With AI generating targets at unprecedented speed, however, the time available for such reviews shrinks dramatically.
Moreover, algorithms are only as good as the data they are trained on. Errors in data collection or biases embedded in the algorithms could lead to misidentification of targets, increasing the risk of civilian casualties. These concerns have fueled debates about the appropriate role of AI in warfare and whether its use undermines the principles of proportionality and necessity in conflict.
Israel’s deployment of Habsora and other AI systems marks a turning point in the use of technology in military conflicts. While AI has long been used in surveillance and intelligence gathering, its application in generating real-time combat targets represents a significant escalation. This development could have far-reaching implications for global military strategies, as other nations observe and potentially adopt similar technologies.
Critics warn that the normalization of AI in warfare risks creating a dangerous precedent, where decisions of life and death are increasingly ceded to algorithms. International law and existing frameworks for armed conflict have yet to fully address these issues, leaving a regulatory gap that could lead to unintended consequences.
On the other hand, supporters of AI-driven military tools argue that these technologies can enhance precision and reduce civilian harm when used responsibly. By providing more accurate targeting data, systems like Habsora could limit the scope and duration of conflicts.
The use of AI in Israel’s Gaza campaign has reignited global discussions about the ethics and legality of autonomous systems in conflict. Human rights organizations have called for greater transparency in how these tools are used and stronger safeguards to prevent misuse. Meanwhile, military experts caution that the rapid pace of technological advancement requires equally swift updates to international regulations.
The United Nations has previously debated the regulation of lethal autonomous weapons systems, but progress has been slow due to differing views among member states. Israel’s example highlights the urgency of these discussions, as the capabilities of AI-driven systems continue to outpace existing legal and ethical frameworks.
Israel’s "AI factory" and the deployment of Habsora in Gaza represent a bold step into the future of warfare. While these technologies offer undeniable advantages in terms of speed and efficiency, they also raise profound questions about the role of human judgment in life-and-death decisions. As other nations observe the outcomes of Israel’s AI-driven strategy, the global conversation on the ethics and regulation of AI in warfare will only intensify.