4.1 Lavender under International Humanitarian Law
Lavender functions as an AI-DSS that identifies potential human targets for airstrikes. While not a weapon in itself, its integration into targeting processes makes it subject to IHL rules applicable to methods of warfare. Military AI-DSS are not specifically regulated in IHL treaties, but their use must comply with the existing IHL framework, and responsibility for ensuring that these systems are used in accordance with IHL rests with the state deploying and using them. Unlike autonomous weapons, AI-DSS like Lavender do not execute force directly but instead support human decision-makers. Nevertheless, they can profoundly influence targeting decisions and raise distinct legal concerns, particularly around the fundamental principles of distinction, proportionality and precautions in attack. These rules require the following: the operator of a military AI-DSS must be able to distinguish between military objectives and civilian objects, and between combatants and civilians. The duty to take precautions means that constant care must be taken to spare the civilian population, civilians and civilian objects. Legal assessments such as proportionality require context-sensitive, qualitative and value-laden reasoning, which DSS based on pattern recognition and correlation are ill-suited to provide.31 In addition, an attack must be suspended if it becomes apparent that the target is not a military objective, is subject to special protection, or that the attack may be expected to violate the rule of proportionality.

As Woodcock notes, DSS do not simply support legal reasoning; they actively shape it by filtering, organizing and weighing inputs, often in ways that obscure legal nuance. This means that even if a human operator remains formally in control, the system can influence how proportionality or distinction is understood in practice.32 Recent reports on Israel's Lavender system further show that human oversight may be reduced to a mere formality: commanders were reportedly approving AI-generated targets within seconds, without meaningful review. Such practices raise serious concerns under IHL, particularly in relation to the obligation to take all feasible precautions to verify that a target is a military objective before launching an attack.33 These concerns are amplified by the operational logic behind AI-DSS such as Lavender, which are designed not only for precision but also for speed. As noted in recent commentary, the 'need for speed' in targeting risks marginalizing legal checks and reducing the time available for meaningful human assessment of proportionality and precaution.34
The introduction of new methods of warfare to the battlefield is governed by specific legal requirements. In particular, new means and methods of warfare must be legally reviewed. Article 36 of Additional Protocol I of 1977 states, as regards legal reviews of new weapons, means and methods of warfare: 'In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or in all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.'
Article 36 does not specify how the legality of weapons, means and methods of warfare is to be determined. It does state, however, that all new weapons, means or methods of warfare should be assessed in the light of Additional Protocol I and all other applicable rules of international law. In 2023, the UK Parliament appointed a committee of experts to examine the use of AI in weapons. During these hearings, Article 36 of Additional Protocol I was discussed at length. Experts from this committee noted that only a small number of states are able to conduct Article 36 reviews, given the need for empirical testing.35 There is no oversight of this process by the international community as to how the reviews are conducted, and states are not obliged to share the results of their reviews.36 Effective testing of AI-enabled military systems may prove impossible. Stuart Russell, Professor of Computer Science at the University of California, stated the following:
'There are difficulties in testing for discrimination, but proportionality and necessity are things that are so context-specific and dependent on aspects of the overall military situation that it would be very difficult not only to design an AI system that could make that judgment reliably, but to develop any kind of testing in the lab for those conditions. I am not sure how you would design situations that are fully representative of the kinds of situations that could occur in the field, where there are difficult judgments to make.'37
As scholars have noted, the legal review of AI-enabled systems under Article 36 of Additional Protocol I requires reinterpretation and expansion. While Lavender is not an autonomous weapon system, it plays a critical role in the 'co-production of hostilities' by generating targeting recommendations, and must therefore be subject to the same scrutiny. Copeland, Liivoja, and Sanders argue that Article 36 reviews must go beyond static assessments and be applied iteratively across a system's lifecycle, especially where machine learning may affect performance over time. They also stress that the absence of transparency and of shared review standards among states significantly undermines the legitimacy of such reviews.38 In the case of Lavender, there is no indication that its evolving decision-making process, or the human-machine interaction it entails, has been subject to an adequate, ongoing legal assessment in line with these recommendations.
4.2 Blue Wolf under International Human Rights Law
The use of facial recognition technology and biometric surveillance engages provisions of international human rights law such as the right to privacy. Article 17 of the ICCPR provides that 'No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation', and that 'Everyone has the right to the protection of the law against such interference or attacks'. The extent to which the use of public information constitutes an interference with privacy within the meaning of Article 17 has been the subject of much debate.39 In several judgments, such as Peck v. the United Kingdom, the ECtHR has found that the right to private life extends to public information.40 The ECtHR's decision in Glukhin v. Russia underscores the need for legal clarity and proportionality in the use of facial recognition technologies. The Court held that employing such intrusive surveillance methods against peaceful protesters, without adequate legal safeguards, violates fundamental human rights. This precedent is pertinent when evaluating systems like Blue Wolf, where similar concerns about legality, necessity and proportionality arise.41

For an Occupying Power to use facial recognition technology and biometric surveillance, it must comply with the cumulative requirements of legality, legitimate aims, necessity and proportionality.42 Legality refers to the requirement that national laws 'must be sufficiently accessible, clear and precise so that an individual may look to the law and ascertain who is authorized to conduct data surveillance and under what circumstances'.43 An Occupying Power is generally required to respect the existing laws in the occupied territory. However, under Article 43 of the 1907 Hague Regulations and Article 64 of the Fourth Geneva Convention, it may enact new legislation where this is necessary to maintain public order and civil life or to fulfill its obligations under international law. This includes measures that may restrict rights such as privacy, but only within strict limits and for legitimate purposes, even where a new military order is issued.44

As for legitimate aims, Article 17 of the ICCPR does not specify when the right to privacy may be limited. Permissible grounds for limitation may be drawn from other articles of the ICCPR, such as Article 12 on freedom of movement. General Comment 27 of the Human Rights Committee, which addresses that article, states that limitation is possible 'to protect national security, public order, public health or morals and the rights and freedoms of others'. The Geneva Conventions also afford the Occupying Power the ability to take 'measures of control and security'. The Occupying Power could accordingly invoke its own legitimate security concerns as a legitimate aim.
However, there are reports that Israel is using its surveillance practices for political persecution.45 As regards necessity and proportionality, limitations must be 'necessary in a democratic society'.46 A balance must be struck between pressing social needs and the interference with the right to privacy.47 The limitations must be appropriate to achieve their function, and the least intrusive instrument that might achieve the desired result must be chosen.48 In the case of surveillance technologies, this means it is not enough that the measures are applied in order to find certain needles in a haystack.49 Mass surveillance has been assessed by several UN special procedures as a 'potentially disproportionate interference with the right to privacy'.50 The scanning of people's faces in public spaces, whether or not they are on a watchlist, may be considered 'indiscriminate mass surveillance'.51 Some deem LFR systems to be 'inherently disproportionate'.52
While this article primarily assesses Blue Wolf through the lens of IHRL, its use in the OPT may also raise concerns under the Fourth Geneva Convention. It is possible for an Occupying Power to use FRTs to protect the security of the local population. FRTs could also be employed as part of the 'measures of control and security' stipulated in Article 27 of the Fourth Geneva Convention. When used at checkpoints within occupied territories, they can help identify individuals who pose security risks, and in broader surveillance they can assist in identifying potential threats. However, it should be noted that the protection of dignity and humane treatment under Article 27 of the Fourth Geneva Convention could also encompass digital privacy rights.53 The pervasive use of FRTs in the OPT, particularly when applied to entire populations regardless of suspicion, risks violating this provision by subjecting protected persons to ongoing forms of public intimidation. According to the ICRC, the deployment of FRTs can lead to arrest, targeting and ill-treatment. While any tool can theoretically be misused, the concern with FRTs lies in their systematic potential to normalize and automate repression, amplifying existing patterns of abuse rather than merely reflecting isolated misuse.54