This is the most ethically difficult deployment of AI discussed on this site. It is also the one moving fastest. The two ongoing wars in 2026, Ukraine and the continuing aftermath in Gaza and Lebanon, have between them done more to reshape the practical use of AI in combat than the previous twenty years of academic and industry discussion. What follows is a description of what is actually happening, what is documented, and what the public arguments are about. The page does not take a position on whether autonomous weapons should exist; it does take the position that a guide to AI in 2026 cannot be honest if it leaves this out.

Computer vision for surveillance and targeting

The single largest use of AI in modern military operations is computer vision applied to surveillance imagery. Every major military runs sensor platforms — satellites, drones, aircraft, ground vehicles — that produce vastly more imagery and video than human analysts can review. The standard response since the early 2010s has been to use convolutional neural networks to do automated object detection, classification and tracking on these feeds: identify vehicles, count them, follow them, flag changes in patterns of life.
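
To make that pipeline concrete, the sketch below runs an off-the-shelf, openly documented detector (torchvision's COCO-pretrained Faster R-CNN) over a single frame. It is a civilian, tutorial-grade illustration of the generic technique, not a description of any military system; the file name and confidence threshold are placeholder assumptions.

```python
# Minimal sketch: generic object detection on one frame with a pretrained
# model. Illustrative only; the image path and threshold are placeholders.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained
model.eval()

# read_image returns a uint8 [C, H, W] tensor; the model expects floats in [0, 1]
frame = convert_image_dtype(read_image("frame_0001.jpg"), torch.float)

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections; in a full pipeline these boxes would feed a
# tracker that links detections across frames into per-object tracks.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.7:
        print(int(label), [round(v, 1) for v in box.tolist()], round(float(score), 2))
```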

The US Department of Defense's Project Maven, started in 2017, was the foundational deployment. It used computer vision to analyse drone footage. Google was contracted as a technical partner, and Google employees revolted in 2018, leading the company to let the contract lapse rather than renew it. Maven did not stop; it moved to other contractors and grew. The pipeline has since expanded to incorporate full-motion video, signals intelligence and geospatial data, and increasingly uses LLMs to synthesise findings into intelligence briefings. Palantir is the most visible commercial vendor in this space, with deployments across the US, UK, Israel and several European militaries.

The Lavender system — what we know

The most-discussed military AI targeting system of the current period is the Israel Defense Forces' (IDF) Lavender. The reporting on Lavender comes mainly from a 2024 investigation by +972 Magazine and Local Call, with subsequent corroboration and expansion in The Guardian, The New York Times and academic analyses through 2025-2026.

The publicly reported description: Lavender is a machine-learning system that scores Palestinian men in Gaza on the likelihood of being Hamas militants, based on a wide range of inputs including communications metadata, group memberships, family relations and location patterns. Israeli intelligence officers cited in the original reporting described an error rate of around 10%, meaning roughly one in ten people flagged was misidentified, but said this was treated as acceptable. In the early weeks of the post-October-2023 Gaza campaign, Lavender reportedly generated a list of around 37,000 targets; at the reported error rate, that would imply several thousand misidentified people. The reported policy permitted strikes on identified junior operatives in their homes, with a companion tracking system (reportedly nicknamed "Where's Daddy?") used to signal when a target had entered the family residence, and with civilian casualty thresholds reported at 15-20 per junior target and over 100 for senior commanders.

Some of these specifics are disputed. The IDF has said Lavender is "not a system" but a "database" used as one input among many. Independent verification of the technical details and the precise role of human review is, by the nature of the situation, limited. What is clearly established is that the IDF has integrated AI-based target identification into its targeting process, and that the resulting operational tempo has been faster and more lethal than in comparable previous campaigns. The proportion of civilians among casualties in Gaza in 2023-2025 was extraordinarily high by historical standards.

The international legal response is still developing. The 2025 UN General Assembly resolution on autonomous weapons systems passed 156-3 (Russia, Israel, US opposed; China abstained), instructing the Secretary-General to convene work toward a treaty. The international humanitarian law framework — distinction, proportionality, precaution — is not changed by the technology, but its application becomes harder when the speed of decisions exceeds the human cognitive capacity to verify them.

The drone war in Ukraine

Ukraine has become the world's foremost laboratory for drone-based warfare. The two sides operate millions of drones in total — small first-person-view (FPV) racing drones converted into kamikaze munitions, larger reconnaissance drones, and increasingly autonomous platforms. Ukraine's Deputy Defence Minister stated in 2025 that fully autonomous weapons are not yet operational but partial autonomy has been deployed in some systems.

The autonomy in question is mostly terminal-phase guidance. A drone is launched and flown manually toward a target area; in the last few seconds before impact, an onboard model takes over to lock onto and hit the specific target. This sidesteps the increasingly effective Russian electronic-warfare jamming that disrupts the radio link between operator and drone. The military logic is compelling. The ethical line being crossed is real: a weapon that selects its target without a human in the immediate loop, even for a few seconds, is meaningfully different from one that a human is steering all the way.

The scale of the deployment is enormous. Ukraine's drone production capacity reached around 4 million units a year in 2024, with output projected to rise further in 2025-2026. The war has changed the cost calculus of armoured warfare: a $500 drone can disable a $10 million tank. Other militaries are watching closely.

Intelligence analysis

Less glamorous than weapons, but probably more transformative in the long run, is the application of AI to intelligence analysis. The volume of data — signals intelligence, geospatial intelligence, open-source intelligence — is far beyond what human analysts can process. The AI overlay does pattern detection, anomaly flagging, translation, summarisation, and link analysis (who connects to whom, who moves with whom, what financial flows correlate with what activity).
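
As a toy illustration of what "link analysis" means in practice, the sketch below builds a contact graph from a handful of invented call records and ranks the nodes by centrality; the same idea, scaled up and fused across data sources, is what the analytic platforms described here automate.

```python
# Toy link analysis: build a contact graph from invented (caller, callee)
# records and rank people by how central they are to the network.
import networkx as nx

contacts = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "E"), ("E", "F"), ("C", "F"),
]

G = nx.Graph()
G.add_edges_from(contacts)

# Degree centrality: who has the most direct connections.
# Betweenness centrality: who sits on the paths between others, i.e.
# likely brokers or intermediaries.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in sorted(G.nodes, key=lambda n: betweenness[n], reverse=True):
    print(node, round(degree[node], 2), round(betweenness[node], 2))
```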

Palantir, Anduril, Shield AI and Helsing are the most prominent commercial vendors. The major intelligence agencies have their own internal capability that is generally believed to be more advanced than the commercial products. The current direction is integration of LLMs into the analyst's workflow — querying intelligence holdings in natural language, drafting assessments that an analyst then verifies and signs off on. Some of this has rolled out in production at Five Eyes agencies in the last two years.
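
A minimal sketch of that workflow shape follows, assuming plain TF-IDF retrieval from scikit-learn and a hypothetical draft_with_llm placeholder rather than any real agency system or model API. The point is the structure: retrieve, draft, and hold everything for human sign-off.

```python
# Sketch of the analyst-in-the-loop pattern: retrieve relevant documents,
# have a model draft an assessment, and gate release on human review.
# draft_with_llm() is a hypothetical placeholder, not a real API; the
# reports below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Report 14: unusual vehicle movement near the northern depot.",
    "Report 22: routine resupply convoy observed on the coastal road.",
    "Report 31: increased radio traffic from the northern depot area.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    matrix = TfidfVectorizer().fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def draft_with_llm(query: str, context: list[str]) -> str:
    # Placeholder: a deployed system would call a language model here.
    return f"DRAFT (unverified): {query} | {len(context)} sources cited"

query = "What is happening at the northern depot?"
draft = draft_with_llm(query, retrieve(query, documents))

# The design point that matters: nothing is released without a named
# analyst reviewing the draft against the underlying sources.
analyst_approved = False
print(draft if analyst_approved else "Held for analyst review.")
```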

Cyber operations

Cyber is the most rapidly evolving AI use in the defence space and the one that affects civilian readers most directly. The attacker side has been using ML for some time: writing more targeted phishing, generating polymorphic malware, and finding software vulnerabilities by automated fuzzing. The defender side has used ML for intrusion detection and anomaly detection for over a decade.
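
To make the defensive side concrete, here is a minimal, invented-data sketch of the anomaly-detection pattern: fit an unsupervised model on routine activity and flag events that deviate from it. The features, figures and contamination setting are illustrative assumptions, not any vendor's product.

```python
# Minimal anomaly detection for login/network events with an unsupervised
# model. All features and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, megabytes_transferred, distinct_hosts_contacted]
routine_activity = np.array([
    [9, 120, 3], [10, 80, 2], [11, 200, 4], [14, 150, 3], [16, 90, 2],
    [9, 110, 3], [13, 170, 5], [15, 130, 2], [10, 95, 3], [11, 140, 4],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(routine_activity)

# A 3 a.m. session moving a lot of data to many hosts should stand out.
new_events = np.array([
    [10, 100, 3],   # looks routine
    [3, 4000, 40],  # looks anomalous
])
print(detector.predict(new_events))  # 1 = normal, -1 = flagged
```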

Generative AI has changed the equation in two ways. First, the barriers to producing convincing phishing emails in any language have collapsed. Second, AI-assisted vulnerability discovery has compressed the time between a vulnerability being published and being exploited. The DARPA AI Cyber Challenge in 2024 demonstrated that AI systems can autonomously find and patch software vulnerabilities; offensive variants of the same capability exist.

The defensive payoff is uneven. Microsoft, CrowdStrike, Palo Alto Networks and others have built generative-AI tools into their security products that help analysts triage alerts and write incident reports. These genuinely speed up routine work. Whether they net out as a defensive advantage given that attackers have access to the same technology is an open question.

The Australian context — AUKUS and the AI Safeguard

Australia's defence AI work is shaped by AUKUS, the trilateral security pact with the US and UK. AUKUS Pillar 2 explicitly covers advanced capabilities including AI, autonomy, quantum, hypersonics and undersea systems. The 2023 Defence Strategic Review and the subsequent National Defence Strategy committed Australia to a substantial uplift in defence AI capability, with the Defence Science and Technology Group and the Advanced Strategic Capabilities Accelerator (ASCA) as the main organisational drivers.

On the policy side, the 2024 Defence AI Ethical Framework and the broader Voluntary AI Safety Standard published by the Department of Industry in 2024 set out the formal principles. Australia has also endorsed the Political Declaration on Responsible Military Use of AI and Autonomy. Whether these frameworks are operationally meaningful or principally cosmetic is the perennial question of military ethics policy; the answer probably depends on the specific weapons system you ask about.

The honest summary

The defence AI debate has a familiar shape: the technology is here, it works unevenly, it is being deployed faster than the regulatory and ethical conversation can keep pace, and the public has limited visibility into what is actually being done in their name. The unique feature of this domain is that the cost of getting it wrong is measured in human lives, and that the deployments happen in conditions where independent verification is difficult or impossible.

For a guide aimed at readers wanting to understand AI in their daily lives, the takeaway is two-fold. First: the AI you use casually at home is built on the same technical substrate as the AI now used in war. Second: the political conversation about AI safety should not be limited to existential risk from future superintelligent systems. There is a present-tense AI safety problem in the systems that are already deployed, and the people most exposed to that problem are not in San Francisco.