Press reports circulating in recent days allege that Ukrainian forces are deploying artificial intelligence-equipped drones to attack Russian troops without human oversight, marking the first battlefield use of so-called ‘killer robots’.
Although the reports are sensationalised to a degree for wider public consumption, there may be a kernel of truth in the allegations – at least in the sense that, although the drones are designed specifically to attack vehicles such as tanks and armoured personnel carriers, the resulting explosions are almost certainly killing Russian soldiers without any direct command from a human operator. In one sense, this is a spurious claim, since any artillery shell or mortar bomb has similar effects even when ‘commanded’ by a human being: the intention of kinetic attack is to neutralise or destroy enemy forces – including personnel – and the question of direct or remote command is irrelevant to the intended effect. Nevertheless, the episode is re-igniting the often furious debate between proponents of ‘human-in-the-loop’ and ‘human-outside-the-loop’ operational solutions (HITL and HOTL respectively).
The drone in question – the Saker Scout quadcopter – has been in service in Ukraine since September and can deliver a 3kg warhead to targets at ranges of up to 12km, enabling it to attack even the heaviest armoured vehicles deployed by Russian forces. Manufactured by a Ukrainian company established shortly before the Russian invasion, the Saker was originally intended for a variety of civil applications such as crop surveillance and protection. Post-invasion, the company refocused its AI-assisted vision software on detecting, discriminating, identifying and engaging military targets. Integrated into the national Delta intelligence system, the Saker Scout has two autonomous operating modes: it can be instructed to reconnoitre an area and report its findings, or to identify and prosecute targets autonomously – use of the latter mode having been confirmed by company officials.
Given that both sides in the conflict are losing phenomenal numbers of drones (London’s Royal United Services Institute recently estimated combined losses of some 10,000 per month), it seems inevitable that the attraction of detecting, locating and engaging targets at machine speed will overcome the arguments for maintaining human supervision at all times. Delaying decision-making until all moral and humanitarian concerns have been addressed may, as a consequence, incur unavoidable and potentially catastrophic ‘friendly’ losses. In early October, Ukraine’s Minister for Digital Transformation, Mykhailo Fedorov, described the use of autonomous weapons as “logical and inevitable”. Arguments against such use, however, centre on the inherent unreliability of current software, which has a reputation for generating frequent ‘false positives’ – in other words, misidentifying potential targets.
Although a 2021 UN report claimed that autonomous weaponised drones had been used in Libya the previous year, that use was never proven. The claim that Ukraine’s is the first recorded combat deployment of AI-enabled weapons may, therefore, hold a degree of truth. The problem remains extant: how capable are such drones of identifying and validating targets – particularly in dense electronic warfare environments – before committing kinetic effect?
Perhaps only time will tell.
For more information: Ministry of Defence of Ukraine | Міноборони (mil.gov.ua)
(Image: the Saker Scout drone is being used in significant volume in Ukraine, where it has been in service since its approval for use in September. Credit: Ukraine MoD)
Tim Mahon is Publishing Director, Counter-UAS at Unmanned Airspace