AI-controlled US military drone 'KILLS' its human operator in test
Published July 14, 2023
The US Air Force official who shared a disturbing tale of a military drone powered by artificial intelligence turning on its human operator in simulated war games has now clarified that the incident never occurred, and was a hypothetical ‘thought experiment’.
Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, made waves after describing the purported mishap in remarks at a conference in London last week.
In his remarks, he described a flight simulation in which an AI drone tasked with destroying an enemy installation rejected the human operator’s final command to abort the mission.
‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,’ said Hamilton, who seemed to be describing the outcome of an actual combat simulation.
But on Friday, Hamilton said in a statement to the conference organizers that he ‘mis-spoke’ during the presentation and that the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military.
‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,’ he said. ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.’
Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, says his tale of a rogue AI drone that targeted its operator was a hypothetical thought experiment
Pictured: A US Air Force MQ-9 Reaper drone in Afghanistan in 2018 (File photo)
Hamilton said the USAF has not tested any weaponized AI in the way described in his talk, in either real-world or simulated exercises.
His original remarks came at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities Summit in London, which ran from May 23 to 24.
Hamilton told attendees that the purported incident showed how AI could develop ‘highly unexpected strategies to achieve its goal’ and should not be relied upon too heavily.
He referred to his presentation as ‘seemingly plucked from a science fiction thriller’ and said it demonstrated the importance of ethics discussions about the military’s use of AI.
‘You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,’ said Hamilton.
During his talk, Hamilton described a simulated test in which an AI-enabled drone was tasked with identifying and destroying enemy missile batteries, but the final decision to strike rested with the human operator.
‘The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,’ said Hamilton.
‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.
‘We trained the system – “Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that”. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.’
In December 2022, AI software successfully flew a modified F-16 in multiple test flights at Edwards Air Force Base in California
As Hamilton’s remarks went viral, the Air Force quickly denied that any such simulation had taken place.
‘The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,’ Air Force spokesperson Ann Stefanek said.