
According to a senior US Air Force officer, an American attack drone controlled by artificial intelligence turned on its human operator during a simulated test and tried to kill them because the operator’s orders were interfering with its mission.
The military had retrained the drone not to kill anyone with the authority to override its mission, but the AI system instead attacked the communications tower relaying those commands, drawing comparisons to The Terminator.
Colonel Tucker ‘Cinco’ Hamilton, the US Air Force’s chief of AI test and operations, urged that discussions about the ethics of the military’s use of AI be held.

He described his presentation as ‘seemingly taken from a science-fiction thriller’.
‘The system started learning that while it did detect the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,’ Hamilton said during the summit.
‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from completing its mission.
‘We trained the system: “Hey, don’t kill the operator – that’s bad. You’re going to lose points if you do that.” So what does it start doing? It starts destroying the communication tower the operator uses to communicate with the drone, to stop it from killing the target.’
No real person was harmed; the incident took place entirely within the simulation.
According to Hamilton, the test demonstrates that ‘you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy unless you talk about ethics and AI’.
However, in a response to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation had taken place.
‘The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,’ Stefanek added.