Rise of the killer robots: Experts reveal just how close we are to a Terminator-style takeover

It’s been exactly 40 years since The Terminator hit the big screen, shocking cinemagoers with its terrifying depiction of a post-apocalyptic future. 

In James Cameron’s epic sci-fi blockbuster, billions of people are killed when self-aware machines trigger a global nuclear war around the start of the 21st century. 

Arnold Schwarzenegger stars as the eponymous robotic assassin sent back in time from 2029 to 1984 to eliminate the threat of a human resistance. 

Famously, the Terminator, which looks just like an adult human, ‘absolutely will not stop … until you are dead’, as one character puts it. 

While this sounds like pure sci-fi, academic and industry figures – including Elon Musk – fear that humanity could indeed be annihilated by AI. 

But when exactly will this happen? And will humanity’s demise mirror the apocalypse depicted in the Hollywood film?

MailOnline spoke to experts to find out just how close we are to a Terminator-style takeover. 

In James Cameron’s epic sci-fi blockbuster – which arrived in US cinemas on Friday, October 26, 1984 – Arnold Schwarzenegger stars as the eponymous robotic assassin 

In the classic film, the Terminator’s objective is simple – to kill Sarah Connor, an LA resident who will give birth to John, the future leader of the human rebellion against the machines. 

The Terminator is equipped with weapons and an impenetrable metal exoskeleton, plus advanced vision and superhuman limbs that can crush or strangle us with ease.

Natalie Cramp, partner at data firm JMAN Group, said a real-life equivalent of the Terminator is possible, but thankfully it is unlikely to arrive within our lifetime.

‘Anything is possible in the future, but we are a long way from robotics getting to the level where Terminator-like machines have the capacity to overthrow humanity,’ she told MailOnline.

According to the expert, humanoid robots such as the Terminator are not the most likely direction for robotics and AI to take right now. 

Rather, the more urgent threat comes from machines already in common use, such as drones and autonomous cars. 

‘There are so many hurdles to making a robot like that effectively work – not least how you power it and coordinate movements,’ Cramp told MailOnline. 

‘The main problem is that it isn’t actually the most efficient form for a robot to take to be useful. 

The Terminator is equipped with weapons and an impenetrable metal exoskeleton, as well as massive superhuman limbs that can crush or strangle us with ease

‘If we’re speculating on what type of AI-devices could “go rogue” and harm us, it’s likely to be everyday objects and infrastructure – a self-driving car that malfunctions or a power grid that goes down.’ 

Mark Lee, a professor of artificial intelligence at the University of Birmingham, said a Terminator-style apocalypse would only happen if ‘any government is mad enough to hand over control of national defence to an AI’. 

‘Thankfully I don’t think there’s a nation mad enough to consider this,’ he told MailOnline. 

Professor Lee agreed that other kinds of AI, and the powerful algorithms behind them, are a more pressing concern. 

‘The immediate danger from AI for most people is the effect on society as we move to AI systems which make decisions on mundane things like job or mortgage applications,’ he told MailOnline. 

‘However, there is also considerable effort in military applications such as AI guided missile systems or drones. 

‘We need to be careful here but the worry is that even if the western world agrees an ethical framework, others in the world might not.’ 

The Terminator’s objective is simple – to kill Sarah Connor, an LA resident who will give birth to John, who will lead a rebellion against the machines

Dr Tom Watts, a researcher on American foreign policy and international security at Royal Holloway, University of London, said it’s ‘crucially important’ that human operators continue to exercise control over robots and AI.

‘The entire international community, from superpowers such as China and the US to smaller countries, needs to find the political will to cooperate – and to manage the ethical and legal challenges posed by the military applications of AI during this time of geopolitical upheaval,’ he writes in a new piece for The Conversation. 

‘How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator – even if we don’t see time-travelling cyborgs any time soon.’ 

In 1991, a hugely successful sequel – Terminator 2: Judgment Day – was released, depicting a ‘friendly’ reprogrammed version of the eponymous bot.

The film’s antagonist, the shape-shifting T-1000, can run at the speed of a car and, in one memorable scene, liquefies itself to walk through metal bars. 

Scarily, researchers in Hong Kong are working towards making this a reality, having designed a small prototype that can change between liquid and solid states. 

Overall, creating a walking, talking robot with lethal powers will be more of a challenge than designing the software system that acts as its brain. 

Since its release, The Terminator has been recognised as one of the greatest science fiction movies of all time. 

At the box office, it grossed more than 12 times its modest budget of US$6.4 million (about £4.9 million at today’s exchange rate). 

Dr Watts believes the film’s greatest legacy has been to ‘distort how we collectively think and speak about AI’, which today poses an ‘existential danger that often dominates public discussion’. 

Elon Musk is among the technology leaders who have helped keep a focus on the supposed existential risk of AI to humanity, often while referencing the film. 

A TIMELINE OF ELON MUSK’S COMMENTS ON AI

Musk has been a long-standing, and very vocal, critic of AI technology, repeatedly warning about the precautions humans should take 

Elon Musk is one of the most prominent names and faces in developing technologies. 

The billionaire entrepreneur heads up SpaceX, Tesla and The Boring Company. 

But while he is at the forefront of creating AI technologies, he is also acutely aware of their dangers. 

Here is a comprehensive timeline of all Musk’s premonitions, thoughts and warnings about AI, so far.   

August 2014 – ‘We need to be super careful with AI. Potentially more dangerous than nukes.’ 

October 2014 – ‘I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence.’

October 2014 – ‘With artificial intelligence we are summoning the demon.’ 

June 2016 – ‘The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet, or a house cat.’

July 2017 – ‘I think AI is something that is risky at the civilisation level, not merely at the individual risk level, and that’s why it really demands a lot of safety research.’ 

July 2017 – ‘I have exposure to the very most cutting-edge AI and I think people should be really concerned about it.’

July 2017 – ‘I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.’

August 2017 –  ‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.’

November 2017 – ‘Maybe there’s a five to 10 percent chance of success [of making AI safe].’

March 2018 – ‘AI is much more dangerous than nukes. So why do we have no regulatory oversight?’ 

April 2018 – ‘[AI is] a very important subject. It’s going to affect our lives in ways we can’t even imagine right now.’

April 2018 – ‘[We could create] an immortal dictator from which we would never escape.’ 

November 2018 – ‘Maybe AI will make me follow it, laugh like a demon & say who’s the pet now.’

September 2019 – ‘If advanced AI (beyond basic bots) hasn’t been applied to manipulate social media, it won’t be long before it is.’

February 2020 – ‘At Tesla, using AI to solve self-driving isn’t just icing on the cake, it’s the cake.’

July 2020 – ‘We’re headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.’ 

April 2021 – ‘A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work.’

February 2022 – ‘We have to solve a huge part of AI just to make cars drive themselves.’ 

December 2022 – ‘The danger of training AI to be woke – in other words, lie – is deadly.’ 
