
As AI continues its evolution, penetrating different industries and corners of our life, an inevitable – yet oft-evaded – question finds itself cropping up: does AI have a place in war? With opinions split and legislation hazy, we might find ourselves faced with the elephant in the room sooner than we’re comfortable with.

 

Throughout history, innovations made at home and on the battlefield have found themselves entangled, intrinsically linked to one another. Often, technology developed for military gain is disseminated amongst the masses in more harmless forms – such as GPS, drones and even microwaves. Sometimes, the journey runs in the opposite direction, with civilian technology finding its way into military hands.

In either case, the benefits often outweigh the costs, with technologies reshaping how we live our day-to-day lives. But as technology develops, so too does the potential use for it in a warzone – and when it comes to AI, there’s the possibility of a dark future ahead.

 

A Dangerous Foe on Screen

Cast your mind back to the action films exploding onto cinema screens in the latter half of the twentieth century. As consumers were discovering the joys of vacuum cleaners, dishwashers and microwave ovens within the home, writers and moviemakers were dreaming of the potential that future technology could have.

Starting with HAL in Kubrick’s 2001: A Space Odyssey and moving onto the likes of The Terminator, Hollywood’s depictions of AI on the battlefield – and even in real life – have all followed similar patterns: they’re calculating, cold and unfeeling, free of the trappings of humanity. The reality is a different story.

AI in its current form is transforming industries, but there’s no threat of it developing consciousness and overthrowing mankind just yet. There are, however, some concerns around AI’s place in war that aren’t too far from the science fiction blockbusters of not too long ago.

 

Fighting Twenty-First Century Wars

If there’s one thing Hollywood’s various depictions of AI in war have in common, it’s that they most often hail from the US. The real world isn’t far behind, either. In recent weeks, the Pentagon has openly pledged the military’s largest investment in AI weaponry to date, at a reported $2 billion over five years.

In the UK, meanwhile, sentiment is more mixed. In May this year, the Ministry of Defence published a report detailing concerns around AI teaching itself about war through video games, and the devastating potential for cyber-attacks in that scenario.

Across the world, concern is being raised around AI’s place on the battlefield, with some experts pushing the UN to regulate the development and deployment of autonomous weapons. As Toby Walsh, professor of artificial intelligence at the University of New South Wales, was quoted as saying in the Guardian’s AI piece back in April, there’s potential for AI weapons to be “used by terrorists and rogue states against civilian populations. Unlike human soldiers, they will follow any orders, however evil.”

 

Rising Tide

But despite the protestations – clearly people understand that there’s something amiss about using unfeeling artificial intelligence to fight their wars for them – countries are spending billions developing AI-based weapons.

Over here, a flagship AI lab was unveiled earlier in the year by the Defence Secretary, as a means of keeping up with foreign competitors. Meanwhile, the US military has been deploying AI as a means of automating war for years, with outlets in 2016 pondering the technology’s future in the hands of President Trump.

The AI tide is clearly rising, but which way will it come crashing down? In favour of removing the human element from battle altogether, or, as backers of the ‘Campaign to Stop Killer Robots’ such as Elon Musk would prefer, a separation of AI and war?

 

Is There Good to Battlefield AI?

Before any serious decisions are made, it’s always a good idea to inspect both sides of the debate. Yes, there’s the potential for military AI to go wrong – whether through misuse, systems growing beyond our control, the moral cost, or AI itself falling victim to cyber-attacks – but there are also opportunities to scale back war and save lives.

The obvious benefit of removing the human element is, of course, lives saved. However, this only really comes into play when robots themselves are fighting wars, rather than when the AI sits on a computer system. With the likes of automated missile launches, the human element is still very much involved: AI-driven drone strikes still kill people, even if the technology is efficient at avoiding such a result where possible.

There’s also the potential for AI to remove mistakes, lower risks and speed up reaction times, all combining to make the world a safer place – even if that safety is somewhat akin to the tensions felt during the Cold War. If we all build up AI to defend our countries against threats from one another, then an uneasy peace is likely to form – until, ultimately, something comes along to replace AI.

Set against the sizeable investments currently being pumped into its development, military AI could also signify huge savings for governments. Military operations become more efficient, staffing is reduced in areas where it was previously saturated, and funding is freed up to be redistributed amongst other sectors that need it.

 

The Moral Element

It’s clear, then, that there are some benefits to artificial intelligence in a military environment. Despite this, there remains a strong reason for us all to agree to not use AI in war: the moral element.

With military bosses removed from the battlefield, decisions about what happens thousands of miles away – who lives and who dies – can be made in a second, without a thought for who else might be hurt as a result. They are effectively detached from the situation, and that’s a problem.

Aside from governments at war with one another, it’s worth going back to Professor Walsh’s opinion: there’s a lot of potential for AI to be misused by despots and terrorist groups.

So, what are we to do? Clearly, exploring artificial intelligence as a solution is too much of a lure to dissuade governments from going any further. The least we can all do is to push for regulation from the UN and, above all else, hope that nobody has any need to use AI in a way which compromises their humanity. There may not be a way to come back from that.

Kaleida creates bespoke software solutions for clients in a number of industries. To read about our past work, feel free to explore our case studies page, or get in touch for a free software review.


