In the future, major decisions taken in the theatre of war are likely to be made by machines.
The decision to shoot a missile or drop a bomb will be made electronically, eliminating the flawed human mind from any decision-making process. Systems based on artificial intelligence will perform many of the tasks that are currently the responsibility of humans.
These robot “brains” will gather and assess real-time data, scrutinising facts to decide the next course of action. This could ultimately lead to full weapons systems – even nuclear warheads – being controlled by an unemotional, independent box of electronic tricks.
Clearly, the ethical considerations are immense.
This week, officials from 13 US-allied countries met online to try to figure out how they could use AI and machine learning across their military and defence capabilities, but in such a way as to blunt the numerous potential downsides.
The meeting was organised by the US Joint Artificial Intelligence Centre (JAIC) on behalf of the defence department’s AI Centre of Excellence. The initiative aims to reconcile the rapid pace of technological change with a solid foundation of ethical rules and regulations.
It will also share knowledge and develop unified processes and data standards between its members. Strong relationships will be essential to its long-term success, something America’s allies understand too.
Not only is this initiative important for the safety and security of future generations, it is also politically significant today. Washington has clashed with many allies during the Trump presidency on numerous geopolitical issues – even raising trade barriers on Canada, Mexico, the European Union and Japan. But managing the future of AI in warfare is an easy banner on which to unite.
AI is concerned with the development of smart machines that can perform the complex tasks typically associated with human intelligence. AI can be used for information gathering, surveillance and reconnaissance – but could also be connected to live weapons systems. If machines are handed the ability to make decisions that could result in death and destruction, the global security implications are enormous.
This week’s summit was the first to try to manage this technological progress from an ethical point of view.
Alongside the US, participating countries included military delegations from Australia, Canada, Denmark, Estonia, Finland, France, Israel, Japan, Norway, the Republic of Korea, Sweden and the UK.
“We want it to be almost like a problem-solving forum,” Stephanie Culberson, director of international AI policy at the JAIC, noted. She said that the meetings were designed to be informal and collaborative – unlike the usual military-to-military engagements, which are highly formal.
The forum will also act as a conduit for sharing technical information – with consistent standards applied between these allies on the management of data – the lifeblood of machine learning and AI.
China was obviously not present at the ethical discussion.
AI is another frontier field where progress is being accelerated by the ideological clash between Washington and Beijing. Each country wants to attain global leadership in the technology that will drive the economies of tomorrow and the AI arms race is a major battle front.
China’s rapid success in this area is concerning. Beijing’s authoritarian government uses AI and citizens’ data in ways that are impossible in democratic countries because they would violate privacy and civil liberty laws.
Facial recognition technologies used for the surveillance and detention of Muslim ethnic minorities in its western Xinjiang province have also been a driver of its innovation in this area.