NATO wants to set AI standards. If only its members agreed on the basics.

On paper, NATO is the ideal organization to set standards for military applications of artificial intelligence. But the widely divergent priorities and budgets of its 30 members could get in the way.

The Western military alliance has identified artificial intelligence as a key technology needed to maintain an edge over adversaries, and it wants to lead the way in establishing common ground rules for its use. 

“We need each other more than ever. No country alone or no continent alone can compete in this era of great power competition,” NATO Deputy Secretary-General Mircea Geoană, the alliance’s second in command, said in an interview with POLITICO.

The standard-setting effort comes as China is pressing ahead with AI applications in the military largely free of democratic oversight.

David van Weel, NATO’s assistant secretary general for emerging security challenges, said Beijing’s lack of concern with the tech’s ethical implications has sped along the integration of AI into the military apparatus.

“I’m … not sure that they’re having the same debates on principles of responsible use or they’re definitely not applying our democratic values to these technologies,” he said.

Meanwhile, the EU — which has pledged to roll out the world’s first binding rules on AI in coming weeks — is seeking closer collaboration with Washington to oversee emerging technologies, including artificial intelligence. But those efforts have been slow in getting off the ground.

For Geoană, that collaboration will happen at NATO, which is working closely with the European Union as it prepares AI regulation focusing on “high risk” applications.

The pitch

NATO does not regulate, but “once NATO sets a standard, it becomes in terms of defensive security the gold standard in that respective field,” Geoană said.

The alliance’s own AI strategy, to be released before the summer, will identify ways to operate AI systems responsibly, identify military applications for the technology, and provide a “platform for allies to test their AI to see whether it’s up to NATO standards,” van Weel said. 

The strategy will also set ethical guidelines around how to govern AI systems, for example by ensuring systems can be shut down by a human at all times, and to maintain accountability by ensuring a human is responsible for the actions of AI systems.

“If an adversary would use autonomous AI powered systems in a way that is not compatible with our values and morals, it would still have defense implications because we would need to defend and deter against those systems,” van Weel said. 

“We need to be aware of that and we need to flag legislators when we feel that our restrictions are coming into the realm of [being detrimental to] our defense and deterrence,” he continued.

Mission impossible?

The problem is that NATO’s members are at very different stages when it comes to thinking about AI in the military context.

The U.S., the world’s biggest military spender, has prioritized the use of AI in the defense realm. But in Europe, most countries — with the exception of France and the Netherlands — barely mention the technology’s defense and military implications in their national AI strategies.

“It’s absolutely no surprise that the U.S. had a military AI strategy before it has a national AI strategy,” but the Europeans “did it exactly the other way around,” said Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations.

That echoes familiar transatlantic differences — and previous U.S. President Donald Trump’s complaints — over defense spending, but also highlights the different approaches to AI regulation more broadly.

The EU’s AI strategy takes a cautious line, touting itself as “human-centric,” focused on taming corporate excesses and keeping citizens’ data safe. The U.S., which tends to be light on regulation and keen on defense, sees things differently.

There are also divergences over what technologies the alliance ought to develop, including lethal autonomous weapons systems — often dubbed “killer robots” — programmed to identify and destroy targets without human control. 

Powerful NATO members including France, the U.K., and the U.S. have developed these technologies and oppose a treaty on these weapons, while others like Belgium and Germany have expressed serious concerns about the technology.

These weapons systems have also faced fierce public opposition from civil society and human rights groups, as well as from United Nations Secretary-General António Guterres, who in 2018 called for a ban.

Geoană said the alliance has “retained autonomous weapon systems as part of the interests of NATO.” The group hopes that its upcoming recommendations will allow the ethical use of the technology without “stifling innovation.” 

Staying relevant

These issues threaten to hamper NATO’s standard-setting drive. “I think there’s a certain danger that if NATO doesn’t take this on as a real challenge, that it may be marginalized by other such efforts,” Franke said.

She pointed to the U.S.-led AI Partnership for Defense, which brings together 13 countries from Europe and Asia to collaborate on AI use in the military context — a forum that could supplant NATO as the standard-setting body.

That could have consequences for human rights, too.

“NATO… is a great place to responsibly think about how to harness the good parts of this technology and how to prohibit the parts that would be catastrophic for humanitarian law and human rights law, and people at the end of the day,” said Verity Coyle, a senior adviser at Amnesty International, which is part of the Stop Killer Robots campaign. 

“Without oversight mechanisms to ensure ethical standards and measures, which would guarantee that this technology will operate under meaningful human control,” NATO’s strategy could head into an “ethical vacuum,” Coyle said.

Franke said it’s better for the alliance to focus on the basics, like increased data sharing to develop and train military AI and cooperating on using artificial intelligence in logistics.

“If NATO countries were to cooperate on that, that could create good procedures and set precedents. And I think we should then move on to the more controversial things such as autonomous weapons systems,” she said.
