Googlers revolt over AI military tech contract, brainiacs boycott killer robots, and more
Roundup Here is a roundup of this week’s AI news beyond what we’ve already covered.
Get ready for ethical debates around autonomous weapons, a free online AI course, and a cracking video of a Russian drone.
Stop killer robots!
A large group of internationally renowned AI academics has signed an open letter threatening to boycott collaborations with a top South Korean university over its work on autonomous weapons.
The letter comes just before the United Nations is set to discuss killer robots at a meeting next week. It was led by Toby Walsh, a professor of AI at the University of New South Wales, and lists more than 50 names, including Geoffrey Hinton, Yoshua Bengio, Stuart Russell, Zoubin Ghahramani and Jürgen Schmidhuber.
It accuses KAIST, a top South Korean university focused on science and engineering, of partnering with Hanwha Systems, one of the country’s leading arms companies, to develop AI for military purposes.
“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons."
"We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control. We will, for example, not visit KAIST, host visitors from KAIST, or contribute to any research project involving KAIST,” the letter said.
It warned that autonomous weapons could be turned on innocent people by despots and terrorists unburdened by any ethical restraints.
“If developed, autonomous weapons will be the third revolution in warfare... This Pandora's box will be hard to close if it is opened. As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them. We urge KAIST to follow this path, and work instead on uses of AI to improve and not harm human lives.”
KAIST and Hanwha Systems reportedly opened the Research Center for the Convergence of National Defense and Artificial Intelligence in February. In a statement, however, KAIST president Sung-Chul Shin denied any plans to build killer robots, and said he was “saddened” by the threatened boycott.
Google employees order CEO to drop out of Project Maven
Here’s another letter, this time signed by thousands of Google employees. It urged CEO Sundar Pichai to scrap Google’s collaboration with the Pentagon on Project Maven, which uses AI to analyse drone footage.
The letter, first reported by the New York Times, calls for Google to “cancel [the] project immediately”, and to “draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
Last month, it was reported that Google is assisting the Pentagon in using its TensorFlow computer vision APIs to help track and identify objects in videos taken by drones.
It’s still unclear how involved Google is. But the deal does highlight two things: 1) the US military is increasingly keen to use AI and machine learning in warfare, and 2) it’s not very good at developing the technology on its own.
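For the curious, here’s a rough idea of what off-the-shelf object detection in TensorFlow looks like. To be clear, this is our own minimal sketch using a public pretrained model from TensorFlow Hub; the model handle, score threshold and output keys are assumptions based on TensorFlow’s published detection models, and have nothing to do with whatever Google has actually built for the Pentagon.

# A minimal, hypothetical sketch of frame-by-frame object detection with a
# pretrained TensorFlow Hub model -- NOT Google's actual Maven pipeline.
import tensorflow as tf
import tensorflow_hub as hub

# Assumed model handle: a public SSD MobileNet detector from TF Hub.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def detect_objects(frame, threshold=0.5):
    """Run the detector on one video frame (H x W x 3, uint8)."""
    batch = tf.expand_dims(tf.convert_to_tensor(frame, dtype=tf.uint8), 0)
    result = detector(batch)
    # Standard output keys for TF2 detection-zoo models.
    boxes = result["detection_boxes"][0].numpy()    # normalised [ymin, xmin, ymax, xmax]
    scores = result["detection_scores"][0].numpy()
    classes = result["detection_classes"][0].numpy().astype(int)
    # Keep only detections above the (assumed) confidence threshold.
    return [(c, s, b) for c, s, b in zip(classes, scores, boxes) if s > threshold]

In a video pipeline, you would run something like detect_objects over each decoded frame and then associate the surviving boxes from frame to frame to track objects over time.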
It has sparked an ethical debate amongst Googlers. “We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter read.
It also warns that continuing Project Maven will significantly tarnish Google’s brand and its ability to compete for talent, and that the moral responsibility of its technologies cannot be outsourced to third parties like the Pentagon.
International Committee of the Red Cross’ AI ethics report
At El Reg, we like to keep things jolly, so here is yet another reminder of how AI may be used to kill everyone. The International Committee of the Red Cross has published a report on the ethics of autonomous weapons, based on a United Nations meeting held last year.
The main message of the report is that humans must, ultimately, remain in control of weapon systems, and that countries need to agree limits on how autonomous future weapons can be.
Some have argued that the technology could reduce innocent civilian casualties, as strikes by weapons such as drones would be more precise. But the ICRC argue that “many take the view that decisions to kill, injure and destroy must not be delegated to machines”.
Ethical and legal considerations could require that humans retain the ability to intervene and deactivate such weapons. We should also think about the types of scenario in which they might be used, the types of target, and the operating environment.
The report also discusses the black-box nature of deep learning, warning that building autonomous systems on these kinds of algorithms makes them “inherently unpredictable”.
A spokesperson from the ICRC told The Register: “There is a likelihood that increasingly autonomous weapon systems could become less predictable, particularly in case of increased mobility, increased adaptability and/or increased interaction of multiple systems (as swarms)."
"The loss of predictability regarding the outcomes of using an autonomous weapon would raise serious legal and ethical concerns. Therefore, ICRC’s view is that governments must work urgently to establish internationally agreed limits on autonomy in weapon systems.”
Free AI course
Microsoft have released a free online course for software engineers interested in AI.
It’s made up of 10 courses covering an introduction to AI in Python, statistics and maths, ethical considerations, machine learning, deep learning, and reinforcement learning. They can be taken in any order, and each takes anywhere from eight to 16 hours to complete, so a decent level of commitment is required.
There is also a “final project”, in which Microsoft sets you one last problem to tackle before you’re given a grade and a certificate.
The program runs in three-month blocks: January to March, April to June, July to September, and October to December.
You can enroll here.
Musical AI chairs
There has been some reshuffling at Google’s headquarters. John Giannandrea, the senior VP of engineering who led the search and AI units, has gone to Apple.
Now Jeff Dean, who leads Google Brain, has stepped in as the company’s head of AI, while Ben Gomes, VP of search engineering, will take over the search unit.
Good job!
Russia’s first postal drone had a less than stellar first delivery.
Video shows a six-armed drone launching from a mat laid on the ground outside the post office. As it takes off, people crowd round and crane their necks, following its path into the air.
It manages to climb pretty high, and seems to be picking up speed, before it smashes into a brick wall.