This is a guest blog from the start-up NGO FullAI, which works to secure human decisions over machine decisions
For decades, apocalyptic movies featuring killer machines defined the public’s view of artificial intelligence (AI). Now that AI is entering our daily lives through voice-controlled smartphones and self-driving cars, many people fear machines will someday take over. Meanwhile, we forget that AI is already deciding who receives healthcare and which offenders go to prison.
The Netherlands-based NGO FullAI works to ensure real-life artificial intelligence is developed and used in a responsible way. At FullAI we observe that alarmist media framing around the existential threat of super-intelligent machines diverts attention from the effects artificial intelligence already has on society and individuals.
FullAI was founded in February 2016. We are based in Amsterdam and target global policy frameworks and technology companies’ policies on artificial intelligence. FullAI recognises the potential benefits of artificial intelligence: AI is already better than humans at diagnosing specific types of cancer.
Artificial intelligence is helping doctors understand which patients are at risk of complications. GPS systems rely on AI to bring us home. Internet search engines – who can live without them? – rely heavily on AI. At the same time, we see that software developers and policy makers are unable to answer basic ethical questions about AI.
Automated decisions on parole
In the United States, for example, artificial intelligence is being used to make life-changing decisions on when prisoners should be given parole. Research shows that the AI used is racially biased: it consistently mislabels black defendants as high risk and white defendants as low risk. The researchers also found the system was wrong in 80 percent of cases when predicting who would go on to commit a violent act in the future.
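To make this kind of bias concrete, here is a minimal sketch – using invented toy numbers, not the actual figures from the research – of how one can compare a risk tool’s false positive rate between groups:

```python
# Illustrative only: invented toy data, not the real study's figures.
# Each record: (group, predicted_high_risk, reoffended)
records = [
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, False),
    ("white", False, True),  ("white", False, False),
    ("white", True,  True),  ("white", False, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders wrongly labelled high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
```

When the false positive rate for one group is far higher than for another, the tool is systematically overestimating the risk posed by that group – exactly the pattern the researchers reported.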
As AIs are rolled out to assess everything from credit ratings to suitability for jobs, the risk that they get it wrong – whether through bias or through hard commercial interests – is very real. Decision-making processes assisted by artificial intelligence that affect human lives need to be opened up to public scrutiny.
On our roads, driverless cars powered by artificial intelligence are already making life-and-death decisions. Self-driving cars could be a favourable development, as in most circumstances they make fewer mistakes than human drivers do. Autonomous cars do, however, pose ethical questions.
Self-driving cars
Driverless cars are bound to find themselves in situations where an accident is unavoidable and they must, for instance, decide whether to run over a pedestrian or spare the life of their passenger. At the moment, carmakers are not disclosing any information whatsoever on how driverless cars are programmed to react in such situations.
FullAI pressures carmakers to answer these types of ethical questions before autonomous cars become widespread. Should these cars be programmed to swerve to avoid hitting a child running across the road, even if that puts passengers or other traffic at risk? At FullAI we feel government policies should be in place that prohibit driverless cars from valuing the avoidance of damage to the car itself above human safety.
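As a thought experiment – this is our own illustrative sketch, not any carmaker’s actual code – the kind of rule we are calling for could be expressed as a strict ordering in which harm to people always outweighs damage to the vehicle:

```python
# Hypothetical sketch of the policy we argue for: expected human harm
# must strictly dominate vehicle damage, whatever the monetary cost.
def rank_manoeuvre(expected_human_injuries, expected_vehicle_damage):
    # Lexicographic ordering: injuries are compared first; only if
    # those are equal does vehicle damage break the tie.
    return (expected_human_injuries, expected_vehicle_damage)

# The manoeuvre that risks the car, not the child, must always win:
options = {
    "brake_and_swerve": rank_manoeuvre(0, 10_000),  # wrecks the car
    "stay_on_course":   rank_manoeuvre(1, 0),       # hits the child
}
print(min(options, key=options.get))  # -> brake_and_swerve
```

Under such a rule, no amount of property damage could ever justify choosing a manoeuvre that injures a person.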
Google, one of the most powerful players in artificial intelligence, shows why this kind of NGO pressure is needed. The technology giant agreed to set up an ethics and safety board when it acquired AI company DeepMind three years ago, but has so far failed to deliver. The board was meant to ensure the AI technology would not be abused.
DeepMind is a prominent AI company. Its AI system AlphaGo hit the headlines last year when it beat top player Lee Sedol at the ancient Chinese game of Go, considered one of the most complex games in the world.
Ethics board needed
One of the acquisition conditions set by DeepMind’s founders was that Google would create an AI ethics board, and the board is needed for more than one reason. Earlier this month DeepMind revealed that AlphaGo had been secretly taking on more of the world’s best Go players – and beating them. This kind of secrecy does not build trust in artificial intelligence.
Apart from pressuring technology companies over ethical AI, FullAI also contributes to standards in this field. One example is the AI framework currently being developed by IEEE, the organisation behind many global standards for products and services. In the first version of this framework, IEEE states that technology companies must commit to securing the safety of AI and to avoiding the pitfalls of embedded algorithmic bias.
AI technology is growing, and it is gaining strength fast. Global investment in artificial intelligence is mushrooming: according to Forrester Research, global investment in AI will grow by more than 300% in 2017 compared with 2016. On top of that, machines are getting better and better at improving themselves. While recognising the potential benefits for humans, we must also work to ensure AI does not limit human self-determination in the near future.