Sunday, 16 December 2018

It’s time to address artificial intelligence’s ethical problems


Whether it’s robots coming to take your job or AI being used in military drones, there is no shortage of horror stories about artificial intelligence. Yet for all the potential it has to do harm, AI might have just as much potential to be a force for good in the world.

Harnessing that power for good will require international cooperation and a completely new approach to tackling difficult ethical questions, the authors of an editorial published in the journal Science argue.

“From diagnosing cancer and understanding climate change to delivering risky and consuming jobs, AI is already showing its potential for good,” says Mariarosaria Taddeo, deputy director of the Digital Ethics Lab at Oxford University and one of the authors of the commentary. “The question is how can we harness this potential?”

One example of this potential is the AI from Google’s DeepMind, which made correct diagnoses 94.5 per cent of the time across 50 common eye problems in a trial with Moorfields Eye Hospital. Another is AI that is helping us understand how the brain works.

The potential for AI to do good is immense, says Taddeo. Technology using artificial intelligence will have the capability to tackle issues “from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality, and appalling living standards,” she says.

For example, AI has already been used to sift through hundreds of bird sounds to estimate when songbirds arrived at their Arctic breeding grounds. This kind of analysis will allow researchers to understand how migratory animals are responding to climate change. Another way we are learning about climate change is through images of coral. An AI trained by looking at hundreds of pictures of coral helped researchers to discover a new species this year, and the technique will be used to analyse coral’s resistance to ocean warming.
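
To make the birdsong example concrete, that kind of analysis comes down to turning a classifier’s detections into an arrival estimate. The sketch below is purely illustrative: the daily counts, the threshold and the function names are hypothetical, not the researchers’ actual pipeline.

    # Illustrative only: estimate a songbird "arrival date" from daily counts of
    # song detections produced by an audio classifier. Data and threshold are
    # made up, not the study's real values.
    from datetime import date, timedelta

    daily_song_counts = {
        date(2018, 5, 20) + timedelta(days=i): count
        for i, count in enumerate([0, 1, 0, 2, 14, 37, 52, 61])
    }

    def estimate_arrival(counts, threshold=10):
        """Return the first day whose detection count reaches the threshold."""
        for day in sorted(counts):
            if counts[day] >= threshold:
                return day
        return None

    print(estimate_arrival(daily_song_counts))  # 2018-05-24 in this toy data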

Yet AI is not without its problems. In order to ensure it can do good, we first have to understand the risks.

The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms. For example, an autonomous vehicle developed by researchers at the chip maker Nvidia went on the roads in 2016 without even its creators knowing exactly how it made its driving decisions.

There is also a question over who is responsible if they make a mistake. Take the example of an autonomous car that’s about to be involved in a crash. The car could be programmed to act in the safest way for the passenger, or it could be programmed to protect the people in the other vehicle. Whether or not the manufacturer or the owner makes that decision, who is responsible for the fate of people involved in the car crash? Earlier this year, a team of scientists designed a way to put the decision in the hands of the human passenger. The ‘ethical knob’ would switch a car’s setting from “full altruist” to “full egoist”, with the middle setting being impartial.
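
To illustrate how such a knob could work in principle, here is a minimal sketch in which a single setting between -1 (full egoist) and +1 (full altruist) weights harm to the passenger against harm to others when choosing a manoeuvre. All names and numbers are hypothetical; this is not the researchers’ actual design.

    # Hypothetical sketch of an "ethical knob". knob = -1 is full egoist,
    # 0 is impartial, +1 is full altruist. Harm values are invented.
    def expected_harm(manoeuvre, knob):
        """Weighted combination of harms; lower is better."""
        w_passenger = (1 - knob) / 2
        w_others = (1 + knob) / 2
        return (w_passenger * manoeuvre["passenger_harm"]
                + w_others * manoeuvre["others_harm"])

    def choose(manoeuvres, knob):
        """Pick the manoeuvre with the lowest weighted harm for this setting."""
        return min(manoeuvres, key=lambda m: expected_harm(m, knob))

    options = [
        {"name": "brake hard", "passenger_harm": 0.6, "others_harm": 0.1},
        {"name": "swerve", "passenger_harm": 0.2, "others_harm": 0.7},
    ]
    print(choose(options, knob=-1.0)["name"])  # full egoist chooses "swerve"
    print(choose(options, knob=1.0)["name"])   # full altruist chooses "brake hard"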

Another issue is the potential for AI to unfairly discriminate. One example of this, says Taddeo, was Compas, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections. According to Taddeo, the system was used to decide whether to grant people parole and ended up discriminating against African-American and Hispanic men. When a team of journalists studied 10,000 criminal defendants in Broward County, Florida, it turned out the system predicted that black defendants pose a higher risk of recidivism than they actually do in the real world, while predicting the opposite for white defendants.
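
The kind of disparity the journalists reported can be made concrete with a simple check: among defendants who did not go on to reoffend, how often did the tool still label each group high risk? The sketch below uses toy records, not the Broward County data, and the field layout is invented for illustration.

    # Illustrative only: false positive rate by group, i.e. the share of people
    # who did NOT reoffend but were still labelled high risk. Toy data, not the
    # real Broward County records.
    records = [
        # (group, labelled_high_risk, reoffended)
        ("black", True, False), ("black", True, True), ("black", False, False),
        ("white", False, True), ("white", True, True), ("white", False, False),
    ]

    def false_positive_rate(rows, group):
        """Share of a group's non-reoffenders who were labelled high risk."""
        non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
        if not non_reoffenders:
            return float("nan")
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders)

    for group in ("black", "white"):
        print(group, false_positive_rate(records, group))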

Meanwhile, there is the issue of big data collection. AI is being used to track whole cities in China, drawing on data collected from various sources. For AI to progress, the amount of data needed for it to be successful is only going to increase. This means there will be increasing chances for people’s data to be collected, stored and manipulated without their consent, or even their knowledge.

Taddeo says national and supranational laws and regulations, such as GDPR, will be crucial to establish boundaries and enforce principles. Yet ultimately, AI is going to be created and used around the world, potentially also in space, for example when hunting for exoplanets, so the ways we regulate it cannot be tied to boundaries on Earth.

There should be no universal regulator of artificial intelligence, she says. “AI will be implemented across a wide range of fields, from infrastructure-building and national defence to education, sport, and entertainment,” she says. So, a one-size-fits-all approach would not work. “We need to consider culturally-dependent and domain-dependent differences.” For example, in one culture it may be deemed acceptable to take a photograph of a person, but another culture may not allow photographs to be taken for religious reasons.

There are a few initiatives already working on understanding AI technology and its foreseeable impact. These include AI4People, the first global forum in Europe on the social impact of AI, the EU’s strategy for AI and the EU Declaration on Cooperation on Artificial Intelligence. The EU declaration was signed earlier this year, and those involved pledged to work together on both AI ethics and using AI for good purposes, including modernising Europe’s education and training systems.

Other initiatives include the Partnership on Artificial Intelligence to Benefit People and Society, of which both of the Science editorial’s authors are members. “We designed the Partnership on AI, in part, so that we can invest more attention and effort on harnessing AI to contribute to solutions for some of humanity’s most challenging problems, including making advances in health and wellbeing, transportation, education, and the sciences,” say Eric Horvitz and Mustafa Suleyman, the Partnership on AI’s founding co-chairs.

These initiatives are in their early stages, but more like them need to be created so that an informed debate can be had, says Taddeo. The most important thing is that we keep talking about it. “The debate on the governance of AI needs to involve scientists, academics, engineers, lawyers, policy-makers, politicians, civil society and business representatives,” says Taddeo. “We need to understand the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies.”

After all, we are only human, so the risk remains that we may misuse or underuse AI.

“In this respect, AI is not different from electricity or steam engines,” says Taddeo. “It is our responsibility to steer the use of AI in such a way to foster human flourishing and well-being and mitigate the risks that this technology brings about.”

Source: https://www.wired.co.uk/article/artificial-intelligence-ethical-framework
