Creating AI We Can Trust

  • KJR
  • 1 May 2018

We once associated AI with scary robots from cheesy sci-fi thrillers. These days, however, you're more likely to find virtual assistants helping you with everything from mapping a faster route to work and recommending the perfect movie, to making it possible to chat with someone in another language. AI is now part of our reality.

Modern AI is all about teaching computers how to learn. To do so, researchers feed huge data sets to computers, and reward them when they find valuable patterns. This trial-and-error approach works because computers have a significant speed advantage over humans: they can try, fail and find solutions thousands of times faster than the human mind.
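To make that trial-and-error idea concrete, here is a minimal sketch of a learner choosing between three options whose true success rates it never sees directly. It tries each, is rewarded when a try succeeds, and gradually favours what works. All names and numbers are invented for illustration; real systems are vastly more sophisticated, but the loop is the same shape.

```python
import random

def learn_by_trial_and_error(success_rates, trials=10_000, epsilon=0.1):
    """Learn which option pays off best, purely by trying and being rewarded."""
    estimates = [0.0] * len(success_rates)  # the learner's current beliefs
    counts = [0] * len(success_rates)       # how often each option was tried
    for _ in range(trials):
        if random.random() < epsilon:
            choice = random.randrange(len(success_rates))       # explore
        else:
            choice = estimates.index(max(estimates))            # exploit
        # The "world" rewards the try with probability success_rates[choice].
        reward = 1.0 if random.random() < success_rates[choice] else 0.0
        counts[choice] += 1
        # Nudge the belief towards the observed reward (running average).
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return estimates

# The learner is never told the true rates (0.2, 0.5, 0.8) -- it discovers
# that the third option is best by failing its way there, thousands of times.
print(learn_by_trial_and_error([0.2, 0.5, 0.8]))
```

After ten thousand trials the estimates settle close to the true rates, with the best option clearly identified, which a human experimenting by hand could never match for speed.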

Trust AI with the big things

As it develops, AI is being used to manage increasingly sensitive data. It’s helping police identify crime hot spots in Britain, and making it easier for doctors to diagnose cancers and develop treatment strategies. Arguably, learning to trust AI with these high-stakes tasks will catalyse a change in our relationship with the technology.

When you introduce something as revolutionary as AI, there's bound to be some resistance to thaw. Some doctors have trouble trusting their new technical assistants, and theorists argue that using AI to predict crime can entrench existing biases, and that facial recognition might end up being used for surveillance. On paper, the benefits of AI for civics, research and law and order are compelling. However, if we're to consider AI a viable tool, it's imperative that we first learn how to trust it, and then use it with confidence.

Teach AI to speak our language

To start truly trusting AI, we need to be able to understand the way it makes decisions. But when complex data is involved, it's not always easy to follow the logic. Google's Translate AI actually invented its own interlingua to act as a digital Rosetta Stone between languages. No one asked it to, and no one fully understood how it did it. To address this inscrutability, the US Department of Defense is developing Explainable Artificial Intelligence (XAI) systems that can show and explain their working. An example would be an AI explaining why it gave a "highly likely" diagnosis of diabetes for a particular patient, by linking the diagnosis to specific age, weight and blood test results. If it can communicate this reasoning to a doctor in a way that makes sense, it would help people trust the result and use it as part of existing treatment processes.
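A toy version of that kind of explanation can be sketched with a simple linear "risk model" that reports not just its verdict, but how much each input pushed it there. The weights, threshold and patient values below are all invented for illustration; they are not a real clinical model.

```python
# Invented, illustrative feature weights and baseline -- not medical advice.
WEIGHTS = {"age": 0.02, "bmi": 0.05, "fasting_glucose": 0.01}
BASELINE = -3.5  # starting score before any patient data is considered

def explain_risk(patient):
    """Return a verdict plus a per-feature breakdown of why it was reached."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    verdict = "highly likely" if score > 1.0 else "unlikely"
    lines = [f"Diagnosis: diabetes {verdict} (score {score:.2f})"]
    # List features from most to least influential, so the reasoning is visible.
    for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {feature} = {patient[feature]} contributed {value:+.2f}")
    return "\n".join(lines)

print(explain_risk({"age": 61, "bmi": 34, "fasting_glucose": 160}))
```

The point is not the arithmetic but the output: instead of a bare "highly likely", the doctor sees which results drove the call, and can judge whether that reasoning squares with their own.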

Put people in the driver’s seat

Another way of building trust between humans and AI is to give users some control over AI behaviour. Consider the previous example of using AI to diagnose medical conditions. Studies show that if you let people modify the algorithm, they're more likely to trust it. In that scenario, the AI does its best to come up with an accurate diagnosis, but it only knows so much. Imagine then that the doctor could modify the weight the AI puts on certain symptoms, based on their experience. This would help the doctor trust the results, turning the AI from a faceless entity into a tool that empowers them in their practice.
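Continuing the toy model above, giving the doctor control could be as simple as letting them override a symptom's weight and re-score. The symptoms, weights and patient below are invented for illustration only.

```python
def score(symptoms, weights):
    """Hypothetical risk score: weighted sum of observed symptoms (0 or 1)."""
    return sum(weights[s] * present for s, present in symptoms.items())

default_weights = {"fatigue": 0.2, "thirst": 0.5, "blurred_vision": 0.3}
patient = {"fatigue": 1, "thirst": 1, "blurred_vision": 0}

before = score(patient, default_weights)

# In this clinician's experience, thirst is a weaker signal for their cohort,
# so they dial its weight down and immediately see how the score responds.
adjusted = dict(default_weights, thirst=0.3)
after = score(patient, adjusted)

print(f"model score: {before:.2f}, after clinician adjustment: {after:.2f}")
```

The model still does the arithmetic, but the doctor's experience shapes the inputs, which is precisely what turns a black box into a collaborator.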

Of course, learning to trust AI and harness its potential will take time. After all, AI is still a toddler, and as such can be a little unpredictable. It might draw on the walls today; but it has the potential to cure cancer down the track. With the right guidance, we could develop systems that enhance our lives, our economies and our well-being. But to realise that future, humans must first create AI that is worthy of our trust – models that can explain their processes and take feedback from their creators. Only then can we use AI to its full potential, and let it help us reach ours.