In tech we trust?

As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.

Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, Dr Adrian Weller spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work. What concerns him now is what it means to trust machine learning systems.

“Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Dr Weller recalls. “Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”

But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. In the space of 36 minutes on 6 May, nearly one trillion dollars was wiped off US stock markets as an unusually large sell order produced an emergent co-ordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.

Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.

How much we trust the ‘black box’ of machine learning systems, both as individuals and society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Dr Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”

But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”

Even if we can make algorithms trustworthy and transparent, how can we ensure they do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.

When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that ‘black-sounding’ names were 25% more likely to result in the delivery of this kind of advertising.

Like Sweeney, Dr Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”
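To give a flavour of what automatic bias detection can mean in practice, the short sketch below computes one simple fairness measure, the gap in favourable-outcome rates between groups (sometimes called a demographic parity gap). It is an illustrative, hypothetical example with made-up data, not code from Dr Weller's research, and real auditing tools look at many more measures than this one.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in favourable-outcome rates between groups, plus the per-group rates."""
    rates = {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = favourable decision) for two made-up groups, A and B.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
grps = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap, rates = demographic_parity_gap(preds, grps)
print(rates)              # {'A': 0.6, 'B': 0.4}
print(f"gap = {gap:.2f}") # gap = 0.20 -- a large gap flags a disparity worth investigating
```

Detecting such a gap is only the first step; the techniques Dr Weller refers to go further and adjust models or their training data to reduce it.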

Transparency, reliability and trustworthiness are at the core of Dr Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, develop new ways to ensure that AI systems perform well in real-world settings, and examine whether empathy is possible – or desirable – in AI.

Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Dr Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.

Dr Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.”

And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”

This is an edited version of an article that first appeared in the University of Cambridge's Research Horizons magazine, issue 35.
