This interview is taken from the Mind Forest Podcast, Episode 001. The full-length episode is available here: http://www.buzzsprout.com/258312

Dr Joanna Bryson is a researcher specialising in the study of natural and artificial intelligence. As well as being an associate professor at the University of Bath, she sits on the ethics board of ‘Mind Fire’ Global, a non-profit examining how the principles of human intelligence can be used to build better AI. She is also a member of the UK’s Engineering and Physical Sciences Research Council. One of Joanna’s focuses is building accountable and transparent AI. I travelled to the University of Oxford to meet Joanna and learn more about her work.

What is AI?

Artificial intelligence is intelligence that is artificial. What is intelligence? Intelligence is being able to respond to the environment or context, according to a definition going back to the 19th century. The better you can respond to threats, the more intelligent you are said to be. By that definition, plants and thermostats are intelligent, but some people find that uninteresting. However, if you are really trying to discuss artificial intelligence, it is important to realise there is an enormous range of intelligence.

AI is intelligence in an artefact, something a human has deliberately built. The fact that a human created it is central. People often think intelligence means ‘human-like’, but there is a lot more to ‘humanness’ than intelligence, such as art or emotions. Thinking that making something more intelligent makes it more human is like saying that making something taller than a human makes it more human. People have this mistaken idea that just making something more intelligent makes it more human-like.

Intelligence is one of our traits, a trait we share with a lot of animals and even plants. One of the things we find important in human intelligence is cognitive ability. Something is cognitive if it can change how it learns. A plant can’t change the way it senses and acts in the way a human can. Humans can categorise and react.

So you are saying intelligence is just one subset of ‘humanness’?

Exactly.

How has the study of the natural world informed your work around AI?

I am interested in understanding cognition. Why are some species using cognition as a strategy more than other species? Why is there diversity in nature? Why do people use cognitive strategies in different ways? A lot of my current work was originally based in questions on the origins of language.

You can think of language or culture as a public good. For example, “Why am I telling you this stuff?” The culture itself has provided enough context for us to be doing this interview. Our culture is organised in a way that we invest in each other communally, so we get something back to the extent that we create a sustainable society and live through the ‘winter’.

I use AI to better understand natural intelligence. When you look at something new in our society, for example our massively increased amount of communication, what are the consequences of that? Theoretical biology tells us that when you have better ways to communicate, you are more likely to invest at the group level rather than the individual level, because you are more likely to find cooperative strategies to create coalitions and public goods. That shows us that communication, in general, is giving us ways to create new allegiances very quickly.

Are there parallels between the intelligence we see in nature and AI?

Computation is computation. A paper I published recently demonstrated that AI trained through machine learning on human culture has the same biases that people do. If an AI gets all its data from society, it will have the same biases that society has. Emotions are a means by which animals coordinate behaviour. You can find the same neurotransmitters humans have for basic emotions in species that do not even have neurons. Emotions and drives like loneliness are not things we need to build into AI. If people think all intelligence is the same, then they think their vacuum cleaner is lonely when they leave for vacation.

In terms of phenomenological consciousness, the odds that we would ever build a machine that has that feeling are fairly minimal. Cows and rats feel much more like what it is to be human than any robot ever would, not because of intelligence but because nature shows you, via emotions, what is a good or a bad thing in your life.
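The bias finding Joanna refers to can be illustrated with a minimal sketch. Assuming pre-trained word vectors have been loaded into a plain Python dictionary mapping words to numpy arrays (the `vectors` variable, the `load_pretrained_embeddings` loader, and the word lists below are illustrative placeholders, not the data or method from the published paper), a simplified association score compares how close a target word sits to two sets of attribute words:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b, vectors):
    """Mean similarity to attribute set A minus mean similarity to set B.

    A positive score means the word sits closer to set A in the
    embedding space; a negative score means it sits closer to set B.
    """
    v = vectors[word]
    sim_a = np.mean([cosine(v, vectors[a]) for a in attrs_a])
    sim_b = np.mean([cosine(v, vectors[b]) for b in attrs_b])
    return sim_a - sim_b

# Illustrative attribute sets (placeholders, not the published test lists).
pleasant = ["joy", "love", "peace"]
unpleasant = ["agony", "terrible", "war"]

# `vectors` would normally come from pre-trained embeddings such as
# word2vec or GloVe; here it is only a stand-in for such a dictionary.
# vectors = load_pretrained_embeddings(...)  # hypothetical loader
# print(association("flower", pleasant, unpleasant, vectors))
# print(association("insect", pleasant, unpleasant, vectors))
```

In tests of this style, embeddings learned purely from ordinary text tend to reproduce the associations documented in human implicit-association studies, which is the sense in which an AI trained on society’s data inherits society’s biases.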

To what extent do ethical concerns surround intelligence that has no phenomenology, such as AI?

My catch line for 20 years has been: we are obliged to build robots we are not obliged to; that is, robots to which we owe no obligations. Part of designing robots is picking their motivations. I think it is unlikely that you would build something like human phenomenology into AI, but whether or not you could, I am saying it is a bad idea. If we did, then we would have all kinds of obligations to robots, and we are not even good at taking care of our obligations to other people, let alone robots. Why would you want a robot to have the ability to suffer? That is not an intrinsic part of what a tool needs to do.

Are you saying that ethical obligation begins when a robot has the ability to suffer?

There is a real question in the natural world about which kinds of suffering we care about. Most people eat cows, we do all kinds of things to rats, and kids suffer when you leave them at a daycare centre, but they would suffer more if they didn’t get an education. There is a process we go through when we decide which kinds of suffering we allow. However, I have a problem with thinking this weighing up of suffering is all there is to ethics.

I think what ethics is mostly for is keeping societies together: we just describe the set of rules by which we respect each other and create healthy collaborations, and we call that ethics.

If you could design one piece of legislation relating to AI going forward – what would it be?

I have a clear answer, except I am not sure we need new legislation for it. At the end of the day, AI is just software. The most important thing we need to do is maintain and communicate human accountability across AI systems, and there are known ways to do that! A lot of people are hacking around with machine learning and treating it like it’s a kid. AI is not something we ‘hand off’ to, as we do in procreation; it is something we designed.

If you cannot prove why you wrote the software system the way you wrote it, then you are accountable for what it does. If you can show that you followed best practice, then there is no liability on the creator; the user may be accountable instead. The main point is that humans are always accountable. We should never treat the AI itself as accountable. We don’t need new legislation; we need to communicate this understanding to the public. We need judges who understand it, and we may need to set up some new regulatory bodies with expertise in this. It’s really not that different to what people already do to monitor software systems, for example with driverless cars.

What about companies who use machine learning to mine and sell data?

I hope there are serious fines that bring these companies to justice. Some of these companies are so large that you could take them down to a tenth of their current value and they could still recover. You need to do something serious to make people think “alright – that’s an issue!”.

Image Credit: University of Bath

Charles Thefaut
Charles is a freelance writer and podcaster based in London. His areas of interest include psychology, philosophy, AI, and environmental issues. He has a BA in Philosophy and an MSc in Psychology. Charles has published articles for Resurgence & Ecologist Magazine and Sacred Hoop Magazine, amongst others. His new podcast, Mind Forest, launched recently and can be found here: http://www.buzzsprout.com/258312 or on Spotify.
