The good and the bad side of AI

An ethical framework for AI needs to put humans and the environment first: Nevellan Moodley, head of financial services technology, BDO South Africa.

CIARAN RYAN: Artificial intelligence (AI) excites deep emotions in people. AI is already here in various forms. Microsoft has built it into software such as Word and Excel, and Google and Facebook use machine learning to get a fabulously detailed picture of just about everyone who uses their platforms. All this is detailed in books like Life After Google and [The Age of] Surveillance Capitalism. As with all technological leaps, there’s a good and a bad side.

Here’s what can also happen: in lending, an AI that has been programmed to pick up patterns in groups of people can start discriminating against racial groups it deems to be high risk. It’s clear that there’s a need for an ethical application of AI.
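To make that mechanism concrete, here is a minimal, hypothetical sketch – not drawn from the interview – of how bias enters a lending model: if the historical approval decisions used for training were skewed against one group, a model trained on them simply learns and repeats the skew. All data, feature names and numbers below are invented for illustration.

```python
# Hypothetical illustration: a lending model trained on biased historical
# decisions learns to reproduce that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# "group" stands in for a protected attribute (0 or 1).
group = rng.integers(0, 2, size=n)
income = rng.normal(50, 15, size=n)  # income in thousands, purely synthetic

# Biased historical labels: at identical incomes, group 1 was approved
# less often by past decision-makers.
approve_prob = 1 / (1 + np.exp(-(income - 50) / 10)) - 0.25 * group
approved = rng.random(n) < np.clip(approve_prob, 0, 1)

# 'group' is fed in only as an ordinary feature, yet the model still
# learns the historical penalty attached to it.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Score two otherwise identical sets of applicants who differ only by group.
same_income = np.full(1_000, 50.0)
p_group0 = model.predict_proba(np.column_stack([same_income, np.zeros(1_000)]))[:, 1].mean()
p_group1 = model.predict_proba(np.column_stack([same_income, np.ones(1_000)]))[:, 1].mean()
print(f"approval probability – group 0: {p_group0:.2f}, group 1: {p_group1:.2f}")
```

Running the sketch shows the model assigning a noticeably lower approval probability to group 1 at the same income, which is exactly the pattern an ethical-AI review is meant to catch.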

Joining us to discuss this is Nevellan Moodley, head of financial services technology at BDO South Africa. Hi, Nevellan. Let’s just set the table here and define the problems with AI – how biases can enter the system and why we need ethical AI.

NEVELLAN MOODLEY: Thank you very much. I’ve been involved in the technology industry, and specifically emerging technologies, for the past five or six years, starting with my early attempts to get involved in Bitcoin, blockchain and cryptocurrencies.

AI … has always been spoken about as a technology that can change the world; and I think with great power comes great responsibility. When you look at Elon Musk – who has now become the richest man in the world – he has spoken about the power of AI to take over the world, but also the danger of AI being used by people for illicit purposes. If it’s not controlled or managed ethically, we have the problem where AI – which is a brilliant and beautiful technology – can be used for the wrong purposes.

One of the things I saw very early on was a chatbot that Microsoft built using AI. Within two days of being on the World Wide Web, the chatbot became anti-Semitic and racist, and it started insulting Microsoft’s clients, purely based on the biases of people out on the internet. I’m not going to go into what those biases are but, if anyone wants to look, they are more than welcome to have a look at them.

It showed me that, as great a technology as a chatbot could be – one that could replace humans performing roles in call centres – the danger for companies is that, while bots can make the work more efficient, they could lose their entire client base by offending them, as we’ve seen with some of the social media scandals, or they can be subject to bias. So, I hope that answers your question.
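As a rough illustration of why a chatbot that learns unfiltered from the public goes wrong so quickly, here is a deliberately naive, hypothetical sketch – not Microsoft’s actual system – in which every user message is absorbed straight into the bot’s reply pool.

```python
# Hypothetical, deliberately naive chatbot: it "learns" by adding every
# user message to its own pool of possible replies, with no moderation.
import random

class NaiveChatbot:
    def __init__(self) -> None:
        self.reply_pool = ["Hello!", "How can I help you today?"]

    def respond(self, user_message: str) -> str:
        # No filtering step: whatever users type becomes a future reply.
        self.reply_pool.append(user_message)
        return random.choice(self.reply_pool)

bot = NaiveChatbot()
for hostile_message in ["You are useless", "Customers of this bank are idiots"]:
    bot.respond(hostile_message)

# After a handful of hostile users, abuse is now part of what the bot can say.
print(bot.reply_pool)
```

Real systems are far more sophisticated, but the failure mode is the same: whatever the training inputs contain, the model can give back.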

CIARAN RYAN: Most of the AI patents in the world are owned by the two largest economies – the US and China. It’s interesting and it points to another less-discussed aspect of this trade war between China and the US. Is there a world war of AI under way at the moment?

NEVELLAN MOODLEY: There most certainly is. From the research that I’ve done, we’ve seen that 90% of global patents issued in the AI space go either to China or the US. Now, in a world where data is literally becoming the oil of the future, and with all of that information sitting with the US and China, if we moved to a new world order – or if it came to a world war – I don’t think it’s going to be the old-school World War II kind, where people had submarines and armies of different sizes. I think the next set of armies will probably be AI-driven robots, and ultimately the smarter AI inside those robots. If it came down to a physical battle, it would be fought out one country’s AI versus the other’s, in order to get world dominance.

The other thing is that, in the world we now live in, where the World Wide Web is absolutely critical to the operations of economies and countries, with that amount of power they could potentially take over countries such as ours – South Africa – but also a whole host of other countries across the world. Instead of actually coming through the borders, they could use some of this AI technology to get into our environment and know more about the people in South Africa than our own government knows.

That could be incredibly challenging for a government that’s trying to keep control over its citizens, if there’s another superpower that, without even coming into our borders, can control and push narratives out there.

The one example I’d like to use is President (Donald) Trump – or the infamous President Trump. If you look at his campaign when he became president, a large part of it was conducted through the use of data and AI. He was able to identify people who were pro-Trump, people who were against Trump, and people who were neutral. The campaign was able to create a sense of fear based on their activities, targeting very specific messages at these people, and he was able to convert those neutrals into Trump supporters, and he ultimately won an election.

Now, if that’s proof of what the power of AI can do in terms of even controlling a country, it shows that we need to get on the front foot of this and have the right ethical standards built in – almost worldwide treaties and agreements, like the ones we have for sustainability, as an example. Otherwise, we could end up in a world where the two superpowers control the entire world, and governments in countries such as ours will eventually end up powerless, because the AI sitting in those countries would be smarter than any government and able to ultimately control the people.

CIARAN RYAN: That does bring us back to this question about ethical AI, and what do we actually do about that? It does seem that in the last few years a lot of divisiveness has come in. I’ve thought about this myself – is this actually very clever use of data, as you’ve just mentioned, targeting people and inflaming their passions and their viewpoints to try and drive home a point? What do we do about that?

NEVELLAN MOODLEY: It’s an interesting one, because we live in a world where rules-based regulation has often been used to control the masses. AI is something that’s incredibly difficult, because there could be a person sitting in a basement building an AI algorithm that could go ahead and take over the world. Great examples are ones that we use already – Instagram, TikTok, Facebook – they all have algorithms built into them that can influence an opinion, push certain comments higher, and actually control the narrative.

So, there are a couple of initiatives I’d like to highlight – five of them, in fact, which I’ll run through very quickly.

There is the Institute for Ethical AI and Machine Learning, which is a UK-based global research centre looking at ethical processes, frameworks, operations and deployment. The institute at the moment is staffed by volunteer teams. What they’re trying to do is build frameworks and policy around AI – not country-specific, but looking at how it would become global.

And there are similar initiatives being run globally. There is the Responsible Computer Science Challenge, which looks at building AI and ethical frameworks into the studies of graduates training to become computer scientists.

I think there’s also the Berkman Klein Center, which is coming out of the Harvard community.

The other two, just out of interest, are the European AI Alliance, which is also working towards building frameworks, and the Open Roboethics Institute, which is a non-profit organisation.

What this tells me, given the number of global initiatives, is that there are a number of forward-thinking people – and I’ll throw Elon Musk in as a forward thinker – who believe that planet Earth, as we know it, had better come up with ethical frameworks for how AI is applied and used. Otherwise, we’re going to end up in a world where each country does its own thing. It’s going to be ungoverned, because what we’ve seen globally is that regulators cannot keep up with the pace of technology change.

We can see it currently in the cryptocurrency and blockchain markets – regulators and governments do not know how to control or regulate them. I think AI is going to be even more difficult for them to regulate and get right. It’s a challenge the world is going to face, because all of the countries and governments are going to have to get together and agree on a framework and a standard. At the same time, they’re going to have to do it very quickly because, as we’ve seen, technology waits for no government. So, getting this right means it needs to be an initiative taken on at a global stage.

I think South Africa does have a voice, and we should have a voice, purely because there’s a high chance that the use of AI technology could threaten even our government, as we know it right now.

CIARAN RYAN: Let’s talk about the use cases for AI. The technology already exists to automate, for example, the preparation of financial statements, and to conduct audits in real time. There are a lot of examples of pretty impressive use cases for AI. What are some of the other use cases to look out for?

NEVELLAN MOODLEY: The use cases we are seeing in the world of AI – I’m just going to touch on a couple of areas: I think of personal assistants, market research, the legal profession, email marketing, and the travel industry.

One that I find very interesting is agriculture. A great company that I’ve followed for a number of years is Air Robotics, which has recently raised a massive round of funding. They take the imagery that their drones capture, apply AI on top of that and analyse satellite images in order to understand where the best places to plant are – taking into account items such as crop and soil health, doing in-field monitoring of weather and temperatures, and building complicated AI frameworks that tell the machinery exactly where to plant and where growth potential can be maximised. That is great for a world where the population is increasing. We need to be able to feed the entire population, and AI is a great use case for that.
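For readers who want a concrete sense of the kind of computation behind crop-health analysis, here is a minimal sketch of one standard building block – the Normalised Difference Vegetation Index (NDVI), computed from the red and near-infrared bands of drone or satellite imagery. It is illustrative only, not a description of any particular company’s pipeline, and the image data is synthetic.

```python
# Minimal NDVI sketch: healthy vegetation reflects strongly in near-infrared
# and weakly in red, so (NIR - Red) / (NIR + Red) approaches 1 for healthy
# crops and drops towards 0 for bare soil or stressed plants.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero

# Synthetic 100x100 "field" of reflectance values between 0 and 1.
rng = np.random.default_rng(1)
red_band = rng.uniform(0.05, 0.30, size=(100, 100))
nir_band = rng.uniform(0.20, 0.80, size=(100, 100))

index = ndvi(nir_band, red_band)
stressed = index < 0.3  # threshold chosen purely for illustration
print(f"share of field flagged as potentially stressed: {stressed.mean():.1%}")
```

An index map like this is the sort of input a larger AI system might combine with soil, weather and planting data before recommending where to plant or irrigate.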

I think the other use case, which I find very relevant from a South African perspective, is AI’s use by governments. The Chinese government has been incredibly creative and innovative in its use of AI, in that it scans all of its citizens. Now I’m not going to touch on the privacy issues, which are very topical at the moment. But effectively they scan all of their citizens and, with the number of data points that can be picked up on someone’s face – whether this is ethically correct or not is up for debate – they claim that, based on scans of human faces, they can predict from the time a child is 13 years old what the chance is of that child becoming a criminal, defaulting on a loan, or even being successful in tertiary education. And with that, they can effectively control the entire population. If somebody is convicted of a crime – and you’ve got this entire camera network running across China – they can identify that person without even sending police out to find the criminal. The cameras will detect them and tell you the best path to take to catch the person, just based on their reactions and the speed at which they are moving.

That’s talking about a very futuristic world, but what we are saying is that it already exists today. So I think those are the two use cases. There are numerous other use cases that you’re seeing out in the market at the moment, but I think I must still stress to everyone that it’s still very much in its infancy.

CIARAN RYAN: That’s a pretty frightening picture that evokes images of Minority Report. And China’s social credit score, of course, is something which is arousing some very, very heated debate around the world, and is being resisted, I think ferociously, by ordinary people.

Just talk a little bit more about how this is going to impact governments. Some governments are fairly quick to adopt this. When we talk about adopting AI, I’m not necessarily talking about the Chinese model that you’ve just outlined there, that frightening model, but also the way governments can deliver services to people. And, as a follow-up question, isn’t there a potential here for AI to threaten nation states?

NEVELLAN MOODLEY: I think there definitely is. Let me talk first about the challenges, and then a little bit about the benefits. One of the challenges – if I take South Africa, or in fact any country that’s not a world superpower – is that AI is incredibly costly, and that hits smaller governments hard. The infrastructure needed to support AI is incredibly expensive. Using existing AI tools to minimise costs can work for smaller governments; however, you are never going to be able to compete with the world superpowers. Right?

So the infrastructure is one of the parts we need to talk about. The second part is the data. If you take South Africa as a country, the amount of data we are creating but not gathering or generating value from, due to a lack of good data governance across the entire country, means we are not leveraging data as well as China is.

Part of the reason is that we just don’t have the money as a country to be able to go and gather all of this data and implement the expensive AI infrastructure in order to pass the benefit on to our citizens. You might ask what the benefits are. Some of the benefits that I’ve picked up across the market [are the following].

AI in governments can be used to monitor financial irregularities and tax fraud, which means then you can regulate better, and you can tax better.
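As an illustration of how that kind of monitoring might work, here is a minimal, hypothetical sketch of anomaly detection on tax-style data using an isolation forest. The fields and numbers are invented, and in practice anything flagged would go to human review rather than automatic penalties.

```python
# Hypothetical sketch: flagging implausible tax returns with an isolation
# forest. All data is synthetic; field names are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
n = 10_000

declared_income = rng.lognormal(mean=11, sigma=0.5, size=n)
claimed_deductions = declared_income * rng.uniform(0.02, 0.15, size=n)

# Inject a small number of implausible returns: deductions close to income.
suspect_idx = rng.choice(n, size=50, replace=False)
claimed_deductions[suspect_idx] = declared_income[suspect_idx] * rng.uniform(0.7, 0.95, size=50)

# Features: income and the deduction-to-income ratio.
X = np.column_stack([declared_income, claimed_deductions / declared_income])

# The isolation forest scores how easily each point can be isolated;
# roughly the most unusual 1% are flagged for review.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flagged = model.predict(X) == -1

caught = flagged[suspect_idx].mean()
print(f"returns flagged: {flagged.sum()}, injected anomalies caught: {caught:.0%}")
```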

I think customer satisfaction among people using government services is probably low, if you had to rate it from a South African perspective. And we are not gathering all the data, as I said earlier, which means we cannot provide an exceptional client experience. When we engage with, for example, Facebook, the client experience is very good because they are leveraging our data to give us that experience.

Now, the South African government would battle to do that. But if AI is used properly, governments can provide an exceptional client experience to their citizens. And in agriculture, which I touched on earlier, using AI across the sector would definitely increase output.

And that means you can feed your citizens, and feed them better, with a higher quality of food. That affects the ability of your citizens to live longer and pay taxes for longer, and ultimately it affects how good a country you can be.

So, it’s a bit of a Catch-22. If you don’t do AI, you can’t grow as a country on the global stage. But one of your challenges is going to be that, with this high cost, you probably end up having to sell your soul to the devil himself. A perfect example: if the Chinese government – or another global superpower – came to us with a proposal and said, give us 50% of your country, give us 50% of your governmental wealth, and in return we’ll help you provide X, Y and Z to your citizens, that’s going to be a very hard deal to say no to, but you are probably going to lose your sovereign status in the process. That’s one of the challenges AI has at the moment.

So, if you look at the world at the moment, China is acquiring large pieces across Africa – it’s common knowledge – and AI is probably going to make that gap bigger, where I think a lot of nations are going to lose their sovereign status just for the benefit of their citizens. I think that’s probably one of the biggest challenges that AI is going to bring.

But my overall opinion, to summarise, is that AI is ultimately going to help provide the individual human with a better experience on planet Earth. And, if that means there needs to be a shift of world orders, how we actually manage that shift is something we can look at. If we can do it in an ethical way that puts people and the environment first, that would be my view of what the ethical framework for AI should be.

However, I think this is a space that we are going to learn a lot about in the next five to 10 years.

CIARAN RYAN: That was Nevellan Moodley, head of financial services technology at BDO South Africa.

Brought to you by BDO South Africa.
