Understanding the Uses of Machine Learning and AI in Finance

In this episode, we continued our ongoing series on fintech in Asia by interviewing David Hardoon, the Chief Data Officer of the Monetary Authority of Singapore (MAS). We spoke with him about the innovative uses of machine learning and the leveraging of big data among banks and the financial system more broadly.

David walked us through how the new Data Analytics Group at MAS is approaching the ethical use of data when so many financial institutions are employing new AI applications. We also discussed the need for awareness of the potential for unsupervised algorithms to either help or hinder financial inclusion. Some of our key takeaways include:

  • One MAS initiative, FEAT, is focused on four guiding principles for financial institutions concerning the usage of data in AI innovations: fairness, ethics, accountability, and transparency.
  • Machine learning can be applied in a wide range of financial services, from insurance to wealth management, for example, by using behavioral data to lower premiums and by offering algorithm-based robo advisory services that extend offerings to a wider swath of people.
  • Regulators ought to ensure financial institutions understand the risks involved in using algorithms and unsupervised machine learning – are these risks acceptable?
  • Applications of AI – artificial intelligence – will more likely augment, not replace, financial jobs of the future. The technology is likely to transform the services provided by banks and the roles of bank branches, and will alter the relationship between banks and customers and underscore the importance of data.

Transcript

Sean Creehan:

Welcome to Pacific Exchanges, a podcast from the Federal Reserve Bank of San Francisco. I’m Sean Creehan.

Paul Tierno:

And I’m Paul Tierno. We’re analysts in the Country Analysis Unit, and our job is to monitor financial and economic developments in Asia. Today, we return to our series on financial technology in Asia.

We spoke with David Hardoon, the Chief Data Officer of the Monetary Authority of Singapore. David is an expert in artificial intelligence, and we spoke with him about Singapore’s innovative uses of machine learning and big data in the financial sector and the central bank’s ongoing thinking about the ethical and regulatory implications of AI.

Sean Creehan:

We first heard about the Monetary Authority of Singapore’s explorations of financial technology two years ago when their chief fintech officer Sopnendu Mohanty joined us, and it was fascinating to hear about their latest focus on the ethics of artificial intelligence in the financial sector. This is a central question as the financial system—and really the entire economy—is transformed by artificial intelligence applications in the coming years.

Paul Tierno:

Yeah, and I was also really interested to hear David’s view on the use of augmented intelligence to support and not replace humans working in the financial sector, and what this means for everything from macro stress tests to the customer experience in a bank branch.

Alright, let’s get to the conversation with David.

Sean Creehan:

Well, thank you for joining us, David.

David Hardoon:

Thank you very much for having me.

Sean Creehan:

So you assumed the role of Chief Data Officer of the Monetary Authority of Singapore last year. Could you tell us a little bit about your role as the head of the Data Analytics Group?

David Hardoon:

Okay, so it’s been about a year and, give or take, three or four months since both the role and the group were formed. I think, in that sense, it was a bit of a blank canvas. What I mean by that is, as any central bank, or as you said, regulator, you have a statistics function.

But the idea was how to grow beyond traditional statistics and start leveraging data, leveraging technology, leveraging even AI within the organization. I think to a certain extent, that was one of the unique propositions of the group, because fundamentally, a lot of the time you’re asking, “How do I work with the industry? How do I get the industry to incorporate and implement whatever requirements from a technology point of view?” But that was the intention. So effectively, the role of the chief data officer, as well as the head of the analytics group, is very much like an in-house consultant within the organization, continuously seeing what the other departments need from a business-as-usual perspective.

But at the same time, how do you do, as I like to call it, things unusual? What are the other things that we haven’t thought of, or they haven’t thought of, that could be potentially beneficial? That’s kind of a high-level perspective.

Sean Creehan:

So I guess when you talk to data scientists in general, a lot of the work can be just structuring the data, wrangling it. I wonder, with internal data, is that already pretty much in good shape for your analysis? How much time do you actually have to spend on the back end?

David Hardoon:

Absolutely, and maybe, if I may, I’ll just take a step back and give you a sense of how the group is formed. That, to some degree, will also answer your question. So effectively, there are three offices and one unit that form the bulk of the group. One is supervisory technology, which is dedicated to the supervisory side of the house. That’s, I guess, what you call the use of data.

Then you have, okay, very long name, specialist analytics and visualization; simply put, the non-supervisory side of the house, even down to the HR functions, or the elevators, I like to joke.

But then, to your point, I have a data governance and architecture office. They effectively look at what the best practices are in getting data, how to make sure that it’s machine readable, how it’s structured. But to your point, a large portion of the work is still, I guess the unflattering way of describing it, the sanitation work of data. But it’s extremely important.

But I think one of the things we’re looking at is, rather than just focusing the effort on how do I clean it, we’re trying to take a step further back and asking, okay, how did it get to this stage? And can we address it at that point? It would be unrealistic to think that you’ll never have to clean your data. But how can I minimize that effort, effectively?

Paul Tierno:

So far, what we’re speaking about is all just internal data, within the MAS, right? Now, there’s been a lot of hype about the use of data by tech startups, right? And the availability of consumer and customer data. What do you see as the value for these startups of using this data in the financial sector? How do you think about that? And what do you see in the industry?

David Hardoon:

Well, the jury’s still out on that one. There’s a lot of discussion. There’s a lot of, I guess, for lack of a better word, hype about it. But the hope is there that it’s possible. If you look at some of the fintech players coming out of China, they’re using quite an array of external social data, information generated by the consumers themselves, which is non-traditional financial data, and which in turn is translated into a better and more refined product that allows for a larger, more inclusive set of individuals.

But I think that’s not the hinge, because the question really is, what data that can be externally collected is genuinely viable and provides you insight that you can use, versus data that you’re just collecting and hoping will be useful. Now from an MAS perspective, to your point, it’s largely … well, to a very large extent, it’s internal data. But that internal data, in fact, is collected from the industry.

Now, concurrently, we’re looking at other sources, from, I guess, more traditional means, news and so forth, and how to go about enriching that. So I think we’re collectively still at early stages, but hoping that there’s genuine value.

Sean Creehan:

Do you see a distinction between a more developing market like China, where maybe there are more challenges in access to finance? Maybe some people have never been served, don’t have a financial history. We’ve talked about this before on the podcast with other guests, but is there a distinction between a market like that versus Singapore or here in the United States, where it’s already a pretty saturated, well-served market? Do you see that as a distinction?

David Hardoon:

Absolutely. And it’s actually interesting that you brought that up, because I just had a conversation a couple of days ago about the usual complaint that we get in Singapore: “How come we’re not moving as fast?” Or, “How come things aren’t moving to that extent?” But when you think about it, the irony is that the kinds of economies that do have these challenges and do have these limitations, from India to Africa to China to an extent, that begets the need to accelerate the adoption of technology. Whereas, when you have an economy in which effectively everyone’s bankable, everyone has access to financial services, you don’t have that equivalent burning drive to include, essentially, this group of individuals.

So we’re kind of trying to find our footing on how we drive innovation when, effectively, there’s not as much pain as in other locations. So I definitely see those differences, but the view that we take is, because we have a bit more of a stable setup and accessibility, we can start looking at other challenges and other questions that, perhaps, don’t come to the surface in markets like China, like you mentioned.

I’ll give you a simple example of that. We started looking into things like fairness, ethics, accountability, and transparency in AI, in finance. Now, this is different from data governance; I want to just draw the distinction. From a data governance point of view, it’s really the case, if you look at GDPR, of did I get your permission to access this data or to use this data?

Do I have a set of confidentiality requirements and whatnot? This initiative, what we call FEAT for short, is more about how that data is subsequently used. I’ve got your permission, but now, when I build an application, when I provide a service, is it transparent? Is it ethical? Does it have accountability? You see what I mean, these are very big questions. These are very difficult questions, but because we have the time, to a certain degree, to focus on them, we’re saying, “We need to focus on them.” Because there is a genuine risk in the use of data, and, if I can, I’ll take the next step toward AI.

While everyone’s hoping for inclusion, you can have exclusion. We need to make sure that we get the best and we avoid the worst, effectively.

Paul Tierno:

So you referenced one of MAS’s initiatives, FEAT. Can you talk a little bit more about that, and who you’re working with?

David Hardoon:

Sure. So this started off last year, toward the end of last year. Like I mentioned, we really were scratching our heads. This was before the recent public incidents on the commercial side. We asked, “Is this, first of all, happening internationally, looking at the more outcome-oriented aspects of data and AI?” And we started having conversations with the financial institutions, some of the banks, securities firms, and so forth, to get a bit of their sensing, because the perception is always that, from an industry point of view, we don’t want any more regulation. We don’t want more of those kinds of elements.

So we really had a very frank and open conversation. We realized two things. We realized that, to an extent, there wasn’t that much focus back then on, broadly speaking, the ethical outcome and ethical usage of data. I mean, nowadays, there’s already been the House of Lords report, and there have been other initiatives. But more interestingly, when we spoke to the financial institutions, mainly the regulated entities, they said, “Yes, please.” That really took me by surprise, because what we realized is that, because these are fundamentally, in essence, regulated, they had a challenge from the innovation point of view of, can we do this? Can we not do it? Is it allowed? Is it not allowed?

And having, to a certain degree, maybe not regulation, but guiding principles, in fact, in turn helps them to say, “Okay, this is the room in which we can operate.” And in turn, it in fact is also beneficial for the unregulated entities, the fintech players and so forth, because for them, it’s a completely open playing field.

They don’t have that culture of regulation. And it homes them in by saying, “Guys, look, these are just common-sense principles that you should really have in place.” So that happened last year. We decided this is something that had to be done in collaboration with the industry. It had to be co-created with them. So we constituted a committee with the CDOs from the various banks. I know that’s a very odd setup.

Usually you would see legal and compliance officers, but we deliberately wanted to start with the CDOs, to think from how they perceive things. I’ll give you a very simplistic example of why that was important. So, discrimination. We talked about the potential exclusion of certain attributes. To make it simple, let’s say gender, or age. But any data scientist who’s worth their salt was saying, “Well, you can exclude that variable, but I can derive it.” I can find an algorithmic approach to calculate the likelihood of gender, the likelihood of age. So you see, it’s moving beyond just saying, “Exclude that attribute,” or, “You shouldn’t be using age-related attributes or information,” to confronting these reverse calculations.
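To make that derivation point concrete, here is a minimal sketch, entirely illustrative and not from MAS or the FEAT work: even after a protected attribute is dropped from a dataset, a basic classifier can often reconstruct it from correlated features. The variable names and data are synthetic assumptions.

```python
# Hypothetical sketch: drop a protected attribute (a synthetic "age_band"),
# then show that the remaining "innocuous" features predict it anyway,
# so exclusion alone does not stop an algorithm from deriving it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Protected attribute we intend to exclude: 1 = older cohort, 0 = younger.
age_band = rng.integers(0, 2, size=n)

# Features that stay in the dataset but happen to correlate with age:
tenure = age_band * 12 + rng.normal(8, 4, size=n)            # account tenure, years
mobile_share = 0.8 - 0.4 * age_band + rng.normal(0, 0.1, n)  # share of app logins
n_products = age_band * 2 + rng.poisson(1.5, size=n)         # products held

X = np.column_stack([tenure, mobile_share, n_products])
X_tr, X_te, y_tr, y_te = train_test_split(X, age_band, random_state=0)

# A plain classifier recovers the "excluded" attribute from its proxies.
proxy_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Recovered age band with accuracy {proxy_model.score(X_te, y_te):.0%}")
```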

But that works in a very contradictory manner, because if you think of insurance, you need to know age, fundamentally. You would not offer pregnancy insurance to a man. So we needed to really have very deep conversations and tease these out. Now, that stage has happened, and we’ve gone through the entire industry sharing these principles to get their feedback. But the intention is to keep it high level and simple: it’s just a one-pager, with the subsequent pages just explanations, across these four pillars, of the principles of fairness, ethics, accountability, and transparency in the usage of AI within finance.

Sean Creehan:

It’s interesting as well, because, I don’t know if you mean this specifically, but there’s the use of artificial intelligence to actually come up with an ideal ethics. It’s interesting to me because we don’t necessarily think of ethics and intelligence as the same thing. Someone could be highly intelligent and yet unethical. A year or so ago, I was taking this MIT quiz that was designed to create an ethics for a self-driving car. So you would have someone in a car, and some pedestrians of different ages. Maybe there’s a pet, or this or that. They would constantly change the scenario and ask you to answer: “Would you crash the car and kill the people in the car, and save that pedestrian? Or kill the pedestrian and save yourself?”

So it was just really interesting to think of training a machine to be ethical. And so I’m just wondering … there are different ways of thinking about this. There’s rules-based ethics, where we can tell a machine, “This is the way you behave.” Or it’s more utilitarian, where we’re trying to maximize across a couple of different variables. So I’m just curious, how are you thinking about this as someone who has a PhD in machine learning?

David Hardoon:

Oh, these are things that, in fact, keep me awake at night, because they are very deeply complicated questions. Okay, so I have a PhD in machine learning, which is fundamentally based on math. But the problem is, the answers to these questions aren’t mathematical in nature; they cannot be optimized. I’ll give you a very simple example.

From a societal point of view, we want to make sure that … back to my earlier point, there is inclusion. But what happens if unbiased data, and I’m using that as the predicate to this point, shows you that a certain cohort or certain demographic should be discriminated against, because it is statistically correct? But again, it’s unbiased data. So therefore, it should be, to a certain extent, a representation of reality.

What do you do in that kind of situation? Do you say, “Okay, well, look. Just like now, I offer you a credit card, or I don’t offer you a credit card.” One could always argue that’s discrimination. The person who doesn’t get that credit card will say, “Well, it’s unfair.”

But where do you draw that line of unfairness, to the point, from a societal point of view, of saying, “No. Even though it is true from the math, even though it’s true from the data, we don’t want that to happen”? Now, that actually goes to the same example of an accident. It’s a hugely difficult thing. Now, you can say, “Well, all the math, all the calculations said I should have killed the passenger.” But is it something that you should expect?

Now, the interesting thing about this is that the machine, the AI element, can further emphasize that debate we have. So when you’re sitting in the passenger seat and you have the, God forbid, situation, you go through a huge moral conflict to get to that conclusion. Granted, you may be questioned, you may be challenged. But ultimately, people will think that you took the right course if you had no other choice. But if it’s a machine that has done that exact same thing … now, thankfully, we haven’t yet reached that kind of situation, but we don’t know. We don’t exactly know yet where our boundary is between something that is statistically correct, statistically optimal, versus, “Yeah, but you shouldn’t have done it.”

Sean Creehan:

Right, and think about the liability from the perspective of a company that’s actually designing that car that’s going to make that decision. You’ll be able to say very specifically that, essentially, the company wrote an algorithm that made this person die. What’s that person’s family going to say?

David Hardoon:

Absolutely. I had a wonderful conversation with someone in insurance. They said it is really challenging them to rethink how you go about providing insurance. And in the interim, because they’re looking at self-driving cars, they said, actually, we just provide manufacturing insurance. So it’s not insurance for you actually driving on the road. It’s insurance for the car as a manufactured product.

So jury’s still out. We have a lot to find out.

Paul Tierno:

It’s interesting, you say that the jury’s still out. But I’m wondering, what are some of the applications of machine learning that currently exist, whether in Singapore or globally, for financial institutions but also more broadly? What exists? And what do you see as the future of machine learning and AI?

David Hardoon:

A couple of them come to mind off the top of my head, and you’re starting to see them specifically in Singapore. I’m not sure about the rest of the world, although I’m quite certain you’ll see them there as well. Since we’re talking about insurance, that’s one of the two that come to mind.

So insurance is a phenomenally interesting area, because if you think about it, the more you apply AI, the more irrelevant insurance becomes. What I mean by that is that insurance, as a predicate, is a pooled-risk approach: I understand your risk approximately, and I pool it across individuals.

Now, the essence of machine learning, or AI, excuse me, would allow you to know the risk exactly. Therefore, I effectively don’t have to pool it anymore. So what you’re finding is that some insurers using AI applications are changing their model. For example, they’re collecting data from Fitbit as well as other behavioral attributes. But rather than from a pure insurance point of view, they will then provide you information on how to live a healthier life: how many steps you should be taking, what approach. And then they will incentivize you by saying, “If you do this and you meet these certain goals, we will rebate you a certain amount.”

So you see, it’s a very interesting way in which AI is being used. On their end, it’s beneficial, because if you’re healthier, if you don’t incur whatever risks you have, effectively, they don’t have to pay the underlying claims. So it’s beneficial; that’s on the one hand.
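As a toy illustration of that incentive loop, not any insurer’s actual model, here is a minimal sketch in which behavioral data (daily step counts) drives a premium rebate. Every number, name, and threshold is an assumption for illustration only.

```python
# Hypothetical sketch of a behavior-linked rebate: the policyholder earns
# back part of the premium in proportion to how often a health goal is met.
# All figures are illustrative, not real pricing.
BASE_ANNUAL_PREMIUM = 1_200.00   # assumed base premium
STEP_GOAL = 8_000                # assumed daily step target
MAX_REBATE_RATE = 0.15           # assumed cap: 15% of the premium

def annual_rebate(daily_steps: list[int]) -> float:
    """Rebate scales with the share of days the step goal was met."""
    days_met = sum(1 for steps in daily_steps if steps >= STEP_GOAL)
    share_met = days_met / len(daily_steps)
    return round(BASE_ANNUAL_PREMIUM * MAX_REBATE_RATE * share_met, 2)

# Example: a policyholder who hits the goal on about 70% of days.
year_of_steps = [9_500] * 255 + [4_000] * 110
print(annual_rebate(year_of_steps))  # roughly 125.75
```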

On the other hand, look at wealth management. Now, wealth management is usually only for a particular echelon: you have the private bankers and so forth. So what we’re seeing is an extension of that. How do you take that individual private banker, who can provide 24/7 attention to one person, take that knowledge, take that approach, and systematize it to provide it to a much larger cohort, who may only want to invest 50 bucks, 100 bucks, 500 bucks? You can’t have a private banker for that.

So what you’re finding is a large number of robo advisors in wealth management popping up. Now, to be fair, we’re at the initial stages, so a large number of them would be rule-based, domain expert systems. But what you’re finding is that they’re starting to inject AI-based algorithms that will learn behavior, both from the advisors, the wealth management folks, as well as from the various clients. That’s one that pops into my mind.
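A hedged sketch of what that first, rule-based stage of a robo advisor might look like; the questionnaire, scoring, and allocations are all invented for illustration, with a note on where learned models could later replace the hand-written rules.

```python
# Hypothetical sketch of a rule-based (domain expert system) robo advisor:
# a fixed mapping from risk-questionnaire answers to a model portfolio.
def risk_score(answers: dict) -> int:
    """Toy scoring: investment horizon (capped) plus self-rated tolerance (1-5)."""
    return min(answers["horizon_years"], 10) + 2 * answers["tolerance"]

def allocate(score: int) -> dict:
    """Expert-defined rules: a higher score maps to more equities.
    An ML model trained on advisor decisions could later adjust these rules."""
    if score >= 15:
        return {"equities": 0.80, "bonds": 0.15, "cash": 0.05}
    if score >= 8:
        return {"equities": 0.60, "bonds": 0.30, "cash": 0.10}
    return {"equities": 0.30, "bonds": 0.50, "cash": 0.20}

client = {"horizon_years": 20, "tolerance": 3}
print(allocate(risk_score(client)))  # {'equities': 0.8, 'bonds': 0.15, 'cash': 0.05}
```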

But if I broadly capture it, you find that it falls into buckets. There’s inclusion, as with this example of wealth management. There’s the traditional usage of marketing, but nothing really exciting there. Then there’s risk management, but not just traditional risk; I like to call it operational risk and whatnot. AML being, tongue in cheek, a wonderful case of it, where, let’s be honest, with the amount of data we have, we can do a lot better. A lot of these systems produce what, 95, 99 percent false positives.

The promise is in how we can apply AI-based algorithms to help make the haystack smaller, to make it more refined, to understand better when risk is really posed, or when an AML activity has really happened.
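One common way to “make the haystack smaller,” sketched below on synthetic data with invented features and thresholds: score transactions with an unsupervised anomaly detector and route only the most unusual ones to human review. This is a generic illustration, not MAS’s or any bank’s actual system.

```python
# Hypothetical sketch: unsupervised anomaly scoring to shrink the AML
# review queue, instead of rule alerts with very high false-positive rates.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Features per transaction: amount, hour of day, transactions in past 24h.
normal = np.column_stack([
    rng.lognormal(4, 0.5, 10_000),   # typical amounts
    rng.integers(8, 22, 10_000),     # business hours
    rng.poisson(2, 10_000),          # low velocity
])
suspicious = np.column_stack([
    rng.lognormal(7, 0.3, 50),       # unusually large amounts
    rng.integers(0, 5, 50),          # odd hours
    rng.poisson(15, 50),             # bursts of activity
])
X = np.vstack([normal, suspicious])

# Train on all traffic; flag roughly the top 1% most anomalous for review.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flagged = model.predict(X) == -1
print(f"Flagged {flagged.sum()} of {len(X)} transactions for human review")
```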

Sean Creehan:

What about the second-order risks here? I’m just interested, because I guess I have a few questions here. To the extent that you have the rise of robo advisors managing most of the money in an economy, one question is, do you see the rise of a monolithic, winner-take-all, single algorithm? Or three or four algorithms, and that’s it? And then, how do they interact with each other?

David Hardoon:

It’s a very valid point. And I don’t have an answer for you yet. But one of the reasons … so, I mentioned the structure of the group. One of the roles of the supervisory technology office isn’t just going about applying technology. They do what I like to call blue-sky regulation. What I mean by blue-sky regulation is looking at the potential risk posed by the introduction of a technology that we may not have thought of previously, because it just wasn’t there; it wasn’t used. That’s actually one very good example: what happens, let’s say, when all the algorithms oscillate toward one particular behavior? What’s the potential impact to us? So like I said, we are still looking at it. We’re still trying to understand it. But we are looking at it.

And not just in terms of immediate second order, but, I guess you’d call it, even third order. Things like the quite large push for financial activity to be moved, let’s say, to shared services, to the cloud. MAS issued the outsourcing guidelines and so forth for cloud services. But imagine the scenario where you now have systemically important institutions running their activities on, let’s say, shared services and cloud providers. What does that mean? Do these now have to be regulated in certain manners? Do they have to be monitored?

So these are very open questions that we are looking at. We are trying to understand them, and what the potential implications are for us as a regulator, as a supervisor.

Sean Creehan:

So maybe we can talk a little bit more specifically about suptech, and just the use of technology to enhance supervision and regulation. Maybe you can define it a bit for us. Also explain, is there a difference between suptech and regtech? We hear different terms.

David Hardoon:

I feel the utter need to apologize. So let me start by saying I apologize. The intent here was not just to add marketing jargon, which, to a certain extent, it is. So suptech and regtech are essentially the same thing. It’s the same coin, but the two sides of it. However, we thought that it was extremely important to introduce it. Let me explain why it was extremely important for us to introduce it rather than just using the term regtech, which I think everyone’s familiar with and understands.

The challenge that we realized, when we’ve had the various engagements with our financial institutions, however respectful they are, is that a lot of times when regtech comes up, the perception is: you, as a regulator, are telling us, as financial institutions, what to do. We wanted to break that relationship. The point of supervisory technology is that it’s technology that we, as a supervisor, are adopting, that we’re looking at.

Suptech is not for regulated entities. It is for us, as a regulator. Now, that naturally works hand in hand with regulatory technology, which is applied and implemented by those who are being regulated. So again, it’s splitting a hair. It is the same, but we wanted to have that marketing, that public communication effect, effectively, for the industry to realize that, as a regulator, we’re taking it seriously. We’re still regulators, but we’re becoming a bit more like partners, trying to understand how we can make these things work better.

Sean Creehan:

One example that I think of that crosses those boundaries is something like a better database technology. It could be a blockchain. It could be something else.

David Hardoon:

Exactly.

Sean Creehan:

Where the bank is updating in real time and the supervisor is seeing it, and there’s ability to interact more seamlessly.

David Hardoon:

Absolutely. That’s exactly the thought process that we have. The other one is AML, as I mentioned, where, of course …

Sean Creehan:

Anti-money laundering for our listeners that don’t know the term.

David Hardoon:

Yes, anti-money laundering, of course, where you have a whole aspect of moral hazard. But we’re looking at how we can get our FIs, financial institutions, to be better equipped, better adept at leveraging technology. But equally, we can’t just say, “You need to do better.” How do we use technology to understand the various risks they’re exposed to, be it as a network or from an individual point of view?

Paul Tierno:

It’s interesting that you raise that point, because when I hear suptech or regtech, and maybe it’s just people who haven’t encountered the terms before, you think of catching up with fintech, which may or may not be fair.

Like I say, this might be veering a little bit off topic to a certain extent, but to what extent is suptech … is it a misnomer that it’s about keeping up with fintech? And how can regulators be more proactive in using suptech in a way that …

David Hardoon:

No, no. I take it fully. It is associated with keeping up. But again, I don’t want to say the word blame, but I put it back to that mindset of the regulatory perspective, especially from entities that are regulated, who have strong compliance processes and so forth.

You find that the risk appetite, and I’ll use that term deliberately, to leverage technology, to experiment with technology, is quite nominal. And the objective here is saying, “Well, how can we start pushing the boundary effectively?” How can we see about leveraging new technologies that potentially are unproven, untested, unexplored, that may have a value to the organization? That’s essentially the idea here.

And what we found, when we started this off, is that while there are quite a number of regtech technologies out there within the fintech sphere, it’s still relatively rudimentary. And again, given the immense amount of data that’s available, the technologies, and, well, the science that’s been done, it’s, again, a question of how you ingest that and start doing more.

And what we’ve found is that, with us as a regulator participating from a suptech point of view, again, I’ll use that term, the risk appetite has increased. Not to say, okay, let’s throw everything out of the window and just use this untested technology, but saying, “Okay, concurrently with our ops, how do we now pilot things? How do we do proofs of concept, blockchain and whatnot, to really see whether there’s value here?”

Sean Creehan:

So it’s interesting, particularly given your background as a machine learning PhD, to think about the use of algorithms in risk management; you mentioned that. We’ve had people present to us before on this. But to the extent that there’s a black box around the output, right? How did this algorithm come to the conclusion that this person deserved the loan and that one didn’t? And we’ve heard of firms here in the US that are using this technology, and actually, to comply with regulations and laws, even if an algorithm is how they’re making their decisions, they still have to come up with another model, a regression whose functioning they can explain, that gets them to that same point. There’s this weird reverse engineering.

It strikes me as a little off, but in any event, it speaks to this broader issue: what if we let the algorithm start to dictate all of these decisions, but we can’t explain it? If it’s an unsupervised machine learning setup, this boggles the mind, right? And this is applicable in other parts of the economy and society. But how do you think about that? How are you addressing this challenge, to the extent that you can?

David Hardoon:

Absolutely, so let me start by saying that I think … okay, first of all, the requirement of interpretability is fundamental. However, I think the interpretation of interpretability has gone off track. Let me explain what I mean by that, tongue in cheek: if you look at, let’s say, traditional credit risk calculations, the various models and so forth, those are very explainable, because you know exactly what the attributes are.

But again, how did they come about? To a certain degree, there is a black box there: they came out of someone’s mind. Using that as an overarching analogy, the approach that we’re taking, and this, to a certain degree, also came out of the FEAT initiative, was: do we really need to know the black box? Because in certain scenarios, if you look at deep learning algorithms, you can’t, even if you wanted to.

So how do you assure interpretability in those situations where algorithmically it’s not possible? Well, I still think it can be done, because if you have a framework of interpretability: can you understand the data that comes in? Can you understand where there’s a certain bias? Can you understand the algorithm, the methodology? Fundamentally, we do know how it works. Can we put in place tests to ascertain the accuracy and the robustness of those algorithms?

All of these things we can do. In fact, all of these things we have been doing from a research perspective, to say, “This is good versus this is not good.” And interpretability is part of our process. Now, you may not be able to say, “Okay, this is the exact path that the algorithm has taken,” but I can show you that the methodology this algorithm is taking is robust, is statistically sound, is appropriate. So it’s this kind of combination.
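A rough sketch of what such framework-level checks could look like in practice: validating the pipeline around a black-box model rather than tracing its internals. The function names, thresholds, and metrics here are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical framework-level checks: inspect the data for bias and the
# model for robustness without opening the black box itself.
import numpy as np

def check_input_bias(X, group, feature_names):
    """Flag features whose means differ sharply across a sensitive group
    (standardized mean difference above an illustrative 0.5 threshold)."""
    flags = []
    for j, name in enumerate(feature_names):
        a, b = X[group == 0, j], X[group == 1, j]
        pooled = np.sqrt((a.var() + b.var()) / 2)
        if pooled > 0 and abs(a.mean() - b.mean()) / pooled > 0.5:
            flags.append(name)
    return flags

def check_robustness(model, X, noise_scale=0.01, seed=0):
    """Share of predictions that stay stable under tiny input perturbations."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    perturbed = model.predict(X + rng.normal(0, noise_scale, X.shape))
    return (base == perturbed).mean()

# Usage, assuming some fitted classifier `model`, feature matrix X, and a
# binary `group` array:
#   biased_features = check_input_bias(X, group, ["tenure", "income", "products"])
#   stability = check_robustness(model, X)
#   assert stability > 0.95, "model is too sensitive to tiny perturbations"
```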

So I’m a bit less concerned about whether I can explain this exact dot and how it actually works in that particular process, versus: this is the entire framework, these are the various steps within the framework, and this is how I can validate that I know what risk I’m either posing or being posed, because inevitably, that’s really the goal.

From a regulator’s point of view, if I come to you and say, “Okay, whatever algorithm you’ve used, which, by the way, I don’t want to review, because I think that opens up a Pandora’s box, the question is: do you know what you’ve done?” Let’s say, hopefully, the answer is yes. Do you know how accurate it is? And do you know what risk is posed to you, as an organization, or, more fundamentally, systemically?

If the answers to all these are yes, I think that’s the point we should get to, and that’s the direction we should be thinking in: do we understand the risk that’s posed to us, or posed to them from a financial institution perspective, by the use of these technologies?

Paul Tierno:

Can you talk a little bit about why it would be a Pandora’s box to regulate the algorithm?

David Hardoon:

Yeah, well, again, this is a personal perception. I might be entirely wrong. But if you think about it, the permutations of the algorithm, per se, are infinite. And if, as a regulator, you now have to go to that Nth degree to review the algorithm … again, it’s not that it’s not feasible. It just means you need a lot more people. Or perhaps you have to build a robo supervisor to actually start going through the code. It’s a difficult thing. And again, to me, it’s less about the code, because then there’s always a risk of, “Oh, no, the regulator has reviewed this code, therefore it’s good to go.” Versus: it doesn’t actually really matter.

Even if you’re using a very simplistic algorithm of X plus Z, or a comprehensive Gaussian-based algorithm or whatnot: do you understand the risk that’s posed to you? If you do, then okay, and if it’s something that’s acceptable, go right ahead.

It’s effectively the exact same approach that we take on all other elements. When you have a financial institution that can’t handle a certain risk, we don’t allow it. Whereas another organization who can, we allow it. Again, within certain boundaries.

Sean Creehan:

So maybe looking ahead to the future a bit, say in 10 years: whether it be a transaction with a customer or a back-end process within a financial institution, traditional or startup, what kinds of processes or transactions do you see utilizing machine learning in 10 years that don’t today?

Paul Tierno:

And let me just add to that question: how does the financial system sort of change? I’m thinking of jobs, and how all of that will change.

David Hardoon:

Well, it’s interesting that these two points actually interlink. But perhaps, if I may, I’ll take a step back from 10 years to maybe five, given the rate and the volatility with which the technology is changing. Year on year, we’re amazed. We’re actually dazzled by what’s coming up.

Well, let me kind of start in reverse. I have a very, very strong belief that the way we perceive AI is wrong. I know that’s a very bold statement. I’ll explain what I mean by that, and I’ll get to your question. So artificial intelligence, from its very origin in research, was about the replacement of a person.

If you look at even how people think of it: “Oh, how can I get a machine to do this better than you?” I think that’s fundamentally wrong. I think we should be thinking of a different form of AI: augmented intelligence. Yes, superbly sophisticated techniques may result in the replacement or displacement of people, but the question is, how do we augment ourselves?

So I want to put that as the absolute pillar of everything. With that, humans and people, and processes, and jobs will still be fundamental, because it’s about how I now leverage knowledge and information that I may not have been able to ascertain at that rate or speed previously. Now, rather than running stress testing across a month or two months or three months, I can run it in minutes, if not seconds. It provides immense possibilities where people, us, play a core role within that.

Now, to the aspect of services. Of course there will be a fundamental change. I think, for example, branches will still play a role, but how you interact with a branch will be more about understanding what you’re interested in. How can it provide a more holistic service, rather than a more transactional one?

In fact, in a way, you’re really actually seeing that: financial institutions are becoming more of a hospitality type of industry, and it’s a relationship. How do I understand you, so I can provide you better service using those machine learning algorithms and AI?

Secondly, how can I create new products, new services, that leverage that? So that’s the engagement perspective. I think from a back-end point of view, we’ll also see a fundamental change from an operational point of view. Correctly using AI within ops has the promise of fundamentally reducing the cost of operations. Blockchain is one example; think of money transfer across countries or whatnot. Likewise KYC, know your customer, and AML: the cost of running these things can come down, because I now have systems that are more resilient, robust, and consistent, as well as, of course, heightening that sensitivity to potential risk.

So all of these things will come together in, I think, providing fundamentally better services that, from the back end, will be easily perceived as very automated. But it’s not just about automation; it’s about information. Now, the concern that I have, if I may give a bit of a different perspective, is how do we manage not to be overloaded by information? That’s the one thing where I don’t know what will happen in five years. I really don’t know. We will have all these services. We’ll have all these possibilities.

We will see massive possibilities for inclusion because of these services. But how do we get to a point where we’re saying, “Well, how do we stay on track and not get overloaded by information?” Because a lot of information will come out. Secondly, how do we, as consumers, interact with this type of system? When do we say, “I want to give you my information and get these kinds of services,” versus, “I don’t want to give you this information, but still get services”?

Sean Creehan:

Interesting stuff. So we’re sitting here just about to start an annual symposium co-hosted and co-organized by the Federal Reserve Bank of San Francisco and the Monetary Authority of Singapore. And two years ago, your colleague Sopnendu Mohanty, chief fintech officer of the MAS, sat here and told us about what was going on in fintech in Singapore and the region.

One of the provocative points that I remember and have followed up on ever since is this idea that the bank of the future may just consist of a centralized data repository, for lack of a better example, a Google, say. This is where the banks are scared that a tech company like Google will get involved. And then a series of open APIs, where the best algorithm, the best provider of a given range of services, will just link in, market to those customers, and take their data, with all sorts of permissions and all of that.

I’m wondering, do you see that as a potential future? To what extent will the banks of the future look like that? And does that imply fewer of them? It’s just a really interesting question, particularly for someone with your background as a chief data officer.

David Hardoon:

I don’t think it’s the bank of the future. I think it’s the bank of the now. What I mean by that is, if you look at, okay, especially in Singapore, the number of APIs that have been released by the banks is phenomenal. We’re talking about tens to hundreds of APIs already released, either to select partners or to the industry as a whole.

It is the bank of the future; actually, it’s almost an existential question for every single bank. You’re right that there’s fear of the big tech companies, the Googles, the Facebooks, the Amazons of the world. But I don’t think we should let fear dictate how we evolve and proceed, because the reality is that we live in a networked world, be it a network from an individual perspective, a network from a services point of view, or a network of data, because there’s one you.

Taking that one you and replicating that same data multiple times in fact adds more risk and more issues. So where the banks are is, they’re really going through an existential process of: do we just fall back and become a utility? Meaning, all I do is provide consolidated services from a data-centric point of view, and core banking operations, that other parties, let’s say fintech players and various services, sit on top of and provide additional services.

In fact, there’s … I forget the name of it, but there’s actually a bank in Europe that does that as a business model. When you log in, all the services there are in fact provided by various partners and fintech players. But then you have other banks saying, “We’re reinventing ourselves. We don’t want to just be that utility. We also want to provide services.” Now, it’s an ecosystem. They will partner with some; they will compete with others. GDPR, for example, made data portability compulsory, so it creates that network. It will happen. It will absolutely happen.

Paul Tierno:

Well, this has been great, David. We really appreciate it.

David Hardoon:

Thank you very much.

Sean Creehan:

Thanks for coming in.

David Hardoon:

Loved it. Thanks.

Paul Tierno:

We hope you enjoyed today’s conversation with David. For more episodes like this, you can find us on iTunes, Google Play, and Stitcher. If you like what you hear, please leave a review. Feedback from listeners like you will help more people find us. And for even more content, look up our Pacific Exchange blog available at frbsf.org. Thanks for joining us.