Can a bank become an AI pioneer? Some are certainly trying. But few have found the right combination of innovation, risk management, talent and capabilities required to drive real and transformative value from their investments. Is there a recipe for successfully applying AI and Machine Learning in Financial Services?
To find out how one large multinational bank is doing it, Doron Telem, Head of Financial Services Risk Consulting with KPMG in Canada, sat down with Kathryn Hume, Interim Head of Borealis AI, the AI research centre backed by the Royal Bank of Canada (RBC). Here are some highlights from their conversation.
Doron Telem: What is the relationship between Borealis AI and RBC? How do you work with the business to deliver results?
Kathryn Hume: As a group, we are both outside and inside RBC at the same time. We have our own offices, culture and ways of doing things – we need to offer the type of culture and work environment that attracts the top Machine Learning researchers and engineers. But, at the same time, we closely partner with the main lines of business within RBC and focus on developing a portfolio of Machine Learning products that supports business interests across the bank.
DT: How does Borealis AI’s work help the bank become more resilient and relevant?
KH: We have been doing some very exciting things with the bank that, ultimately, help drive resilience and relevance.
For example, we recently launched a reinforcement learning-based trade execution system with our Capital Markets team. We wanted to see how we could use Machine Learning to help clients with large or bulk orders space out their sequence of trades in a way that drives the highest returns. The model we created has proven to be quite dynamic and able to respond in real time to changes in volatility much more flexibly than traditional trading algorithms do.
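To make the general idea concrete, here is a minimal, purely illustrative sketch of reinforcement learning applied to order slicing: a tabular Q-learning agent that learns to spread a parent order across a handful of time buckets. The price process, impact penalty and every parameter below are hypothetical assumptions for illustration; they are not drawn from Borealis AI's system, which also responds to live market signals such as volatility.

```python
# Illustrative sketch only: a tiny Q-learning agent that learns how to slice
# a parent order across a fixed number of time buckets. The price dynamics,
# impact penalty and reward are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

T = 5            # number of time buckets
INVENTORY = 10   # total units to sell
ACTIONS = np.arange(0, INVENTORY + 1)   # units sold in one bucket
IMPACT = 0.05    # hypothetical quadratic market-impact coefficient
ALPHA, GAMMA, EPS = 0.1, 1.0, 0.1

# Q[t, remaining, action] – estimated value of selling `action` units now
Q = np.zeros((T, INVENTORY + 1, INVENTORY + 1))

def step_reward(units, price):
    """Proceeds from selling `units` at `price`, less a quadratic impact cost."""
    return units * price - IMPACT * units ** 2

for episode in range(20_000):
    remaining = INVENTORY
    for t in range(T):
        price = 100.0 + rng.normal(0.0, 1.0)          # noisy reference price
        must_sell_all = (t == T - 1)                  # force completion at the end
        valid = np.array([remaining]) if must_sell_all else ACTIONS[ACTIONS <= remaining]
        if rng.random() < EPS:
            action = rng.choice(valid)                               # explore
        else:
            action = valid[np.argmax(Q[t, remaining, valid])]        # exploit
        reward = step_reward(action, price)
        next_remaining = remaining - action
        future = 0.0 if t == T - 1 else Q[t + 1, next_remaining].max()
        Q[t, remaining, action] += ALPHA * (reward + GAMMA * future - Q[t, remaining, action])
        remaining = next_remaining

# The learned policy tends to spread the order across buckets rather than
# dumping it at once, because the quadratic impact term penalises large slices.
print("units sold in bucket 0 with full inventory:", Q[0, INVENTORY].argmax())
```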
We’ve also been very successful at helping the retail and commercial bank turn yesterday’s business processes into tomorrow’s products. For example, we’ve built cash flow prediction tools that help our commercial bankers proactively engage clients on upcoming financial needs and provide more tailored advice. And we’ve been working closely with our digital team to create applications that help retail clients manage their finances, drawing on the latest Machine Learning techniques for personalization.
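As a rough illustration of what a cash flow prediction tool does at its simplest, the sketch below fits an autoregressive model to a client's past monthly cash flows and forecasts the next month. The data and lag structure are hypothetical; production tools would draw on far richer features and stronger models.

```python
# Illustrative sketch only: a simple autoregressive cash-flow forecast built
# from lagged monthly values.
import numpy as np

def make_lagged(series, n_lags=3):
    """Build a design matrix of the previous `n_lags` values for each month."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

# Hypothetical monthly net cash flows for one commercial client (in $000s).
history = np.array([120, 95, 130, 110, 140, 125, 150, 135, 160, 145, 170, 155], dtype=float)

X, y = make_lagged(history, n_lags=3)
X = np.column_stack([np.ones(len(X)), X])          # add an intercept term
coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # ordinary least squares fit

last_three = history[-3:]                          # most recent three months
forecast = coef[0] + coef[1:] @ last_three
print(f"next-month cash flow forecast: {forecast:.1f}")
```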
DT: Competition for top Machine Learning talent is fierce. How are you finding and retaining the right talent?
KH: Our group was initially created, in part, to offer Canadian Machine Learning scientists and researchers an exciting opportunity to drive real innovation in a high-impact sector. But I think what has really motivated most of our talent to come to Borealis AI is our integration with RBC. Our researchers want to apply their curiosity to real-world problems. RBC gives us access to those problems and to large data sets – and we gain the ability to go deep, understanding the nuances and working closely with the business to create better solutions by applying Machine Learning.
I think our focus on responsible AI has also attracted much of our talent. These are people who care about ethics. They care about the impact of their work. They know that their work can help create a better future society. And working at Borealis AI gives them a chance to do that at scale.
DT: What role do ethics and responsible AI play in your work?
KH: For Borealis AI and RBC, practising ethical and responsible AI is not optional – it is the only way we do business. Our relationship with our clients is built on a foundation of trust, and this extends to our technologies. More broadly, I believe there is growing recognition of the ethical risks that can be exacerbated by AI. Around the world, we are seeing a crescendo of debate and discussion around the ethical and responsible use of AI. Businesses, citizens, policy makers and regulators are all very focused on the topic. And the context of the debate continues to evolve.
Innovation and ethics must go hand-in-hand. Otherwise, I suspect we will never earn the social licence required to bring AI into the mainstream. And, sadly, that would mean the tremendous value of AI would essentially be locked away.
DT: Has the field of model governance evolved as a result?
KH: Financial institutions have been thinking about model governance for a really long time. It’s not a new topic for them. The challenge is that Machine Learning models are almost like living software; the models themselves evolve as they learn over time and the context within which they operate changes over time. I think that adds a new level of complexity for financial services risk teams as they try to get their arms around model validation for Machine Learning.
Ironically, perhaps, it’s a problem we are hoping to apply Machine Learning to help solve. In fact, we’ve got a team working on developing automated model validation tools. The challenge is that the field is rather vast – we are working on one model for adversarial robustness (that is, keeping a model safe from manipulation by a nefarious hacker), another for fairness and one to identify bias. It may be possible to build a universally applicable model that automates model validation, but we have a long way to go.
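As one hedged example of what a single automated validation check might look like, the sketch below probes a model's robustness by measuring how often small random input perturbations flip its predictions. Real adversarial-robustness testing uses targeted attacks rather than random noise, and the model and data here are synthetic stand-ins, not anything specific to Borealis AI.

```python
# Illustrative sketch only: one ingredient of an automated validation suite –
# a perturbation-robustness check on a classifier trained on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def flip_rate(model, X, noise_scale=0.1, n_trials=20):
    """Share of predictions that change under small random perturbations."""
    base = model.predict(X)
    flips = np.zeros(len(X), dtype=bool)
    for _ in range(n_trials):
        perturbed = X + rng.normal(0.0, noise_scale, size=X.shape)
        flips |= model.predict(perturbed) != base
    return flips.mean()

# A validation pipeline could flag the model if this rate exceeds a threshold.
print(f"prediction flip rate under noise: {flip_rate(model, X):.3f}")
```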
DT: Should all Machine Learning models be explainable before they can be used in a financial services setting?
KH: There is a lot of debate about the need for model explainability. I don’t believe that explainability is always necessary – there are other ways to govern outcomes and ensure models are being used fairly without having to explain exactly how they work. That said, there are certain use cases where you would absolutely want explainability – when making consumer credit decisions, for example.
Ultimately, it needs to be a business decision that weighs the cost and complexity, the potential need for explainability down the road, and the trade-offs in accuracy that may be required to achieve explainability.
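One common way to explore that trade-off is to fit a simple, human-readable surrogate model to a black-box model's predictions and measure how much accuracy and fidelity are given up. The sketch below is illustrative only, using synthetic data and off-the-shelf scikit-learn models rather than anything specific to RBC.

```python
# Illustrative sketch only: a shallow surrogate tree mimics a black-box model,
# making its behaviour inspectable at some cost in accuracy and fidelity.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=4000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The surrogate learns to mimic the black box's predictions, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X_train, black_box.predict(X_train)
)

print("black-box accuracy:", accuracy_score(y_test, black_box.predict(X_test)))
print("surrogate accuracy:", accuracy_score(y_test, surrogate.predict(X_test)))
print("surrogate fidelity:", accuracy_score(black_box.predict(X_test), surrogate.predict(X_test)))
```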
DT: Do you see room for more AI and Machine Learning in financial services?
KH: I think what’s most exciting about this field is that we are really just starting to discover and unlock the value that AI and Machine Learning could bring to financial services and our customers and stakeholders.
We have yet to see the really transformative nature of AI in financial services. But I do believe that organizations like RBC have a responsibility and opportunity to lead the way. And I’m very proud that Borealis AI is part of that journey.
DT: Clearly, Borealis AI and RBC are taking a long-term approach to their investment in AI and Machine Learning capabilities, combining smart innovation with a rigorous focus on ethics and responsibility.
I suspect that – in the coming years – more organizations will start to follow suit, creating dedicated groups focused on exploring both the risks and the immense opportunities of new technologies. And that will be the key to unlocking that transformative value.
Dr. Kathryn Hume
Interim Head of Borealis AI
Kathryn is the interim Head of Borealis AI, the machine learning research lab for the Royal Bank of Canada. Dr. Hume is a widely respected author and speaker on AI, and taught courses on digital transformation and legal ethics at Harvard, MIT, the University of Toronto and the University of Calgary.