Artificial intelligence (AI) has been described as the ‘internet moment of our time’. As the technology evolves, it increasingly has the power to transform our lives. In 2023, the rapid advancement of new technology, combined with increased global uncertainty, drove many CEOs to start thinking seriously about embracing and embedding AI in their future growth strategies. But change at such pace requires agility and innovative thinking. How people prepare for that change is crucial: scalability and speed depend on those around us embracing and adopting the technology.
In KPMG’s 2023 CEO Outlook survey, 70 percent of surveyed business leaders told us they were making generative AI a top investment priority. Meanwhile, more than half (52 percent) told us they were expecting to see a return on their investment in three to five years, highlighting the confidence that boardrooms have in AI’s seemingly limitless potential.
The challenge for CEOs and other leaders is how to develop a truly strategic approach to AI that embraces the possibilities without ignoring the technical and ethical risks. In KPMG’s CEO Outlook, more than half of leaders (57 percent) had concerns about the ethical challenges created by implementing AI, while in KPMG’s global tech report, a similar proportion (55 percent) of organizations told us progress toward automation had been delayed because of concerns about how AI systems would make decisions.
As political, business and civil society leaders meet this year in Davos for the latest World Economic Forum Annual Meeting, AI is one of the main topics on the agenda. KPMG and Microsoft are investing in the development of AI, with a clear focus on ensuring the right infrastructure and strategies are in place to help companies embrace AI in a responsible, human-focused way. Both organizations have been collaborating on approaches to responsible AI governance for some time and share a common view on the importance of developing responsible, trust-focused AI for the business community and wider society.
So, how do you make AI more ‘human-centric’, and what steps should you be taking to embed AI in your future growth strategy while preserving trust and mitigating risk? Three specialist voices from KPMG and Microsoft offer their insights to help you on your AI journey.
David Rowlands, Global Head of AI, KPMG International
I was appointed Global Head of AI at KPMG late last year as part of KPMG’s multibillion-dollar global investment in the technology. It’s a top investment priority for KPMG and, as our CEO Outlook research highlighted, we’re not alone. An overwhelming majority of business leaders in major companies around the world have decided that now is the time to take AI seriously and embed it in future growth plans. And we’ve made a fast start, embracing the challenges of Trusted AI, enabling our people, and carefully managing our technology and data ecosystems.
The question of making AI more human-centric might appear quite vague at first. The biggest advocates for AI would argue we’re already there. It’s no exaggeration to suggest the technology has the potential to transform lives: stripping away mundane day-to-day tasks in our jobs, developing innovative tools that assist modern medical science, and helping sustainability leaders tackle the climate crisis.
It is genuinely exciting, but as with anything new and relatively untested, there are potentially major pitfalls. Making AI more ‘human-centric’ is, in my view, about setting out a clear strategy that keeps the focus on trust, transparency and safety, and makes certain that AI benefits us all, rather than adding new layers of ethical and financial risk in an era when we’re already facing deep uncertainty.
KPMG has therefore launched its Trusted AI framework: a set of clear principles for responsible and ethical AI transformation. It’s like a written constitution, setting out clearly how we will use emerging AI technologies to enhance client engagements and the employee experience in a way that is truly responsible, trustworthy and safe. For an international network of member firms with hundreds of thousands of colleagues, most of whom are deep specialist knowledge workers, nothing could be more important.
For business leaders looking to embrace a human-centric AI future, I would urge them to look at governance first. For every person who’s excited about AI’s potential, there is another who is deeply concerned: worried they may lose their job, or that their company’s or their own personal data could be compromised. That’s why governance matters. It’s about setting out guidelines and rules before setting off on your journey, so that you can proceed safely and scale rapidly.
To start on that journey, be clear about what you want to achieve. Rather than simply adopting AI to keep up with your competitors, ask yourself what success will look like in the future. Where do you want your business to be in five years, and how can AI be part of that? What will it feel like to be an employee in your future organization? Every citizen has a role to play in making AI work, so collaborate with your employees and upskill them.
The world is on the verge of something special with AI. Now is the moment for us all to look at how we make the technology work for humans. We can do that by being clear in our strategy, setting out guidelines that protect us and those around us, and taking everyone on the journey.
Antony Cook, Corporate Vice President and Deputy General Counsel, Microsoft
At Microsoft, we’re focusing on continuing to integrate AI into all our products responsibly, creating a foundation our customers can build upon as they leverage our AI technology. Our AI development and use is grounded in six principles:
1. Fairness: AI systems should treat all people fairly.
2. Reliability and safety.
3. Privacy and security.
4. Inclusiveness: AI systems should empower everyone and engage people.
5. Transparency: AI systems should be understandable.
6. Accountability: people should be accountable for AI systems.
We have created tools and systems to ensure these principles are put into place in every product or system we develop. And we’ve created resources for our customers, leveraging our learnings, to help them ensure their use and development of our AI products is done responsibly and in alignment with all of these core principles.
When it comes to ensuring that AI is adopted and used responsibly, there are three key areas that I consider essential:
1. Leadership must be committed and involved: For responsible AI to be meaningful, it has to start at the top. At Microsoft, we have created a Responsible AI Council to oversee our efforts across the company. The Council is chaired by Microsoft’s Vice Chair and President, Brad Smith, and our Chief Technology Officer, Kevin Scott, who sets the company’s technology vision and oversees our Microsoft Research division. This joint leadership is core to our approach, sending a clear signal that Microsoft is committed not just to leadership in AI, but to leadership in responsible AI. The Responsible AI Council meets regularly and brings together representatives of our core research, policy, and engineering teams dedicated to responsible AI. As customers consider how to structure their own responsible AI programs and governance, it’s imperative that senior leaders across multiple areas of the company are involved and directly engaged.
2. Build inclusive governance models and actionable guidelines: Each company should create a responsible AI governance model that is inclusive, bringing together representatives from engineering, research, and policy teams to develop and implement the governance model and the company’s guidelines around responsible AI. We have senior leaders tasked with spearheading responsible AI within each core business group at Microsoft, and we continually train and grow a large network of responsible AI “champions” to give us broader representation across the globe. Last year, Microsoft publicly released the second version of our Responsible AI Standard, our internal playbook for how to build AI systems responsibly. We encourage companies to review this document and to adopt any of our practices that they find beneficial.
3. Invest in and empower your people: Standards and plans are great but will not be meaningful without training your employees to support the rollout of responsible AI across the company. We have invested significantly in responsible AI over the years, with new engineering systems, research-led incubations, and, of course, people. We now have nearly 350 people working on responsible AI, with just over a third of those dedicated full time; the remainder have responsible AI responsibilities as a core part of their jobs. Our community members have positions in policy, engineering, research, sales, legal, and other core functions, touching all aspects of our business.
Last summer, we launched our AI Customer Commitments, building on the resources we had already made available to our customers. We committed to continuing to share what we are learning about developing and deploying AI responsibly, and to helping companies learn how to do the same. Through our AI Assurance Program, we have offered to help customers ensure that the AI applications they deploy on our platforms meet the legal and regulatory requirements for responsible AI, including support with regulator engagement and advocacy and with implementing their risk frameworks. And, finally, we have launched and will continue to grow our Responsible AI partner program, leveraging partners like KPMG to assist our mutual customers in deploying their own responsible AI systems.
There is tremendous potential in AI, and creating and using it responsibly will be key for us and our customers across the globe.