To combat potential risks, organizations need to take a holistic approach to responsible AI practices

Published: Thursday, 04 July 2019 08:11

The estimated $15.7trn economic potential of artificial intelligence (AI) will only be realised if responsible AI practices are integrated across organizations and considered before any development takes place, according to a new paper by PwC.

A piecemeal approach to AI's development and integration is exposing organizations to potential risks. Combating it requires organizations to embed end-to-end understanding, development and integration of responsible AI practices, according to a new toolkit published by PwC.

PwC has identified five dimensions that organizations need to focus on and tailor for their specific strategy, design, development and deployment of AI. These are: Governance; Ethics and Regulation; Interpretability and Explainability; Robustness and Security; and Bias and Fairness.

The dimensions focus on embedding strategic planning and governance in AI's development, addressing growing public concern about fairness, trust and accountability.

Earlier this year, 85 percent of CEOs said AI would significantly change the way they do business in the next five years, and 84 percent admitted that AI-based decisions need to be explainable in order to be trusted.

Speaking at the recent World Economic Forum in Dalian, Anand Rao, Global AI Leader, PwC US, says:

“The issues of ethics and responsibility in AI are clearly of concern to the majority of business leaders. The C-suite needs to actively drive and engage in the end-to-end integration of a responsible and ethically led strategy for the development of AI, in order to balance the potential economic gains with the once-in-a-generation transformation it can bring to business and society. One without the other represents fundamental reputational, operational and financial risks.”

As part of PwC’s Responsible AI Toolkit, a diagnostic survey enables organizations to assess their understanding and application of responsible and ethical AI practices. In May and June 2019, around 250 respondents involved in the development and deployment of AI completed the assessment.

The results demonstrate immaturity and inconsistency in the understanding and application of responsible and ethical AI practices.

Anand Rao adds:

“AI brings opportunity but also inherent challenges around trust and accountability. Realising AI's productivity prize requires integrated organizational and workforce strategies and planning. There is a clear need for those in the C-suite to review the current and future AI practices within their organization, asking questions not just to tackle potential risks, but also to identify whether adequate strategy, controls and processes are in place.

“AI decisions are not unlike those made by humans. In each case, you need to be able to explain your choices, and understand the associated costs and impacts. That’s not just about technology solutions for bias detection, correction, explanation and building safe and secure systems. It necessitates a new level of holistic leadership that considers the ethical and responsible dimensions of technology’s impact on business, starting on day one.”
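As a purely illustrative aside, and not drawn from PwC's toolkit, the kind of bias-detection check Rao alludes to can start with something as simple as comparing favourable-outcome rates across demographic groups in a model's decisions. The minimal sketch below assumes a hypothetical pandas DataFrame with illustrative "group" and "approved" columns; a real fairness assessment would go well beyond this single metric.

```python
import pandas as pd

# Hypothetical example: demographic parity difference, i.e. the gap between
# the highest and lowest favourable-outcome rate across groups.
# The column names ("group", "approved") and the data are illustrative assumptions.
def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str = "group",
                                  outcome_col: str = "approved") -> float:
    rates = df.groupby(group_col)[outcome_col].mean()  # favourable-outcome rate per group
    return float(rates.max() - rates.min())            # 0.0 would indicate parity across groups

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    print(f"Demographic parity difference: {demographic_parity_difference(decisions):.2f}")
```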

Find out more about PwC’s Responsible AI Toolkit at www.pwc.com/rai