In its new report ‘The Rise of Artificial Intelligence: Future Outlook and Emerging Risks’, Allianz Global Corporate & Specialty (AGCS) identifies both the benefits and emerging risk concerns around the growing implementation of artificial intelligence (AI).
Today, ‘weak’ or basic forms of AI are able to perform specific tasks, but future generations of ‘strong’ AI applications will be capable of solving difficult problems and executing complex transactions. AI is finding uses in almost every industry - from chatbots that offer financial advice to software that helps doctors diagnose cancer. Better weather prediction, financial transfers and oversight of industrial machinery also fall within the scope of AI. According to Accenture, AI could double the annual economic growth rate in 12 developed economies by 2035 (1).
With these potential benefits, however, come risks. Cyber attacks, which are one of the biggest threats to businesses according to the Allianz Risk Barometer 2018, illustrate the contradictory benefits of new technologies such as AI: for instance, AI-powered software could help reduce cyber risk for companies by better detecting attacks, but could also increase risk if malicious hackers are able to take control of systems, machines or vehicles.
Furthermore, AI could enable more serious and targeted cyber incidents to occur by lowering the cost of devising attacks. The same hacker attack – or programming error – for example, could be replicated on numerous machines. It is already estimated that a major global cyber attack has the potential to trigger losses in excess of $50 billion (2), but even a half-day outage at a cloud service provider has the potential to generate losses around $850 million (3).
Emerging AI risks
To identify emerging AI risks, AGCS has focused on five areas of concern: software accessibility, safety, accountability, liability and ethics.
In terms of safety, the race to bring AI systems to market could lead to insufficient or negligent validation activities, which are necessary to guarantee the deployment of safe and functional AI agents. This, in turn, could lead to an increase in defective products and recalls.
AI agents may take over many decisions from humans in the future, but they cannot legally be held liable for those decisions. In general, the manufacturer or software programmer of an AI agent is liable for defects that cause damage to users. However, AI decisions that are not directly related to design or manufacturing, but are taken by an AI agent because of its interpretation of reality, would have no explicitly liable party under current law.
“Leaving the decisions to courts may be expensive and inefficient if the number of AI-generated damages starts increasing,” Michael Bruch, Head of Emerging Trends at AGCS, says. “A solution to the lack of legal liability would be to establish expert agencies or authorities to develop a liability framework under which designers, manufacturers or sellers of AI products would be subject to limited tort liability.”
Insurers will have a crucial role to play in helping to minimize, manage and transfer emerging risks from AI applications. Traditional coverages will need to be adapted to protect consumers and businesses alike. Insurance will need to better address certain exposures to businesses such as cyber attacks, business interruption, product recall and reputational damage. New liability insurance models will likely be adopted – in areas such as autonomous driving – increasing the pressure on manufacturers and software vendors and decreasing the strict liability of consumers.
- (1) Accenture, Why Artificial Intelligence is the Future of Growth, January 2017
- (2) Lloyd’s, Extreme cyber-attack could cost as much as Superstorm Sandy, July 2017
- (3) Allianz Risk Barometer 2018, January 2018.