
Robot risks: examining the underlying insecurities in the future of robotics

The social influence of robots on people, and the insecurities this can bring, should not be underestimated. Research conducted by Kaspersky and Ghent University has found that robots can effectively extract sensitive information from people who trust them, and can persuade them to take unsafe actions. For example, in certain scenarios, the presence of a robot had a significant impact on people’s willingness to grant access to secure buildings.

The world is rapidly moving towards increased digitalization and mobility of services, with many industries and businesses relying heavily on automation and the use of robotic systems. Currently, most of these robotic systems are at the academic research stage, and it is too early to discuss how to incorporate cyber security measures. However, Kaspersky and Ghent University’s research found a new and unexpected dimension of risk associated with robotics – the social impact robots have on people’s behaviour, and the potential danger and attack vector this brings.

The research focused on the impact of a specific social robot – one designed and programmed to interact with people using human-like channels, such as speech and non-verbal communication – on around 50 participants. Assuming that social robots can be hacked, and that an attacker had taken control in this scenario, the research examined the potential security risks of the robot actively influencing its users to take certain actions, including:

Gaining access to off-limits premises. The robot was placed near a secure entrance of a mixed-use building in the city centre of Ghent, Belgium, and asked staff if it could follow them through the door. By default, the area can only be accessed by tapping a security pass on the door access readers. During the experiment, not all staff complied with the robot’s request, but 40 percent did unlock the door and hold it open to let the robot into the secured area. However, when the robot was positioned as a pizza delivery person, holding a box from a well-known international takeaway brand, staff readily accepted the robot’s role and seemed less inclined to question its presence or its reasons for needing access to the secure area.

Extracting sensitive information. The second part of the study focused on obtaining personal information which would typically be used to reset passwords (including date of birth, make of first car, favourite colour, etc.). Again, the social robot was used, this time inviting people to make friendly conversation. With all but one participant, the researchers managed to obtain personal information at a rate of about one item per minute.

Commenting on the results of the experiment, Dmitry Galov, Security Researcher at Kaspersky, said, “At the start of the research we examined the software used in robotic system development. Interestingly we found that designers make a conscious decision to exclude security mechanisms and instead focus on the development of comfort and efficiency. However, as the results of our experiment have shown, developers should not forget about security once the research stage is complete. In addition to the technical considerations there are key aspects to be worried about when it comes to the security of robotics. We hope that our joint project and foray into the field of cybersecurity robotics with colleagues from the University of Ghent will encourage others to follow our example and raise more public and community awareness of the issue.”

Tony Belpaeme, Professor in AI and Robotics at Ghent University added: “Scientific literature indicates that trust in robots and specifically social robots is real and can be used to persuade people to take actions or reveal information. In general, the more human-like the robot is, the more it has the power to persuade and convince. Our experiment has shown that this could carry significant security risks: people tend not to consider them, assuming that the robot is benevolent and trustworthy. This provides a potential conduit for malicious attacks and the three case studies discussed in the report are only a fraction of the security risks associated with social robots. This is why it is crucial to collaborate now to understand and address emerging risks and vulnerabilities – it will pay off in the future.”

Read the paper ‘The potential of social robots for persuasion and manipulation: a proof of concept study’ (PDF).


