Is VR Mind-Hacking Prevention Possible?

October 27, 2016

Virtual reality is emerging as the technology of the future – but what threats and challenges could it present to business, consumers and government?
Virtual reality (VR) is a technology that captures the imagination of consumers and businesses alike – both figuratively and literally.
We have had fantastical portrayals of VR in literature and film for many years, and the technology is now starting to make waves in the real world. The VR headset market has exploded, with new entrants like the Sony PlayStation VR going up against more established players like the Oculus Rift, Samsung Gear VR and HTC Vive.
We are also seeing VR move into new markets, with Lloyd's Bank using the technology for its graduate intake and Alibaba rolling out a payment service for VR shoppers. Companies are also pushing the boundaries of the medium – Microsoft recently published research on adding haptic feedback to virtual experiences. This could make VR even more lifelike, with people able to touch and feel the virtual.
Like all new technologies, VR is not immune to criticism and wild stories about the dangers of immersing yourself into the virtual world. From hackers taking over your mind to brain damage, there has been much discourse on the inherent dangers of the technology – but is there any truth to these dangers and if so, how can we protect against them?

PlayStation VR is the latest member of the PS4 family and the newest addition to the VR headset market.
Like all other connected technology, one of the biggest security risks associated with VR concerns data and privacy. Although less exciting than some of the myths surrounding VR, the security of the information these devices generate will be one of the biggest concerns – just as it is for today's connected devices.
“If personal data is collected (for example, through payment mechanisms or profiles of users), data protection compliance obligations will come into play. Transparency and consent, in terms of what data is collected and used for, will be paramount – as will ensuring that there are robust technical and organisational mechanisms in place to ensure that data is kept secure,” Elle Todd, Head of Digital and Data at Olswang, told CBR.
However, there is one major difference between the data collected with today’s devices and the information gathered by VR devices. The most personal information which makes us unique will be put into play – our behaviours, our actions, our movements, what we look at and even our brain waves. This raises a concern often linked to data security – privacy.
“One of the main concerns about VR technology is around privacy, for it introduces the capability to collect new types of very sensitive and very precise data about its users,” Teesside University lecturer Joao Ferreira told CBR.
“Oculus’s privacy policy, for example, states that they automatically collect location information and information about physical movements and dimensions. It is reasonable to expect that future mainstream VR devices will also collect information on emitted brain waves and patterns.”
This could give rise to a whole new level of identity theft, with hackers seizing the new data sets to create more elaborate impersonations. Dr Ferreira even points to the possibility of hackers exploiting brain-computer interfaces in order to extract information such as bank card details and PINs.
The hackers themselves, who are constantly evolving and seeking new exploits and methods, will in part stick to the tried and tested hacks. Positive Technologies’ Alex Matthews spoke to CBR about hackers leveraging the simplicity touted as a benefit of the VR world, with users unwittingly deploying a Trojan or leaking their password with just a wave of a hand. Phishing, meanwhile, could be done via fake virtual objects – a ‘duping’ method already used by scammers according to Mr Matthews.
However, the most dangerous VR objects may take an entirely new form, with Mr Matthews saying: "AI agents will be, perhaps, the most dangerous VR objects. AI is a hard task for security checks since the range of its actions and reactions could be pretty wide. Some AI bots, like Siri, are programmed to be spontaneous to sound 'more natural'. So how can you tell a hacked AI bot from a secure one?"
Hackers will try to manipulate the virtual to create profit in the physical world – you need only look at how Pokemon Go was used by scammers to lure players to a location and mug them. However, they will also try to manipulate the virtual in order to cause real physical harm, with Mr Matthews saying:
“VR provides instruments for mind-hacking. It is known that stereoscopic vision systems may cause dizziness, nausea, blurred vision, muscle twitching, headache and disorientation. For vendors, it’s a side-effect they try to reduce; but for hackers, it could be the way to attack you if they learn how to increase these side-effects.”
There is also a danger, although it is unknown whether it would be profitable for malicious actors, that physical harm could extend to the psychology of the user. Where there is a risk, there are people looking to take advantage, and serious thought needs to be given to the blurring of the real and virtual worlds and its impact on the mind. Although perhaps not strictly a security matter, supervision will need to play a part in the VR future, as AKQA's Andy Hood told CBR:
“In virtual environments people are very likely to adopt personas and avatars that represent an idealised version of who they are, or even someone or something entirely different. The highly immersive nature of virtual reality experiences leads to concerns, particularly as young people are more closely connected online than ever. VR presents an extra dimension to these problems, which requires much stricter supervision and security.”

The UK’s new national cyber centre, announced by the Chancellor in November, will be called the National Cyber Security Centre (NCSC)
In the UK, we could expect the new National Cyber Security Centre to issue guidelines and policies, as well as the continuation of the ICO regulating any breach of data protection laws.
Alex Matthews envisions “some strict PCI DSS-like security standards for VR services where financial operations are involved”, as well as “security audits for VR objects and worlds, similar to penetration testing used currently for websites and other critical applications.”
However, this is all really speculation, as you cannot form a defence when you have no experience of the attack. This brings the VR security issue full circle; the technology is still in its infancy and there are not yet any significant data points from which to form a defence against the risks.
However, at its core VR is a connected technology. We know that connected technologies create a greater attack surface for hackers – hackers who are looking to turn a profit on personal and valuable data. We can assume that hackers will want the greater data sets offered by virtual reality, leading to more advanced identity theft and data leaks.
Security teams need, as in other areas, to adopt a ‘when, not if’ attitude to VR – security needs to be constantly evolving, with emerging threats identified, assessed for risk and monitored. Government and industry also need to look at this emerging technology and try to create a security benchmark for both consumers and business.
Virtual Reality, even now, is enhancing our digital experience – but it is also enhancing the risks to our digital identity and creating a new revenue stream for those with malicious purposes. As we enter this new virtual world, we must remember that our virtual actions could have far-reaching real-world consequences.
