December 19, 2024

Investment information for the new generation


Cyber Risk in the Metaverse

The TV series Gilmore Girls singlehandedly got me through my first year of university. I don’t know if it was that Lorelai reminded me of my mother, or that library-dweller Rory reminded me of myself, or that the show seemed to insinuate that you could eat a pound of Chinese takeout and marshmallows and ice cream and still fit into a pair of size 0 low-waisted jeans (it was, after all, the early 2000s). In any case, it served as a remedy for homesickness and justification for overeating on a Tuesday.

And to those of you who are not fans of the show on account of it being too “girly” or too “slow” or too “meandering,” I argue that, at the very least, you have to respect its creators. Because the people behind Gilmore Girls asked a very bold question.

And the question was: What if there was a town where nothing happens, and people just walked around in it and every single person in the town (aside from Lorelai) was mildly insufferable?

Then a lightbulb went off in my head and I made a connection that only those with a hard-hitting journalism background could possibly uncover. The Metaverse-people and the Gilmore Girls-people are colluding. What better way to describe the Metaverse than “a world where nothing happens, and avatars just walk around in it and every single person in this world is utterly insufferable”?

Of course, there are some key differences. The metaverse is clearly lacking Amy Sherman-Palladino’s fast-paced, witty dialogue. It is also lacking a clear plot. The metaverse is ever-changing (and therefore ever-provoking my anxiety), whereas Gilmore Girls serves as a constant in my life (I know exactly what I am going to get every time).

But what may be the most apparent discrepancy between the two is something known as cyber risk. Seeing as the only potential threat Gilmore Girls poses is Netflix rudely asking, three hours into a season 4 binge on a Friday evening, “are you still watching?”, only my ego is at stake.

In the metaverse, apparently, we are all at far greater risk.


Imagine discussing a confidential million-dollar deal with your boss. You drink whiskey (because that’s what people do when making confidential million-dollar deals), sign a little paper, and leave the room.

A few days later you meet up with your boss again and toss them a wink (feeling very macho for making said confidential million-dollar deal). Your boss feels uncomfortable, and you soon realize that they have no recollection of your very fancy, confidential, million-dollar deal.


What just happened?

If this occurred in reality, you may be suffering from schizophrenia, and I’d suggest medical attention. In the metaverse, however, this might mean you were the victim of a hacked avatar or a deepfake. Deepfakes are manipulated digital likenesses that look or sound like someone else.

Sidebar: The panic surrounding technology’s power to spread false information has only grown since the 2016 election, when “fake news” became a household term. Computers can now generate fake videos of people saying and doing things they would never do. These videos are called “deepfakes” due to their use of a type of machine learning, or AI, called deep learning.

Deepfakes are just one way AI can mess with your perception. Now, computers using deep learning can create AI-generated faces that do not exist in real life. I hate it here.

Everyone always rushes to the new shiny thing. Companies like Meta (Facebook) and Ralph Lauren are hurrying to plant their pixelated flag in the virtual world without acknowledging the blatant cybersecurity risks the metaverse presents.

Prabhu Ram, head of the industry intelligence group at CyberMedia Research and man who is far more articulate than I am, puts it best:

“Since the contours and potential of the metaverse are yet to be fully realized, the overt concerns around privacy and security issues in the metaverse remain confined to only a few ‘tech-aware’ companies…

As new attack vectors emerge, they will require a fundamental realignment of today’s security paradigms to identify, verify and secure the metaverse.”


Cybercrime is the new norm. 

Lately I’ve been discovering a myriad of things I didn’t even know I had to be concerned about. The most recent finding, of course, is the knowledge that someone could make a fake video of me that looks like a real video of me. What if they put me in something awful (like a “Make America Great Again” hat), leaving strangers and acquaintances alike to ponder my Trumpism (bleh! God forbid)?

Cybercrime in the real world is becoming more and more rampant. The cybersecurity firm Check Point reported a 50% increase in overall weekly attacks on corporate networks in 2021 compared with the previous year. Someone send this stat over to Ralph and Mark.


‘Tech people’ and their tech opinions of how to combat cyber risk:

Gary Gardiner, head of security engineering at Check Point Software Technologies and man with a great ring to his name, argued that companies involved in designing the metaverse will have to work together to establish a common security standard.

‘Tech people’ are looking at using blockchain to identify and verify users. Alternatively, to complicate things more, ‘tech people’ have also suggested using tokens that could be assigned by an organization (I don’t really understand this one – it’s clearly a ‘tech’ thing). Another option is biometrics (the measurement of physiological characteristics, like an iris pattern) built into a headset. In any case, these are all potential ways of creating trust among users, so you actually know who you are talking to.
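For the curious (or the fellow confused), here is a minimal sketch of what that token idea might look like in practice. This is purely illustrative, not how any metaverse platform actually works: it assumes an organization hands each verified user a secret token, and an avatar then proves it holds the token through a challenge-response check, so the token itself is never sent over the wire.

```python
import hmac
import hashlib
import secrets

def issue_token() -> bytes:
    """The organization assigns a random secret token to a verified user."""
    return secrets.token_bytes(32)

def sign_challenge(token: bytes, challenge: bytes) -> str:
    """The avatar answers a random challenge using its token (HMAC-SHA256),
    proving possession of the token without revealing it."""
    return hmac.new(token, challenge, hashlib.sha256).hexdigest()

def verify_avatar(token_on_record: bytes, challenge: bytes, response: str) -> bool:
    """The platform recomputes the expected answer and compares in constant time."""
    expected = hmac.new(token_on_record, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# A legitimate avatar passes the check; a hacked avatar with a stolen
# identity but no token fails it.
token = issue_token()
challenge = secrets.token_bytes(16)
print(verify_avatar(token, challenge, sign_challenge(token, challenge)))   # True
print(verify_avatar(token, challenge, sign_challenge(issue_token(), challenge)))  # False
```

The deep-breath takeaway: the trust comes from the secret the organization issued, not from how the avatar looks, which is exactly what a deepfake can’t forge.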

Gardiner also suggested having “little exclamation marks” above avatars’ heads to signal that a person is untrustworthy. I think this one is kind of cute – too bad we haven’t found a way to transfer this to reality – it would’ve saved a lot of time in my girlfriends’ dating lives.


Data Breaches – because apparently, things can always get worse. 

I’m sure you recall the 2018 Facebook and Cambridge Analytica scandal. It was that cute period of time when the world realized that millions of users’ personal data had been harvested and used without their consent to manipulate political elections.

You can only imagine the field day corrupt companies could have with the trails of data scattered around the metaverse. The invasion of user privacy by tech companies will only grow more dangerous in the metaverse if left unchecked.

When users are wearing devices like VR headsets, organizations can collect intimate data such as their head movements, eye movements and voice. But wait, isn’t this data necessary for preventing the aforementioned “cyber risk,” so you can verify who you are talking to? Yes. Good point, well made. Seems the ‘tech people’ don’t have an answer for this yet.

The fact that within seconds a company can identify who is wearing a particular device and track the intricacies of that person’s behavior presents an infinite number of problems. I could go on ranting about how deeply unsettling this is, but I get the sense you already know.


The end for now. 

Microsoft co-founder Bill Gates predicted in a blog post in December that within the next two to three years, most virtual meetings will move to the metaverse.

“The foundation [of the metaverse] has to be done well because if the foundation is weak and it’s not done well, people will lose confidence in the platform, and we’ll stop using it.” – Gary Gardiner

Until next week.

 
