In simple terms, the metaverse is a virtual space combining augmented and virtual reality. Think of people, cities and countries that exist digitally rather than in flesh and bone or mortar and steel. The metaverse brings together the power of simulation technologies that we’ve championed over the last century and builds virtual worlds (almost like a video game) where people can have an immersive experience.
Fashion retailers have flocked to the metaverse. Some are creating stores where users can dress their avatars, purchase products in a virtual store, and have them delivered in real life to their homes. Other firms have hosted a Metaverse Fashion Week, with brands showcasing their latest creations on the runway.
Grocery retailers are also looking to set up shop in the metaverse, where users walk around virtual grocery stores, fill their baskets, pay, and have the real thing delivered to their door. If you feel like some lunch after shopping, you might be able to order your favourite burger and fries.
As well as using established e-commerce platforms, we could also take a walk in the metaverse, browsing shops, picking up items, checking adverts and offers and having them delivered. The metaverse is becoming another option in the world of omnichannel retail.
The metaverse could also be a melting pot of machine learning technologies like Computer Vision (CV), Natural Language Processing (NLP) and Reinforcement Learning (RL). More companies are seeing the potential of bringing voice and vision technology together for decision-making. But technological advancements in voice and vision alone can’t take us close to Artificial General Intelligence unless we can use these modalities in a concerted way to make decisions.
It means the metaverse may become a virtual experimental ground – a massive multiplayer game where we get to create, train, and deploy a rich mix of machine learning technologies to develop new retail options and consumer experiences.
Financial institutions and social media giants are among those joining retailers in the rush to claim their slice of the metaverse pie. But crucially, will consumers want the metaverse? Remember, we will have to devote physical (not virtual) time, money and energy to interact with the metaverse.
At the same time, I can see it being akin to playing a video game where you build community with your virtual neighbours. But do we have the cognitive capacity to live a parallel life? Don’t we already have information overload?
Yes, plenty of companies will mint money by showing venture capitalists and users the promised metaverse land, but it’s crucial to ask: what is the product? The “build it and the customers will come” mantra may not work. So, what could work?
In my opinion, the first step towards unleashing the metaverse is being able to (a) use it to make decisions and (b) use it as a synthetic world for generating machine learning (ML) data. Product-market fit needs to be built incrementally; otherwise, the customer adoption argument becomes quite strained if industries simply open shops in virtual spaces and use digital currencies or non-fungible tokens (NFTs) to buy and sell digital assets.
The incremental step towards building such a metaverse and addressing points (a) and (b) above is the digital twin, a subset of the metaverse. Take a small piece of the natural world, say a retail store, and use a dumbed-down metaverse (the digital twin) to enable real-time visibility of all assets (commodities, store associates, supply chain flows, etc.).
Then use technologies like CV to measure supply and demand at the store in real time. NLP can sieve through thousands of correspondences and tell you what tasks need to be done. Finally, under the constraints of the digital twin, RL can make decisions about how the future might unfold.
This gives store managers a real-time view of store operations and moves the nascent field of digital twins towards usable decision-making. Technologically, it allows us to combine various voice and vision attributes and make optimal decisions.
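To make the idea concrete, here is a minimal sketch of a store digital twin that fuses signals from separate CV, NLP and RL components into one real-time view. All class names, method names and thresholds are illustrative assumptions, not any particular vendor's API; the "RL policy" is stood in for by a simple threshold rule.

```python
from dataclasses import dataclass, field

@dataclass
class StoreTwin:
    # Shelf stock counts as reported by a (simulated) CV pipeline.
    stock: dict = field(default_factory=dict)
    # Tasks extracted from correspondence by a (simulated) NLP pipeline.
    tasks: list = field(default_factory=list)

    def update_from_cv(self, shelf_counts: dict) -> None:
        """Overwrite stock levels with the latest camera-derived counts."""
        self.stock.update(shelf_counts)

    def update_from_nlp(self, extracted_tasks: list) -> None:
        """Append tasks sieved from emails and messages."""
        self.tasks.extend(extracted_tasks)

    def decide_restock(self, threshold: int = 5) -> list:
        """A stand-in for an RL policy: flag items whose stock is low."""
        return [item for item, count in self.stock.items() if count < threshold]

twin = StoreTwin()
twin.update_from_cv({"apples": 12, "bananas": 3, "milk": 1})
twin.update_from_nlp(["confirm Friday delivery slot"])
print(twin.decide_restock())  # bananas and milk fall below the threshold
```

In a real deployment each `update_from_*` call would be driven by a live model rather than hard-coded values, but the pattern is the same: the twin holds the shared state, and each ML component reads from or writes to it.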
The other incremental step towards the metaverse again centres on the digital twin, but this time with a view to generating synthetic data. Some technology companies and start-ups are already championing this line of thought. The central concept behind all of this is domain randomisation.
A digital twin enables us to create synthetic worlds and various subsets of the same world. What happens if my bedroom is painted red instead of white, or the roads I walk on are pebbled instead of concrete?
For example, most deep-learning-based CV algorithms require a ton of training data. The digital twin (if constructed with rigour to reduce covariate shift from the natural environment) can supply us with annotated synthetic data, be it millions of kilometres of driving data for self-driving cars or hundreds of permutations of objects under different viewing conditions, e.g. fruit and veg in store will look different when seen from the left or the right, at night or during the day.
Using computer graphics, one can have CV algorithms look at all the possible variations of your fruit and veg. We’re seeing a turn to data augmentation in other industries too, such as the deep learning algorithms fellow researchers and engineers are developing for machine vision cameras.
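The idea can be sketched with a few lines of NumPy: take one rendered image of a shelf (here just random pixels standing in for output from the digital twin's renderer) and emit several randomised variants that change lighting, colour and viewpoint while keeping the same label. The function name and parameter ranges are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomise(image: np.ndarray, n_variants: int = 4) -> list:
    """Return n_variants randomised copies of an HxWx3 float image in [0, 1]."""
    variants = []
    for _ in range(n_variants):
        out = image.copy()
        out *= rng.uniform(0.6, 1.4)          # global brightness (lighting change)
        out *= rng.uniform(0.8, 1.2, size=3)  # per-channel colour cast (material change)
        if rng.random() < 0.5:                # viewpoint change: mirror left/right
            out = out[:, ::-1, :]
        variants.append(np.clip(out, 0.0, 1.0))
    return variants

# One "rendered" shelf image stands in for the digital twin's renderer output.
shelf = rng.uniform(0.0, 1.0, size=(32, 32, 3))
augmented = randomise(shelf)
print(len(augmented), augmented[0].shape)
```

A production pipeline would randomise inside the renderer itself (textures, camera pose, light sources) rather than post-processing pixels, but the principle, one labelled scene yielding many randomised training examples, is the same.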
Similarly, RL algorithms have a tough time generalising across randomised domains, i.e. random changes in material (texture, colour), light direction, lighting conditions, and the placement of objects. The metaverse concept could help us alleviate some of these data-efficiency problems.
In summary, incremental steps can take companies from today’s useful virtual experiences into focused end-product discussions built around the digital twin, so that excitement for the metaverse takes shape and the mismatch between what the product delivers and what the market needs narrows. Unchecked false assumptions can kill companies, no matter how technically great the solution is.