This article was written by Louis Rosenberg, CEO and Chief Scientist at Unanimous AI
My first experience in a virtual world came in 1991, as a doctoral student working in a virtual reality laboratory at NASA. I was using a variety of early VR systems to model interocular distance (i.e., the distance between your eyes) and optimize depth perception in software. Despite being a true believer in the potential of virtual reality, I found the experience somewhat miserable. Not because of the low fidelity, which I knew would steadily improve, but because having a scuba-like mask strapped to my face for extended periods felt confining and claustrophobic.
Even when I used early 3D glasses (i.e., shutter glasses for viewing 3D content on flat monitors), that locked-in feeling didn't go away. I always had to keep my gaze pointed forward, as if I were wearing blinders to the real world. There was nothing I wanted more than to take the blinders off and let the power of virtual reality be splashed across my actual physical surroundings.
This led me to develop the Virtual Fixtures system for the U.S. Air Force, a platform that enabled users to manually interact with virtual objects precisely integrated into their perception of a real environment. This was before terms like "augmented reality" or "mixed reality" had been coined. But even in those early days, watching users enthusiastically experiment with the prototype system, I was convinced that the future of computing would be a seamless fusion of real and virtual content displayed all around us.
Fast-forward 30 years, and the word "metaverse" has suddenly become fashionable. At the same time, virtual reality hardware has become significantly cheaper, smaller, and lighter, with much higher fidelity. And yet the same problems I experienced three decades ago remain. Like it or not, wearing a scuba mask is unpleasant for most people, leaving you feeling cut off from your surroundings in ways that just aren't natural.
This is why the metaverse, when it is widely adopted, will be an augmented reality environment accessed through transparent lenses. This will hold true even though full VR hardware will offer significantly higher fidelity. Visual fidelity is simply not the factor that will drive adoption at scale. Instead, adoption will hinge on which technology delivers the most natural experience to our perceptual system. And the most natural way to present digital content to the human perceptual system is to integrate it directly into our physical surroundings.
Of course, a minimum level of fidelity is required, but what matters far more is perceptual consistency. By this I mean that all of the sensory signals (i.e., sight, sound, touch, and motion) feed into a single mental model of the world in your brain. With augmented reality, this can be achieved with relatively low visual fidelity, as long as the virtual elements are spatially and temporally registered to your surroundings in a convincing way. And because our sense of distance (i.e., depth perception) is relatively coarse, that registration is not especially hard to make convincing.
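To put a rough number on how coarse depth perception is, the sketch below uses the standard stereo-geometry approximation: the binocular disparity of a point at distance z is roughly IPD / z, so the smallest resolvable depth difference grows with the square of distance. The interpupillary distance and stereoacuity values are typical textbook figures, not measurements from my own work.

```python
import math

# Illustrative stereo-geometry sketch (typical assumed values, not measured data).
# Disparity angle for a point at distance z: theta ~= IPD / z.
# Smallest resolvable depth difference:      dz ~= z^2 * d_theta / IPD.
IPD = 0.064                                   # typical interpupillary distance, meters
STEREO_ACUITY = math.radians(20.0 / 3600.0)   # ~20 arcseconds, a common estimate

def depth_resolution(z: float) -> float:
    """Approximate smallest depth difference (meters) resolvable at distance z (meters)."""
    return (z ** 2) * STEREO_ACUITY / IPD

for z in (0.5, 2.0, 10.0):
    print(f"at {z:4.1f} m, stereo depth resolution is roughly {depth_resolution(z) * 100:.2f} cm")
# at 0.5 m -> ~0.04 cm; at 2.0 m -> ~0.6 cm; at 10.0 m -> ~15 cm
```

The point of the arithmetic: beyond a few meters, depth errors of several centimeters fall below what stereo vision can discriminate, which is why convincing spatial and temporal registration, rather than pixel-perfect placement, carries the illusion.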
For virtual reality, though, providing a unified sensory model of the world is much harder. This might seem surprising, because it's much easier for VR hardware to deliver high-fidelity visuals without lag or distortion. But unless you use elaborate and impractical equipment, your body is sitting or standing still while most virtual experiences involve motion. That inconsistency forces your brain to build and maintain two separate models of your world: one for your real surroundings and one for the virtual world presented in your headset.
When I tell people this, they often push back, forgetting that no matter what is displayed in their headset, their brain still maintains a model of their body sitting in a chair, facing a particular direction in a particular room, feet touching the floor, and so on. Because of this perceptual inconsistency, your brain is forced to maintain two mental models. There are ways to reduce the effect, but only when you merge the real and virtual worlds into a single cohesive experience (i.e., cultivate a unified mental model) is the problem truly resolved.
This is why augmented reality will inherit the earth. It will not only eclipse VR as the primary gateway to the metaverse but also replace the current ecosystem of phones and desktops as our primary interface to digital content. After all, walking down the street with your neck bent, staring at a phone in your hand, is not the most natural way to deliver content to the human perceptual system. Augmented reality is, which is why I firmly believe that within 10 years, AR hardware and software will become dominant, eclipsing the phones and desktops in our lives.
This will unlock incredible opportunities for artists, designers, and educators, who will suddenly be able to embellish our world in ways that defy physical constraints (see Metaverse 2030 for examples). Augmented reality will also give us superpowers, allowing each of us to change our world with the blink of an eye or the snap of a finger. And it will feel deeply real, as long as designers focus on the coherence of the perceptual signals feeding our brains and worry less about absolute fidelity. This principle was such an important revelation to me while working on augmented and virtual reality in the early 1990s that I gave it a name: perceptual design.
As for what the future holds, the vision currently portrayed by major platform providers, of a metaverse filled with cartoonish avatars, is misleading. Yes, virtual social worlds will become increasingly popular, but they will not be the means by which immersive media transforms society. The real metaverse, the one that will become the central platform of our lives, will be an augmented world. And by 2030, it will be everywhere.
Louis Rosenberg is CEO and Chief Scientist at Unanimous AI.