Digital Reality™ technologies (augmented, mixed, and virtual reality) have been at the forefront of the technology market in news coverage, hype, and investment growth. Futurists predict these technologies will transform the way we live, work, and play, while critics point to significant barriers to adoption: device size, power constraints, and hardware weight. A key factor in the growth and impact of Digital Reality, however, will be the ever-increasing maturity of cloud computing.
Digital Reality requires immense computing power: stringent frame-rate targets, dynamic lighting and shadows, stereoscopic processing, and 360-degree environments with volumetric data and simultaneous localization and mapping (SLAM) must all be delivered in under 20 milliseconds. That is what creates the need for computing horsepower before the complexity of the content is even considered. These lofty graphics requirements translate into high hardware costs, severe battery constraints, and limits on graphical complexity.
All this leaves Digital Reality with a tradeoff: either tether to a powerful computer (often costing over $1,500) or settle for low-resolution, cartoonish 3D “holograms.” For consumers, this means the experience simply costs too much compared with traditional 2D entertainment. For enterprises, it makes it difficult to run already processor-intensive programs (such as CAD, BIM, and EHR systems) that contain extremely complex, dense datasets.
The world is more connected than ever, and demand for faster speeds and greater computational power keeps growing. Cloud computing has already had a significant impact, with estimates putting the market at nearly $250 billion in 2017 alone. These technologies let businesses virtualize components of the technology stack, running servers, enterprise platforms, and even business applications “by the minute,” consuming only as much as the business needs and avoiding the upfront, fixed overhead of costly IT. High-speed fiber-optic networks provide a global connective layer between consumers, businesses, and these cloud services.
Cloud computing already lets us run apps on a laptop, tablet, or smartphone that would normally be too complex for the device to handle. Speech recognition is a great example: the device uses a high-speed internet connection to reach a datacenter’s massive CPU banks and process the data there. A smartphone alone would struggle to store and search millions of voice samples to determine a user’s intent. Instead, the device compresses the voice sample and sends it to the cloud, which processes it and returns the result – an example of outsourcing computationally intensive work to the cloud.
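As a rough sketch of this offload pattern, the device’s job reduces to compress-and-ship. The cloud side is stubbed out here for illustration; the function names and the returned “intent” string are hypothetical, not a real speech API:

```python
import zlib

def recognize_in_cloud(compressed: bytes) -> str:
    """Stand-in for the datacenter side: decompress the sample and run
    the heavyweight speech model there (stubbed for illustration)."""
    audio = zlib.decompress(compressed)
    return f"intent for {len(audio)}-byte sample"

def handle_voice_request(raw_audio: bytes) -> str:
    # The device only compresses the sample and ships it upstream;
    # all computationally intensive processing stays in the cloud.
    payload = zlib.compress(raw_audio)
    return recognize_in_cloud(payload)
```

The same shape – compress locally, compute remotely, return a small result – is what makes the idea of offloading graphics work plausible.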
Recent unveilings from major players in the cloud industry signal a seismic shift in the future of computing. Cloud providers have been installing graphics processing units (GPUs) in their data centers, giving cloud platforms the ability to render graphics. This GPU-enabled cloud lets programs that would otherwise require an expensive, dedicated local GPU be virtualized and run in the cloud efficiently and cost-effectively. Just as with speech recognition on a mobile device, graphics processing and rendering can now happen in the cloud rather than on local hardware, removing the need for expensive, bulky, power-hungry GPUs at the device level.
We can call this graphics as a service (GaaS): cloud computing’s upcoming capability that will enable Digital Reality hardware to become more powerful while also enabling significant miniaturization by removing substantial hardware requirements. This GaaS cloud-rendering capability will rely on technology innovation from telecom and network companies, centered on the need for low-latency connections and a massive increase in data volumes.
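A minimal sketch of the division of labor GaaS implies, with the cloud renderer stubbed out and all names hypothetical: the headset streams only its pose upstream each frame and receives an encoded frame back, so no rendering hardware lives on the device.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Headset position and orientation, sent upstream each frame."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

def render_in_cloud(pose: Pose, width: int, height: int) -> bytes:
    # Stand-in for the GPU-equipped datacenter: render the scene from
    # the headset's viewpoint and return an encoded video frame.
    return f"frame {width}x{height} @ yaw={pose.yaw:.1f}".encode()

def device_frame_step(pose: Pose) -> bytes:
    # The device ships a few bytes of pose data instead of rendering
    # locally; decoding the returned frame is comparatively cheap.
    return render_in_cloud(pose, width=1920, height=1080)
```

The design point is the asymmetry: a pose is a handful of floats going up, while a rendered frame is the only heavy payload coming down.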
As GaaS comes to market, alongside widespread access to low-latency connections like 5G, cloud rendering will become standard practice. Over the next three to five years, we expect data to be delivered to devices within 10-20 milliseconds over fiber optics, or in 1-4 milliseconds over 5G networks; current VR systems typically operate with 30-40 milliseconds of latency. These high-speed networks can provide the critical connection to virtually unlimited processing power via GaaS, rendered and housed in the cloud, to deliver Digital Reality experiences to consumers and the enterprise. We are already seeing entertainment companies leverage cloud computing to offer consumers an alternative to purchasing expensive hardware.
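The arithmetic behind these figures can be made explicit. In the sketch below, only the network round-trip numbers come from the text; the render, codec, and display allowances are illustrative assumptions:

```python
BUDGET_MS = 20.0  # motion-to-photon comfort threshold cited above

def motion_to_photon_ms(network_rtt_ms: float,
                        cloud_render_ms: float = 5.0,  # assumed GPU render time
                        codec_ms: float = 3.0,         # assumed encode + decode
                        display_ms: float = 1.0) -> float:  # assumed scan-out
    """Total latency from head movement to updated pixels when the
    frame is rendered in the cloud rather than on the device."""
    return network_rtt_ms + cloud_render_ms + codec_ms + display_ms

fiber_best = motion_to_photon_ms(10.0)   # low end of the fiber range
fiber_worst = motion_to_photon_ms(20.0)  # high end of the fiber range
over_5g = motion_to_photon_ms(4.0)       # high end of the 5G range
```

Even with optimistic overheads, the high end of the fiber range overshoots the 20-millisecond budget, which is why the much tighter 5G round trips matter so much for cloud rendering.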
For devices to shed local graphics-processing power via GaaS, latency requirements must be established and met through edge computing and data centers for cloud rendering. Network innovations that allow resources to be allocated more flexibly will be crucial to enabling this fast-paced processing. AT&T is pioneering such edge rendering with breakthrough companies such as Gridraster by embedding graphics-processing capabilities in local cells and towers rather than centralized servers. Edge-node providers will see a spike in demand for local low-latency connections as companies begin to let customers "rent" graphics-processing power via the cloud.
When fiber-optic speeds and 5G infrastructure reach scale, Digital Reality hardware will no longer be hamstrung by the need for high-powered local graphics processing, and data consumption will rise massively. Consumer demand for these media services will drive up data usage, since immersive media can be up to 20 times as dense as HD video. Enterprises will demand more from their network and cloud infrastructure providers in order to access real-time scene rendering and the graphics-processing power needed to fuel VR-enabled workflows.
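A back-of-envelope estimate shows what that density factor means for consumption. Only the 20x figure comes from the text; the 5 Mbit/s HD baseline is an assumption for illustration:

```python
HD_STREAM_MBPS = 5.0   # assumed bitrate for an HD video stream
IMMERSIVE_FACTOR = 20  # immersive media up to 20x as dense as HD

def gigabytes_per_hour(mbps: float) -> float:
    # megabits per second -> gigabytes per hour of viewing
    # (3600 seconds per hour, 8000 megabits per gigabyte)
    return mbps * 3600 / 8000

hd_gb = gigabytes_per_hour(HD_STREAM_MBPS)                     # 2.25 GB/h
vr_gb = gigabytes_per_hour(HD_STREAM_MBPS * IMMERSIVE_FACTOR)  # 45.0 GB/h
```

At those assumed rates, an hour of immersive media consumes tens of gigabytes where HD video consumes a couple, which is the scale of demand networks would need to absorb.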
Cloud-enabled GaaS is one of the keys to unlocking ubiquitous “spatial computing” for Digital Reality. Just as the mobile devices of today are able to make use of server-side computing power to run computationally intensive apps like voice technologies, Digital Reality devices will become as slim and as powerful as these mobile devices: a lightweight, functional, attractive display that relies on the cloud to access the world with the wave of a finger or the blink of an eye.
Jiten Dajee is a consultant specializing in augmented, mixed, and virtual reality technologies for Deloitte Digital. He leads research, market intelligence, and cross-sector solution development for Deloitte’s global client base. Beyond commercial work, he is also involved in advancing academic research and understanding of Digital Reality technology with graduate students and faculty in the U.S.