Digital: Healthcare and the Metaverse

How AR/VR, AI and 3D technology are powering the next industrial revolution for healthcare

While the technology is still in its infancy, it is estimated that both AR and VR will surpass 100 million users worldwide by 2027.1 In light of this trend, it is clear that organisations adopting AR/VR app development services to create immersive experiences for their users may be well placed to excel within the realm of healthcare

Dijam Panigrahi at GridRaster

Immersive mixed reality and extended reality technologies, which comprise virtual reality (VR) and augmented reality (AR), continue to be key driving factors in business innovation and expansion. By transforming how healthcare manufacturers operate, interact with their customers and accomplish their objectives, this set of technologies has been making a significant impact across the industry landscape.

What is AR/VR?

Augmented reality (AR) and virtual reality (VR) are two separate but related technologies, both of which aim to improve the user's perception of, and interaction with, the digital world. The main distinctions between AR and VR are the devices used and the nature of the experience: AR takes place in a real-world environment, while VR is entirely virtual. Both fall under the category of immersive technology known as extended reality (XR). There is also mixed reality (MR), which is essentially a combination of AR and VR: it merges the physical and digital worlds to build a space where they exist side by side and interact in real time.

By superimposing digital data like images, videos and 3D models onto the physical environment, augmented reality (AR) improves how users perceive and interact with their surroundings. The digital content is typically displayed in real time using a smartphone, tablet or specialised AR glasses. While still being aware of their immediate surroundings, users of AR technology can view and interact with virtual objects. AR applications can be found in a range of sectors, including manufacturing, construction, retail, healthcare and more. VR, by contrast, completely immerses a user in a simulated digital environment that may not resemble the real world at all. The virtual world that users enter when wearing a VR headset can be interactive and responsive to their movements. The technology aims to give users a sense of presence and immersion by making them feel as though they are actually ‘inside’ a virtual environment. Both AR and VR have distinctive qualities that present intriguing business opportunities.


What’s even more interesting is that these immersive mixed reality technologies are combining with 3D artificial intelligence (AI), machine learning (ML), cloud services and the Internet of Things (IoT) to power everything from training, design and engineering to production, robotics and automation for businesses across industries, especially in the growing e-commerce environment. As a result, enterprises in manufacturing, healthcare, technology, construction, energy, automotive, aerospace and financial services (to name a few) are more competitive and well positioned for future growth.

Ultimately, these technologies are being leveraged to help companies make more intelligent decisions and to virtually supplement human capital to better serve the customer. In doing so, organisations can create a more robust and personalised experience for customers, whether that’s an end consumer or a partner along the supply chain. In every instance, smart, savvy and successful organisations are moving their workload infrastructures to cloud environments to launch and manage new tools for scalable operations.

Where immersive mixed reality continues to challenge enterprises

The challenge is that these technologies require heavy doses of data, the ability to process vast amounts of data at very high speeds and the ability to scale projects in ways that traditional on-premise computing environments often cannot support.

Enterprises looking to leverage ‘Industry 4.0’ through the metaverse require a precise and persistent fusion of the real and virtual worlds. This means rendering complex models and scenes in photorealistic detail at the correct physical location (with respect to both the real and virtual worlds), with the correct scale and accurate pose. Think of the accuracy and precision required when leveraging AR/VR to design, build or repair components of an airline engine, or an advanced surgical device used in medical applications. This is achieved today by using discrete GPUs from one or more servers and delivering the rendered frames wirelessly or remotely to head-mounted displays (HMDs) such as the Microsoft HoloLens and the Oculus Quest.

The importance of 3D and AI in immersive mixed reality

One of the key requirements for mixed reality applications is to precisely overlay an object's 3D model, or digital twin, onto the physical object. This helps in providing work instructions for assembly and training, and in catching errors or defects in manufacturing. The user can also track the object(s) and adjust the rendering as the work progresses.

Most on-device object tracking systems use 2D image and/or marker-based tracking. This severely limits overlay accuracy in 3D because 2D tracking cannot estimate depth, and consequently scale and pose, with high accuracy. This means that even though users can get what looks like a good match when viewing from one angle and/or position, the overlay loses alignment as the user moves around in six degrees of freedom (6DOF). Also, object detection, identification and estimation of scale and orientation, collectively called object registration, is achieved in most cases computationally or using simple computer vision methods with standard training libraries (for example, Google MediaPipe or VisionLib). This works well for regular and/or smaller and simpler objects such as hands, faces, cups, tables, chairs, wheels and regular geometric structures. However, for the large, complex objects found in enterprise use cases, labelled training data (more so in 3D) is not readily available. This makes it difficult, if not impossible, to use 2D image-based tracking to align, overlay and persistently track the object and fuse the rendered model with it in 3D.
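The depth limitation described above can be made concrete with a short illustrative sketch (a generic pinhole-camera demonstration, not taken from any particular tracking product): under single-view 2D projection, an object that is twice as large but twice as far away produces exactly the same pixel coordinates, so scale and depth cannot be separated from one image alone.

```python
import numpy as np

def project(points_3d, focal_px=800.0):
    """Pinhole projection of Nx3 camera-frame points to Nx2 pixel coordinates."""
    pts = np.asarray(points_3d, dtype=float)
    return focal_px * pts[:, :2] / pts[:, 2:3]

# A 10 cm cube placed 1 m in front of the camera (metres).
cube = np.array([[x, y, z] for x in (0.0, 0.1)
                           for y in (0.0, 0.1)
                           for z in (1.0, 1.1)])

# The same cube doubled in size AND pushed twice as far away.
cube_scaled = cube * 2.0

# Both produce identical 2D images: scale/depth are ambiguous in 2D tracking.
print(np.allclose(project(cube), project(cube_scaled)))  # True
```

This ambiguity is why purely 2D image-based tracking can look correct from one viewpoint yet drift badly once the user moves, and why depth-aware 3D methods are needed for persistent overlay.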


Enterprise-level users are overcoming these challenges by incorporating 3D environments and AI technology into their immersive mixed reality design/build projects.

Deep learning-based 3D AI allows users to identify 3D objects of arbitrary shape and size, in various orientations, with high accuracy in 3D space. This approach scales to any arbitrary shape and is well suited to enterprise use cases requiring the overlay of complex 3D models and digital twins onto their real-world counterparts.

This can also be scaled to register partially completed structures against complete 3D models, allowing for ongoing construction and assembly. With this platform approach, users achieve millimetre accuracy in object registration and rendering, overcoming the limitations of current device-only approaches. This approach to 3D object tracking allows users to truly fuse the real and virtual worlds in enterprise applications, opening up many uses including, but not limited to: training with precise contextual work instructions, defect and error detection in construction and assembly, and 3D design and engineering with life-size 3D rendering and overlay.
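The geometric core of 3D object registration, aligning a scanned point cloud with its digital twin by recovering rotation and translation, can be sketched with the classic Kabsch/SVD least-squares step. This is a textbook illustration under the assumption of known point correspondences, not GridRaster's proprietary pipeline; production systems iterate this step inside ICP-style loops and use learned features to find the correspondences.

```python
import numpy as np

def rigid_align(model_pts, scan_pts):
    """Least-squares rotation R and translation t mapping model_pts onto
    scan_pts (Kabsch algorithm), assuming known point correspondences."""
    P = np.asarray(model_pts, dtype=float)
    Q = np.asarray(scan_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: pose a 'digital twin' point cloud and recover the pose.
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.2, -0.5, 1.0])
scan = model @ R_true.T + t_true

R, t = rigid_align(model, scan)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noise-free, fully corresponding points the recovered pose is exact to floating-point precision; real scans add noise, partial overlap and unknown correspondences, which is where the deep learning-based 3D AI described above comes in.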

Why working in a Cloud environment is crucial

Enterprises and manufacturers should be careful about how they design and deploy these technologies, because performance differs greatly depending on the platform on which they are built and optimised. Even though technologies like AR/VR have been in use for several years, many manufacturers have deployed virtual solutions directly on devices, where all the data is stored locally, severely limiting the performance and scale needed for today’s virtual designs. It also limits knowledge sharing between organisations, which can be critical when designing new products and understanding the best approach to virtual buildouts.

Manufacturers today are overcoming these limitations by leveraging cloud-based (or remote server based) AR/VR platforms powered by distributed cloud architecture and 3D vision-based AI. These cloud platforms provide the desired performance and scalability to drive innovation in the industry at speed and scale.


Dijam Panigrahi is co-founder and COO of GridRaster. Mr Panigrahi has an MBA in strategy and marketing from the Indian Institute of Management, Lucknow, India, where he was a recipient of the Tata Business Leadership Award. He is a performance-driven and results-oriented leader with over 16 years of international experience in Market Development, Business Growth and Product Management. He has successfully led initiatives from start to end in multiple geographical markets.