


At GTC 2026, Miris demonstrated a new infrastructure pattern for delivering high-fidelity digital twin assets: adaptive spatial streaming, direct to the browser. Working with Inhance, the team built an interactive digital twin of a full jet engine assembly as the proof point. More than a gigabyte of engineering-grade CAD geometry, streamed to any device, loading instantly with progressive fidelity refinement. No installs. No remote GPU sessions. Just a URL.
The goal was not to showcase a demo for its own sake. It was to demonstrate an infrastructure pattern that Physical AI workflows need and don't have yet: high-fidelity spatial assets, delivered like media.
You open a link. The engine appears in sub-second time-to-interaction, regardless of total asset complexity. A 1GB CAD assembly loads and responds as quickly as a simple 5MB glTF.
Then, as you rotate and zoom, fidelity refines adaptively in real-time. The fan blades sharpen, internal structures resolve, and materials differentiate between metal alloys, composites, and wiring harnesses. Detail increases exactly where you're looking.
There is no loading screen. No handoff to a remote desktop. You're interacting with the asset.
Physical AI depends on real-world accurate digital twins. Robotic systems train against them. Manufacturing teams validate designs with them. Field engineers reference them to diagnose real-world equipment. These assets are the connective tissue between physical operations and the AI systems that model them.
But getting them to the people and systems that need them remains difficult in a way that doesn't match the sophistication of the assets themselves.
A typical engineering-grade digital twin can be hundreds of megabytes to multiple gigabytes in size. Sharing one with a remote stakeholder means packaging it for a specific platform, transferring a large file, and hoping the recipient has hardware performant enough to render it and the required software installed. Scaling access across a distributed team multiplies the friction at every step.
For teams working with high-fidelity digital twin assets, where iteration speed and broad accessibility determine project timelines, the delivery layer is a compounding bottleneck.
The complexity of digital twin assets is increasing. Simulation-grade assemblies are growing in geometric density. Teams are more distributed. And iteration cycles are accelerating as AI-driven design workflows generate more asset variants, faster.
The delivery approaches available today were not designed for this.
Downloading complete assets requires large transfers and local hardware capable of rendering dense geometry. Remote GPU sessions can deliver high fidelity, but each concurrent viewer requires dedicated compute resources, creating hard limits on simultaneous access. Both approaches assume a controlled, high-bandwidth environment that doesn't reflect how distributed teams actually work.
The gap between how fast Physical AI assets are produced and how slowly they can be distributed is widening.
The core idea is straightforward: instead of transferring complete files and rendering them locally, stream spatial data adaptively. It is the same paradigm shift that decoupled video playback from file size two decades ago, now applied to 3D.
In a streaming delivery model, the experience starts immediately. Initial data arrives in sub-second timeframes, providing instant interactivity. Additional spatial detail streams progressively, refining based on what the viewer is looking at, the capabilities of their device, and their network conditions. No downloads. No installs. No platform-specific builds.
The delivery layer manages complexity on the viewer's behalf, rather than requiring every consumer of an asset to bear its full weight.
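To make the idea concrete, here is a minimal sketch of how a streaming client might decide what to fetch next. Everything in it, the `Chunk` shape, the ranking heuristic, the byte budget, is an illustrative assumption, not the Miris protocol: coarse detail streams first so the scene is immediately interactive, then refinement is prioritized near the viewer's focus and capped by available bandwidth.

```typescript
// Illustrative sketch only; the chunk model and heuristics are assumptions,
// not the actual Miris streaming protocol.
interface Chunk {
  id: string;
  sizeBytes: number;
  lodLevel: number;        // 0 = coarsest representation
  distanceToFocus: number; // world-space distance from the viewer's focus point
}

// Rank candidate chunks so something is always visible (coarse levels first),
// then refine near the focus point, trimmed to the current bandwidth budget.
function planStream(chunks: Chunk[], budgetBytes: number): Chunk[] {
  const ranked = [...chunks].sort((a, b) => {
    if (a.lodLevel !== b.lodLevel) return a.lodLevel - b.lodLevel;
    return a.distanceToFocus - b.distanceToFocus;
  });
  const plan: Chunk[] = [];
  let used = 0;
  for (const c of ranked) {
    if (used + c.sizeBytes > budgetBytes) continue; // doesn't fit this cycle
    plan.push(c);
    used += c.sizeBytes;
  }
  return plan;
}
```

The same loop re-runs as the camera moves, which is what lets detail increase "exactly where you're looking" without the client ever holding the full asset.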
Miris partnered with Inhance to build the GTC demo featuring an industrial jet engine assembly. This is not a simplified showcase model. The asset carries the polygon density, material diversity, and internal structural detail typical of real engineering workflows. It was chosen specifically because it represents the kind of asset that conventional delivery approaches handle poorly.
Inhance led the design and development of the interactive experience layer, applying deep expertise in 3D product visualization to make an extremely dense, engineering-grade asset feel intuitive and explorable in a browser. The challenge was not just rendering the model. It was presenting complexity in a way that made sense at every zoom level.
The experience runs entirely in the browser on laptop and mobile devices. Orbital rotation and zoom are smooth. Zooming into the assembly reveals dense surface geometry that continues to sharpen as additional spatial data streams in. Navigating between sub-assemblies (fan blades, combustion chamber, exhaust, electronic controls) loads complex geometry seamlessly into the scene, with contextual annotations providing engineering detail at each stop.
Building on a streaming delivery layer changed the assumptions that typically shape interactive 3D experiences. In traditional pipelines, experience design is constrained by what can be fully loaded, rendered locally, and supported across devices. That leads to predictable tradeoffs: reduced detail, staged loading, or simplified interaction models. With Miris handling the delivery layer, those constraints were largely removed. The Inhance team designed for immediate interaction and continuous refinement, where fidelity resolves dynamically based on user focus and context.
That shift changed how navigation, structure, and interaction came together. Instead of treating the experience as a bounded viewer, Inhance treated it as a system that adapts in real time. Transitions between sub-assemblies, level-of-detail behavior, and annotation layers were designed to work with the streaming model, not around it, maintaining continuity while exposing increasing complexity without introducing friction or breaking flow.
The result is something that does not feel like a constrained viewer or a demo environment. It feels like the beginning of a different way to communicate design and system information, where 3D becomes a practical medium for collaboration rather than a technical obstacle to work around.
The architecture separates the two roles that older delivery methods combine: preparing the asset and serving the viewing experience.
Upstream, Miris runs its optimization pipeline on GPU infrastructure provided by CoreWeave. Source CAD data is converted into a series of streamable spatial asset components, optimized once per asset version. This step is compute-intensive, but it happens once, not per user.
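One way to picture the one-time optimization step is as building a coarse-to-fine ladder of representations per asset version. The sketch below is a deliberately simplified assumption (real pipelines use mesh simplification and spatial partitioning, not plain halving), but it captures the key property: the expensive work is proportional to the asset, not to the audience.

```typescript
// Hypothetical sketch of upstream preparation; the real pipeline's
// decimation strategy and chunk format are not public.
interface LodLevel {
  level: number;        // 0 = coarsest, streamed first
  triangleCount: number;
}

// Build a coarse-to-fine LOD ladder by repeatedly halving triangle count
// until a floor is reached. This runs once per asset version, not per viewer.
function buildLodLadder(sourceTriangles: number, floor = 10_000): LodLevel[] {
  const counts: number[] = [];
  let tris = sourceTriangles;
  while (tris > floor) {
    counts.push(tris);
    tris = Math.floor(tris / 2);
  }
  counts.push(tris);
  // Label so that level 0 is the coarsest representation.
  return counts.reverse().map((triangleCount, level) => ({ level, triangleCount }));
}
```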
Downstream, Miris handles delivery. When a viewer opens the experience, the Miris SDK dynamically streams only the spatial data needed for that moment, rendered on the local device. Because Miris only sends spatial data relevant to the viewer's current focus, the SDK can evict information that's no longer in view and prioritize fidelity where it matters. Memory utilization stays bounded, and bandwidth is spent only on data the viewer actually needs.
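The bounded-memory behavior described above can be sketched as a byte-budgeted cache with least-recently-used eviction. This is an illustrative stand-in, the actual Miris SDK internals are not public, but it shows how residency can stay capped no matter how large the source asset is: touching a chunk marks it recent, and stale chunks are evicted once the budget is exceeded.

```typescript
// Sketch of bounded client-side chunk caching with LRU eviction.
// Illustrative only; not the Miris SDK's actual implementation.
class ChunkCache {
  // Map iteration follows insertion order, so insertion order doubles as recency.
  private entries = new Map<string, number>(); // chunk id -> size in bytes
  private used = 0;

  constructor(private capacityBytes: number) {}

  // Record that a chunk is resident and in use; evict stale chunks to stay bounded.
  touch(id: string, sizeBytes: number): void {
    const prev = this.entries.get(id);
    if (prev !== undefined) {
      this.entries.delete(id); // re-insert to mark as most recently used
      this.used -= prev;
    }
    this.entries.set(id, sizeBytes);
    this.used += sizeBytes;
    // Evict least-recently-used chunks until memory is back under budget.
    for (const [oldId, oldSize] of this.entries) {
      if (this.used <= this.capacityBytes || oldId === id) break;
      this.entries.delete(oldId);
      this.used -= oldSize;
    }
  }

  has(id: string): boolean { return this.entries.has(id); }
  usedBytes(): number { return this.used; }
}
```

Because eviction is local and cheap, memory stays bounded on the client while bandwidth is spent only on chunks the viewer currently needs.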
The key idea is simple: the expensive optimization step happens once, upstream. Delivery and client-side rendering scale independently. Adding more viewers does not require provisioning additional infrastructure. Distribution costs behave like bandwidth, not like rendering.
This infrastructure pattern addresses four constraints that Physical AI workflows hit simultaneously.
Iteration speed. When sharing an updated asset means sending a URL instead of packaging and transferring a multi-gigabyte file, review cycles compress from days to minutes. Stakeholders interact with the latest version immediately.
Accessibility. Browser-native delivery means any stakeholder with a modern device and a network connection can access an engineering-grade digital twin. No workstation requirements. No IT provisioning. No per-device installations across a distributed team.
Scalability. Whether ten engineers or ten thousand field technicians need to access the same asset, the delivery layer scales with bandwidth rather than requiring dedicated infrastructure per concurrent viewer.
Predictable operations. Teams plan around consistent delivery costs rather than managing variable compute provisioning tied to concurrent access patterns.
The GTC demo is a proof point, not a product announcement. It shows an early but functional version of an infrastructure pattern: high-fidelity digital twin assets, streamed to a browser, with no per-viewer infrastructure provisioning.
There is more to build. But the core delivery architecture works, and the constraints it removes (file transfers, hardware dependencies, concurrent access limits) are real constraints that teams hit today.
For teams building Physical AI workflows that depend on distributing high-fidelity digital twin assets, the delivery layer that has been missing is now available to test.
About Inhance
Inhance specializes in creating interactive experiences that translate complex engineering and product data into intuitive, high-impact digital environments. We combine real-time 3D, scalable delivery systems, and thoughtful interaction design to help teams communicate with clarity and precision. Visit www.inhance.com or reach out at info@inhance.com to learn more.