3D retail's last barrier isn't creation. It's delivery.

A meaningful share of online returns has nothing to do with product failure or shipping. Of the roughly $816 billion in returns U.S. retailers absorbed in 2022, 22-32% traced back to the gap between what customers saw on a screen and what arrived at their door (Doofinder; Linnworks, 2024). To compensate, shoppers have adopted defensive habits. Bracketing (ordering multiple variations of a single SKU, intending to return most of them) accounts for an estimated 10% of returns on its own.
Higher-resolution photography has not closed the gap. A 4K image of a handbag cannot communicate scale relative to a body, surface texture under varied lighting, or whether a sofa will fit through a doorway. The fidelity problem needs a dimensional shift, not a bigger pixel.

3D visualization is that shift, and the impact is no longer theoretical.
Shopify reports that products with 3D or AR content convert at roughly twice the rate of products without it. The sample is its full merchant network, not a pilot. Of visitors who land on a 3D-enabled product page, 82% interact with the asset, indicating latent demand for information that 2D cannot provide (CGI Backgrounds, 2025).
Return reduction follows. Across multiple datasets, retailers see approximately 40% lower return rates on products with 3D visualization (Shopify; Banuba, 2024). In a pilot in furniture, a category with notoriously brutal returns logistics, Macy's saw a 25% return reduction and a 60% larger average basket size. The 3D experience moves customers from buying a SKU to designing a solution.
For luxury, the calculus is about trust. Cartier's virtual try-on for watches uses precise wrist tracking to replicate the boutique experience. Richemont's AI demand forecasting saved over $280 million in excess stock (Peter Fisk, 2024). At $10,000+ price points, fidelity is not a feature. It is the entire premise of a digital channel.
Until recently, none of this scaled. Three barriers stood in the way.
Creation cost. Photorealistic 3D assets used to require artists in Maya or Blender working for days per SKU. Photogrammetry was fragile, particularly on transparent or fine-detailed materials. Digitizing a 10,000-SKU catalog was economically infeasible outside the highest-margin categories.
That cost curve has broken. AI-driven pipelines using 3D Gaussian Splatting and neural reconstruction are delivering 60-80% cost reductions and roughly 60% faster production timelines. Cost-per-asset has dropped from hundreds of dollars to tens. Full-catalog digitization is now within reach for mainstream retailers, not just the highest-margin categories.
Format fragmentation. For most of the last decade, retailers maintained multiple incompatible 3D files for the same product: one for manufacturing, another for the web, another for AR. Conversion between formats was destructive. The original look of the product rarely survived intact. Lighting response, surface materials, polygon detail, shading behavior: each conversion introduced loss. What arrived on the web was a diminished version of what the artist built.
The format wars are ending. The Alliance for OpenUSD (AOUSD), backed by Pixar, Apple, Adobe, Autodesk, and NVIDIA, has established OpenUSD as a common authoring layer. The Khronos Group is aligning glTF, the efficient web format, with OpenUSD. The membership list is the real signal: IKEA, Lowe's, Amazon, and Wayfair are general members. This is not a technology consortium. It is a retail operations consortium. Separately, the industry is converging on PLY as the standard format for Gaussian splat assets, a quiet but consequential development for any pipeline built on AI-driven 3D capture.
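For context on why PLY suits splat assets: it is a deliberately simple container, a plain-text header declaring per-point properties, followed by the data. A Gaussian splat extends the usual position properties with opacity, scale, rotation, and spherical-harmonic color terms. Here is an illustrative sketch of such a header; the property names follow the common 3DGS export convention, but exact layouts vary by tool:

```python
def splat_ply_header(n_points: int) -> str:
    """Build a PLY header for a toy Gaussian splat layout.

    Property names mirror the de facto 3DGS export convention
    (position, opacity, per-axis scale, quaternion rotation, and
    degree-0 spherical-harmonic color); real pipelines also carry
    higher-order SH coefficients.
    """
    props = (
        ["x", "y", "z"]                        # splat center
        + ["opacity"]                          # alpha term
        + [f"scale_{i}" for i in range(3)]     # anisotropic extent
        + [f"rot_{i}" for i in range(4)]       # orientation quaternion
        + [f"f_dc_{i}" for i in range(3)]      # base color (SH degree 0)
    )
    lines = ["ply", "format binary_little_endian 1.0",
             f"element vertex {n_points}"]
    lines += [f"property float {p}" for p in props]
    lines.append("end_header")
    return "\n".join(lines)

print(splat_ply_header(2))
```

Because the header is self-describing, a delivery pipeline can inspect an asset's property list without parsing the payload, which is part of why convergence on PLY matters for AI-capture workflows.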
Delivery infrastructure. This is the barrier that looks solved until you ship at scale. Loading a glTF binary (.glb) from a CDN works. It is simple, accessible, and almost universally how retailers start. The problem is that simplicity is the ceiling, not the floor. The easy path sacrifices fidelity to compression, breaks under load, and forces tradeoffs that undo the investment in creation. Most retailers don't discover this until they're already in production.
Investing in beautiful 3D assets is meaningless if you cannot get them to customers at scale. And here, retailers face an old tradeoff in new clothes.
Option one is WebGL. It scales because rendering happens on the client device. But 3D assets are large, and large files mean long download times. To keep load times acceptable, assets get compressed aggressively. The visual fidelity that justified the investment in creation is stripped away before delivery. For retailers running 3D in paid ads, a 4MB file size ceiling makes this worse, but the compression tradeoff exists well before you hit that limit.
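The compression pressure is simple arithmetic. With illustrative numbers (not from the source), a full-fidelity asset on a typical mobile link takes far too long for a product page:

```python
def download_seconds(asset_mb: float, link_mbps: float) -> float:
    """Transfer time for a fully-downloaded asset: size in megabits
    divided by link speed. Ignores latency and HTTP overhead."""
    return asset_mb * 8 / link_mbps

# Hypothetical figures: a 40 MB full-fidelity asset vs. the 4 MB
# version that survives aggressive compression, on a 10 Mbps link.
print(download_seconds(40, 10))  # 32.0 seconds: unusable on a product page
print(download_seconds(4, 10))   # 3.2 seconds: viable, at a fidelity cost
```

The 10x size reduction is exactly where the visual quality goes; the asset that loads fast is no longer the asset the artist built.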
Option two is pixel streaming. The render happens on a cloud GPU, and a video stream gets sent to the device. Visual quality survives. But every concurrent session needs its own GPU allocation, so costs scale linearly with concurrency, priced in GPU-hours rather than bandwidth. On Black Friday, the platform that performed beautifully for a thousand users either breaks under a million or bankrupts the marketing budget.
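A back-of-envelope model (all unit prices hypothetical) makes the divergence concrete: pixel streaming pays for GPU time per session, while CDN-style delivery pays only for bytes moved.

```python
def pixel_streaming_cost(sessions: int, minutes: float,
                         gpu_per_hour: float) -> float:
    """Each concurrent session reserves GPU time for its duration."""
    return sessions * (minutes / 60) * gpu_per_hour

def cdn_cost(sessions: int, mb_per_session: float,
             price_per_gb: float) -> float:
    """Delivery cost tracks bandwidth, not compute."""
    return sessions * (mb_per_session / 1024) * price_per_gb

# Hypothetical unit prices: $1/hr of cloud GPU, $0.08/GB of egress,
# 5-minute sessions moving 50 MB of data each.
for sessions in (1_000, 1_000_000):
    gpu = pixel_streaming_cost(sessions, 5, 1.00)
    cdn = cdn_cost(sessions, 50, 0.08)
    print(f"{sessions:>9} sessions: GPU ${gpu:,.0f} vs bandwidth ${cdn:,.0f}")
```

Both curves are linear, but the slopes differ by roughly 20x under these assumptions, and the gap widens with session length. That slope difference is the Black Friday problem.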
Neither option holds up at enterprise scale. Each forces retailers to sacrifice at least one of the four outcomes that determine whether 3D works as infrastructure: speed, fidelity, scale, or cost.
Adaptive spatial streaming is the architectural response to this tradeoff. Instead of rendering on a server and sending video (pixel streaming), or shipping a complete file before any interaction (WebGL), the platform streams 3D spatial data that reconstructs on the client device.
This is the architecture Miris is built on. Assets are conditioned once upstream, using GPU compute at the ingest stage to apply AI-driven optimization. At delivery, no cloud GPU is required per session. Costs behave like a CDN, scaling with bandwidth rather than concurrent compute. The platform that handles a quiet Tuesday handles Black Friday without provisioning anything new.
The customer experience is analogous to adaptive video streaming. Initial samples arrive instantly. Detail increases progressively as the stream adapts to network conditions, device capabilities, and where the user is looking. Zoom into a stitch, and more spatial detail streams to that region. Move quickly through a space, and the protocol prioritizes coverage over fine detail.
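The prioritization logic can be sketched as a per-frame bandwidth budget split across scene regions in proportion to a view-dependent weight. This is an illustrative toy, not Miris's actual protocol; region names and weights are invented:

```python
def allocate_budget(regions: dict[str, float],
                    budget_kb: float) -> dict[str, float]:
    """Split a per-frame streaming budget across scene regions in
    proportion to a view-dependent priority weight (e.g. screen
    coverage combined with gaze proximity)."""
    total = sum(regions.values())
    return {name: budget_kb * w / total for name, w in regions.items()}

# Zoomed in on a stitch: that region dominates the weights,
# so it receives most of the detail budget for this frame.
weights = {"stitch_closeup": 8.0, "handle": 1.5, "background": 0.5}
print(allocate_budget(weights, budget_kb=512))
```

When the viewpoint moves quickly, the same mechanism works in reverse: weights flatten across many regions, and the budget buys broad coverage instead of depth in one spot.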
A single upload works across web, mobile, and AR without platform-specific rebuilds. E-commerce, paid advertising, social, and AR experiences all run from the same conditioned asset.
3D visualization is transitioning from differentiator to baseline. The retailers operationalizing it now (Lowe's with its Apple Vision Pro Style Studio and NVIDIA Omniverse store digital twin, Amazon shifting to backend auto-generation, Home Depot's investment in HOVER for Pro contractors) are building infrastructure that will compound for the next decade.
The strategic question has narrowed. It is no longer whether 3D works. It is whether your delivery layer can scale when it does.
If you lead retail strategy or e-commerce, the full strategic report (including buying patterns across the leading retailers and a deeper look at the emerging 3D infrastructure stack) is in the ebook: 3D in Retail: From Gimmick to Growth Driver.