What Is 3D Gaussian Splatting? The Technology Replacing NeRF
Updated Mar 2026
In July 2023, a team at INRIA published a paper that quietly upended the 3D reconstruction field. "3D Gaussian Splatting for Real-Time Radiance Field Rendering" showed that you could reconstruct a photorealistic 3D scene from photographs and render it at 100-200 fps on consumer hardware — something Neural Radiance Fields (NeRF) could not do without expensive GPUs, and even then only at seconds per frame. Within 18 months, the paper had 4,000+ citations, Niantic had shipped consumer 3DGS capture in its Scaniverse app, and the Khronos Group (the standards body behind OpenGL and Vulkan) announced a glTF extension for Gaussian Splatting. As of early 2026, 3DGS is no longer a research curiosity — it is a production technology used in real estate virtual tours, cultural heritage preservation, game development, and VFX previsualization. This guide explains what Gaussian Splatting actually is, how it differs from NeRF and traditional photogrammetry, and where the technology is heading.
Step-by-Step Guide
Step 1. The core idea: millions of tiny ellipsoids instead of a neural network
Traditional 3D reconstruction produces meshes — surfaces made of triangles. NeRF stores scenes as neural network weights — you query the network with a 3D coordinate and viewing direction, and it returns a color and density. Both have trade-offs. Meshes lose fine detail and struggle with semi-transparent objects. NeRFs produce stunning results but require running a neural network for every pixel in every frame — prohibitively slow for real-time applications. Gaussian Splatting takes a different approach: it represents the scene as millions of 3D Gaussians (ellipsoids). Each Gaussian has a position, a 3D covariance matrix (defining its shape and orientation), an opacity value, and spherical harmonics coefficients that encode view-dependent color — the way surfaces change appearance as you look at them from different angles. To render a frame, the algorithm sorts Gaussians by depth, projects them onto the screen as 2D splats, and alpha-composites them front-to-back. No neural network inference, no ray marching — just sorting and rasterization, which GPUs have been optimized for since the 1990s.
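The front-to-back compositing step described above can be sketched in a few lines. This is a minimal single-pixel illustration, not the real tile-based GPU rasterizer: it assumes the splats covering the pixel are already sorted near-to-far, and that each splat's projected opacity has already been evaluated at the pixel.

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Alpha-composite pre-sorted (near-to-far) splats for one pixel.

    colors: (N, 3) RGB of each splat covering the pixel, sorted by depth
    alphas: (N,)   effective opacity of each splat at this pixel
    """
    pixel = np.zeros(3)
    transmittance = 1.0  # fraction of light still passing through
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # early termination once pixel is opaque
            break
    return pixel

# Two splats: a mostly opaque red one in front of a fully opaque green one
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
alphas = np.array([0.8, 1.0])
print(composite_front_to_back(colors, alphas))  # red dominates: [0.8, 0.2, 0.0]
```

The early-termination check is the same trick the reference rasterizer uses: once accumulated opacity saturates, splats further back cannot contribute and are skipped.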
Step 2. How a Gaussian Splatting scene is created
The pipeline starts identically to photogrammetry: capture 50-500 photographs of a scene from different viewpoints. These go through Structure-from-Motion (SfM) — typically COLMAP — which estimates camera positions and produces a sparse point cloud. This is where 3DGS diverges from traditional reconstruction. Instead of building a mesh from the point cloud, the algorithm initializes a 3D Gaussian at each point and then optimizes all Gaussian parameters through differentiable rendering. The training loop renders the scene from known camera viewpoints, compares the rendered image to the actual photograph, and adjusts Gaussian positions, shapes, colors, and opacities to minimize the difference. Training also adaptively splits large Gaussians into smaller ones in detailed areas and prunes Gaussians with near-zero opacity. A typical scene trains in 10-40 minutes on an RTX 3060 — much faster than NeRF training, which often takes hours.
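The render-compare-adjust loop is easier to see in a toy setting. The sketch below is a hypothetical 1D analogue, not the real pipeline: it fits a handful of Gaussians to a target signal by gradient descent on positions and amplitudes, standing in for the 3D optimization against photographs (and it omits the adaptive split/prune steps).

```python
import numpy as np

# Toy 1D analogue of the 3DGS training loop: render a signal from a few
# Gaussians, compare against a target, adjust parameters to reduce the error.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
sigma2 = 0.0025                      # fixed width, for simplicity
target = (np.exp(-(x - 0.3) ** 2 / (2 * sigma2))
          + 0.5 * np.exp(-(x - 0.7) ** 2 / (2 * sigma2)))

mu = rng.uniform(0.0, 1.0, size=4)   # centers (analogue of 3D positions)
amp = np.full(4, 0.3)                # amplitudes (analogue of opacity/color)

def render(mu, amp):
    basis = np.exp(-(x[None, :] - mu[:, None]) ** 2 / (2 * sigma2))
    return amp @ basis, basis

initial_loss = np.mean((render(mu, amp)[0] - target) ** 2)
lr = 0.05
for _ in range(2000):
    pred, basis = render(mu, amp)
    err = pred - target                        # compare to the "photograph"
    amp -= lr * 2.0 * (basis @ err) / x.size   # gradient of MSE w.r.t. amp
    mu -= lr * 2.0 * ((amp[:, None] * basis
                       * (x[None, :] - mu[:, None]) / sigma2) @ err) / x.size
final_loss = np.mean((render(mu, amp)[0] - target) ** 2)
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

The real training loop has the same shape, but "render" is the differentiable rasterizer, the loss compares full images, and Gaussians are split in under-reconstructed regions and pruned when opacity collapses to zero.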
Step 3. Why it is 100-1000x faster than NeRF
The speed advantage comes down to rendering architecture. NeRF requires hundreds of neural network forward passes per pixel per frame — a 1920x1080 image at 60 fps would need billions of network evaluations per second. Even with acceleration structures like Instant-NGP, real-time NeRF rendering requires a high-end GPU and aggressive resolution compromises. Gaussian Splatting rendering maps directly onto GPU hardware: sort N Gaussians by depth (in practice a linear-time GPU radix sort), project each Gaussian to 2D, composite. On an RTX 3060, a scene with 1 million Gaussians renders at 80-120 fps at full HD. On an M1 MacBook Air with integrated graphics, the same scene hits 30-45 fps via WebGL. Even mobile phones render at 25-30 fps for moderately sized scenes. This is why 3DGS has largely replaced NeRF for any application requiring interactive viewing.
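The "billions of evaluations" figure is easy to verify with back-of-envelope arithmetic. The sample count per ray below is an assumption (typical NeRF implementations march 64-256 samples); the point is the order of magnitude, and note that each NeRF "evaluation" is a full MLP forward pass while a splat projection is only a few dozen arithmetic operations.

```python
# Per-frame work, order-of-magnitude only.
width, height, fps = 1920, 1080, 60
samples_per_ray = 192  # assumed ray-marching sample count for vanilla NeRF
nerf_evals_per_sec = width * height * samples_per_ray * fps
print(f"NeRF MLP evaluations/s:  {nerf_evals_per_sec:.2e}")  # ~2.4e+10

n_gaussians = 1_000_000
# Splatting: one sort pass plus one projection per Gaussian per frame
splat_projections_per_sec = n_gaussians * fps
print(f"Gaussian projections/s:  {splat_projections_per_sec:.2e}")  # 6.0e+07
```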
Step 4. The format ecosystem: PLY, SPLAT, SPZ, KSplat
Unlike meshes (which settled on glTF/GLB years ago), the 3DGS format landscape is still evolving. PLY is the universal output format — every training tool produces it. A PLY file stores all Gaussian parameters at full float32 precision, typically 200-300 bytes per Gaussian. A scene with 1 million Gaussians produces a 200-300 MB PLY. Great for archival, impractical for web delivery. SPLAT (by antimatter15) is a simpler format that strips spherical harmonics and quantizes colors to uint8 — about 85% smaller than PLY but with flat, view-independent colors. SPZ (by Niantic/Scaniverse) applies aggressive quantization and entropy coding to achieve 10x compression over PLY while preserving spherical harmonics. It is on the Khronos standardization track. KSplat is the native format of PlayCanvas's SuperSplat editor; the editor is capable, but the format sees little support outside that ecosystem. Our recommendation: archive as PLY, distribute as SPZ. You can convert between all four formats at polyvia3d.com.
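The 200-300 bytes/Gaussian figure follows directly from the parameter list in Step 1. The field counts below reflect the commonly used training-output layout (degree-3 spherical harmonics; some exporters also write 3 unused normal floats, pushing the total to 248 bytes) — treat the exact numbers as an estimate, not a spec.

```python
# Rough per-Gaussian storage for an uncompressed float32 PLY.
FLOAT32 = 4
fields = {
    "position xyz": 3,
    "scale xyz": 3,
    "rotation quaternion": 4,
    "opacity": 1,
    "spherical harmonics": 48,  # 3 color channels x 16 coeffs (degree 3)
}
ply_bytes = FLOAT32 * sum(fields.values())
print(f"{ply_bytes} bytes/Gaussian")  # 236 bytes/Gaussian

n = 1_000_000
print(f"PLY:  {n * ply_bytes / 1e6:.0f} MB")
print(f"SPZ:  ~{n * ply_bytes / 10 / 1e6:.0f} MB  (assuming ~10x compression)")
```

Spherical harmonics dominate the budget (48 of 59 floats), which is why SPLAT can shed ~85% of the size simply by dropping them.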
Step 5. Real-world applications in 2026
Real estate is the largest commercial adopter. Matterport and Zillow are both experimenting with 3DGS-based virtual tours that look dramatically more realistic than their previous point-cloud or mesh-based systems. A single apartment scan produces a 50-80 MB SPZ file that loads in 3 seconds and renders at 60 fps — indistinguishable from a video walkthrough but fully interactive. Cultural heritage preservation uses 3DGS to capture fragile sites that cannot be laser-scanned repeatedly. The Smithsonian's Digitization Program has started testing 3DGS alongside their existing photogrammetry pipeline. Game developers use 3DGS for environment previsualization — capturing a real location as a Gaussian Splat and importing it into Unity or Unreal Engine as a starting point for level design. VFX studios use it for on-set reference — a 3DGS capture of a practical set extension helps composite CGI elements accurately.
Step 6. Where the technology is heading
Three trends will shape 3DGS in 2026-2027. First, standardization: the Khronos Group's KHR_gaussian_splatting glTF extension is in release candidate stage, expected to be ratified in 2026. This means 3DGS will be a first-class citizen alongside meshes in the glTF ecosystem — Three.js, Babylon.js, Unity, Unreal, and every glTF viewer will support it natively. Second, mobile capture: Scaniverse (Niantic) already produces SPZ files directly on iPhone, and Polycam is adding 3DGS output. Within a year, creating a Gaussian Splat will be as easy as taking a panorama photo. Third, hybrid rendering: researchers are combining 3DGS with meshes — using Gaussians for complex materials (foliage, hair, glass) and meshes for flat surfaces (walls, floors). This best-of-both-worlds approach may become the default in game engines.
Step 7. Getting started: your first Gaussian Splat in 10 minutes
You do not need a GPU server to try 3DGS. The fastest path: download the Scaniverse app (free, iOS), capture an object or small scene, export as SPZ, and view it at polyvia3d.com/splat-viewer/spz. Total time: about 5 minutes. If you want to try training from your own photos: upload 50-100 photos to Polycam or Luma AI (both offer free tiers), wait 10-30 minutes, download the PLY output, and view it at polyvia3d.com/splat-viewer/ply. Then compress it to SPZ at polyvia3d.com/splat-convert/ply-to-spz — your 150 MB file will shrink to about 13 MB with no visible quality loss. For local training, see our gaussian-splatting-tutorial guide.