What Is 3D Gaussian Splatting? The Technology Replacing NeRF

Updated Mar 2026

In July 2023, a team at INRIA published a paper that quietly upended the 3D reconstruction field. "3D Gaussian Splatting for Real-Time Radiance Field Rendering" showed that you could reconstruct a photorealistic 3D scene from photographs and render it at 100-200 fps on consumer hardware — something Neural Radiance Fields (NeRF) could not do without expensive GPUs and seconds-per-frame render times. Within 18 months, the paper had 4,000+ citations, Niantic acquired the team behind Scaniverse to build a consumer 3DGS capture app, and the Khronos Group (the standards body behind OpenGL and Vulkan) announced a glTF extension for Gaussian Splatting. As of early 2026, 3DGS is no longer a research curiosity — it is a production technology used in real estate virtual tours, cultural heritage preservation, game development, and VFX previsualization. This guide explains what Gaussian Splatting actually is, how it differs from NeRF and traditional photogrammetry, and where the technology is heading.

Step-by-Step Guide

  1. The core idea: millions of tiny ellipsoids instead of a neural network

    Traditional 3D reconstruction produces meshes — surfaces made of triangles. NeRF stores scenes as neural network weights — you query the network with a 3D coordinate and viewing direction, and it returns a color and density. Both have trade-offs. Meshes lose fine detail and struggle with semi-transparent objects. NeRFs produce stunning results but require running a neural network for every pixel in every frame — prohibitively slow for real-time applications. Gaussian Splatting takes a different approach: it represents the scene as millions of 3D Gaussians (ellipsoids). Each Gaussian has a position, a 3D covariance matrix (defining its shape and orientation), an opacity value, and spherical harmonics coefficients that encode view-dependent color — the way surfaces change appearance as you look at them from different angles. To render a frame, the algorithm sorts Gaussians by depth, projects them onto the screen as 2D splats, and alpha-composites them front-to-back. No neural network inference, no ray marching — just sorting and rasterization, which GPUs have been optimized for since the 1990s.
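    The compositing step can be sketched in a few lines of NumPy. This is a per-pixel sketch assuming the splats covering the pixel are already depth-sorted and projected; the function name and toy inputs are illustrative, not taken from the paper's rasterizer:

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Accumulate pre-sorted splats (nearest first) into one pixel color.

    colors: (N, 3) RGB of each splat covering the pixel
    alphas: (N,)   effective opacity of each splat at this pixel
    """
    out = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed by nearer splats
    for c, a in zip(colors, alphas):
        out += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early termination once the pixel is opaque
            break
    return out

# An opaque red splat in front fully hides a green splat behind it.
print(composite_front_to_back(np.array([[1, 0, 0], [0, 1, 0]], float),
                              np.array([1.0, 1.0])))
# → [1. 0. 0.]
```

    The early-termination check mirrors why front-to-back ordering matters: once accumulated opacity saturates, all remaining (farther) splats can be skipped.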

  2. How a Gaussian Splatting scene is created

    The pipeline starts identically to photogrammetry: capture 50-500 photographs of a scene from different viewpoints. These go through Structure-from-Motion (SfM) — typically COLMAP — which estimates camera positions and produces a sparse point cloud. This is where 3DGS diverges from traditional reconstruction. Instead of building a mesh from the point cloud, the algorithm initializes a 3D Gaussian at each point and then optimizes all Gaussian parameters through differentiable rendering. The training loop renders the scene from known camera viewpoints, compares the rendered image to the actual photograph, and adjusts Gaussian positions, shapes, colors, and opacities to minimize the difference. Training also adaptively splits large Gaussians into smaller ones in detailed areas and prunes Gaussians with near-zero opacity. A typical scene trains in 10-40 minutes on an RTX 3060 — much faster than NeRF training, which often takes hours.
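    The render-compare-adjust loop can be illustrated with a toy one-dimensional analogue. Everything here — the 1D "scene", fixed Gaussian positions and widths, plain gradient descent on brightness only — is a simplifying assumption; the real trainer optimizes positions, covariances, opacities, and colors through a differentiable CUDA rasterizer:

```python
import numpy as np

# Toy differentiable-rendering loop: fixed 1D Gaussians, gradient descent
# on per-Gaussian brightness so the rendered signal matches a "photograph".
x = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.05, 0.95, 10)
basis = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.002)  # (200, 10)

# Synthetic "photograph": a signal two of our Gaussians can reproduce exactly.
target = 0.7 * basis[:, 2] + 1.2 * basis[:, 6]

weights = np.zeros(10)
lr = 2.0
for _ in range(500):
    render = basis @ weights                     # "render" from the known view
    grad = basis.T @ (render - target) / len(x)  # gradient of 0.5 * MSE
    weights -= lr * grad                         # adjust parameters

print(f"final MSE: {np.mean((basis @ weights - target) ** 2):.2e}")
```

    The loop converges to near-zero error because the target lies in the span of the Gaussians — the same reason adaptive splitting matters in the real pipeline: where the current Gaussians cannot represent the photo, more are added.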

  3. Why it is 100-1000x faster than NeRF

    The speed advantage comes down to rendering architecture. NeRF requires hundreds of neural network forward passes per pixel per frame — a 1920x1080 image at 60 fps would need billions of network evaluations per second. Even with acceleration structures like Instant-NGP, real-time NeRF rendering requires a high-end GPU and aggressive resolution compromises. Gaussian Splatting rendering, by contrast, maps onto operations GPUs already excel at: sort N Gaussians by depth (O(N log N), done on the GPU with a radix sort), project each Gaussian to 2D, composite. On an RTX 3060, a scene with 1 million Gaussians renders at 80-120 fps at full HD. On an M1 MacBook Air with integrated graphics, the same scene hits 30-45 fps via WebGL. Even mobile phones render at 25-30 fps for moderately sized scenes. This is why 3DGS has largely replaced NeRF for any application requiring interactive viewing.
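    Plugging in the numbers from the text makes the gap concrete. The 128 samples per ray is an assumed typical count (vanilla NeRF uses 64 coarse + 128 fine samples):

```python
import math

# Back-of-envelope per-second / per-frame work at 1920x1080, 60 fps.
width, height, fps = 1920, 1080, 60
samples_per_ray = 128  # assumed MLP evaluations per pixel ray

nerf_evals_per_sec = width * height * fps * samples_per_ray
print(f"NeRF MLP evaluations/s: {nerf_evals_per_sec:.2e}")  # ~1.6e10 — tens of billions

n_gaussians = 1_000_000
sort_ops_per_frame = n_gaussians * math.log2(n_gaussians)  # O(N log N) depth sort
print(f"3DGS sort ops/frame:    {sort_ops_per_frame:.2e}")  # ~2e7
```

    Even before counting the cost of each MLP evaluation (hundreds of multiply-adds) versus each sort comparison (roughly one), the operation counts differ by about three orders of magnitude.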

  4. The format ecosystem: PLY, SPLAT, SPZ, KSplat

    Unlike meshes (which settled on glTF/GLB years ago), the 3DGS format landscape is still evolving. PLY is the universal output format — every training tool produces it. A PLY file stores all Gaussian parameters at full float32 precision, typically 200-300 bytes per Gaussian. A scene with 1 million Gaussians produces a 200-300 MB PLY. Great for archival, impractical for web delivery. SPLAT (by antimatter15) is a simpler format that strips spherical harmonics and quantizes colors to uint8 — about 85% smaller than PLY but with flat, view-independent colors. SPZ (by Niantic/Scaniverse) applies vector quantization and arithmetic coding to achieve 10x compression over PLY while preserving spherical harmonics. It is on the Khronos standardization track. KSplat is the native format of PlayCanvas's SuperSplat editor — a capable tool, but the format is locked to that ecosystem. Our recommendation: archive as PLY, distribute as SPZ. You can convert between all four formats at polyvia3d.com.
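    A back-of-envelope size comparison for a 1M-Gaussian scene. The 248-byte figure assumes the reference trainer's PLY layout (62 float32 attributes per Gaussian: position, normal, 48 spherical-harmonics coefficients, opacity, scale, rotation), and the 32-byte figure is the antimatter15 .splat record size; exact sizes vary by tool:

```python
n = 1_000_000

ply_mb = n * 248 / 1e6    # 62 float32 attributes per Gaussian
splat_mb = n * 32 / 1e6   # .splat: float32 pos+scale, uint8 color+rotation
spz_mb = ply_mb / 10      # text figure: ~10x compression over PLY

print(f"PLY:   {ply_mb:.0f} MB")   # 248 MB
print(f"SPLAT: {splat_mb:.0f} MB") # 32 MB (~87% smaller than PLY)
print(f"SPZ:   {spz_mb:.0f} MB")   # ~25 MB
```

    The SPLAT saving comes almost entirely from dropping the 45 view-dependent spherical-harmonics floats per Gaussian — which is also why its colors are flat.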

  5. Real-world applications in 2026

    Real estate is the largest commercial adopter. Matterport and Zillow are both experimenting with 3DGS-based virtual tours that look dramatically more realistic than their previous point-cloud or mesh-based systems. A single apartment scan produces a 50-80 MB SPZ file that loads in 3 seconds and renders at 60 fps — indistinguishable from a video walkthrough but fully interactive. Cultural heritage preservation uses 3DGS to capture fragile sites that cannot be laser-scanned repeatedly. The Smithsonian's Digitization Program has started testing 3DGS alongside their existing photogrammetry pipeline. Game developers use 3DGS for environment previsualization — capturing a real location as a Gaussian Splat and importing it into Unity or Unreal Engine as a starting point for level design. VFX studios use it for on-set reference — a 3DGS capture of a practical set extension helps composite CGI elements accurately.

  6. Where the technology is heading

    Three trends will shape 3DGS in 2026-2027. First, standardization: the Khronos Group's KHR_gaussian_splatting glTF extension is in release candidate stage, expected to be ratified in 2026. This means 3DGS will be a first-class citizen alongside meshes in the glTF ecosystem — Three.js, Babylon.js, Unity, Unreal, and every glTF viewer will support it natively. Second, mobile capture: Scaniverse (Niantic) already produces SPZ files directly on iPhone, and Polycam is adding 3DGS output. Within a year, creating a Gaussian Splat will be as easy as taking a panorama photo. Third, hybrid rendering: researchers are combining 3DGS with meshes — using Gaussians for complex materials (foliage, hair, glass) and meshes for flat surfaces (walls, floors). This best-of-both-worlds approach may become the default in game engines.

  7. Getting started: your first Gaussian Splat in 10 minutes

    You do not need a GPU server to try 3DGS. The fastest path: download the Scaniverse app (free, iOS), capture an object or small scene, export as SPZ, and view it at polyvia3d.com/splat-viewer/spz. Total time: about 5 minutes. If you want to try training from your own photos: upload 50-100 photos to Polycam or Luma AI (both offer free tiers), wait 10-30 minutes, download the PLY output, and view it at polyvia3d.com/splat-viewer/ply. Then compress it to SPZ at polyvia3d.com/splat-convert/ply-to-spz — your 150 MB file will shrink to about 13 MB with no visible quality loss. For local training, see our gaussian-splatting-tutorial guide.
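    If you want to inspect the PLY you downloaded before compressing it, the Gaussian count is right in the header (PLY stores one "vertex" element per Gaussian). A minimal stdlib sketch — the header bytes below are a synthetic example, not a real scene:

```python
import io

def count_gaussians(ply_stream):
    """Read a binary PLY header and return the vertex (= Gaussian) count."""
    for raw in ply_stream:
        line = raw.decode("ascii", "replace").strip()
        if line.startswith("element vertex"):
            return int(line.split()[-1])
        if line == "end_header":
            break
    raise ValueError("no vertex element found in PLY header")

# Synthetic header in the layout the reference trainer emits.
header = (b"ply\n"
          b"format binary_little_endian 1.0\n"
          b"element vertex 1000000\n"
          b"property float x\n"
          b"end_header\n")
print(count_gaussians(io.BytesIO(header)))  # → 1000000
```

    With a real file, pass `open("scene.ply", "rb")` instead of the `BytesIO` object; multiplying the count by roughly 250 bytes predicts the PLY size, and dividing by ten predicts the SPZ size.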

Frequently Asked Questions

Is Gaussian Splatting better than NeRF?
For real-time interactive viewing, yes — 3DGS is 100-1000x faster to render and produces comparable visual quality. NeRF still has advantages in certain research contexts: it can handle reflective surfaces and transparent objects slightly better, and novel NeRF variants (Zip-NeRF, Nerfacto) produce marginally sharper results on challenging scenes. But for any practical application where users need to view scenes interactively — real estate, VFX, cultural heritage, gaming — 3DGS has largely replaced NeRF.
What hardware do I need for Gaussian Splatting?
For viewing: any device with a modern browser (WebGL 2.0 support). Our viewer handles 1M Gaussian scenes at 30-60 fps on laptops from 2019 onward, and even recent smartphones. For training: an NVIDIA GPU with 8+ GB VRAM (RTX 3060 or better). Cloud services like Polycam and Luma AI handle training for you if you do not have a GPU. For capture: any smartphone camera works. LiDAR (iPhone Pro models) helps but is not required.
Can I convert a Gaussian Splat to a regular 3D mesh?
It is an active research area. Tools like SuGaR (CVPR 2024) and 2DGS extract meshes from Gaussian Splatting scenes, but results are noisy and far from the quality of meshes produced by traditional photogrammetry. If you need a printable or editable mesh, traditional photogrammetry (RealityCapture, Meshroom) is still the better choice. 3DGS excels at visual reconstruction — it looks photorealistic but is not made of surfaces you can manipulate in a CAD tool.
How large are Gaussian Splatting files?
Raw PLY files range from 50 MB (small object, 200K Gaussians) to 800 MB (large outdoor scene, 4M Gaussians). After SPZ compression, these shrink to 5-80 MB — small enough for web delivery. A typical room-scale capture (1M Gaussians) is about 200 MB as PLY and 15-20 MB as SPZ. Position data uses high-precision quantization with negligible error, and spherical harmonics are quantized with negligible visual impact.
What is the difference between Gaussian Splatting and photogrammetry?
Both reconstruct 3D from photos, but they produce different outputs. Photogrammetry produces meshes (triangulated surfaces) — great for 3D printing, CAD, and game engines, but tends to look "plasticky" on organic surfaces. Gaussian Splatting produces a cloud of colored ellipsoids — looks photorealistic and renders fast, but cannot be edited, 3D printed, or used in physics simulations. Think of photogrammetry as a 3D scan and 3DGS as an interactive photograph.
