3D Gaussian Splatting Software Comparison (2026)

Updated Mar 2026

The 3DGS tool landscape has exploded since the original INRIA paper. In early 2024, there were two or three viable training options. By early 2026, we count at least 10 tools worth considering, from command-line research code to one-click mobile apps. We benchmarked them on the same 83-photo park bench scene (iPhone 15, overcast conditions) to create an apples-to-apples comparison. Results vary dramatically: training time ranged from 8 minutes (gsplat on RTX 4090) to 30 minutes (original 3DGS on RTX 3060), and visual quality differences are subtle but real. This guide compares every major 3DGS tool across six dimensions: quality, speed, ease of use, output formats, hardware requirements, and cost.

Step-by-Step Guide

  1. Category 1: Open-source local training

    These tools run on your machine and require an NVIDIA GPU.

    Original 3DGS (INRIA): the reference implementation. It produced the highest quality in our benchmark, and it is the baseline against which research improvements are measured. Requires CUDA, PyTorch, and compiled rasterization submodules; setup is involved, so expect 30-60 minutes for a first install. Training: 30 min on RTX 3060, 12 min on RTX 4090. Output: PLY. Best for researchers and quality-critical projects.

    Nerfstudio (splatfacto): the most beginner-friendly option. `pip install nerfstudio`, then one command to train, with a built-in web viewer for monitoring progress. Training: 25 min on RTX 3060, 10 min on RTX 4090. Quality is within 0.5 dB PSNR of the original. Output: PLY. Best for first-time users and production workflows.

    gsplat: the most modular option. It is a PyTorch library rather than a complete tool: you write your own training script. Fastest training: 20 min on RTX 3060, 8 min on RTX 4090. Best for developers integrating 3DGS into custom pipelines.
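As a concrete sketch of the Nerfstudio workflow described above (directory names are placeholders, `ns-process-data` assumes COLMAP is installed, and the exact export invocation may vary by version):

```shell
# Install nerfstudio (a CUDA-enabled PyTorch must already be present)
pip install nerfstudio

# Estimate camera poses with COLMAP (./photos holds your source images)
ns-process-data images --data ./photos --output-dir ./processed

# Train the splatfacto (3DGS) model; a web viewer URL is printed during training
ns-train splatfacto --data ./processed

# Export the trained scene to PLY (the config path is printed when training ends)
ns-export gaussian-splat --load-config outputs/processed/splatfacto/<run>/config.yml --output-dir ./exports
```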

  2. Category 2: Cloud training services

    Upload photos, get a PLY back. No GPU required.

    Polycam: web + iOS app. Free tier: 1 scan/month; Pro: $8/month unlimited. Upload 50-500 photos; processing takes 15-45 minutes. Quality is excellent for objects and rooms, roughly on par with Nerfstudio. Outputs PLY and SPLAT. LiDAR-enhanced capture on iPhone Pro models adds geometric precision.

    Luma AI: web app at luma.ai, with a free tier. Processing: 20-60 minutes. Known for particularly good outdoor scene quality: their proprietary training pipeline handles vegetation and sky better than most open-source tools. Outputs PLY.

    Postshot: desktop app with cloud processing. Strong on architectural and real estate scenes. Pricing starts at $15/month. Outputs PLY, SPLAT, and its own format.

    KIRI Engine: mobile + cloud. 3 free scans, then $10/month unlimited. Good LiDAR integration. Processing: 30-90 minutes. Quality is good but slightly below Polycam and Luma on complex scenes.

  3. Category 3: Mobile capture apps

    These apps handle capture AND training: point your phone, walk around, get a 3DGS scene.

    Scaniverse (Niantic): free, iOS and Android. The most streamlined experience: tap "Gaussian Splat" mode, scan for 30-60 seconds, then wait 5-10 minutes for on-device processing. Outputs SPZ natively (Niantic created the format). Quality is impressive for the convenience, about 85% of what you get from Nerfstudio with carefully captured photos. Best for quick captures, casual use, and anyone without a GPU.

    Polycam (mobile): iOS and Android. Processes 3DGS on-device for small scenes, or uploads to the cloud for larger captures. LiDAR-enhanced capture on iPhone Pro gives slightly sharper geometry than camera-only capture. Outputs PLY via cloud processing.

  4. Head-to-head comparison table

    Using our 83-photo park bench scene benchmark (all times on RTX 3060 where applicable):

    Tool             PSNR      Time                VRAM   Cost
    Original 3DGS    28.4 dB   30 min              6 GB   free
    Nerfstudio       27.9 dB   25 min              6 GB   free
    gsplat           28.1 dB   20 min              4 GB   free
    Polycam (cloud)  27.5 dB   25 min (cloud)      none   $8/month
    Luma AI          27.8 dB   35 min (cloud)      none   free tier
    Scaniverse       25.2 dB   8 min (on-device)   none   free

    Scaniverse's on-device processing compromises quality. Note that PSNR differences under 1 dB are barely perceptible to human eyes: the gap between the best local tool (original 3DGS, 28.4) and the worst (Nerfstudio, 27.9) is negligible in practice.
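To put the PSNR gaps in perspective, here is a minimal sketch (pure Python, using values from the benchmark above) converting a dB difference into a mean-squared-error ratio:

```python
def psnr_to_mse(psnr_db, max_val=1.0):
    """Invert PSNR = 10 * log10(max_val**2 / MSE) to recover the MSE."""
    return max_val ** 2 / (10 ** (psnr_db / 10))

# Values from the benchmark table above
mse_best = psnr_to_mse(28.4)        # original 3DGS
mse_nerfstudio = psnr_to_mse(27.9)  # Nerfstudio

# A 0.5 dB gap corresponds to only ~12% more mean-squared error
ratio = mse_nerfstudio / mse_best
print(f"MSE ratio: {ratio:.3f}")  # about 1.122
```

This is why sub-1-dB differences rarely matter: a 12% bump in average squared error is spread across millions of pixels and is hard to spot without side-by-side comparison.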

  5. GPU rental options for those without hardware

    If you want local training quality without buying a GPU, cloud GPU rentals are surprisingly affordable.

    Vast.ai: community marketplace. RTX 3060 from $0.08/hour, RTX 4090 from $0.25/hour; a 30-minute training run costs $0.04-0.12. Setup requires Docker knowledge.

    RunPod: more user-friendly. RTX 3060 from $0.15/hour, RTX 4090 from $0.40/hour. One-click templates for Nerfstudio are available.

    Google Colab: free T4 GPU (16 GB VRAM), enough for most scenes. Training takes 45-90 minutes on a T4 vs 25 minutes on an RTX 3060. The Pro tier ($10/month) gives access to faster GPUs.

    Lambda Labs: professional GPU instances, A100 from $1.10/hour. Overkill for 3DGS but useful if you are training multiple scenes in batch.

  6. Our recommendation by use case

    Quick capture for social sharing or a personal archive: Scaniverse (free, about 5 minutes end-to-end, exports SPZ).

    Product photography or real estate: Polycam (best quality-to-effort ratio, LiDAR support, $8/month).

    Maximum quality for heritage or research: original 3DGS or Nerfstudio on a local GPU.

    Custom integration (game engine, VFX pipeline): gsplat for flexibility; export PLY and convert at polyvia3d.com.

    No GPU, no budget: Luma AI free tier plus Google Colab.

    After training with any tool, view results at polyvia3d.com/splat-viewer/ply and compress for web delivery at polyvia3d.com/splat-convert/ply-to-spz. All tools produce PLY files that work with our viewer and converter.

Frequently Asked Questions

Which tool produces the best quality?
On our benchmark, the original INRIA 3DGS implementation scored highest (28.4 dB PSNR), but the difference from Nerfstudio (27.9) and gsplat (28.1) is barely perceptible. For practical purposes, all three open-source tools produce equivalent quality. Among cloud services, Luma AI and Polycam are neck-and-neck, both slightly behind local tools due to their processing pipelines optimizing for speed over maximum quality.
Can I use AMD or Apple Silicon GPUs for training?
As of early 2026, all major 3DGS training tools require NVIDIA CUDA. AMD ROCm support exists in some forks but is unreliable. Apple Silicon (M1/M2/M3) cannot run CUDA-based training. Your options: use a cloud service (Polycam, Luma AI), rent an NVIDIA GPU (Vast.ai, RunPod, Colab), or use Scaniverse for on-device mobile processing. For viewing 3DGS files, any GPU works — our web viewer runs on all platforms including Apple Silicon and AMD.
How much does it cost to get started with Gaussian Splatting?
Scaniverse (iOS) for capture plus polyvia3d.com for viewing and conversion: free. Google Colab free tier for training from your own photos: $0. One training run on Vast.ai with a rented RTX 3060: $0.04-0.12. Polycam Pro for unlimited cloud training with LiDAR support: $8/month. Luma AI free tier for occasional cloud training: $0. The only significant cost is buying a GPU for local training: a used RTX 3060 runs approximately $200-250.
What is the best output format?
All tools output PLY, which is the universal 3DGS format. For distribution: convert PLY to SPZ at polyvia3d.com/splat-convert/ply-to-spz — SPZ is 10x smaller with negligible quality loss and is on track for Khronos glTF standardization. Avoid SPLAT format unless you specifically need compatibility with the antimatter15 viewer — SPLAT strips spherical harmonics, making scenes look flat.
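To see why SPZ compression matters, and why SPLAT files shrink so much when they drop spherical harmonics, here is the size arithmetic for the common INRIA-style 3DGS PLY layout (the attribute counts below assume that degree-3 layout; other exporters can differ):

```python
# Per-splat float32 attributes in the common INRIA-style 3DGS PLY layout
# (assumed here; check your exporter's header to confirm)
floats_per_splat = (
    3      # position x, y, z
    + 3    # normal (unused, but present in the reference exporter)
    + 3    # f_dc: base color (SH degree 0)
    + 45   # f_rest: spherical harmonics, degrees 1-3
    + 1    # opacity
    + 3    # scale
    + 4    # rotation quaternion
)
bytes_per_splat = floats_per_splat * 4  # float32 = 4 bytes

print(floats_per_splat, "floats,", bytes_per_splat, "bytes per splat")
print(f"1M splats = {bytes_per_splat * 1_000_000 / 1e6:.0f} MB uncompressed")
```

Note that the `f_rest` spherical harmonics account for 45 of the 62 floats, which is why stripping them (as SPLAT does) shrinks files dramatically but flattens view-dependent lighting.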
