Gaussian Splatting vs Photogrammetry: Same Photos, Different Outputs

Photogrammetry gives you meshes. Gaussian Splatting gives you light fields. Both start from photos. Here is when to use each.

Updated Mar 2026

Same Input, Fundamentally Different Outputs

Both 3DGS and photogrammetry start from the same input: a set of photographs taken from multiple viewpoints. Both use COLMAP (or equivalent SfM) to estimate camera positions. But they produce fundamentally different outputs.

Photogrammetry produces a triangle mesh with texture maps — the same kind of geometry used in games, movies, 3D printing, and CAD. You can select faces, move vertices, boolean-cut, UV-unwrap, and 3D print the result. The output is a tangible, editable 3D object.

Gaussian Splatting produces a cloud of colored ellipsoids — not a surface. You cannot select edges, cut holes, or 3D print it. But it looks dramatically more photorealistic, especially on complex materials (vegetation, hair, fabric, glass). And it processes 5-20x faster.
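The difference is easy to see in the data itself. A minimal sketch of the two representations (field names and the 59-floats-per-Gaussian layout follow the common 3DGS convention; they are illustrative, not a file-format spec):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Gaussian:                                   # one "splat" in a 3DGS scene
    position: Tuple[float, float, float]          # ellipsoid center
    scale: Tuple[float, float, float]             # ellipsoid radii
    rotation: Tuple[float, float, float, float]   # orientation quaternion
    opacity: float
    sh_coeffs: List[float]                        # 48 floats: view-dependent color

@dataclass
class Mesh:                                       # photogrammetry output
    vertices: List[Tuple[float, float, float]]
    faces: List[Tuple[int, int, int]]             # indices into vertices
    uvs: List[Tuple[float, float]]                # texture coordinates
    texture_path: str

# A 3DGS scene is a flat list of Gaussians: no connectivity, nothing a
# slicer or CAD tool can treat as a surface. A mesh has explicit topology
# (faces), which is exactly what printing and editing tools rely on.
floats_per_gaussian = 3 + 3 + 4 + 1 + 48   # = 59
```

The lack of any `faces`-style connectivity in the Gaussian representation is the root of every limitation discussed below.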

This is not a "which is better" question — they serve different purposes. The right choice depends on what you need to do with the output.

Visual Quality: 3DGS Wins on Realism

Side-by-side, 3DGS scenes look more real. The reason: spherical harmonics. Each Gaussian encodes how its color changes with viewing angle, capturing the subtle material appearance that makes real objects look real — the way a wooden table shifts from warm brown to cool gray as you view it at a shallow angle, or the way fabric changes hue in different lighting directions.
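A sketch of how that view dependence works, using the degree-1 spherical-harmonic basis constants from the reference 3DGS renderer (the coefficient values below are invented for illustration):

```python
# Basis constants for spherical harmonics, degree 0 and 1.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_color(dc, sh1, view_dir):
    """dc: base RGB; sh1: three RGB triples for the degree-1 terms;
    view_dir: unit vector from the Gaussian toward the camera."""
    x, y, z = view_dir
    return tuple(
        SH_C0 * dc[c]
        - SH_C1 * y * sh1[0][c]
        + SH_C1 * z * sh1[1][c]
        - SH_C1 * x * sh1[2][c]
        for c in range(3)
    )

# Same Gaussian, two viewing angles -> two different colors:
dc  = (2.0, 1.2, 0.8)              # warm base tone (illustrative values)
sh1 = [(0.1, 0.1, 0.3)] * 3        # degree-1 terms add a cool shift
head_on = sh_color(dc, sh1, (0.0, 0.0, 1.0))
grazing = sh_color(dc, sh1, (1.0, 0.0, 0.0))
```

With the degree-1 coefficients zeroed out, the color is constant from every angle, which is exactly the view-independent behavior of a photogrammetry texture.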

Photogrammetry textures are view-independent: a pixel on the texture map shows the same color regardless of camera angle. This is fine for matte surfaces (concrete, painted walls) but looks artificial on anything with specular properties (wood, metal, skin, leaves). Professional photogrammetry compensates with PBR material maps, but generating accurate PBR from photos is a separate, difficult problem.

Where photogrammetry looks better: large flat surfaces (walls, floors, roads) where mesh geometry is clean and textures tile well. 3DGS sometimes shows "cloudy" artifacts on large flat areas because the Gaussians are optimized for visual accuracy, not geometric flatness.

Use Case: 3D Printing

Photogrammetry wins outright. 3DGS output cannot be 3D printed — it is not a surface, so slicers cannot process it. Research tools like SuGaR extract approximate meshes from Gaussian Splatting scenes, but results are noisy, low-detail, and far from print-ready.

If your goal is 3D printing a real-world object: use photogrammetry. Capture photos → process with RealityCapture or Meshroom → export STL/OBJ → repair with polyvia3d.com/repair/stl → slice and print. See our guide on converting 3D scans to printable files.
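Before slicing, it is worth sanity-checking the exported mesh. A first-pass print-readiness test can be sketched in a few lines: a triangle mesh is watertight only if every edge is shared by exactly two faces (real repair tools check far more, such as normals and self-intersections, so this is a quick filter, not a substitute for them):

```python
from collections import Counter

def is_watertight(faces):
    """True if every undirected edge appears in exactly two triangles."""
    edges = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[(min(u, v), max(u, v))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron (4 faces) is a closed surface; drop one face and
# its boundary edges appear only once, so it is no longer printable.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))        # True
print(is_watertight(tetra[:3]))    # False
```

Meshes extracted from splats via tools like SuGaR typically fail this check all over the surface, which is why they are described above as far from print-ready.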

If your goal is visual documentation with the option to print later: capture photos and run both pipelines. Use the 3DGS output for interactive web viewing and the photogrammetry output for printing. Both use the same input photos and COLMAP step.

Use Case: Web Display

3DGS has a growing advantage for web display. A compressed SPZ file (15-20 MB) loads faster than a textured mesh + textures (often 50-200 MB even after Draco compression). The visual quality is better for organic scenes. And the rendering pipeline is simpler — no texture loading, no material setup, just sort-and-splat.
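Part of that size gap comes from quantization. A sketch of the idea, assuming 11-bit fixed-point coordinates (the real SPZ format packs and entropy-codes its attributes in its own layout; this only shows the size intuition):

```python
def quantize(values, lo, hi, bits=11):
    """Map floats in [lo, hi] to integers in [0, 2**bits - 1]."""
    span = (2 ** bits - 1) / (hi - lo)
    return [round((v - lo) * span) for v in values]

def dequantize(codes, lo, hi, bits=11):
    span = (hi - lo) / (2 ** bits - 1)
    return [lo + c * span for c in codes]

positions = [0.0, 1.234, -2.5, 3.75]           # toy coordinate stream
codes = quantize(positions, lo=-4.0, hi=4.0)
restored = dequantize(codes, lo=-4.0, hi=4.0)

float32_bits = 32 * len(positions)             # 128 bits uncompressed
packed_bits = 11 * len(positions)              # 44 bits packed
max_err = max(abs(a - b) for a, b in zip(positions, restored))
# 11 bits over an 8-unit range -> worst-case error of about 0.002 units,
# well below what is visible in a rendered splat.
```

Positions shrink roughly 3x with sub-millimeter error at room scale, and similar treatment of scales, rotations, and SH coefficients is what gets a scene down to the 15-20 MB range.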

Photogrammetry meshes work well on the web via GLB/glTF + Draco compression + Google Model Viewer. For products, architecture, and anything that benefits from clean geometry, this is the established pipeline. But for natural scenes (gardens, forests, historical sites), 3DGS produces more convincing results.

Our recommendation for web: use 3DGS for photorealistic scene tours (real estate, heritage, landscapes). Use photogrammetry meshes for product visualization and anything that needs to look "clean" (architecture, furniture, manufactured objects).

Processing Time and Cost

3DGS is dramatically faster. Training a room-scale scene takes 20-40 minutes on an RTX 3060. Processing the same scene with photogrammetry takes 2-8 hours for a high-quality textured mesh (RealityCapture), or 30-90 minutes for a quick preview mesh (Meshroom with reduced settings).

For rapid documentation (construction sites, insurance claims, archaeological digs), 3DGS's speed advantage is significant. Capture photos in the morning, have an interactive 3D scene by lunch. Photogrammetry would not finish processing until the evening.

Cost comparison: 3DGS is free (open-source tools plus your own GPU) or cheap (cloud services from $0-10/month). Professional photogrammetry software spans a wide range: Meshroom is free and open source, Agisoft Metashape runs $179 (Standard) to $3,499 (Professional), and RealityCapture is free under Epic Games' licensing for small companies, with paid seats above the revenue threshold. Cloud photogrammetry services charge $30-200/month depending on volume.

3DGS vs Photogrammetry at a Glance

| Feature | 3D Gaussian Splatting | Photogrammetry |
| --- | --- | --- |
| Output type | Radiance field (ellipsoid cloud) | Triangle mesh |
| Visual realism | Photorealistic (view-dependent) | Good (texture-dependent) |
| 3D printable | No (not a surface) | Yes |
| Editable in CAD/Blender | Very limited (crop only) | Full mesh editing |
| Real-time rendering | 60–200 fps | 60+ fps (with LOD) |
| Processing time | 10–40 min (GPU) | 1–12 hours |
| Thin structures (hair, foliage) | Excellent | Poor (mesh holes) |
| Flat surfaces (walls, floors) | Good | Excellent |
| File size (typical scene) | 15–20 MB (SPZ) | 50–200 MB (textured mesh) |
| Tools | Nerfstudio, Polycam, Luma AI | RealityCapture, Meshroom, Metashape |

Frequently Asked Questions

Can I use the same photos for both 3DGS and photogrammetry?

Yes. Both start from COLMAP SfM output, so you can reuse the camera estimation step. Capture once, process twice. This is the recommended approach when you need both interactive viewing (3DGS) and a printable/editable mesh (photogrammetry) from the same scene.

Will Gaussian Splatting replace photogrammetry?

Not entirely. 3DGS produces better visual results but cannot generate meshes for printing, editing, or physics simulation. Photogrammetry will remain essential for manufacturing, game development, and any workflow requiring solid geometry. The two technologies are complementary, not competing.

Which is better for outdoor scenes and drone surveys?

For visual quality: 3DGS handles vegetation, water, and complex outdoor materials significantly better than photogrammetry, which struggles with thin structures and transparent objects. For geometric accuracy: photogrammetry produces cleaner ground planes and building facades. For aerial/drone surveys: photogrammetry is more mature, with established tools and workflows.

Can you edit a Gaussian Splatting scene?

Very limited. You can crop regions (remove Gaussians inside/outside a bounding box) and delete floating artifacts in tools like SuperSplat. But you cannot select individual surfaces, move objects, add new geometry, or apply boolean operations. If you need to edit the 3D content, photogrammetry (mesh output) is the better choice.
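The crop operation is simple precisely because a splat scene is just a list of points with attributes. A sketch, with hypothetical Gaussians reduced to their (x, y, z) centers:

```python
def crop(gaussians, lo, hi, keep_inside=True):
    """Keep Gaussians whose centers fall inside (or outside) an
    axis-aligned bounding box given by corners lo and hi."""
    def inside(p):
        return all(lo[i] <= p[i] <= hi[i] for i in range(3))
    return [g for g in gaussians if inside(g) == keep_inside]

# Toy scene: two splats near the origin plus one floating artifact.
scene = [(0.1, 0.2, 0.3), (5.0, 5.0, 5.0), (-0.4, 0.0, 0.9)]
cleaned = crop(scene, lo=(-1, -1, -1), hi=(1, 1, 1))   # drops the floater
```

Filtering a list is about the extent of "editing" available; anything that requires knowing which splats form a surface (moving an object, cutting a hole) has no analogue here.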
