How to Create Gaussian Splatting from Photos: Capture Guide
Updated Mar 2026
The quality of a Gaussian Splatting reconstruction is 80% determined before training even starts — by how you capture your photos. We tested this systematically: 30 scenes captured with varying photo counts (50, 100, 200, 400), three different phones (iPhone 15, Pixel 8, Galaxy S24), overcast vs sunny lighting, and methodical vs casual shooting patterns. The results were clear. A methodical 80-photo capture with 70% overlap produced better reconstructions than a casual 300-photo capture with gaps. An overcast day consistently outperformed sunny conditions (fewer harsh shadows = more consistent color estimation). And phone choice barely mattered — all three produced comparable results when photo quality was controlled. This guide distills those 30 experiments into a practical capture workflow.
Step-by-Step Guide
1. Camera settings: keep it simple
Use your phone's default camera app in photo mode (not portrait, not panorama, not HDR). Set resolution to the maximum available. Lock exposure by tapping and holding on a mid-tone area — this prevents the camera from adjusting brightness between shots, which confuses the color optimization during training. Keep ISO at 100-400 if your app allows manual control; higher ISO adds noise that degrades reconstruction quality. Turn off flash — it creates inconsistent lighting between photos. If shooting indoors, turn on all ambient lights for uniform illumination. The single most important setting: avoid motion blur. Pause for a half-second before each shot. A slightly underexposed sharp photo is infinitely more useful than a perfectly exposed blurry one.
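Since motion blur is the one thing to avoid, it helps to screen your photos for sharpness before training. A common heuristic is the variance of the Laplacian: sharp images have strong edge responses, blurry ones do not. Below is a minimal pure-Python sketch of that idea; real pipelines typically run OpenCV's `cv2.Laplacian` on full-resolution images, and the `laplacian_variance`/`looks_sharp` names and the threshold of 100 are illustrative assumptions to tune on your own captures, not values from any specific tool.

```python
# Sharpness check via variance of the Laplacian: blurry photos have little
# high-frequency content, so the Laplacian response is nearly flat.
# Pure-Python sketch on a 2D list of grayscale values (0-255).

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian over interior pixels."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def looks_sharp(gray, threshold=100.0):
    # Threshold is a hypothetical starting point; calibrate on your captures.
    return laplacian_variance(gray) >= threshold

# A checkerboard (hard edges everywhere) scores far above a flat patch.
sharp = [[255 if (x + y) % 2 else 0 for x in range(16)] for y in range(16)]
flat = [[128 for _ in range(16)] for _ in range(16)]
```

In practice you would sort your capture folder by this score and reshoot (or discard) the lowest-scoring frames before uploading or training.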
2. Shooting patterns by scene type
- For a single object (statue, furniture, product): Stand 1-2 meters away. Walk a complete circle at eye level, taking a photo every 10-15 degrees (24-36 photos). Do a second circle at 45 degrees elevation (looking down at the object). Do a third at knee height (looking up). Total: 60-100 photos.
- For a room interior: Walk along each wall at 1-meter intervals, pointing the camera at the opposite wall. At each position, take one straight shot plus one angled 45 degrees left and one 45 degrees right. Cover all four walls, then shoot the corners diagonally. Add ceiling-angle and floor-angle passes for complete coverage. Total: 200-350 photos.
- For outdoor scenes: Walk a grid pattern with 1-2 meter spacing. Shoot forward, left, and right at each position. Ensure every surface is visible in at least 3 photos from different angles.
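The three-circle object pattern can be sketched as a simple viewpoint plan, which is handy for estimating photo counts before you start walking. This is an illustrative script, not part of any capture app; the `orbit_plan` name, the 12-degree step (inside the 10-15 degree guidance), and the specific ring elevations are assumptions.

```python
# Three-ring capture plan for a single object: eye level, elevated
# (looking down), and low (looking up). Each ring is a full circle.

def orbit_plan(step_deg=12, elevations=(0, 45, -30)):
    """Return a list of (elevation_deg, azimuth_deg) viewpoints."""
    shots = []
    for elev in elevations:
        for az in range(0, 360, step_deg):
            shots.append((elev, az))
    return shots

plan = orbit_plan()
print(len(plan))  # 3 rings x 30 shots = 90 photos, inside the 60-100 range
```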
3. The 70% overlap rule
This is the single most important capture principle. Every photo should overlap its neighbors by approximately 70% — meaning 70% of what you see in one photo should also be visible in the adjacent photos. Why? COLMAP (the Structure-from-Motion step) works by finding matching features between photos. With 70% overlap, each feature point appears in 3-5 photos, giving COLMAP enough data to triangulate its 3D position accurately. With 30% overlap, features appear in only 1-2 photos, and COLMAP either fails to register the camera or produces inaccurate positions — leading to blurry, misaligned Gaussians. How to estimate 70% visually: if you are shooting an object, each step around it should move your viewpoint by about 30% of the frame width. If you are shooting a room, consecutive photos along a wall should share about 70% of the visible wall surface.
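A rough way to turn the 70% rule into numbers, assuming each step shifts the frame by 30% of its width: the rotation per step is about 30% of the camera's horizontal field of view, and the sideways step along a wall is about 30% of the frame width at the wall's distance. The function names and the 68-degree FOV default below are assumptions (typical phone main lenses are roughly 65-75 degrees; check your own camera's specs). Note this treats overlap of the full frame; an object that fills only part of the frame effectively gets less overlap, which is why the object pattern above uses finer 10-15 degree steps.

```python
import math

# Back-of-envelope spacing for ~70% overlap between consecutive shots.

def angular_step_deg(fov_deg=68.0, overlap=0.7):
    """Rotation between consecutive shots when orbiting a subject."""
    return (1.0 - overlap) * fov_deg

def wall_step_m(distance_m, fov_deg=68.0, overlap=0.7):
    """Sideways step between shots when shooting a wall from distance_m."""
    frame_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return (1.0 - overlap) * frame_width

print(round(angular_step_deg(), 1))  # ~20.4 degrees per step
print(round(wall_step_m(3.0), 2))    # ~1.21 m between shots, 3 m from the wall
```

The 1-meter wall intervals in step 2 are comfortably inside this budget, which is exactly the point: err on the side of more overlap, not less.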
4. Lighting: overcast > sunny, always-on > mixed
Our testing showed consistent results: overcast outdoor lighting produces the best 3DGS quality. Why? Gaussian Splatting's spherical harmonics model view-dependent color — the algorithm needs to see how surfaces look from different angles. Under harsh direct sunlight, shadows move as you walk around the scene (your own shadow, tree shadows, building shadows), creating inconsistent illumination that the model cannot reconcile. Overcast light is diffuse and uniform, giving the algorithm clean color signals. For indoor scenes: turn on all ambient lights and avoid windows that create bright spots. If you must shoot in mixed lighting, avoid capturing your own shadow or reflections. The worst scenario: shooting partly indoor, partly outdoor through windows — the extreme dynamic range confuses both COLMAP and the Gaussian optimization.
5. What to avoid: the five capture killers
From our 30 scene tests, five mistakes consistently degraded quality:
1. Moving objects — people walking, cars driving, flags waving. The algorithm assumes a static scene; anything that moves between photos becomes a ghostly blur.
2. Large reflective surfaces — mirrors, glass facades, polished metal. Reflections change with viewpoint, which confuses the color model. Small reflective objects (door handles, picture frames) are fine.
3. Repeated textures — brick walls, tiled floors, identical windows. COLMAP relies on unique features to match photos; repeating patterns cause mismatches and broken camera registration. Add a few objects (a bag, a chair) to break up the repetition.
4. Featureless surfaces — white walls, clear sky. Same problem as repeated textures: nothing to match.
5. Extreme depth ranges — trying to capture both a nearby object and a distant landscape in the same scene. Training struggles to optimize Gaussians at wildly different scales.
6. Cloud services vs local training
If you do not have an NVIDIA GPU, cloud services are the easiest path:
- Polycam: upload photos via the web app (free tier: 1 scan/month; paid: $8/month). Processing takes 15-45 minutes. Output: PLY or SPLAT. Quality is very good for objects and small rooms.
- Luma AI: upload via luma.ai (free tier available). Processing takes 20-60 minutes. Known for excellent outdoor scene quality. Outputs PLY.
- Postshot: desktop app with cloud processing. Strong on architectural scenes.
- KIRI Engine: mobile + cloud; 3 free scans, $10/month unlimited. Good LiDAR integration on iPhone.

For local training, you need an NVIDIA GPU with 6+ GB of VRAM; see our full tutorial for the Nerfstudio training workflow. After training, or after receiving your PLY from a cloud service, view it at polyvia3d.com/splat-viewer/ply and compress it to SPZ at polyvia3d.com/splat-convert/ply-to-spz.
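Once a PLY comes back from a cloud service, a quick sanity check before viewing or converting is to read its header and confirm the declared point (Gaussian) count — a truncated download or failed reconstruction usually shows up here. The sketch below parses the standard PLY header layout; `ply_point_count` is a hypothetical helper (not from any of the services above), and exporters vary in which per-point properties they write.

```python
import os
import tempfile

def ply_point_count(path):
    """Return the vertex count declared in a PLY header, or None."""
    count = None
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                count = int(line.split()[-1])
            elif line == "end_header":
                break  # binary payload follows; stop reading
    return count

# Synthetic header standing in for a real cloud-service export.
header = (b"ply\n"
          b"format binary_little_endian 1.0\n"
          b"element vertex 12345\n"
          b"property float x\nproperty float y\nproperty float z\n"
          b"end_header\n")
with tempfile.NamedTemporaryFile(suffix=".ply", delete=False) as tmp:
    tmp.write(header)
n = ply_point_count(tmp.name)
print(n)  # 12345
os.unlink(tmp.name)
```

As a loose rule of thumb from our tests, a single object lands in the hundreds of thousands of points and a full room in the millions; a count far below that suggests the reconstruction dropped most of your scene.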