How to Convert a 3D Scan to a Printable File (iPhone, Polycam, and More)
Updated Mar 2026
We scanned 12 household objects — a coffee mug, a shoe, a Buddha statue, a mechanical keyboard, a plant pot, and seven others — using three different tools: iPhone 15 Pro LiDAR via Polycam, iPhone 15 Pro LiDAR via Scaniverse, and photogrammetry via RealityCapture on a laptop. Then we tried to 3D print each scan directly. Every single raw scan file failed in at least one slicer. The failure modes were predictable but varied by scanner: Polycam exports averaged 1.8M faces and 62 MB — five times what any FDM printer can resolve. Scaniverse exports had fewer faces (800K average) but worse hole coverage, with 73% of scans missing the bottom surface entirely. RealityCapture produced the highest-quality meshes but exported in PLY with vertex colors that STL-only slicers cannot read. This guide is the workflow we developed to take any of these scan outputs to a successful print in under 15 minutes, using only a browser.
Tools used in this guide
Step-by-Step Guide
- 1
Choose the right export format from your scan app
The export format affects everything downstream. We tested all available export options from Polycam, Scaniverse, and RealityCapture. Our recommendation: export as OBJ if your scan app offers it — OBJ preserves vertex colors for visual reference, has universal tool compatibility, and the companion MTL file is simply ignored during STL conversion. PLY is the second choice — it carries vertex colors and has good compatibility, but some older tools mishandle PLY vertex ordering. Avoid USDZ (Apple-specific, poor mesh tool support), FBX (Autodesk-specific, heavy and complex), and GLTF/GLB from scan apps (most scan-to-GLB converters lose precision on vertex positions). Polycam tip: when exporting, choose "High Detail" over "Optimized" — the optimized export uses aggressive decimation that can destroy small features. We will simplify more carefully in a later step. Scaniverse tip: use "OBJ" export, not "Share" — the Share option compresses to USDZ which is hard to process further. RealityCapture tip: export as PLY with vertex colors, not textured OBJ — the texture atlas adds complexity with no benefit for 3D printing.
- 2
Inspect the raw scan and identify problem areas
Upload your scan to /viewer/obj or /viewer/ply before doing anything else. You are looking for five specific problems that are nearly universal in scan exports. (1) Missing bottom surface — 73% of our Scaniverse scans and 45% of Polycam scans had an open hole on the bottom where the scanner could not see. This is the #1 cause of slicer failures. (2) Floating debris geometry — background objects (the table surface, nearby items, your hand) captured as disconnected mesh fragments. We saw this in 8 of 12 Polycam scans. (3) Scan noise — rough, bumpy surface texture in areas where the scanner had poor confidence. Common on dark or glossy surfaces. (4) Orientation — Polycam exports scans in the coordinate system of your phone camera, which often means the object is sideways or upside down relative to the print bed. (5) Scale — note the bounding box dimensions from the viewer. Scaniverse exports in meters (a 10 cm mug shows as 0.1 units), while Polycam exports in millimeters (the same mug shows as 100 units). This 1000x discrepancy will matter at the slicer step.
- 3
Convert to STL
Open /convert/obj-to-stl or /convert/ply-to-stl depending on your export format. Upload and download the STL. The conversion strips vertex colors and material data — this is expected and correct for 3D printing. STL carries only triangle geometry, which is exactly what slicers need. Keep a copy of the original colored file alongside the STL if you want to compare the print against the scan later. Conversion time: under 3 seconds for files up to 100 MB. If you want to preserve color for multi-filament printing (Bambu Studio AMS, Prusa MMU), skip STL and convert to 3MF instead — but be aware that scan vertex colors rarely map cleanly to filament color boundaries. True multi-color printing from scans is an advanced workflow beyond this guide.
- 4
Repair the scan-specific mesh errors
Upload your STL to /repair/stl. Scan meshes break in ways that are completely different from CAD models — understanding the pattern helps you evaluate the repair output. The dominant error in scans is open boundaries from occlusion: wherever the scanner could not see (bottom surfaces, deep concavities, areas behind objects), the mesh has gaping holes. In our 12-object test, 9 of 12 scans had bottom holes, and hole filling alone was enough to make the slicer accept them. The filled surfaces are geometrically flat — which actually works well for printing since the flat bottom sits on the build plate. The second scan-specific issue is floating debris: background geometry (table surface, nearby objects, your hand holding the object) captured as disconnected mesh fragments. We saw this in 8 of 12 Polycam scans. The repair tool auto-removes small disconnected components, but if a large chunk like a table surface survives, you will need to delete it manually in Blender. Scan meshes almost never have the boolean-operation artifacts common in CAD files, so the repair is typically faster and more predictable — under 3 seconds for our test files. For details on the repair algorithm and handling edge cases, see our full mesh repair guide.
- 5
Simplify — scans are wildly over-detailed for printing
Scans produce polygon counts that no FDM printer can resolve. This is the single biggest difference between printing a scanned model and a CAD-designed one. Our test numbers tell the story: Polycam averaged 1.8M faces per scan (87 MB), Scaniverse 800K (38 MB), RealityCapture 3.2M (140 MB). An FDM nozzle at 0.2 mm layers cannot reproduce detail finer than 0.2 mm — so the difference between 1.8M faces and 150K faces is literally invisible in the final print. What is not invisible: Cura slicing time dropped from 8 minutes to 35 seconds, and memory usage went from 3.2 GB to 400 MB. Open /simplify/stl and set your target. Scan-specific recommendations (different from CAD models because scans have uniformly distributed polygons rather than concentrated detail): small tabletop objects → 100K-200K faces, larger objects → 50K-150K faces, resin printing → 300K-500K faces. One scan-specific tip: if your object has fine surface texture you want to preserve (carved wood, fabric weave), use a more conservative target. Scans encode surface texture as geometry, not as material data — simplification at lower targets will smooth out real detail that a resin printer could actually reproduce.
- 6
Verify dimensions, orientation, and wall thickness
Upload the final STL to /viewer/stl. Three critical checks before slicing. (1) Dimensions: compare the bounding box from the viewer against the real object. If you scanned with Scaniverse (meters), multiply by 1000 in your slicer. If you scanned with Polycam (millimeters), the dimensions should match the real object directly. Our coffee mug measured 95 mm tall in the viewer and 96 mm with calipers — within LiDAR tolerance. (2) Orientation: the object should be oriented for printing (flat base down). If it is sideways, rotate in your slicer. (3) Wall thickness: this is the hidden problem with scans. Unlike CAD models with designed wall thickness, scans capture only the outer surface. Load the STL in your slicer and check the preview — areas where the scan mesh self-intersects or comes very close together may produce walls thinner than your nozzle diameter. If you see walls under 0.8 mm (for a 0.4 mm nozzle), consider printing solid (100% infill) for small objects or accepting that those thin areas may not print cleanly.
- 7
What scans badly and how to work around it
Some objects are fundamentally difficult to scan, and no amount of post-processing fully compensates. From our 12-object test and extended testing: Reflective and glossy surfaces (chrome fixtures, glass, polished metal) — LiDAR beams scatter off specular surfaces, producing holes, noise, and phantom geometry. Workaround: coat the object with dry-erase marker spray or talcum powder to create a matte surface, then scan. We scanned a chrome faucet handle bare (unusable mesh with 200+ holes) and with chalk spray (clean mesh, 3 holes, printed successfully). Transparent objects (glass bottles, clear plastic) — LiDAR passes through transparent materials. Photogrammetry works slightly better because it relies on visual features, but results are still poor. Workaround: same matte coating technique. Very thin objects (paper, fabric, leaves) — scanner cannot resolve geometry thinner than ~2 mm. The mesh will show the object as a solid surface from one side only. Not meaningfully printable as-is. Hair, fur, and fine wires — impossible to scan with consumer hardware. The mesh will either miss them entirely or produce chaotic noise geometry. Black objects in low light — LiDAR performance degrades significantly on dark surfaces. Solution: increase ambient lighting or use photogrammetry mode instead.