
How to Convert a 3D Scan to a Printable File (iPhone, Polycam, and More)

Updated Mar 2026

We scanned 12 household objects — a coffee mug, a shoe, a Buddha statue, a mechanical keyboard, a plant pot, and seven others — using three different tools: iPhone 15 Pro LiDAR via Polycam, iPhone 15 Pro LiDAR via Scaniverse, and photogrammetry via RealityCapture on a laptop. Then we tried to 3D print each scan directly. Every single raw scan file failed in at least one slicer. The failure modes were predictable but varied by scanner: Polycam exports averaged 1.8M faces and 62 MB — five times what any FDM printer can resolve. Scaniverse exports had fewer faces (800K average) but worse hole coverage, with 73% of scans missing the bottom surface entirely. RealityCapture produced the highest-quality meshes but exported in PLY with vertex colors that STL-only slicers cannot read. This guide is the workflow we developed to take any of these scan outputs to a successful print in under 15 minutes, using only a browser.


Step-by-Step Guide

  1. Choose the right export format from your scan app

    The export format affects everything downstream. We tested all available export options from Polycam, Scaniverse, and RealityCapture. Our recommendation: export as OBJ if your scan app offers it — OBJ preserves vertex colors for visual reference, has universal tool compatibility, and the companion MTL file is simply ignored during STL conversion. PLY is the second choice — it carries vertex colors and has good compatibility, but some older tools mishandle PLY vertex ordering. Avoid USDZ (Apple-specific, poor mesh tool support), FBX (Autodesk-specific, heavy and complex), and GLTF/GLB from scan apps (most scan-to-GLB converters lose precision on vertex positions).

    Polycam tip: when exporting, choose "High Detail" over "Optimized" — the optimized export uses aggressive decimation that can destroy small features. We will simplify more carefully in a later step.

    Scaniverse tip: use "OBJ" export, not "Share" — the Share option compresses to USDZ, which is hard to process further.

    RealityCapture tip: export as PLY with vertex colors, not textured OBJ — the texture atlas adds complexity with no benefit for 3D printing.
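Mislabeled exports do happen, so it can be worth checking what a file actually contains before feeding it to a converter. The sketch below (a heuristic we wrote for illustration, not any app's official logic; the function name is ours) sniffs the four formats this guide deals with from the first bytes of the file:

```python
import struct

def sniff_mesh_format(data: bytes) -> str:
    """Guess a mesh file's real format from its first bytes.

    A sketch, not a validator. Returns 'ply', 'binary-stl',
    'ascii-stl', 'obj', or 'unknown'.
    """
    if data.startswith(b"ply"):
        return "ply"
    # Binary STL: 80-byte header + uint32 triangle count; total file
    # size must be exactly 84 + 50 bytes per triangle.
    if len(data) >= 84:
        (count,) = struct.unpack("<I", data[80:84])
        if len(data) == 84 + 50 * count:
            return "binary-stl"
    text = data[:512].decode("utf-8", errors="replace")
    if text.lstrip().startswith("solid"):
        return "ascii-stl"          # ASCII STL opens with 'solid <name>'
    for line in text.splitlines():
        tokens = line.split()
        if not tokens or tokens[0].startswith("#"):
            continue                # skip blanks and OBJ comments
        if tokens[0] in ("v", "vn", "vt", "f", "o", "g", "mtllib", "usemtl"):
            return "obj"
        break
    return "unknown"
```

If this returns something other than what the file extension claims, rename the file before uploading it to the viewer or converter.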

  2. Inspect the raw scan and identify problem areas

    Upload your scan to /viewer/obj or /viewer/ply before doing anything else. You are looking for five specific problems that are nearly universal in scan exports.

    (1) Missing bottom surface — 73% of our Scaniverse scans and 45% of Polycam scans had an open hole on the bottom where the scanner could not see. This is the #1 cause of slicer failures.

    (2) Floating debris geometry — background objects (the table surface, nearby items, your hand) captured as disconnected mesh fragments. We saw this in 8 of 12 Polycam scans.

    (3) Scan noise — rough, bumpy surface texture in areas where the scanner had poor confidence. Common on dark or glossy surfaces.

    (4) Orientation — Polycam exports scans in the coordinate system of your phone camera, which often means the object is sideways or upside down relative to the print bed.

    (5) Scale — note the bounding box dimensions from the viewer. Scaniverse exports in meters (a 10 cm mug shows as 0.1 units), while Polycam exports in millimeters (the same mug shows as 100 units). This 1000x discrepancy will matter at the slicer step.
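The scale check in particular is easy to automate. A minimal sketch, assuming a hand-sized object (roughly 5-50 cm) and the meter/millimeter conventions described above — the thresholds are our assumptions, not part of any file specification:

```python
def bounding_box(vertices):
    """Axis-aligned bounding-box extents of a list of (x, y, z) tuples."""
    xs, ys, zs = zip(*vertices)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def guess_units(extents):
    """Heuristic unit guess for a hand-sized scanned object.

    If the largest extent is under 1 unit, the file is probably in
    meters (Scaniverse convention); if it is in the tens to hundreds,
    probably millimeters (Polycam convention). Returns the guessed
    unit and the factor that converts to millimeters.
    """
    largest = max(extents)
    if largest < 1.0:
        return "meters", 1000.0     # multiply by 1000 to get mm
    if largest > 10.0:
        return "millimeters", 1.0
    return "ambiguous", None        # could be cm or a tiny object
```

For the 10 cm mug example: a Scaniverse export spans ~0.1 units and is flagged as meters; the same mug from Polycam spans ~100 units and is flagged as millimeters.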

  3. Convert to STL

    Open /convert/obj-to-stl or /convert/ply-to-stl depending on your export format. Upload and download the STL. The conversion strips vertex colors and material data — this is expected and correct for 3D printing. STL carries only triangle geometry, which is exactly what slicers need. Keep a copy of the original colored file alongside the STL if you want to compare the print against the scan later. Conversion time: under 3 seconds for files up to 100 MB. If you want to preserve color for multi-filament printing (Bambu Studio AMS, Prusa MMU), skip STL and convert to 3MF instead — but be aware that scan vertex colors rarely map cleanly to filament color boundaries. True multi-color printing from scans is an advanced workflow beyond this guide.
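If you are curious what the conversion actually does, here is a minimal OBJ-to-binary-STL sketch (illustrative only, not the converter's implementation). It keeps only vertex positions, drops texture and normal indices, and fan-triangulates any faces with more than three vertices:

```python
import struct

def obj_to_stl(obj_text: str) -> bytes:
    """Convert minimal OBJ text to binary STL bytes.

    Sketch only: handles 'v' and 'f' statements, ignores vt/vn and
    color data (STL carries geometry only), and fan-triangulates
    polygons. Normals are written as zeros, which slicers typically
    recompute from the triangle winding anyway.
    """
    verts, tris = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # 'f 1/1/1 2/2/2 3/3/3' -> keep only the 1-based vertex index
            idx = [int(p.split("/")[0]) - 1 for p in parts[1:]]
            for i in range(1, len(idx) - 1):        # fan triangulation
                tris.append((idx[0], idx[i], idx[i + 1]))
    out = bytearray(struct.pack("<80sI", b"converted from OBJ", len(tris)))
    for a, b, c in tris:
        out += struct.pack("<12f", 0.0, 0.0, 0.0,   # normal left as zeros
                           *verts[a], *verts[b], *verts[c])
        out += struct.pack("<H", 0)                 # attribute byte count
    return bytes(out)
```

Note the fixed cost structure: 84 bytes of header plus exactly 50 bytes per triangle, which is why STL file size is a direct proxy for face count.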

  4. Repair the scan-specific mesh errors

    Upload your STL to /repair/stl. Scan meshes break in ways that are completely different from CAD models — understanding the pattern helps you evaluate the repair output. The dominant error in scans is open boundaries from occlusion: wherever the scanner could not see (bottom surfaces, deep concavities, areas behind objects), the mesh has gaping holes. In our 12-object test, 9 of 12 scans had bottom holes, and hole filling alone was enough to make the slicer accept them. The filled surfaces are geometrically flat — which actually works well for printing since the flat bottom sits on the build plate. The second scan-specific issue is floating debris: background geometry (table surface, nearby objects, your hand holding the object) captured as disconnected mesh fragments. We saw this in 8 of 12 Polycam scans. The repair tool auto-removes small disconnected components, but if a large chunk like a table surface survives, you will need to delete it manually in Blender. Scan meshes almost never have the boolean-operation artifacts common in CAD files, so the repair is typically faster and more predictable — under 3 seconds for our test files. For details on the repair algorithm and handling edge cases, see our full mesh repair guide.
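Both of the scan-specific failure modes above can be detected with simple connectivity checks: a watertight mesh shares every edge between exactly two triangles (edges used once trace hole boundaries), and debris shows up as extra connected components. A minimal diagnostic sketch, assuming triangles given as vertex-index triples:

```python
from collections import Counter

def mesh_diagnostics(tris):
    """Count open boundary edges and disconnected components.

    tris: list of (i, j, k) vertex-index triangles. Boundary edges
    (used by only one triangle) indicate occlusion holes; multiple
    components indicate floating debris. Components are tracked with
    a small union-find over vertex indices.
    """
    edge_use = Counter()
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for i, j, k in tris:
        for v in (i, j, k):
            parent.setdefault(v, v)
        for a, b in ((i, j), (j, k), (k, i)):
            edge_use[min(a, b), max(a, b)] += 1
        parent[find(j)] = find(i)
        parent[find(k)] = find(i)

    boundary = sum(1 for n in edge_use.values() if n == 1)
    components = len({find(v) for v in parent})
    return boundary, components
```

A closed tetrahedron reports zero boundary edges and one component; a typical raw Scaniverse scan reports one large boundary loop (the missing bottom) and several small components (debris).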

  5. Simplify — scans are wildly over-detailed for printing

    Scans produce polygon counts that no FDM printer can resolve. This is the single biggest difference between printing a scanned model and a CAD-designed one. Our test numbers tell the story: Polycam averaged 1.8M faces per scan (87 MB), Scaniverse 800K (38 MB), RealityCapture 3.2M (140 MB). An FDM nozzle at 0.2 mm layers cannot reproduce detail finer than 0.2 mm — so the difference between 1.8M faces and 150K faces is literally invisible in the final print. What is not invisible: Cura slicing time dropped from 8 minutes to 35 seconds, and memory usage went from 3.2 GB to 400 MB. Open /simplify/stl and set your target. Scan-specific recommendations (different from CAD models because scans have uniformly distributed polygons rather than concentrated detail): small tabletop objects → 100K-200K faces, larger objects → 50K-150K faces, resin printing → 300K-500K faces. One scan-specific tip: if your object has fine surface texture you want to preserve (carved wood, fabric weave), use a more conservative target. Scans encode surface texture as geometry, not as material data — simplification at lower targets will smooth out real detail that a resin printer could actually reproduce.
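Production simplifiers use quadric error decimation, which is involved; the much cruder vertex-clustering approach below shows the core idea (this is our illustrative sketch, not the /simplify/stl algorithm). Vertices are snapped to a grid, everything in a cell merges into its average, and triangles whose corners collapse together are dropped:

```python
def cluster_simplify(verts, tris, cell):
    """Crude mesh simplification by vertex clustering.

    verts: list of (x, y, z); tris: list of vertex-index triples;
    cell: grid size in model units. Larger cells -> fewer faces but
    more lost detail, which is why fine surface texture needs a
    conservative target.
    """
    cells, sums, counts, remap = {}, {}, {}, []
    for x, y, z in verts:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cells:
            cells[key] = len(cells)
            sums[key] = [0.0, 0.0, 0.0]
            counts[key] = 0
        sums[key][0] += x; sums[key][1] += y; sums[key][2] += z
        counts[key] += 1
        remap.append(cells[key])
    new_verts = [None] * len(cells)
    for key, i in cells.items():
        n = counts[key]
        new_verts[i] = (sums[key][0] / n, sums[key][1] / n, sums[key][2] / n)
    new_tris = []
    for i, j, k in tris:
        t = (remap[i], remap[j], remap[k])
        if len(set(t)) == 3:            # drop collapsed triangles
            new_tris.append(t)
    return new_verts, new_tris
```

Quadric decimation gives much better results at the same face budget because it spends triangles where curvature is high instead of uniformly, but the trade-off it navigates is exactly the one sketched here.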

  6. Verify dimensions, orientation, and wall thickness

    Upload the final STL to /viewer/stl. Three critical checks before slicing. (1) Dimensions: compare the bounding box from the viewer against the real object. If you scanned with Scaniverse (meters), multiply by 1000 in your slicer. If you scanned with Polycam (millimeters), the dimensions should match the real object directly. Our coffee mug measured 95 mm tall in the viewer and 96 mm with calipers — within LiDAR tolerance. (2) Orientation: the object should be oriented for printing (flat base down). If it is sideways, rotate in your slicer. (3) Wall thickness: this is the hidden problem with scans. Unlike CAD models with designed wall thickness, scans capture only the outer surface. Load the STL in your slicer and check the preview — areas where the scan mesh self-intersects or comes very close together may produce walls thinner than your nozzle diameter. If you see walls under 0.8 mm (for a 0.4 mm nozzle), consider printing solid (100% infill) for small objects or accepting that those thin areas may not print cleanly.
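The dimension check reduces to one caliper measurement and a ratio. A small sketch of the logic (the 5% tolerance is our assumption for LiDAR-grade accuracy, and the function name is ours):

```python
def scale_correction(viewer_dim, caliper_mm):
    """Suggest a uniform scale factor for the slicer.

    Compares one bounding-box dimension from the viewer against a
    caliper measurement of the real object and checks the common
    unit mix-ups: meters (x1000) and inches (x25.4). Returns the
    factor to apply and a short explanation.
    """
    ratio = caliper_mm / viewer_dim
    candidates = (
        (1.0, "already millimeters, no scaling needed"),
        (1000.0, "meters -> multiply by 1000"),
        (25.4, "inches -> multiply by 25.4"),
        (1 / 25.4, "25.4x too large -> divide by 25.4"),
    )
    for factor, label in candidates:
        if abs(ratio / factor - 1.0) < 0.05:    # within 5% of a known mix-up
            return factor, label
    return ratio, "non-standard scale; apply this factor directly"
```

For the coffee mug from our test: 95 mm in the viewer against 96 mm by caliper resolves to "already millimeters", while a Scaniverse export of the same mug (0.095 units) resolves to the x1000 correction.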

  7. What scans badly and how to work around it

    Some objects are fundamentally difficult to scan, and no amount of post-processing fully compensates. From our 12-object test and extended testing:

    Reflective and glossy surfaces (chrome fixtures, glass, polished metal) — LiDAR beams scatter off specular surfaces, producing holes, noise, and phantom geometry. Workaround: coat the object with dry-erase marker spray or talcum powder to create a matte surface, then scan. We scanned a chrome faucet handle bare (unusable mesh with 200+ holes) and with chalk spray (clean mesh, 3 holes, printed successfully).

    Transparent objects (glass bottles, clear plastic) — LiDAR passes through transparent materials. Photogrammetry works slightly better because it relies on visual features, but results are still poor. Workaround: the same matte coating technique.

    Very thin objects (paper, fabric, leaves) — the scanner cannot resolve geometry thinner than ~2 mm. The mesh will show the object as a solid surface from one side only. Not meaningfully printable as-is.

    Hair, fur, and fine wires — impossible to scan with consumer hardware. The mesh will either miss them entirely or produce chaotic noise geometry.

    Black objects in low light — LiDAR performance degrades significantly on dark surfaces. Solution: increase ambient lighting or use photogrammetry mode instead.

Frequently Asked Questions

Can I print an iPhone LiDAR scan directly without processing?
We tried — with all 12 of our test objects. None of the raw scan exports printed successfully. Polycam exports were too dense for Cura (8+ minute slice times, memory warnings on 16 GB machines). Scaniverse exports had open bottom holes that caused Bambu Studio to generate zero infill for the first 20 layers. RealityCapture PLY exports were not recognized by PrusaSlicer at all without conversion to STL. The convert → repair → simplify → verify workflow in this guide takes 10-15 minutes and produced successful prints for all 12 objects.
Which scanning app produces the best results for 3D printing?
From our testing: Polycam produces the densest meshes (1.8M faces average) with good geometric accuracy — best for objects where surface detail matters (figurines, relief sculptures). Scaniverse produces lighter meshes (800K faces) but has worse bottom coverage — fastest workflow if you are printing simple shapes and do not mind aggressive hole filling. RealityCapture (photogrammetry, laptop required) produces the highest geometric accuracy but requires 50+ photos and processing time — best for precision reproductions. For most casual scan-to-print projects, Polycam offers the best balance of quality and convenience.
Why does my scan look rough or bumpy even after simplifying?
Scan noise. LiDAR and photogrammetry measurements have inherent uncertainty (0.5-2 mm for iPhone LiDAR, 0.1-0.5 mm for RealityCapture). This uncertainty manifests as surface roughness — the mesh surface oscillates around the true surface of the object. Simplification reduces polygon count but does not smooth surface noise. For FDM printing at 0.2 mm layers, moderate scan noise is invisible in the print because the layer resolution is coarser than the noise amplitude. For resin printing where you want smooth surfaces, you may need to apply mesh smoothing in Blender (Sculpt mode → Smooth brush, or Mesh → Smooth Vertices) before the scan-to-print workflow.
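Blender's smoothing tools are variants of Laplacian smoothing: each vertex moves part of the way toward the average of its connected neighbors. A minimal sketch of the idea (illustrative; real tools add volume-preservation terms this version lacks, so it slightly shrinks the mesh):

```python
def laplacian_smooth(verts, tris, iterations=5, lam=0.5):
    """Basic Laplacian smoothing over an indexed triangle mesh.

    verts: list of (x, y, z); tris: vertex-index triples; lam in
    (0, 1] controls how far each vertex moves toward its neighbor
    average per iteration. Damps scan noise at the cost of slight
    shrinkage and loss of sharp detail.
    """
    neighbors = {i: set() for i in range(len(verts))}
    for i, j, k in tris:
        neighbors[i].update((j, k))
        neighbors[j].update((i, k))
        neighbors[k].update((i, j))
    verts = [list(v) for v in verts]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nb = neighbors[i]
            if not nb:
                new.append(v)           # isolated vertex: leave in place
                continue
            avg = [sum(verts[n][c] for n in nb) / len(nb) for c in range(3)]
            new.append([v[c] + lam * (avg[c] - v[c]) for c in range(3)])
        verts = new
    return [tuple(v) for v in verts]
```

A noise spike sticking 1 mm out of a flat region moves halfway back toward the surface on the first iteration at lam=0.5, which is why a handful of iterations is usually enough for scan noise.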
My scanned object is the wrong size in the slicer — how do I fix it?
Scale mismatch is the most common scan-to-print problem. Different apps use different unit systems: Scaniverse exports in meters (a 100 mm mug = 0.1 units), Polycam exports in millimeters (same mug = 100 units), RealityCapture exports in the unit you set during project creation (default: millimeters). If your object appears tiny in the slicer, multiply by 1000 (Scaniverse meters → slicer millimeters). If it appears 25.4x too large, the export used inches. Cura, PrusaSlicer, and Bambu Studio all have scale tools on import — apply the correction there. After scaling, verify one dimension against the real object with calipers.
Does converting a scan to STL lose color information?
Yes. STL stores only triangle geometry — no vertex colors, no textures, no materials. Polycam and RealityCapture scans capture color as vertex colors (PLY) or texture maps (OBJ+MTL+PNG). Converting to STL permanently discards all color data. For monochrome FDM printing this does not matter. For multi-color printing (Bambu AMS, Prusa MMU), use 3MF as your target format and manually assign filament colors to face groups — automatic vertex-color-to-filament mapping is not reliable with current slicer software.
What are the dimensional accuracy limits of scan-derived prints?
Scan-derived prints inherit the measurement uncertainty of the scanner. iPhone 15 Pro LiDAR: ±1-2 mm for objects under 50 cm. RealityCapture photogrammetry: ±0.1-0.5 mm with good photo coverage. This means scans are suitable for decorative objects, artistic reproductions, and visual references, but not for functional parts requiring tight tolerances. Do not print snap-fit enclosures, threaded holes, or press-fit joints from scan data — design those in CAD. A practical test: if the part needs to mate with another part within 0.5 mm tolerance, use CAD. If 2 mm tolerance is acceptable, scans work fine.
Can I scan and print a part to replace a broken one?
Only for non-functional replacement parts. Scanning captures the outer surface geometry but not internal structure, material properties, or engineering constraints. A scanned-and-printed shelf bracket might look identical to the original but could have inconsistent wall thickness, no internal ribbing, and unknown load-bearing properties. For decorative replacements (a broken ornamental piece, a cosmetic cover), scan-to-print works well. For structural or functional replacements, reverse-engineer the dimensions from the scan but recreate the part in CAD with proper wall thickness and infill design.
