Photogrammetry Scanner System for Small Scale Objects

Collection of background information.

The goal is to create a system for accurate and reliable 3D scanning of small objects, both for use in computer applications (games, content, etc.) and for (functional) CAD.

Structured Light Scanners

We tried MakeItZone’s structured light scanners (Kinect, Occipital Structure Sensor) with Skanect. Discovered they’re affected by ambient light, and they’re only good for items at roughly 50cm scale or larger.


Did some quick experiments taking pictures with a cell phone and running them through Meshroom for photogrammetry. Pleasantly surprised to see results with small (~2cm) objects. Lots of noise, but the potential is clear.

Current Tasks

  • understanding Meshroom
  • figuring out appropriate lighting
  • discovering what reference info is needed (plain background + turntable, moving “floor” on turntable, moving around the object…)
  • trying turntables and backdrops

Future Goals/Ideas

  • design and build a computer controlled turntable
  • design and build an arm that can raise a camera in a circular arc, keeping the camera equidistant from, and always pointing towards, the center of the turntable
  • application to generate turntable/platform pattern sheet with embedded ArUco fiducials
  • automated image processing
    • good initial experiments with DigiKam’s batch processing
    • increasing sharpness/resolution; decreasing noise
      • multi-frame super resolution
      • NOT single image, or any “GAN” based algorithm
        • these use neural networks to “imagine” plausible higher-res detail, drawn from pieces of other images that could have produced the blurred pixels; the result looks sharp but is not a faithful measurement of the object.
      • OpenCV has a superres module
      • put (web) cam on platform and take multiple, slightly shifted images (and multiple exposures in the same spot?)
      • multiple exposures in same place + averaging -> removing noise
      • multiple images slightly displaced -> sharper details (there are limits- the sensor pixel size is fixed, and there’s only so much that can be implied by how things appear as they shift across pixels.)
      • Guess: images need to be displaced in a plane orthogonal to the lens’s optical axis. If the camera is rotated instead, angles to items, occlusions, etc. change by larger amounts and there will be more distortion.
    • depth of field
  • color matching/grading
    • white balance
    • color matching (eg include reference color swatch in all images)
  • background removal (ffmpeg has chroma-key abilities that work with images)
  • camera calibration
  • creating pre-calculated camera location and pose info for meshroom
  • Augmented reality
    • Far future: using real-time point clouds from ARCore/ArKit as seed to meshroom
    • ARKit augmented image capture app:
      • initialize: mark side and top location of a rectangular prism that encloses the target
      • calculate the rings of “snap” points (e.g. 10° spacing) where we want images captured
      • run ARScene, showing transparent prism over the item to be captured
      • indicate where the points are in 3D space and guide the user to move the phone camera to the right location, with the right rotation
      • automatically grab images when camera is at the “snap” points. Grab multiple exposures if possible, for noise reduction, and superres.
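The ArUco pattern-sheet idea above can be sketched with OpenCV’s aruco module. The dictionary, sheet layout, and marker sizes here are illustrative assumptions, and `generateImageMarker` needs OpenCV 4.7+ (earlier versions call it `drawMarker`):

```python
# Sketch: generate a printable sheet of ArUco fiducials for the turntable
# platform. Layout parameters (rows, cols, sizes) are made-up placeholders.
import cv2
import numpy as np

def make_marker_sheet(rows=3, cols=4, marker_px=200, margin_px=40):
    """Tile ArUco markers (DICT_4X4_50, IDs 0..rows*cols-1) into one image."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    cell = marker_px + 2 * margin_px
    sheet = np.full((rows * cell, cols * cell), 255, dtype=np.uint8)
    marker_id = 0
    for r in range(rows):
        for c in range(cols):
            marker = cv2.aruco.generateImageMarker(dictionary, marker_id, marker_px)
            y = r * cell + margin_px
            x = c * cell + margin_px
            sheet[y:y + marker_px, x:x + marker_px] = marker
            marker_id += 1
    return sheet

sheet = make_marker_sheet()
cv2.imwrite("aruco_sheet.png", sheet)  # print at known physical scale
```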
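The “multiple exposures in the same place + averaging” bullet can be demonstrated with synthetic frames standing in for real captures. Averaging N frames reduces the noise standard deviation by roughly 1/√N:

```python
# Sketch: stack several exposures taken from the same position to reduce
# sensor noise. Synthetic noise stands in for real camera frames.
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 120.0)   # the "true" scene intensity
frames = [truth + rng.normal(0.0, 10.0, truth.shape) for _ in range(16)]

noise_single = (frames[0] - truth).std()   # one exposure: sigma ~= 10
stacked = np.mean(frames, axis=0)          # average the 16 exposures
noise_stacked = (stacked - truth).std()    # expect sigma ~= 10 / sqrt(16)
```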
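For the background-removal item, a minimal chroma-key sketch with OpenCV (as an alternative to ffmpeg’s chroma-key filters mentioned above). The green backdrop and HSV thresholds are assumptions to tune against real photos:

```python
# Sketch: chroma-key background removal. A synthetic test image is used:
# a grey "object" square on a green backdrop.
import cv2
import numpy as np

img = np.zeros((100, 100, 3), np.uint8)
img[:] = (0, 255, 0)                 # BGR green background
img[30:70, 30:70] = (128, 128, 128)  # the object

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Hue ~60 is green in OpenCV's 0-179 hue range; these bounds are guesses.
mask = cv2.inRange(hsv, (45, 100, 100), (75, 255, 255))
foreground = cv2.bitwise_and(img, img, mask=cv2.bitwise_not(mask))
```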
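The rings of “snap” points for the ARKit capture app could be computed as below; the radius, ring elevations, and 10° spacing are placeholder parameters:

```python
# Sketch: camera "snap" positions on horizontal rings around the target,
# every step_deg degrees, all at a fixed radius so each ring keeps the
# camera equidistant from the turntable centre.
import math

def ring_points(radius, elevation, step_deg=10):
    """Camera positions (x, y, z) on one ring at the given height."""
    points = []
    for deg in range(0, 360, step_deg):
        a = math.radians(deg)
        points.append((radius * math.cos(a), radius * math.sin(a), elevation))
    return points

# Three rings at increasing elevation; each yields 360/10 = 36 snap points.
rings = [ring_points(0.3, h) for h in (0.0, 0.15, 0.3)]
```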

I did a set of 26 photos in my backyard with Meshroom and iPhone default settings. All cameras were reconstructed and the images are in focus.

So yes, I think the answer to our question is that Meshroom’s defaults are tuned for outdoor photos.


A very good post, with follow-up comments, on small-scale photogrammetry.

Seems quite different from shooting against a busy background with depth-of-field focus control (f-stop).

There are more links in this post that I will continue to read up on.


Quick test of a white turntable, no special lighting and handheld.

Only 5 cameras used, but it shows the best results yet! Thank you Dr Falkingham!


Looking good!

I’ve been doing some experiments too.


But so-so results. Too much shadow.

More reading, latest setup:

With 20ish photos starting to see promise when scanning a complex shape with undercuts.

Note - used a piece of white acrylic with a light box under it. Think that is helping.

Am using ‘ultra’ setting for feature extraction and adding akaze as well.

Have ordered a camera ring light, 2 smaller LED panels, and a CPL (circular polarizer) filter for our old DSLR.

With a robot we could give Meshroom the camera locations, which if nothing else will save time, but I think it will also improve the quality.

Also looking at ways to do batch editing of input pictures.

@Malcolm - moved the comment thread from the #reference section to the #project section… as I’ve just dumped a bunch of generic meshroom and photogrammetry background research into the 3D Scanning reference page.

Think it makes sense to split generic info from this specific implementation/project :wink:

One thing to note- take a look at the reconstruction with the texture (colours) turned off. Maybe turn on the wireframe. See if the geometry is ok.

The human brain is very good at inferring shape from colour and shading!

Created a GitHub repository to share code/files/data.

Have uploaded my experiments with photography parameters and meshroom.

Possibly: no shadows; need to add fiducials to the turntable. Some shadows may be giving enough reference information for the algorithms to estimate camera location and pose.

Here is another web link to a tutorial on photogrammetry for small objects from the University of Reading in the U.K. First of three YouTube installments. Good advice on setup for better results.

Quick update.

Set everything to ultra… and with 3 running my PC was getting nowhere after a couple of days.

Knocked it back to “high”; all completed but unfortunately no major improvements.

I did add mesh re-sampling on the rook. The mesh looks more manageable.

Am working on a program to generate backgrounds using ArUco blocks. Might have a prototype done for this Saturday.

I ran across this: and thought it looked promising

Thanks @clintd! Will have to look into that design. Might be a lot simpler to mount a camera to!

I just saw that there’s another version of a scanner from the same place that the last link I sent was from:

That’s really cool!

Thanks for passing that on @clintd!


I was hoping to find some YouTube videos to see the new model in use, but maybe it’s too new for that yet.