3D Scanning

Structured Light

Photogrammetry

AKA Structure from Motion (SfM). With lots of photographs from different vantage points, and lots of really clever math, computers can infer 3D shapes and extract model data from 2D images.
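To make the "clever math" a bit more concrete: if you already know where two cameras are, a point that appears in both photos can be triangulated back into 3D. Here's a minimal toy sketch using OpenCV (all the numbers are invented for illustration; the hard part, covered further down, is discovering the camera poses in the first place):

```python
# Toy example: triangulate one 3D point from its pixel position in two
# photos taken by cameras with known positions. All numbers are made up.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320],   # a made-up pinhole intrinsic matrix
              [0, 800.0, 240],   # (focal length + image centre)
              [0, 0, 1]])

# Camera 1 at the origin; camera 2 shifted 1 unit sideways.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

# The same physical point, as seen in each image (pixel coordinates).
pt1 = np.array([[320.0], [240.0]])
pt2 = np.array([[160.0], [240.0]])

Xh = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4-vector
print((Xh[:3] / Xh[3]).ravel())                # ~ [0, 0, 5]
```

The point comes out 5 units in front of camera 1. Scale that up to thousands of points across dozens of photos and you have a 3D scan.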

Meshroom

Amazing set of tools and UI, really nice that it's open source!

Documentation is lagging behind, and it really needs to be built from source (the binary release as of Jan 2020 is ~a year old). It's easy to break the graph of tasks, BUT the graph makes the process visual!

Tips and Tricks

Research Dump on Meshroom + Photogrammetry

Here is another link to a tutorial on photogrammetry for small objects, from the University of Reading in the U.K. It's the first of three YouTube installments, with good advice on setup for better results.

Thanks @Sockeye! Do you mind adding that to the main topic? (It should be a "wiki" post, which means anyone can edit it. In theory. Not well tested yet :wink:)

Julian:
I copied this over to the project page. Hope that is what you intended :).

That’s good too- and I’ve added it to the list of references above.

Quick Review of Regard3D Photogrammetry App

Regard3D is the only free, CPU-only app I can find that has a ready-to-go download (and doesn't need a bunch of other things to be downloaded and manually added/configured/run).

It's a lot slower than e.g. Meshroom. The example I captured below is from 21 photos; it managed to use 19 of them. It took an hour to do the first step, about 30 minutes for the next, and about 10 minutes to generate a final STL file.

The window on the right in this screenshot is Regard3D. The window on the left is another app that I loaded the model into, to see what the shape looks like without any colouring.

But it works! So it's a starting point for anyone interested in this technique. And it has a good, clear, and concise tutorial: https://www.regard3d.org/index.php/documentation/tutorial

(I also tried COLMAP; it gave more live updates showing what it was doing, but it can’t generate a dense mesh (explained below), so the results aren’t so useful. It also crashed several times.)

The only other app I know of that (still) works on Macs is Agisoft Metashape (formerly called PhotoScan). It's $179 (US?), and there are educational rates, but you have to contact them to find out what they are.

For comparison, I ran the same photoset with Meshroom (Windows/Linux):

It completed in ~15 minutes (GPUs are good for this kind of math!) and could orient and use 20 of the 21 images. It detected a larger area of the shape, as well as more of the surrounding background.

Both programs need well lit photos without shadows.

Aside: the steps for photogrammetry:

  1. Generate "features" (unique points) for each photo. The two popular algorithms for this are SIFT (developed at UBC!) and AKAZE.
  2. Generate relationships between images using the features that appear in 2+ images, via Structure from Motion (SfM). This gives a rough idea of where the camera was for each image and how the images overlap in space (there's a rough code sketch of steps 1-2 below).
  3. Bundle adjustment: given that rough idea of where the camera was for each image, use all of it together to better figure out where each camera actually was.

This gives a good set of info about where the camera was for each picture, but only a sparse set of points on the object of interest.
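Here's roughly what steps 1 and 2 look like with OpenCV, for a single pair of photos. The file names are placeholders, and K is a guessed intrinsic matrix (real tools read it from EXIF or calibrate it); this is just a sketch of the idea, not how any of the apps above actually do it:

```python
# Rough sketch of steps 1-2 for ONE image pair: SIFT features, matching,
# then rough relative camera geometry. Real SfM repeats this across all
# pairs, then bundle adjustment (step 3) refines everything jointly.
import numpy as np
import cv2

img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                      # step 1: per-photo features
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Step 2: keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Rough relative camera pose from the matched features. K is a guess here.
K = np.array([[3000.0, 0, 2000], [0, 3000.0, 1500], [0, 0, 1]])
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print(f"{len(good)} matches; relative camera rotation:\n{R}")
```

(Swap cv2.SIFT_create() for cv2.AKAZE_create() to try the other algorithm mentioned above.) From there, the remaining steps build the actual model: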

  4. Using the now quite accurate camera locations + photo orientations + features, we can use a different algorithm to triangulate each feature in space and build up many points on the surface of the object (this is the DENSIFICATION step).
  5. Take the dense point cloud and generate a "solid" surface (mesh -> STL file).
  6. Project each of the photos onto the generated mesh to get the coloured textures (UV mapping).

Nearly all photogrammetry software requires an NVIDIA graphics card for step 4 (to use NVIDIA's CUDA math libraries), but often the other steps can also use the GPU.
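Step 5, at least, runs fine on the CPU. As a hedged sketch, here's what the point-cloud-to-mesh step can look like with the Open3D Python library (my choice for illustration, not something the tools above use; the file names are placeholders):

```python
# Sketch of step 5: turn a dense point cloud into a watertight mesh using
# Poisson surface reconstruction. This part is CPU-friendly.
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense.ply")       # densified cloud from step 4
pcd.estimate_normals()                           # Poisson needs surface normals
pcd.orient_normals_consistent_tangent_plane(30)  # ...consistently oriented ones

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                # higher depth = finer mesh

mesh.compute_triangle_normals()                  # STL export needs these
o3d.io.write_triangle_mesh("model.stl", mesh)
```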