Author Archives: sunapi386

3D Face Model Generation

A useful tool was recommended to me by a former coworker who works at a new startup called Bellus3d. Their app uses the iPhone's depth sensor (the one behind Face ID logins) along with the camera to generate this model. The app looks good and is easy to use.

They had a web UI to view the model in 3D. You can move it around.

I could import this to Autodesk Meshmixer and 3D print it.

There are 46,000 vertices and 90,000 triangles. My 3D printer isn't all that accurate (I get an effective resolution of 1.0mm to 1.5mm), so I am going to reduce the mesh complexity. It also helps reduce stray filament threads.

Edit -> Make Solid collapses the triangle mesh. I use a cell size of 1.8mm and a mesh density of 1.4mm.
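Make Solid's cell size is essentially a voxel re-mesh; the complexity reduction is similar in spirit to vertex clustering, which can be sketched in a few lines of numpy (my own illustration, not Meshmixer's actual algorithm):

```python
import numpy as np

def vertex_cluster_decimate(vertices, triangles, cell=1.8):
    """Merge all vertices that fall into the same grid cell of the
    given size (mm), then drop triangles that collapse to a line
    or a point. Returns (new_vertices, new_triangles)."""
    keys = np.floor(vertices / cell).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    # each new vertex is the mean of the old vertices in its cell
    counts = np.bincount(inverse).astype(float)
    new_vertices = np.stack(
        [np.bincount(inverse, weights=vertices[:, d]) / counts
         for d in range(3)], axis=1)
    remapped = inverse[triangles]
    # keep only triangles whose three corners remain distinct
    keep = ((remapped[:, 0] != remapped[:, 1]) &
            (remapped[:, 1] != remapped[:, 2]) &
            (remapped[:, 0] != remapped[:, 2]))
    return new_vertices, remapped[keep]
```

At a 1.8mm cell on a printer that only resolves 1.0mm to 1.5mm, the discarded detail is finer than the nozzle could reproduce anyway.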

2D Photos to 3D model

Came across some photogrammetry software lately. I had an idea to try it out by taking a bunch of photos and generating a 3D model.

But it's difficult. The tools I tried were COLMAP and VisualSFM, with poor results. I think I had an insufficient number of photos, and there weren't enough correlated features between them.

I thought, "well, what if I used a video?", so I took a video of my mouse and dumped its frames into images.

But a lot of them were blurry, like this.

So I thought there should be a way to filter out the blurry images, such as using:

  • Fast Fourier Transform
  • Laplace (or LoG) filter

There were some suggestions on StackOverflow hinting that OpenCV is a good tool. I found a blur detector on GitHub, but there were some issues. For one, the computed threshold varied too greatly, so I had to manually keep the images at the 80th percentile and above (sharper, higher score). But some images contained no content, such as a blank table, and would still be classified as sharp... so if you go down this path, you might want to remove those blank images first.
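As a sketch of the Laplacian approach: this is the variance-of-Laplacian score that the OpenCV-based detectors compute (here with plain numpy so it is self-contained; the percentile cutoff is my own choice, not from the GitHub project):

```python
import numpy as np

# 3x3 Laplacian kernel; cv2.Laplacian performs the same convolution.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpness(gray):
    """Variance of the Laplacian response of a grayscale image.
    Blurry frames have weak edges, so the response is nearly flat
    and the variance is low."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def keep_sharpest(images, percentile=80):
    """Keep frames at or above the given score percentile, instead of
    a fixed threshold that varies too much from video to video."""
    scores = [sharpness(img) for img in images]
    cutoff = np.percentile(scores, percentile)
    return [img for img, s in zip(images, scores) if s >= cutoff]
```

Note that with this score a blank frame comes out near zero, i.e. "blurry"; detectors that normalize their score per image can instead rank contentless frames as sharp. Either way, filtering out blank frames beforehand is a good idea.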

Install Nvidia RTX 2080 driver on Ubuntu 16.04

  1. sudo vim /etc/default/grub # to change "quiet splash" to "quiet nosplash"
  2. sudo update-grub
  3. reboot
  4. (Press ESC)
  5. Select Ubuntu recovery mode, enable networking, then open a root terminal.
  6. sudo add-apt-repository ppa:graphics-drivers (if you haven't already)
  7. sudo apt purge nvidia* (if you tried installing already)
  8. sudo apt install nvidia-418 (nvidia-430 doesn't work with lightdm yet)
  9. You may need to sudo systemctl stop lightdm
  10. nvidia-smi (check your version is correct)

3D Statue of Liberty

I took an aerial lidar point cloud and turned it into a 3D object.

With Meshmixer, there is a great option under "Edit" > "Make Solid", and it generated this for me. Well done! Exactly what I was looking for. Otherwise it prints hollow.

Have to fill in holes here

After some sculpting...

The Meshmixer tools are way more intuitive to use than Blender or anything else for that matter!

I didn't notice that the bottom is not completely flat until I printed this out. There was also a power interruption: when I used the pressure washer, it tripped the breaker. Here's the result.

The original also has too many polygons: 83,172 vertices and 166,308 faces. Using Meshmixer I could reduce that to 21,141 vertices and 42,282 faces. This lower-poly object should print cleaner too. Notice the surface.

Obviously the wood print failed. But I am happy with the white result!

Converting PLY to STL in Meshlab

Meshlab can do this, if you don't have access to a Linux machine to run

ctmconv red-rocks-smrf-only-delaunay.ply red-rocks-smrf-only-delaunay.stl

Meshlab seems great; it may even be as useful as Blender.

On Windows 10, the default viewer.

I'm going to use the Creality Slicer (which is based on Cura). It doesn't quite fit.

This seems reasonable to print.

Will fire up the printer and see!

Converting Point Cloud to 3D Surface Map

Source Data

Looking at Red Rocks.

PDAL pipeline

# This is a hjson file
# Install hjson (Linux/macOS):
#   curl -sSL $GET | sudo tar -xz -C /usr/local/bin
# Translate to JSON and run:
#   hjson -j pipeline.hjson > pipeline.json
#   pdal pipeline pipeline.json --verbose 8
{
  pipeline:
  [
    # Input
    {
      # read from our ept server, up to 0.5m resolution:
      # type: readers.ept
      # filename: http://localhost:8080/ept.json
      # bounds: ([802000, 802500], [2493000, 2493500])
      # resolution: 0.5
      filename: red-rocks.laz
    }
    # {
    #   # read from our las file
    #   type: readers.las
    #   filename: small500-no-outliers.laz
    # }

    # Filters
    {
      # adds a classification value of 7 to the noise points
      type: filters.outlier
      # method: radius
      # radius: 1.0
      # min_k: 8 # min number of neighbors in radius
      method: statistical
      mean_k: 8
      multiplier: 3
    }
    {
      # voxel-based sampling filter, to reduce the size of the point cloud
      type: filters.voxelcenternearestneighbor
      cell: 0.1
    }
    {
      # Need to assign point cloud dimension NumberOfReturns = 1
      # Otherwise: "No returns to process."
      type: filters.assign
      assignment: NumberOfReturns[0:0]=1
    }
    {
      # Ground classification, ignoring the noise points
      type: filters.smrf
    }
    {
      # only allow ground-classified points
      type: filters.range
      limits: Classification[2:2]
    }
    {
      # OPTIONAL: turn this into a surface mesh
      # (do not use multiple mesh filters at once)
      # type: filters.delaunay
      type: filters.poisson
    }

    # Output
    {
      # write to ply
      type: writers.ply
      filename: red-rocks-smrf-only-poisson.ply
      storage_mode: default
    }
    # {
    #   # write to laz
    #   type: writers.las
    #   filename: red-rocks-ground.laz
    # }
  ]
}
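Once translated to JSON, the pipeline can also be generated programmatically; here is a sketch in Python, with stage options mirroring the hjson above, writing out the pipeline.json that pdal pipeline consumes:

```python
import json

# The PDAL pipeline built as plain Python data.
pipeline = {
    "pipeline": [
        "red-rocks.laz",                                # reader inferred from extension
        {"type": "filters.outlier",                     # flag noise points as class 7
         "method": "statistical", "mean_k": 8, "multiplier": 3},
        {"type": "filters.voxelcenternearestneighbor",  # thin the point cloud
         "cell": 0.1},
        {"type": "filters.assign",                      # avoid "No returns to process."
         "assignment": "NumberOfReturns[0:0]=1"},
        {"type": "filters.smrf"},                       # ground classification
        {"type": "filters.range",                       # keep ground points only
         "limits": "Classification[2:2]"},
        {"type": "filters.poisson"},                    # surface reconstruction
        {"type": "writers.ply",
         "filename": "red-rocks-smrf-only-poisson.ply",
         "storage_mode": "default"},
    ]
}

with open("pipeline.json", "w") as f:
    json.dump(pipeline, f, indent=2)
```

Then run it with `pdal pipeline pipeline.json --verbose 8` as before.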

Mesh Results

The mesh doesn't look right.

Greedy Projection

The issue seems to be that the points are arranged in a sequential fashion.


It looks nicer with depth: 10 (the default is 8). The ply file is 84M.

To go a little more "detailed", I put the depth to 12. The file went from 84M to 789M, which is definitely overkill for 3D printing.
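That jump is expected: Poisson reconstruction samples an octree of depth d, an effective 2^d grid, and the number of cells along a 2D surface grows roughly 4x per extra level. A quick back-of-the-envelope check:

```python
# Octree depth d gives an effective 2^d grid; cells that touch a
# 2D surface scale roughly as (2^d)^2 = 4^d.
ratio_cells = 4 ** 12 / 4 ** 10   # depth 10 -> 12: up to ~16x more surface cells
ratio_file = 789 / 84             # observed PLY growth
print(ratio_cells, round(ratio_file, 1))  # 16.0 9.4
```

The observed ~9.4x growth sits comfortably inside that ~16x worst case.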

Grid Projection

Program failed to compute the grid projection.

 pdal pipeline dtm-gdal.json --verbose 8 
(PDAL Debug) Debugging...
(pdal pipeline Debug) Attempting to load plugin '/usr/local/lib/'.
(pdal pipeline Debug) Loaded plugin '/usr/local/lib/'.
(pdal pipeline Debug) Initialized plugin '/usr/local/lib/'.
(pdal pipeline readers.las Debug) GDAL debug: OGRSpatialReference::Validate: No root pointer.
(pdal pipeline readers.las Debug) GDAL debug: OGRSpatialReference::Validate: No root pointer.
(pdal pipeline readers.las Debug) GDAL debug: OGRSpatialReference::Validate: No root pointer.
(pdal pipeline Debug) Executing pipeline in standard mode.
(pdal pipeline filters.gridprojection Debug) 		Process GridProjectionFilter...
[pcl::GridProjection::getBoundingBox] Size of Bounding Box is [5.500000, 6.000000, 5.000000]
[pcl::GridProjection::getBoundingBox] Lower left point is [-2.500000, -2.500000, -2.500000]
[pcl::GridProjection::getBoundingBox] Upper left point is [3.000000, 3.500000, 2.500000]
[pcl::GridProjection::getBoundingBox] Padding size: 3
[pcl::GridProjection::getBoundingBox] Leaf size: 0.500000
(pdal pipeline filters.gridprojection Debug) 		3141373 before, 180 after
(pdal pipeline filters.gridprojection Debug) 		180
double free or corruption (!prev)
fish: “pdal pipeline dtm-gdal.json --v…” terminated by signal SIGABRT (Abort)

Cura 3D Print Slice

Cura can take STL inputs. Converting the PLY into STL is simple.

sudo apt install openctm-tools

Then ctmconv red-rocks-smrf-only-delaunay.ply red-rocks-smrf-only-delaunay.stl can convert ply to stl

ctmviewer red-rocks-smrf-only-delaunay.ply visualizes the ply, which is what I used above.

Poisson and Delaunay surface models, side by side.

Looks like the Poisson is prettier.

I'll continue writing this later, once I have something printed. 🙂
