Converting PLY to STL in Meshlab

Meshlab can do this, if you don't have access to a Linux machine where you could use

ctmconv red-rocks-smrf-only-delaunay.ply red-rocks-smrf-only-delaunay.stl

Meshlab seems great; it may even be as useful as Blender.

On Windows 10, the default viewer can open the STL.

I'm going to use the Creality Slicer (which is based on Cura). It doesn't quite fit.

This seems reasonable to print.

Will fire up the printer and see!

Converting Point Cloud to 3D Surface Map

Source Data

Looking at Red Rocks.

PDAL pipeline

# This is a hjson file, https://hjson.org/
# Linux bash
#GET=https://github.com/hjson/hjson-go/releases/download/v3.0.0/linux_amd64.tar.gz
# macOS bash
#GET=https://github.com/hjson/hjson-go/releases/download/v3.0.0/darwin_amd64.tar.gz
# Install
#curl -sSL $GET | sudo tar -xz -C /usr/local/bin
# Translate to Json
#hjson -j pipeline.hjson > pipeline.json
#pdal pipeline pipeline.json --verbose 8

{
  pipeline:
  [
# Input
    {
      # read from our ept server
      # up to 0.5m resolutions
      # type: readers.ept
      # bounds: ([802000, 802500], [2493000, 2493500])
      # filename: http://localhost:8080/ept.json
      filename: red-rocks.laz
      # filename: http://na.entwine.io/red-rocks/ept.json
      # resolution: 0.5
    }
#     {
#       # read from our las file
#       type: readers.las
#       filename: small500-no-outliers.laz
#     }


# Filters
    {
      # adds a classification value of 7 to the noise points
      type: filters.outlier
      # method: radius
      # radius: 1.0
      # min_k: 8 # min number of neighbors in radius

      method: statistical
      mean_k: 8
      multiplier: 3

    }

    {
      # voxel-based sampling filter
      # reduce the size of the pc
      # cell size of 0.2 meters in xyz
      type: filters.voxelcenternearestneighbor
      cell: 0.1
    }

    {
      # Need to assign the point cloud dimension NumberOfReturns a value of 1
      # Otherwise: "No returns to process."
      type: filters.assign
      assignment: NumberOfReturns[0:0]=1
    }

    {
      # Ground classification, ignore the noise points
      type: filters.smrf
      ignore: Classification[7:7]
    }

    {
      # only allow ground classified points
      type: filters.range
      limits: Classification[2:2]
    }

    {
      # OPTIONAL
      # turn this into a DEM 3D model
      # do not use multiple types
      # type: filters.delaunay
      type: filters.poisson
    }


# Output

# # OPTIONAL PLY IF DEM
    {
      # write to ply
      type: writers.ply
      filename: red-rocks-smrf-only-poisson.ply
      faces: true
      storage_mode: default
    }
# Output
    # {
    #   # write to laz
    #   type:writers.las
    #   filename: red-rocks-ground.laz
    # }
  ]
}

https://gist.github.com/sunapi386/9a9ece302d646ee80a72fc494423a633
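For intuition, the statistical outlier method configured above (mean_k, multiplier) can be sketched in pure Python. This is an illustration of the idea, not PDAL's implementation: each point's mean distance to its mean_k nearest neighbors is compared against the global mean of those distances plus multiplier standard deviations.

```python
import math

def statistical_outliers(points, mean_k=8, multiplier=3.0):
    """Return indices of points considered noise: for each point, take the
    mean distance to its mean_k nearest neighbors, then flag points whose
    mean distance exceeds global_mean + multiplier * global_stddev."""
    mean_dists = []
    for i, p in enumerate(points):
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        ds = ds[:mean_k]
        mean_dists.append(sum(ds) / len(ds))

    mu = sum(mean_dists) / len(mean_dists)
    var = sum((d - mu) ** 2 for d in mean_dists) / len(mean_dists)
    threshold = mu + multiplier * math.sqrt(var)
    return [i for i, d in enumerate(mean_dists) if d > threshold]

# A tight cluster plus one far-away point: only the far point is flagged.
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (100, 100, 100)]
print(statistical_outliers(pts, mean_k=3, multiplier=1.0))  # → [5]
```

PDAL then writes Classification=7 onto the flagged points rather than dropping them, which is why filters.smrf above is told to ignore Classification[7:7].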

Mesh Results

filters.delaunay
The mesh doesn't look right.

Greedy Projection

filters.greedyprojection
The issue seems to be that the points are arranged in a sequential fashion.

Poisson

Looks nicer with depth: 10 (default 8). The ply file is 84M.

To go a little more "detailed", I put depth to 12. The file went from 84M to 789M. And it is definitely overkill for 3D printing.

Grid Projection

Program failed to compute the grid projection.

 pdal pipeline dtm-gdal.json --verbose 8 
(PDAL Debug) Debugging...
(pdal pipeline Debug) Attempting to load plugin '/usr/local/lib/libpdal_plugin_filter_gridprojection.so'.
(pdal pipeline Debug) Loaded plugin '/usr/local/lib/libpdal_plugin_filter_gridprojection.so'.
(pdal pipeline Debug) Initialized plugin '/usr/local/lib/libpdal_plugin_filter_gridprojection.so'.
(pdal pipeline readers.las Debug) GDAL debug: OGRSpatialReference::Validate: No root pointer.
(pdal pipeline readers.las Debug) GDAL debug: OGRSpatialReference::Validate: No root pointer.
(pdal pipeline readers.las Debug) GDAL debug: OGRSpatialReference::Validate: No root pointer.
(pdal pipeline Debug) Executing pipeline in standard mode.
(pdal pipeline filters.gridprojection Debug) 		Process GridProjectionFilter...
[pcl::GridProjection::getBoundingBox] Size of Bounding Box is 1
[pcl::GridProjection::getBoundingBox] Lower left point is [-2.500000, -2.500000, -2.500000]
[pcl::GridProjection::getBoundingBox] Upper left point is 2
[pcl::GridProjection::getBoundingBox] Padding size: 3
[pcl::GridProjection::getBoundingBox] Leaf size: 0.500000
(pdal pipeline filters.gridprojection Debug) 		3141373 before, 180 after
(pdal pipeline filters.gridprojection Debug) 		180
double free or corruption (!prev)
fish: “pdal pipeline dtm-gdal.json --v…” terminated by signal SIGABRT (Abort)

Cura 3D Print Slice

Cura can take STL inputs. Converting the PLY into STL is simple.

sudo apt install openctm-tools

Then ctmconv red-rocks-smrf-only-delaunay.ply red-rocks-smrf-only-delaunay.stl converts the PLY to STL.


ctmviewer red-rocks-smrf-only-delaunay.ply visualizes the PLY, which is what I used above.

Poisson and Delaunay surface models, side-by-side.

Looks like the Poisson is prettier.

I'll continue writing this later, until I have something printed. 🙂


PDAL Voxel Center Nearest Neighbor

https://pdal.io/stages/filters.voxelcenternearestneighbor.html#filters-voxelcenternearestneighbor

The VoxelCenterNearestNeighbor filter is a voxel-based sampling filter. The input point cloud is divided into 3D voxels at the given cell size. For each populated voxel, the coordinates of the voxel center are used as the query point in a 3D nearest neighbor search. The nearest neighbor is then added to the output point cloud, along with any existing dimensions.
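As a sketch of what the quoted filter describes (an illustration, not PDAL's actual implementation): bucket the points into voxels of the given cell size, then for each occupied voxel keep the one input point nearest the voxel center.

```python
import math
from collections import defaultdict

def voxel_center_nearest_neighbor(points, cell):
    """Keep, per occupied voxel, the input point nearest that voxel's center."""
    voxels = defaultdict(list)
    for p in points:
        # integer voxel index along each axis
        key = tuple(math.floor(c / cell) for c in p)
        voxels[key].append(p)
    kept = []
    for key, pts in voxels.items():
        center = tuple((k + 0.5) * cell for k in key)
        kept.append(min(pts, key=lambda p: math.dist(p, center)))
    return kept

pts = [(0.1, 0.1, 0.1), (0.45, 0.55, 0.5), (0.9, 0.9, 0.9), (1.2, 0.2, 0.3)]
out = voxel_center_nearest_neighbor(pts, cell=1.0)
# Three points fall in voxel (0,0,0); only the one nearest (0.5,0.5,0.5) survives.
print(sorted(out))  # → [(0.45, 0.55, 0.5), (1.2, 0.2, 0.3)]
```

Because only one point per voxel survives, the cell size directly sets the output spacing, which is why the 1.0 m run below is so much smaller than the 0.1 m one.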

Notice the red dots are much more sparse than the gray intensity dots. Red dots are separated 1.0 meters and gray are 0.1 meters.

To generate this I converted with PDAL and then used Potree to visualize.

pdal pipeline $HOME/voxcnn-1.0.json 

voxcnn-1.0.json
[
  "801403-802580-2493384-2494335.laz",
  {
    "type": "filters.voxelcenternearestneighbor",
    "cell": 1.0
  },
  "801403-802580-2493384-2494335-voxelcenternearestneighbor-1.0.laz"
]

-rw-rw-r-- 1 jsun jsun 65M Apr 11 21:01 801403-802580-2493384-2494335-voxelcenternearestneighbor.laz
-rw-rw-r-- 1 jsun jsun 8.1M Apr 11 21:04 801403-802580-2493384-2494335-voxelcenternearestneighbor-1.0.laz

Airbnb Occupancy Tax Turbotax

Airbnb already pays occupancy taxes for you, so you can deduct them as a rental expense.

https://ttlc.intuit.com/questions/4567464-i-run-an-airbnb-in-maine-and-they-send-occupancy-taxes-to-the-state-how-do-i-show-my-airbnb-income-as-exempt-since-the-tax-has-already-been-paid

I run an Airbnb in Maine and they send occupancy taxes to the state. How do I show my Airbnb income as exempt since the tax has already been paid?

Asked by jlouisecarl
TurboTax Premier
 2 months ago


Occupancy taxes for your Airbnb are a completely separate tax from your income tax that you are filing. Having paid the occupancy tax through Airbnb does not make your income exempt from income tax.  All of the rent collected must still be reported as income.  However, you will be able to claim a rental expense for the occupancy tax that has been paid on your behalf since it was taken out of your rental income.

TurboTaxAnnetteB , EA
 TurboTax TaxPro  2 months ago


Dual Networks Profiles Ubuntu 18.04

The setup

I have two network adapters, one on a 10G network and one on a 1G network. To connect to both networks, you must have two profiles.

Two profiles: 10G and 1G profile, across 2 adapters.

If you have a single profile, then changing settings for one network adapter will also affect the other. In Ubuntu's Network UI, this is non-obvious.

Once set up correctly, you can verify the configuration by pinging both networks.

jsun@computer ~ [1]> ping 192.168.1.200
PING 192.168.1.200 (192.168.1.200) 56(84) bytes of data.
64 bytes from 192.168.1.200: icmp_seq=1 ttl=128 time=0.338 ms
64 bytes from 192.168.1.200: icmp_seq=2 ttl=128 time=0.172 ms
^C
--- 192.168.1.200 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1028ms
rtt min/avg/max/mdev = 0.172/0.255/0.338/0.083 ms

jsun@computer ~> ping 10.10.50.1
PING 10.10.50.1 (10.10.50.1) 56(84) bytes of data.
64 bytes from 10.10.50.1: icmp_seq=1 ttl=64 time=0.585 ms
64 bytes from 10.10.50.1: icmp_seq=2 ttl=64 time=0.226 ms
^C
--- 10.10.50.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1022ms
rtt min/avg/max/mdev = 0.226/0.405/0.585/0.180 ms

Space Filling Curves vs. Octree

Octree

An octree is a tree data structure in which each internal node has exactly eight children. Octrees are most often used to partition a three-dimensional space by recursively subdividing it into eight octants. Octrees are the three-dimensional analog of quadtrees.

We humans mostly deal with low dimensional data, so we give this type of structure some names:

  • 1-D data: binary tree
  • 2-D data: quadtree
  • 3-D data: octree
  • K-D data: k-d tree, or k-dimensional tree, is a data structure used in computer science for organizing some number of points in a space with k dimensions.

These are all tree-like data structures, which are very useful for range and nearest neighbor searches.

Octree example
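A unifying detail across these trees: each level splits space in half along every axis, so a node has 2^k children, and the child containing a point can be computed with one bit per dimension. A minimal sketch (illustrative, not from any particular library):

```python
def child_index(point, center):
    """Which of the 2^k children of a node contains `point`?
    One bit per dimension: bit d is set if the point lies on the
    high side of the node's center along axis d."""
    idx = 0
    for d, (p, c) in enumerate(zip(point, center)):
        if p >= c:
            idx |= 1 << d
    return idx

# 3-D (octree): 8 possible children, indices 0..7.
print(child_index((1, -1, 1), (0, 0, 0)))   # high x, low y, high z → 0b101 = 5
# 2-D (quadtree): 4 possible children, indices 0..3.
print(child_index((-1, 1), (0, 0)))         # low x, high y → 0b10 = 2
```

Descending the tree is just repeating this computation with ever-smaller cell centers, which is what makes range and nearest neighbor queries logarithmic in the typical case.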

Space Filling Curves

Space filling curves refer to a class of functions that map k-dimensional data to 1 dimension.

Meaning a class of functions that can map k-dimensional data into a single number n

f(n_1, n_2, ..., n_k) -> n

The caveat is that there is a restriction on the numbers it maps: since space filling curves are fractal functions, the mapping cannot be extended to the reals, but only to the binary fractions (a subset of the rationals). This still lets you get arbitrarily close to any number you want (and cover all the IEEE floating points).

Class of functions means that there are many functions that can be considered a space filling curve. It is common to use the Hilbert and Z-order curves.
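Of the two, the Z-order (Morton) curve is the simplest to implement: interleave the bits of the coordinates. A minimal sketch, assuming unsigned integer coordinates (e.g. quantized voxel indices):

```python
def morton_encode(coords, bits=21):
    """Interleave the bits of k unsigned-integer coordinates into one
    integer. Nearby points tend to get nearby codes, which is why
    sorting by Morton code groups spatially close points together."""
    k = len(coords)
    code = 0
    for b in range(bits):
        for d, c in enumerate(coords):
            code |= ((c >> b) & 1) << (b * k + d)
    return code

# 2-D example: x = 0b11, y = 0b01 interleave to 0b0111 = 7.
print(morton_encode((3, 1)))  # → 7
```

With bits=21 per axis, three coordinates pack into 63 bits, so a 3-D Morton code fits in a single 64-bit integer, a common choice for point cloud indexing.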

A visualization of what a 3D space filling curve may look like, using a Hilbert curve function.

Compare and contrast

There are certain optimal use cases for each of these.

  • Trees have the benefit of being able to limit the depth of your queries, which makes them especially useful in computer graphics, since you can stop querying for points that you don't need.
  • Space filling curves have the benefit of faster data modification, because the location to store a datum can be calculated directly, whereas trees carry the cost of potentially rebalancing subtrees on create/update/delete.

Other structures?

There are some variants on structures for storing multi-dimensional data.

R-Tree

It's yet another type of tree.

Visualization of an R*-tree for 3D points using ELKI.

The Hilbert R-tree is a variant of the R-tree that achieves better performance.

Data Stores

I won't go in too much detail because this is out of scope, but in programming there are databases and data stores which can handle large amount of high dimensional data.

First, a distinction. A database can handle complex queries. A data store can be dumber: a simpler storage format that won't handle things like transactions for you.

An analogy: a database is like an accountant, whom you can ask for certain data and operations, such as "give me all last year's data for people with last names starting with T", whereas a data store is like a library: you have to go find and collect the data yourself, but it's stored in an organized fashion.

Mount locked partition with same volume group name

Background/Setup

  • I have two physical 1TB disks with identical setup.
  • Both are encrypted.
  • I unlocked and booted off one of them.
  • The other disk is still locked at this point.
  • I am using fish shell.

1. Identify which disk you want to unlock

root@computer ~# lsblk -f
sda                                                                                                    
├─sda1                                        vfat              7F3B-9703                              /boot/efi
├─sda2                                        ext2              b6220db2-916c-4322-b64b-c86769f6b18b   /boot
└─sda3                                        crypto_LUKS       3a07b8a9-3e75-41a9-88d4-3be937181613   
  └─luks-3a07b8a9-3e75-41a9-88d4-3be937181613 LVM2_member       uCvHaW-RlQc-PT2d-cBg2-SWyY-WS0A-ZCEvA6 
    ├─ubuntu--vg-root                         ext4              a78662e2-d582-4faa-88b6-b6db5e23aed2   /
    └─ubuntu--vg-swap_1                       swap              5c4ba6cc-4735-4417-9e87-74a76a7fc415   [SWAP]
sdb                                                                                                    
├─sdb1                                        vfat              1EC8-1F58                              
├─sdb2                                        ext2              7108fe1d-dc37-4213-a3bc-8070a8f84f31   /media/jsun/7108fe1d-dc37-4213-a3bc-8070a8f84f31
└─sdb3                                        crypto_LUKS       fc560468-588c-4455-af2c-295998c41c88   

We see that sdb3 is the unmounted target.

2. Unlock the partition

root@computer ~# udisksctl unlock -b /dev/sdb3 
Passphrase:
Unlocked /dev/sdb3 as /dev/dm-3.

See which one is the new unlocked

root@computer ~# ls -la /dev/mapper/ | grep dm-3 
lrwxrwxrwx 1 root root 7 Mar 22 17:32 luks-fc560468-588c-4455-af2c-295998c41c88 -> ../dm-3

luks-fc560468-588c-4455-af2c-295998c41c88 is our target. Let's remember that with a variable.

set target luks-fc560468-588c-4455-af2c-295998c41c88

3. Rename the duplicate volume group

The VG Name of both drives is the same; this is problematic and will prevent you from being able to mount the drives both at the same time.

root@computer ~# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/luks-3a07b8a9-3e75-41a9-88d4-3be937181613
VG Name ubuntu-vg
PV Size 930.53 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238216
Free PE 0
Allocated PE 238216
PV UUID uCvHaW-RlQc-PT2d-cBg2-SWyY-WS0A-ZCEvA6
--- Physical volume ---
PV Name /dev/mapper/luks-fc560468-588c-4455-af2c-295998c41c88
VG Name ubuntu-vg
PV Size 930.53 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238216
Free PE 0
Allocated PE 238216
PV UUID lbvecI-E6w6-fpuj-P61G-5NCb-obOK-ooivpe

Get the UUID of the volume group

Run pvs -o +vg_uuid and note the VG UUID of the just-unlocked drive (the luks-fc560468-... physical volume).

The VG UUID of the second drive is TJgeFw-xDcf-TaJ2-07dL-RlUQ-yCsb-zGGp4v. Let's remember that with a variable.

set uuid TJgeFw-xDcf-TaJ2-07dL-RlUQ-yCsb-zGGp4v

Change the volume group

I'm going to use a generated UUID as the new name, but you can use whatever name you want.

root@computer ~# uuidgen
1ec80451-b05b-4d59-94c1-f1ad70b24255
root@computer ~# vgrename $uuid 1ec80451-b05b-4d59-94c1-f1ad70b24255
Processing VG ubuntu-vg because of matching UUID TJgeFw-xDcf-TaJ2-07dL-RlUQ-yCsb-zGGp4v
Volume group "TJgeFw-xDcf-TaJ2-07dL-RlUQ-yCsb-zGGp4v" successfully renamed to "1ec80451-b05b-4d59-94c1-f1ad70b24255"
root@computer ~# pvs -o +vg_uuid
PV VG Fmt Attr PSize PFree VG UUID
/dev/mapper/luks-3a07b8a9-3e75-41a9-88d4-3be937181613 ubuntu-vg lvm2 a-- 930.53g 0 TAva2M-zNnV-Wh5h-3YcY-Vc5U-W4se-TI27Du
/dev/mapper/luks-fc560468-588c-4455-af2c-295998c41c88 1ec80451-b05b-4d59-94c1-f1ad70b24255 lvm2 a-- 930.53g 0 TJgeFw-xDcf-TaJ2-07dL-RlUQ-yCsb-zGGp4v

Check/notice the new volume group name 1ec80451-b05b-4d59-94c1-f1ad70b24255.

Confirm the change

root@computer ~# vgchange -a y
2 logical volume(s) in volume group "ubuntu-vg" now active
2 logical volume(s) in volume group "1ec80451-b05b-4d59-94c1-f1ad70b24255" now active

Remember to rename your volume group back to ubuntu-vg if you want the volume to still be bootable.

Mount

root@computer ~# mkdir /media/badboy
root@computer ~# mount /dev/1ec80451-b05b-4d59-94c1-f1ad70b24255/root /media/badboy
root@computer ~# cd /media/badboy/
root@computer /m/badboy# ls
bin/ cdrom/ etc/ initrd.img@ lib/ lib64/ media/ opt/ root/ sbin/ srv/ tmp/ var/
boot/ dev/ home/ initrd.img.old@ lib32/ lost+found/ mnt/ proc/ run/ snap/ sys/ usr/ vmlinuz@

You can now access your data.

Merge EPT Entwine Point Tile Maps

The EPT format is a storage method for point clouds, based on an octree structure. The encoding of the point cloud is up to the user: las/laz, binary, or a custom format. Say you store it in laz; what EPT does is generate the octree that manages how those laz files are stored.

Merging EPT

Given two geographically separated point clouds, it is possible to merge them into the same EPT structure, given they use the same frame of reference. Because Potree doesn't handle latitude and longitude rendering (they are angles, not Euclidean coordinates), you have to use a Euclidean system, such as UTM coordinates.

Merging two point clouds together

You must first specify a large bounds for the initial build, and add more files later like this:

entwine build -i ~/data/xyz -o ~/entwine/xyz -b "[100, 100, 0, 400, 400, 200]"
entwine build -i ~/data/abc -o ~/entwine/xyz

See Entwine issue 109

Caveats

A caveat is that when you're running the initial build, you need to specify the bounding cube yourself, because by EPT design the octree cannot be rebalanced (without recomputing over all points).

Another caveat is that the UTM zone is not encoded in the laz file, so a UTM-based merge cannot be extended across geographical zones. Thus we may have to store xyz points in lat/lng/alt format.

Yet another caveat is that the source IDs for these points are going to collide. E.g. if file1.laz has IDs 0-100 and file2.laz has IDs 0-10, the IDs 0-10 would collide. A workaround would be to assign globally unique IDs.
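The workaround can be sketched in plain Python (a hypothetical helper, not part of Entwine or PDAL): offset each file's IDs by the running maximum so the merged IDs never collide.

```python
def assign_global_ids(files):
    """files: list of per-file ID lists.
    Returns one merged list where each file's IDs are shifted past the
    maximum ID seen so far, so no two files' IDs collide."""
    merged, offset = [], 0
    for ids in files:
        merged.extend(i + offset for i in ids)
        if merged:
            offset = max(merged) + 1
    return merged

# file1 has IDs 0-3, file2 has IDs 0-2: file2's IDs are shifted to 4-6.
print(assign_global_ids([[0, 1, 2, 3], [0, 1, 2]]))  # → [0, 1, 2, 3, 4, 5, 6]
```

In practice the same remapping would be applied to the PointSourceId dimension of each laz file before merging, with the original file recorded elsewhere.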