Gary Chien

Overview

What I did: I visualized a lecture hall point cloud using ParaView and an online LIDAR point cloud viewer (lidarview.com).

Data source: kos.informatik.uni-osnabrueck.de/3Dscans/

How long it took: ~2 hours total

One quirk I noticed: the visualized points become sparse while the view is being adjusted in ParaView. This is ParaView's level-of-detail rendering kicking in during camera movement; the full point set reappears once the camera stops.

The Process

I found a point cloud dataset of a university lecture hall in CSV format, where the first three columns correspond to the x, y, and z coordinates. To make the CSV file readable by ParaView, I added a header line at the top to label the x, y, and z columns:

I gave arbitrary labels to the irrelevant last two columns, since every column needs a name in the header row.
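The header line ended up looking something like the following (d1 and d2 here are hypothetical placeholders; any names work):

    x,y,z,d1,d2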

I then opened this CSV file in ParaView. In the Properties panel, I made sure that "Have Headers" was checked, and clicked "Apply". Because the CSV file is so large (838.5 MB), it took about five minutes for ParaView to finish processing it.

The CSV data was then displayed in a table:

I went to the menu and selected Filters->Alphabetical->Table to Points. In the Properties panel, I assigned the appropriate X/Y/Z column headers and clicked Apply:
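For anyone who wants to skip the GUI clicks, the same two steps can be scripted with ParaView's Python interface (paraview.simple). This is a minimal sketch, assuming a file named scan.csv and the header labels from above; property names can vary slightly between ParaView versions:

    from paraview.simple import *

    # Read the CSV; HaveHeaders corresponds to the "Have Headers" checkbox
    reader = CSVReader(FileName=['scan.csv'])
    reader.HaveHeaders = 1

    # Equivalent of Filters->Alphabetical->Table to Points
    points = TableToPoints(Input=reader)
    points.XColumn = 'x'
    points.YColumn = 'y'
    points.ZColumn = 'z'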

I then closed out of the "Layout #1" tab (which showed the table from before), and selected "Render View" under "Create View":

This yielded the following visual:

As you can see, the points are a little too densely packed, which made it difficult to see the interior of the point cloud (such as the chairs of the lecture hall). After playing around with ParaView's visualization settings, I found that setting the opacity of the points to 0.005 vastly improved the visualization:
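Continuing the pvpython sketch from above, the render view and the opacity tweak look roughly like this:

    # Show the points in a render view; a very low opacity lets the
    # interior structure show through the dense outer points
    view = CreateRenderView()
    display = Show(points, view)
    display.Opacity = 0.005
    Render()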

Once I got this working, I saved the data as a .vtk file and headed down to the VR lab, where I was able to visualize the point cloud with a Vive.
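In scripted form, the export is a single call (the output file name here is just an example):

    # Write the points out as a legacy VTK file
    SaveData('lecture_hall.vtk', proxy=points)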

Because ParaView took such a long time to load the CSV file, I wondered whether other visualization methods would be faster. I found an online LIDAR point cloud viewer, which takes XYZ files as input. Converting the CSV file to XYZ was simply a matter of stripping the last two columns from every row with a short Python script (sketched below). Once this was done, I uploaded the file to the website. A cool feature of the site is that it renders the point cloud incrementally as it reads the XYZ file, so you can watch the point cloud slowly grow. However, one downside became immediately apparent: the viewer seems incapable of loading extremely large files. I was only able to get about 50% of the point cloud data to load before the visualizer crashed. The result can be seen below:
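For reference, the conversion script amounts to keeping the first three fields of every row. A minimal sketch, assuming comma-separated input in scan.csv and space-separated output in scan.xyz (both file names are hypothetical):

    import csv

    # Keep only the x, y, z columns; write space-separated XYZ lines
    with open('scan.csv', newline='') as src, open('scan.xyz', 'w') as dst:
        rows = csv.reader(src)
        next(rows)  # skip the header line added for ParaView
        for row in rows:
            dst.write(' '.join(row[:3]) + '\n')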