Ross Briden journal

Activity Log

  • 1/25/19, 7:00-8:00 AM, created a journal page; this took longer than I expected because I had trouble editing the site.

  • 1/26/19, 7:00-8:00 AM; 9:00-11:00 AM, installed Paraview; again, this took longer than I expected. The main issue was that the install instructions weren't correct for my version of Linux. So, I tried installing Paraview via apt-get, which worked but only installed an older version of Paraview. Finally, I compiled Paraview from its source code, which ended up being the best solution.

  • 1/28/19, 7:00-9:30 PM, Read the History of VR and several other articles on the wiki, researched .las files and possible software/libraries for processing these files, and updated the description of .las files on the wiki based on my research. I also updated the wiki entry on Unity3D, including a brief tutorial on how to install Unity3D for Ubuntu. Finally, I started to install MinVR.

  • 1/30/19, 9:00-11:30 PM, Originally, I planned on working with ecology data for my project (more on that shortly), but there are some other ideas that may be interesting to explore too. While broad, topology/geometry may be fun to experiment with; for instance, consider the hypersphere packing problem; would it be easier to solve such problems if VR visualization tools were developed? Or consider Dugan Hammock, who explores the relationship between 4D geometries and quantum gravity; could VR aid exploratory learning in that domain? Another interesting domain is medicine. Medical imaging, in particular, seems to be ripe for VR visualization. For instance, consider the process of segmenting brain tumors in MRI scans; typically, this task is performed by neural networks and other algorithms; however, doctors often refine these segmentations, so would it be beneficial to visualize these brain scans as either a volume rendering or a surface rendering? dicomvr is a good example of well-executed medical imaging for VR. Nevertheless, it seems that (a) VR interfaces are still too cumbersome, at least for non-technical users, and (b) VR graphics are often quite poor. I think that AR may resolve some of these issues, but it's hard to tell.

  • 1/30/19, 9:00-11:30 PM (continued), Aside from project ideas, I downloaded the ecology data from the course website; however, I'm still in the process of parsing it. It's unclear what each file represents since they have names like Zofin_04162018_-744000_-1202250. I'm using PyLidar to parse the data into a VTK file, so I can visualize it in Paraview.

  • 2/1/19, 10:00-11:00 PM, Finished building MinVR. Hopefully, it will run nicely on Linux.

  • 2/3/19, 8:00-11:00 PM, Tested various .las point cloud viewers. For Linux, Displaz seems to be a decent option. It is a pretty hackable piece of software; however, the process of installing it is quite painful. No .deb or binary files are provided, so you have to build it from source, a non-trivial endeavor. In particular, Displaz requires Qt, which adds a whole layer of complexity to the build process. Luckily, after reading through a couple of forum posts on GitHub, it appears that you can install Displaz via Flatpak. Nevertheless, once the program is up and running, it visualizes .las files pretty easily. Also, lidarview.com is an excellent alternative. Even though it's web-based, it runs on top of WebGL, so it is pretty fast! However, it's not open-source, so if you experience any issues, you're pretty much out of luck.

  • 2/4/19, 8:00-10:00 PM, Tried installing FugroViewer, a popular LiDAR viewer; however, it's Windows-only, so it didn't run on my Linux machine. Also, I'm in the process of implementing a pipeline for converting .las files to .vtk files, so we can view our LiDAR data in Paraview. This pipeline uses laspy for loading .las files. I'm planning on rendering the forest scene as a volume rendering, or maybe a surface rendering. Then, I would need to convert this model into something compatible with OpenGL and MinVR; I'm unsure what format that might be.
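
A rough sketch of what that .las-to-.vtk step could look like, assuming laspy 1.x and the legacy ASCII VTK polydata format (the file names and thinning stride below are placeholders):

```python
# Sketch: convert a .las point cloud to a legacy ASCII .vtk file for Paraview.
# Assumes laspy 1.x; the file names and thinning stride are placeholders.
import numpy as np
from laspy.file import File

def las_to_vtk(las_path, vtk_path, stride=10):
    las = File(las_path, mode="r")
    points = np.vstack((las.x, las.y, las.z)).T[::stride]  # keep every stride-th point
    las.close()

    with open(vtk_path, "w") as out:
        out.write("# vtk DataFile Version 3.0\n")
        out.write("LiDAR point cloud\nASCII\nDATASET POLYDATA\n")
        out.write("POINTS {} float\n".format(len(points)))
        for x, y, z in points:
            out.write("{} {} {}\n".format(x, y, z))
        # One vertex cell per point so Paraview actually renders the cloud.
        out.write("VERTICES {} {}\n".format(len(points), 2 * len(points)))
        for i in range(len(points)):
            out.write("1 {}\n".format(i))

las_to_vtk("input.las", "forest.vtk")
```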

  • 2/4/19, 8:00-10:00 PM (continued), Now, I have three project ideas I'm considering: ecology/LiDAR visualization in the YURT, topology/geometry visualization for VR or the YURT, or something related to neuroscience. For the ecology project, I would, as I mentioned previously, build a pipeline to convert .las files into .vtk files, render the LiDAR data as a volume rendering, and render that model in OpenGL. For the topology project, I'm not quite sure what I would render, so I think I would need to collaborate with a mathematics professor to elaborate on this idea. As for the neuroscience visualization, it might be interesting to visualize models created with CLARITY imaging techniques; I'm wondering if there are any neuroscience professors at Brown who would be interested in such a project. I believe that neuroscience would be an interesting domain for VR visualization since the brain and its functionality are inherently volumetric. I will follow up on this last idea.

  • 2/5/19, 7:00 - 10:00 PM, Some updates regarding visualizing CLARITY image data. Several labs have released CLARITY data, but the Deisseroth lab at Stanford appears to have some of the most accessible datasets. In particular, I'm working with an image of a mouse brain. The CLARITY scan is stored as a collection of .tif files, so I need to convert these individual images into a volume/surface rendering. I'm not sure exactly how to do that, but I'm working on a solution. Also, I plan to add a page to the wiki on CLARITY images.
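
One possible way to build that volume, assuming the tifffile package (not something I've settled on yet) and a directory of equally sized slices, is to stack the .tif files into a 3D NumPy array:

```python
# Sketch: stack a directory of CLARITY .tif slices into a single 3D volume.
# Assumes the tifffile package; the directory and output names are placeholders.
import glob
import numpy as np
import tifffile

slice_paths = sorted(glob.glob("clarity_mouse_brain/*.tif"))
volume = np.stack([tifffile.imread(p) for p in slice_paths], axis=0)
print(volume.shape)  # (num_slices, height, width)

# Write the whole stack back out as one multi-page .tif for a volume renderer.
tifffile.imwrite("mouse_brain_volume.tif", volume)
```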

  • 2/6/19, 8:00 - 11:00 PM, I'm still debating whether to work with diffusion MRI / CLARITY image data or .las files and attempt some manifold learning visualization (need to follow up on this). Since the former is more concrete, I will draft a pre-project plan using that idea.

Pre-project Plan

  • Title: Visualizing CLARITY and Diffusion MRI data

  • Activities:

    • Download and process CLARITY image dataset.

    • Find new diffusion MRI datasets.

    • Add information about CLARITY and diffusion MRI datasets to the wiki.

    • Add and compare software for visualizing these formats.

    • Learn how to convert CLARITY imaging data to a volume rendering.

    • Install Unity and download necessary software to support VR visualization in Unity.

    • Convert diffusion MRI data to volume renderings and visualize them in the HTC Vive.

    • Port CLARITY imaging renders to the YURT?

    • Compare Paraview, MinVR, and VTK.

  • Milestones:

    • 2/12: Convert CLARITY imaging data to a usable format and render data in Paraview.

    • 2/14: Add volume renderings to Unity and build a SteamVR app to visualize data in the HTC Vive.

    • 2/21: Finish VR visualization and get user feedback from fellow classmates.

  • Deliverables:

    • New wiki entries on CLARITY imaging and diffusion MRI imaging.

    • An app that displays a high-resolution volume rendering of CLARITY image data.

    • A survey on people's reactions to CLARITY imaging.

    • Comparisons of different neuroscience imaging software, in terms of speed and ease of use.

New Pre-project Plan

  • Title: Visualizing Ecology LiDAR data in Virtual Reality and YURTs

  • Activities:

    • Examine the Kellner lab's data and determine possible visualization methods.

    • Compare and contrast various LiDAR data processing tools and libraries in terms of usability, installation time, and rendering quality.

    • Explore different mesh rendering techniques for visualizing ecology data.

    • Document process of rendering LiDAR data in VR and add a possible tutorial to the wiki.

    • Explore methods for mapping LiDAR data into a continuous visualization; in other words, determine how to stitch LiDAR data into a cohesive visualization.

    • Create a tutorial for MinVR and DinoYURT.

  • Milestones:

    • 2/12/2019, Research Kellner Lab.

    • 2/14/2019, Convert LAS files to readable format for DinoYURT.

    • 2/21/2019, Finalize data conversion process, if necessary, and begin HTC Vive Visualization.

    • 2/26/2019, Finish HTC Vive Visualization.

    • 2/28/2019, Visualize preliminary ecology data in YURT.

    • 3/5/2019, Explore new LiDAR ecology samples from dataset.

    • 3/7/2019, Render new ecology samples in YURT.

    • 3/12/2019, Research algorithms for stitching ecology samples together.

    • 3/14/2019, Continue exploring stitching methods.

    • 3/19/2019, Implement a simple algorithm for stitching N ecology samples.

    • 3/21/2019, Render stitched images in the YURT.

    • 4/2/2019, Publish comparison between MinVR and Paraview for rendering LiDAR data.

  • Deliverables:

    • A VR visualization of LiDAR ecology data using the HTC Vive.

    • Comparisons of various LiDAR visualization and processing software.

    • Tutorials on MinVR and developing software for the YURT.

  • In-Class Activities:

    • Exploring volumetric rendering with LiDAR data.

      • Develop a short activity where we explore techniques for rendering and visualizing LiDAR data. We would most likely use Unity to visualize this data.

  • 2/12/19, 7:00 - 10:00 PM, Wrote a script file to convert LAS files to .out files! I will still need to modify it to scale to larger datasets.

  • 2/13/19, 2:00 - 3:00 PM, 10:30 - 12:20 PM, Finished the LAS-to-.out file conversion with normalization features. I scaled everything in the .las file to a fixed range of [-1, 1]. However, I think that may be too conservative, particularly along the z-axis. Also, expect massive data files after conversion! For instance, a 2.7 GB .las file inflates to a ~10 GB .out file. Conversion times are also a bit lengthy: approximately 10 minutes on an Intel i5-4300U with no multi-threading and a batch write size of 1,000,000 doubles.
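
For reference, a stripped-down version of the conversion looks something like the sketch below. It assumes laspy 1.x, and the .out layout (one whitespace-separated x y z triple per line) and batch size are my own choices rather than a fixed standard:

```python
# Sketch: convert a .las file to a plain-text .out file with [-1, 1] normalization.
# Assumes laspy 1.x; the per-line x y z layout and the batch size are assumptions.
import numpy as np
from laspy.file import File

BATCH = 1_000_000  # points written per batch

def normalize(points):
    # Map each axis independently to the range [-1, 1].
    lo, hi = points.min(axis=0), points.max(axis=0)
    return 2.0 * (points - lo) / (hi - lo) - 1.0

las = File("input.las", mode="r")
points = normalize(np.vstack((las.x, las.y, las.z)).T)
las.close()

with open("output.out", "w") as out:
    for start in range(0, len(points), BATCH):
        np.savetxt(out, points[start:start + BATCH], fmt="%.6f")
```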

  • 2/14/19, 8:00 - 9:00 AM, Added dimensional normalization to the .las conversion script and set up a .config file.

  • 2/14/19, 11:45 AM - 12:45 PM, edited journal and added project evaluation.

Total Time: 32.30 Hours

Project Evaluation

  • The proposed project clearly identifies deliverable additions to our VR Software Wiki: Agree. My project will contribute a series of tutorials on LiDAR visualization software, comparisons between Paraview and MinVR, and in-depth information on .LAS files.

  • The proposed project will inform future research, i.e., advancing human knowledge: Agree. My project targets a particular research application: deriving biological insights from LiDAR data. So, I believe that my project will help advance the state of ecology research, particularly for the Kellner lab.

  • The proposed project involves large data visualization along the lines of the "Data Types" wiki page and identifies the specific data and software that it will use: Agree. My project specifies concrete data types, including .las files, and seeks to visualize very large LiDAR files.

  • The proposed project has a realistic schedule with explicit and measurable milestones at least each week and mostly every class: Agree. I believe that my project goals are explicit and reasonably ambitious, with milestones every day of class and week.

  • The proposed project includes an in-class activity: Agree. I have included an in-class activity in my project proposal; however, it could be more concrete.

  • The proposed project has resources available with sufficient documentation: Agree. I have access to all of the necessary data and programs to implement my project.


Journal Evaluation

  • Journal activities are explicitly and clearly related to course deliverables: 4, Journal entries are reasonable in length, with often explicit and detailed descriptions.

  • Deliverables are described and attributed in wiki: 4, Within each journal entry, my deliverables are clearly stated and are either relevant to the wiki and/or my project.

  • Report states total amount of time: 4, Total amount of time is clearly stated and hour-by-hour accounts are included.

  • Total time is appropriate: 4, A reasonable amount of time is spent researching, programming, and adding wiki entries; however, more time could be spent working.

  • 2/18/19, 7:00 - 8:30 PM, 1:30 TOTAL, Tested .out files at the YURT.

  • 2/19/19, 3:00 - 5:30 PM, 2:30 TOTAL, Added a tutorial to the wiki on using laspy to process and convert .las files to .out files. Also, I added information on .out files to the wiki. See here for the additions.

  • 2/19/19, 9:00 - 10:00 PM, 1:00 TOTAL, Debugged .out visualization in YURT.

  • 2/20/19, 8:00 - 9:50 AM, 2:50 TOTAL, Revised the final project proposal, created a presentation for the next class, read about GPS Time, read about current approaches toward forest LiDAR visualization, and added a link to Displaz -- a .las file visualization program.

  • 2/20/19, 11:00 - 11:40 AM, 0:40 TOTAL, Finished presentation for next class.

  • 2/21/19, 9:30 - 10:20 AM, 0:50 TOTAL, Studied subsampling algorithms for point cloud data. Point Cloud Library (PCL) seems to be a good option, and it has a Python binding which I have linked in the wiki.
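
For reference, voxel-grid downsampling in Python-PCL takes only a few calls (a sketch; the leaf size and file name are placeholders, and the input must be a float32 N x 3 array):

```python
# Sketch: voxel-grid downsampling of a point cloud with python-pcl.
# The leaf size and file name are placeholders.
import numpy as np
import pcl

points = np.load("points.npy").astype(np.float32)  # (N, 3) array of x, y, z

cloud = pcl.PointCloud()
cloud.from_array(points)

voxel = cloud.make_voxel_grid_filter()
voxel.set_leaf_size(0.05, 0.05, 0.05)  # keep roughly one point per 5 cm voxel
downsampled = voxel.filter()

print(cloud.size, "->", downsampled.size)
```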

  • 2/23/19, 8:00 - 11:30 PM, 3:30 TOTAL, Built Python-PCL and wrote script for subsampling; still needs testing!

  • 2/24/19, 2:30 - 3:00 PM, 4:00 - 6:00 PM, 2:30 TOTAL, Created a tutorial for Python-PCL. My next step is to upload my script to CCV and subsample each model; unfortunately, my computer does not have enough memory to perform this task, so I'm hoping the CCV will.

  • 2/25/19, 9:00 - 10:20 AM, 1:20 TOTAL, Installed Python-PCL on a PC in the graphics lab; I'm hoping this PC will have enough memory to process the .las files.

  • 2/26/19, 9:00 - 10:00 AM, 11:00 AM - 12:00 PM, 2:00 TOTAL, Python-PCL was actually not installed correctly; it appears there is some error with GTK+ for Windows. This is a painful process, so I will update my Python-PCL tutorial to include Windows installation steps.

  • 3/1/19, 8:00 - 10:00 AM, 12:00 - 1:00 PM, 6:00 - 9:00 PM, 6:00 TOTAL, Finalized the subsampling algorithm and tested LiDAR data in the YURT. I'm using Open3D to visualize point cloud data on my laptop, Python-PCL to subsample entire point clouds, and Python FLANN for nearest-neighbor sampling. Currently, this approach can subsample LiDAR data in batches of ~50,000,000 points; however, I'm still tuning the process.
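
For the laptop-side checks, the Open3D part is only a few lines (a sketch; the module paths match recent Open3D releases, and loading the .out file as whitespace-separated x y z triples follows my conversion script):

```python
# Sketch: view a subsampled point cloud locally with Open3D.
# Module paths follow recent Open3D releases (older versions omit the
# .geometry/.utility/.visualization prefixes); the file name is a placeholder.
import numpy as np
import open3d as o3d

points = np.loadtxt("subsampled.out")  # (N, 3) array of x, y, z

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([pcd])
```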

  • 3/2/19, 3:00 - 6:10 PM, 3:10 TOTAL, Debugged point cloud data in the YURT. I think everything works correctly.

  • 3/3/19, 4:30 - 5:40 PM, 1:10 TOTAL, Experimented with different subsampling batches. My current approach captures too much ground and too little foliage; I think the fix will be to take the point with the largest z-value in a batch and apply the nearest-neighbor algorithm w.r.t. that point.
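
A quick sketch of that idea with pyflann (the neighbor count k and the input batch are placeholders):

```python
# Sketch: keep the k nearest neighbors of the tallest point in a batch (pyflann).
# The neighbor count k and the input batch are placeholders.
import numpy as np
from pyflann import FLANN

def sample_around_tallest(batch, k=50000):
    tallest = batch[np.argmax(batch[:, 2])]  # point with the largest z-value
    flann = FLANN()
    idx, _ = flann.nn(batch.astype(np.float64),
                      tallest.reshape(1, 3).astype(np.float64),
                      num_neighbors=min(k, len(batch)))
    return batch[idx.ravel()]
```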

  • 3/5/19, 8:00 - 10:00 AM, 2:00 TOTAL, Created a box-based subsampling script; previously, I worked with nearest-neighbor and voxel filtering to subsample LiDAR data; however, I found that combining a box-based subsampling method with voxel filtering produced dense, useful results.


A box subsampled pointcloud with 28k points.
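
The box-based step itself just splits the cloud into an X-Y grid of boxes so that each box can be voxel-filtered separately. A pure-NumPy sketch of the split (the box edge length is a placeholder):

```python
# Sketch: split a point cloud into an X-Y grid of boxes for per-box filtering.
# The box edge length is a placeholder.
import numpy as np

def split_into_boxes(points, box_size=25.0):
    # Assign each point to a box based on its x/y coordinates.
    ij = np.floor(points[:, :2] / box_size).astype(np.int64)
    uniq, inverse = np.unique(ij, axis=0, return_inverse=True)
    return [points[inverse == b] for b in range(len(uniq))]

# Each box can then be passed through the voxel-grid filter shown earlier.
```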

  • 3/6/19, 7:00 - 9:00 PM, 2:00 TOTAL, Experimented with box subsampling over an entire scene and researched papers on visualizing very large point clouds. Apparently, a GTX 1080 can handle up to 1 billion points; however, I'm not sure if the YURT can handle such a load.

  • 3/9/19, 3:00 - 7:00 PM, 4:00 TOTAL, Tested a 400k point dataset in the YURT; no errors were thrown, but nothing was displayed, so more debugging is needed.

  • 3/10/19, 3:00 - 7:00 PM, 4:00 TOTAL, Attempted to test new datasets in the YURT; the cave nodes were down, however, so this was a pointless endeavor.

  • 3/12/19, 6:30 - 9:30 PM, 3:00 TOTAL, Tested models in the YURT; it seems that 500k points is the largest number of points that can be rendered using the DinoYURT program.

Final Proposal

  • Title: Visualizing Ecological LiDAR data in Virtual Reality and YURTs

  • Objective: Document and explore how MinVR and Paraview can be used to visualize composed ecological LiDAR data.

  • Remark: A composed LiDAR model is several .las files stitched together based on a metric (e.g. GPS Time); see the composition sketch at the end of this proposal.

  • Questions:

    • How can LiDAR data be meaningfully composed? In other words, what metrics can we use to perform this task?

    • What software is available for visualizing LiDAR files?

    • What are the distinctions between MinVR and Paraview for visualizing LiDAR data in VR / YURTs?

    • What scientific insights can be derived from visualizing ecology data in VR / YURTs?

  • Activities:

    • Examine the Kellner lab's data and determine possible visualization methods.

    • Compare and contrast various LiDAR data processing tools and libraries in terms of usability, installation time, and rendering quality.

    • Explore different mesh rendering techniques for visualizing ecology data.

    • Document process of rendering LiDAR data in VR and add a possible tutorial to the wiki.

    • Explore methods for mapping LiDAR data into a continuous visualization; in other words, determine how to stitch LiDAR data into a cohesive visualization.

    • Add pixel-wise coloring to LiDAR data based on point height and other metrics.

    • Determine correct normalization factors for ecology data.

    • Learn more about LiDAR data in order to motivate composing multiple LiDAR files.

    • Read MinVR documentation and review OpenGL.

    • Install Paraview LiDAR plugin.

    • Install MinVR on graphics lab computer.

    • Read papers on LiDAR stitching.

    • Create support code for in-class activity.

    • Download new LiDAR samples from repository.

    • Create a tutorial on utilizing Laspy for .las data processing.

  • Milestones:

    • 2/12/2019, Research Kellner Lab.

    • 2/14/2019, Convert LAS files to readable format for DinoYURT.

    • 2/21/2019, Finalize data conversion process, if necessary, and begin YURT Visualization of the first three LiDAR models.

    • 2/26/2019, Finish YURT visualization for first three LiDAR models.

    • 2/28/2019, Download and visualize a new batch of LiDAR ecology models (at least N=5).

    • 3/5/2019, Visualize all of the previous LiDAR models in VR (HTC Vive).

    • 3/7/2019, Create a tutorial for visualizing .out files in the YURT using MinVR.

    • 3/12/2019, Create a composite LiDAR model, composed of three separate .las files, and visualize it in the YURT.

    • 3/14/2019, Create a tutorial on composing/stitching .las files and visualizing the results using Paraview / MinVR.

    • 3/19/2019, Create a script for composing N .las files based on GPS time.

    • 3/21/2019, Compose N > 10 .las files and visualize the results in the YURT.

    • 4/2/2019, Publish an in-depth comparison between MinVR and Paraview for rendering and visualizing LiDAR data in the YURT.

  • Deliverables:

    • A VR visualization of LiDAR ecology data using the HTC Vive and tutorials to accompany this process.

    • Comparisons of Paraview, Lidarview, and MinVR for visualizing and rendering LiDAR data.

    • Scripts / tutorials on composing .las files arbitrarily and based on GPS time.

    • A YURT visualization of composed LiDAR data.

  • In-Class Activities:

    • .las file conversion and visualization:

      • Everyone downloads and converts one .las file in the repository to a .out file, then we will display their results in the YURT.

      • This activity will visualize novel .las files, test the robustness of my data conversion pipeline, and provide a comparison between Paraview and MinVR for visualizing LiDAR data.

      • This might be an interesting collaboration opportunity with Ronald; he could provide a Paraview tutorial, and I would provide a MinVR tutorial.
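
As noted in the remark above, composing .las files by GPS time can be sketched roughly as follows (assumes laspy 1.x and a point format that actually stores per-point gps_time; the file names are placeholders):

```python
# Sketch: merge several .las point clouds into one array ordered by GPS time.
# Assumes laspy 1.x and that each file's point format includes gps_time;
# the file names are placeholders.
import numpy as np
from laspy.file import File

def compose_by_gps_time(paths):
    chunks = []
    for path in paths:
        las = File(path, mode="r")
        chunks.append(np.vstack((las.gps_time, las.x, las.y, las.z)).T)
        las.close()
    merged = np.vstack(chunks)
    return merged[np.argsort(merged[:, 0])]  # sort all points by GPS time

composed = compose_by_gps_time(["tile_a.las", "tile_b.las", "tile_c.las"])
```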