Ross Briden journal

Activity Log

  • 1/25/19, 7:00-8:00 AM, 1 HR, created a journal page; this took longer than I expected because I had trouble editing the site.

  • 1/26/19, 7:00-8:00 AM; 9:00-11:00 AM, 3 HR, installed Paraview; again, this took longer than I expected. The main issue was that the install instructions weren't correct for my version of Linux. So, I tried installing Paraview via apt-get, which worked but only installed an older version of Paraview. Finally, I compiled Paraview from its source code, which ended up being the best solution.

  • 1/28/19, 7:00-9:30 PM, 1 HR 30 MIN, Read the History of VR and several other articles on the wiki, researched .las files and possible software/libraries for processing these files, and updated the description of .las files on the wiki based on my research. I also updated the wiki entry on Unity3D, including a brief tutorial on how to install Unity3D for Ubuntu. Finally, I started to install MinVR.

  • 1/30/19, 9:00-11:30 PM, 2 HR 30 MIN, Originally, I planned on working with ecology data for my project (more on that shortly), but there are some other ideas that may be interesting to explore too. While broad, topology/geometry may be fun to experiment with; for instance, consider the hypersphere packing problem: would it be easier to solve such problems if VR visualization tools were developed? Or consider Dugan Hammock, who explores the relationship between 4D geometries and quantum gravity; could VR aid exploratory learning in that domain? Another interesting domain is medicine. Medical imaging, in particular, seems ripe for VR visualization. For instance, consider the process of segmenting brain tumors in MRI scans: typically, this task is performed by neural networks and other algorithms, but doctors often refine these segmentations, so would it be beneficial to visualize these brain scans as either a volume rendering or a surface rendering? dicomvr is a good example of well-executed medical imaging for VR. Nevertheless, it seems that (a) VR interfaces are still too cumbersome, at least for non-technical users, and (b) VR graphics are often quite poor. I think AR may resolve some of these issues, but it's hard to tell.

  • 1/30/19, 9:00-11:30 PM (continued), Aside from project ideas, I downloaded the ecology data from the course website; however, I'm still in the process of parsing it. It's unclear what each file represents since they have names like Zofin_04162018_-744000_-1202250. I'm using PyLidar to parse the data into a VTK file, so I can visualize it in Paraview.

  • 2/1/19, 10:00-11:00 PM, 1 HR, Finished building MinVR. Hopefully, it will run nicely on Linux.

  • 2/3/19, 8:00-11:00 PM, 3 HR, Tested various .las point cloud viewers. For Linux, Displaz seems to be a decent option. It is a pretty hackable piece of software; however, installing it is quite painful. No .deb or binary files are provided, so you have to build it from source, a non-trivial endeavor. In particular, Displaz requires Qt, which adds a whole layer of complexity to the build process. Luckily, after reading through a couple of forum posts on GitHub, it appears that you can install Displaz via Flatpak. Once it's up and running, it visualizes .las files easily. Also, lidarview.com is an excellent alternative. Even though it's web-based, it runs on top of WebGL, so it is pretty fast! However, it's not open-source, so if you experience any issues, you're pretty much out of luck.

  • 2/4/19, 8:00-10:00 PM, 2 HR, Tried installing FrugoViewer, a popular LiDAR viewer; however, it's Windows-only, so it didn't run on my Linux machine. Also, I'm in the process of implementing a pipeline for converting .las files to .vtk files so we can view our LiDAR data in Paraview. This pipeline uses laspy for loading .las files. I'm planning on rendering the forest scene as a volume rendering, or maybe a surface rendering. Then I would need to convert this model into something compatible with OpenGL and MinVR; I'm unsure what that might be.
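
A minimal sketch of what this conversion stage could look like, assuming laspy 1.x and a file small enough to hold in memory (file names here are hypothetical, not the actual pipeline code):

    # Hypothetical sketch: convert a .las point cloud into a legacy ASCII
    # .vtk polydata file that Paraview can open. Assumes laspy 1.x.
    import laspy

    las = laspy.file.File("forest.las", mode="r")
    n = len(las.x)

    with open("forest.vtk", "w") as out:
        out.write("# vtk DataFile Version 3.0\n")
        out.write("LiDAR point cloud\nASCII\nDATASET POLYDATA\n")
        out.write("POINTS %d float\n" % n)
        for x, y, z in zip(las.x, las.y, las.z):
            out.write("%f %f %f\n" % (x, y, z))
        # Emit one vertex cell per point so Paraview actually renders them.
        out.write("VERTICES %d %d\n" % (n, 2 * n))
        for i in range(n):
            out.write("1 %d\n" % i)

    las.close()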

  • 2/4/19, 8:00-10:00 PM (continued), I now have three project ideas I'm considering: ecology/LiDAR visualization in the YURT, topology/geometry visualization for VR or the YURT, and something related to neuroscience. For the ecology project, I would, as I mentioned previously, build a pipeline to convert .las files into .vtk files, render the LiDAR data as a volume rendering, and render that model in OpenGL. For the topology project, I'm not quite sure what I would render, so I think I would need to collaborate with a mathematics professor to elaborate on this idea. As for the neuroscience visualization, it might be interesting to visualize models created with CLARITY imaging techniques; I'm wondering if there are any neuroscience professors at Brown who would be interested in such a project. I believe neuroscience would be an interesting domain for VR visualization since the brain and its functionality are inherently volumetric. I will follow up on the last idea.

  • 2/5/19, 7:00 - 10:00 PM, 3 HR, Some updates regarding visualizing CLARITY image data. Several labs have released CLARITY data, but the Deisseroth lab at Stanford appears to have some of the most accessible datasets. In particular, I'm working with an image of a mouse brain. The CLARITY scan is stored as a collection of .tif files, so I need to convert these individual images into a volume/surface rendering. I'm not sure exactly how to do that yet, but a first step is sketched below. Also, I plan to add a page to the wiki on CLARITY images.
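
As a first step, the slices can at least be stacked into a single volume array; a minimal sketch, assuming same-sized slices and using tifffile (the directory name is hypothetical):

    # Hypothetical sketch: stack a directory of same-sized .tif slices into
    # one 3D numpy volume that a volume renderer could consume.
    import glob
    import numpy as np
    import tifffile

    slices = sorted(glob.glob("clarity_mouse_brain/*.tif"))
    volume = np.stack([tifffile.imread(f) for f in slices], axis=0)
    print(volume.shape)  # (num_slices, height, width)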

  • 2/6/19, 8:00 - 11:00 PM, 3 HR, I'm still debating whether to work with diffusion MRI / CLARITY image data or with .las files, and whether to attempt some manifold learning visualization (need to follow up on this). Since the former is more concrete, I will draft a pre-project plan using that idea.

  • 2/12/19, 7:00 - 10:00 PM, 3 HR, Wrote a script to convert .las files to .out files! I will still need to modify it to scale to larger datasets.

  • 2/13/19, 2:00 - 3:00 PM, 10:30 - 12:30 PM, 3 HR, Finished the .las-to-.out conversion with normalization features. I scaled everything in the .las file to the fixed range [-1, 1]; however, I think that may be too conservative, particularly in the z-axis. Also, expect massive data files after conversion! For instance, a 2.7 GB .las file inflates to a ~10 GB .out file. Conversion times are also a bit lengthy: approximately 10 minutes on an Intel i5-4300U with no multi-threading and a batch write size of 1,000,000 doubles.
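
The scaling itself is just per-axis min-max normalization; roughly:

    # Sketch of the normalization step: map each axis of an (n, 3) point
    # array into the fixed range [-1, 1].
    import numpy as np

    def normalize(points):
        lo = points.min(axis=0)
        hi = points.max(axis=0)
        return 2.0 * (points - lo) / (hi - lo) - 1.0

Since each axis is scaled independently, the aspect ratio is distorted, which may be part of why the z-axis feels too conservative; the dimensional normalization added on 2/14 is one way to address that.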

  • 2/14/19, 8:00 - 9:00 AM, 1 HR, Added dimensional normalization to the .las conversion script and set up a .config file.

  • 2/14/19, 11:45 AM - 12:45 PM, 1 HR, edited journal and added project evaluation.

  • 2/18/19, 7:00 - 8:30 PM, 1 HR 30 MIN, Tested .out files at the YURT.

  • 2/19/19, 3:00 - 5:30 PM, 2 HR 30 MIN, Added to the wiki a tutorial on using laspy to process and convert .las files to .out files, along with information on .out files. See here for the additions.

  • 2/19/19, 9:00 - 10:00 PM, 1 HR, Debugged .out visualization in YURT.

  • 2/20/19, 8:00 - 9:50 AM, 2 HR 50 MIN, Revised the final project proposal, created a presentation for next class, read about GPS Time, read about current approaches to forest LiDAR visualization, and added a link to Displaz -- a .las file visualization program.

  • 2/20/19, 11:00 - 11:40 AM, 40 MIN, Finished presentation for next class.

  • 2/21/19, 9:30 - 10:20 AM, 50 MIN, Studied subsampling algorithms for point cloud data. Point Cloud Library (PCL) seems to be a good option, and it has a Python binding which I have linked in the wiki.

  • 2/23/19, 8:00 - 11:30 PM, 3 HR 30 MIN, Built Python-PCL and wrote a script for subsampling; it still needs testing! A sketch of the approach is below.
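
A minimal sketch of the voxel-grid route through Python-PCL (the random array stands in for real .las points, and the leaf size is a placeholder that needs tuning per dataset):

    # Sketch: voxel-grid downsampling with python-pcl. python-pcl expects a
    # float32 (n, 3) array; the leaf size below is a placeholder.
    import numpy as np
    import pcl

    xyz = np.random.rand(100000, 3).astype(np.float32)  # stand-in for .las points
    cloud = pcl.PointCloud()
    cloud.from_array(xyz)

    vg = cloud.make_voxel_grid_filter()
    vg.set_leaf_size(0.01, 0.01, 0.01)  # collapse each 0.01^3 voxel to its centroid
    downsampled = vg.filter()
    print(downsampled.size)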

  • 2/24/19, 2:30 - 3:00 PM, 4:00 - 6:00 PM, 2 HR 30 MIN, Created a tutorial for Python-PCL. My next step is to upload my script to the CCV and subsample each model; unfortunately, my computer does not have enough memory to perform this task, so I'm hoping the CCV will.

  • 2/25/19, 9:00 - 10:20 AM, 1 HR 20 MIN, Installed Python-PCL on a PC in the graphics lab; I'm hoping this PC will have enough memory to process the .las files.

  • 2/26/19, 9:00 - 10:00 AM, 11:00 AM - 12:00 PM, 2 HR, Python-PCL is actually not installed correctly; it appears there is some error with GTK+ for Windows. This is a painful process, so I will update my Python-PCL tutorial to include Windows installation steps.

  • 3/1/19, 8:00 - 10:00 AM, 12:00 - 1:00 PM, 6:00 - 9:00 PM, 6 HR, Finalized the subsampling algorithm and tested LiDAR data in the YURT. I'm using Open3D to visualize pointcloud data on my laptop, Python-PCL to subsample entire pointclouds, and Python FLANN for nearest-neighbor sampling. Currently, this approach can subsample LiDAR data in batches of ~50,000,000 points; however, I'm still tuning the process.
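
For reference, the Open3D leg of the pipeline is only a few lines; a sketch assuming a recent 0.x release (module paths differ in older versions, and the file name is hypothetical):

    # Sketch: visualize an (n, 3) array of points with Open3D.
    import numpy as np
    import open3d as o3d

    xyz = np.loadtxt("subsampled_points.txt")  # hypothetical (n, 3) array
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    o3d.visualization.draw_geometries([pcd])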

  • 3/2/19, 3:00 - 6:10 PM, 3 HR 10 MIN, Debugged pointcloud data in the YURT. I think everything works correctly.

  • 3/3/19, 4:30 - 5:40 PM, 1 HR 10 MIN, Experimented with different subsampling batches. My current approach captures too much ground and too little foliage; I think the fix will be taking the point with the largest z-value in a batch and applying the nearest-neighbor algorithm w.r.t. that point.
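
A sketch of that fix using pyflann (the neighbor count is a placeholder):

    # Sketch: use the highest point in a batch as the seed, then keep its
    # nearest neighbors, biasing the sample toward canopy instead of ground.
    import numpy as np
    from pyflann import FLANN

    def subsample_batch(points, k=50000):
        # points: (n, 3) float array; returns the k points nearest the max-z seed.
        seed = points[np.argmax(points[:, 2])].reshape(1, 3)
        flann = FLANN()
        idx, _ = flann.nn(points, seed, num_neighbors=min(k, len(points)),
                          algorithm="kdtree", trees=4)
        return points[idx.ravel()]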

  • 3/5/19, 8:00 - 10:00 AM, 2 HR, Created a box-based subsampling script. Previously, I worked with nearest-neighbor and voxel filtering to subsample LiDAR data; however, I found that combining a box-based subsampling method with voxel filtering produced dense, useful results (see the sketch below the figure).


A box-subsampled pointcloud with 28k points.
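
A numpy-only sketch of the combination described above (box bounds and voxel size are placeholders):

    # Sketch: crop an axis-aligned box out of the cloud, then keep one
    # representative point per voxel.
    import numpy as np

    def box_voxel_subsample(points, lo, hi, voxel=0.5):
        # points: (n, 3); lo, hi: length-3 arrays giving the box corners.
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        box = points[inside]
        keys = np.floor(box / voxel).astype(np.int64)
        _, keep = np.unique(keys, axis=0, return_index=True)
        return box[np.sort(keep)]

This version keeps the first point seen in each voxel rather than the voxel centroid, which is cheaper and close enough for visualization purposes.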

  • 3/6/19, 7:00 - 9:00 PM, 2 HR, Experimented with box subsampling over an entire scene and researched papers on visualizing very large point clouds. Apparently, a GTX 1080 can handle up to 1 billion points; however, I'm not sure if the YURT can handle such a load.

  • 3/9/19, 3:00 - 7:00 PM, 4 HR, Tested a 400k-point dataset in the YURT; no errors were thrown, but nothing was displayed, so more debugging is needed.

  • 3/10/19, 3:00 - 7:00 PM, 4 HR, Attempted to test new datasets on YURT; cave nodes were down, however, so this was a pointless endeavor.

  • 3/12/19, 6:30 - 9:30 PM, 3 HR, Tested models in the YURT; it seems that 500k points is the largest number of points that can be rendered using the DinoYURT program.

  • 3/15/19, 3:00 - 6:00 PM, 3 HR, Finalized dense and whole models for first LAS ecology file.

  • 3/18/19, 7:00 - 10:00 PM, 3 HR, Finalized dense and whole models for second LAS ecology file.

  • 3/21/19, 8:00 - 10:00 AM, 8:00 - 10:00 PM, 4 HR, Finalized dense and whole models for last LAS ecology file; ready for demonstration at the YURT! Also, I began a comparison of various pointcloud visualization software and researched some pointcloud visualization whitepapers.

  • 3/24/19, 2:00 - 6:00 PM, 4 HR, Finished pointcloud visualization software comparison and continued reading whitepapers on pointcloud rendering.

  • 3/31/19, 5:30 - 7:30 AM, 3:00 - 5:30 PM, 4 HR 30 MIN, Continued reading whitepapers on pointcloud rendering / visualization; added two new datatypes to the wiki: CLARITY imaging data and MRI imaging data.

  • 4/1/19, 3:00 - 5:30 PM, 7:00 - 10:30 PM, 6 HR, Finalized wiki contributions and created a presentation for tomorrow.

  • 4/2/19, 8:00 - 9:15 PM, 1 HR 15 MIN, Continued research paper search!

  • 4/9/19, 8:00 - 10:00 PM, 2 HR, Removed pointline functionality from DinoYURT.

  • 4/14/19, 5:00 - 10:10 PM, 5 HR 10 MIN, Added summaries to pointcloud paper page and started building an in-class tutorial.

  • 4/15/19, 6:00 - 8:00 PM, 2 HR, Continued work on the tutorial.

  • 4/16/19, 2:30 - 10:10 PM, 7 HR 40 MIN, Continued work on the in-class tutorial; errors still persist!

  • 4/17/19, 11:00 - 12:00, 2:30 - 7:30, 8:00 - 1:30, 11 HR 30 MIN, In-class tutorial finished!

  • 4/19/19, 9:00 - 10:00 AM, 1 HR, Meeting with Kellner lab; added Quick Terrain Software to the wiki.

  • 4/23/19, 7:00 - 8:00 PM, 1 HR, Subsampling errors fixed (i.e., fixed the double-floor effect).

  • 4/28/19, 5:00 - 7:30 PM, 2 HR 30 MIN, Worked on ground cropping algorithm; still needs some refinements, in particular convex adaptation.

  • 5/1/19, 8:00 - 10:00 PM, 12:00 - 1:00 AM, 3 HR, Continued work on the ground cropping algorithm; currently, it's O(n^3), so I'm implementing an octree version to increase performance.
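
While the octree version is in progress, a KD-tree gives comparable query times for the neighborhood lookups; a sketch with scipy (radius and file name are placeholders):

    # Sketch: replace brute-force neighborhood scans with a KD-tree so each
    # radius query is roughly logarithmic in n rather than linear.
    import numpy as np
    from scipy.spatial import cKDTree

    points = np.loadtxt("ground_points.txt")  # hypothetical (n, 3) array
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points[0], r=1.0)  # indices within r
    print(len(neighbors))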

  • 5/2/19, 7:00 - 10:00 AM, 3 HR, Worked on floor cropping algorithm.

  • 5/4/19, 9:00 PM - 1:00 AM, 4 HR, New page and tutorial for PointSample, my collection of pointcloud subsampling algorithms.

  • 5/6/19, 9:00 PM - 11:00 PM, 2 HR, Debugged DinoYURT for loading and visualizing ground segmentation data. However, DinoYURT is still not reading my data!

  • 6/7/19, 9:00 PM - 11:50 PM, 2 HR 50 MIN, Finalized models for demo day.

  • 6/8/19, 4:00 PM - 6:00 PM, 2 HR, Worked on poster design for demo day.

  • 6/9/19, 1:00 PM - 5:00 PM, 4 HR, Finished and printed the poster; worked on meshing.

  • 6/9/19, 10:00 PM - 11:00 PM, 1 HR, Integrated meshing for demo day; prepared scripts for demo day. Everything is ready!

Total Hours: 158.749

Wiki Additions

  • Unity Ubuntu Installation Tutorial -- 100% complete.

  • Data Types and Examples / Point Cloud Data

    • Added a description of .las files -- 100% complete

    • Added an extensive list of software for processing and visualizing .las files -- 100% complete

    • Added a tutorial on Laspy and Python-PCL -- 100% complete; should be extensive enough for new users.

    • Added a papers page on processing LiDAR / pointcloud data

    • Added two new data types: CLARITY imaging data and MRI imaging data -- 100% complete.

    • Added a comparison of several types of pointcloud visualization software -- 100% complete.

    • Added a link to my PointSample GitHub repository, which contains my subsampling algorithms, examples, and a short tutorial -- 100% complete.

  • DinoYURT Tutorial -- 100% complete

Proposals


Current Project Proposal:

Timeline

  • 4/4, Finish research paper summaries (35%)

  • 4/9, Test DinoYURT w/o distance parameter

  • 4/11, Continue optimizing DinoYURT (50%)

  • 4/16, Finalize DinoYURT optimizations (50%)

  • 4/18, Test different rendering representations: volume v. surface

  • 4/23, Continue Testing pointcloud representations

  • 4/25, Research pointcloud stitching algorithms

  • 4/30, Continue research and begin potential implementations

  • 5/2, Continue stitching algorithm implementations

  • 5/7, Test implementations, finalize MinVR tutorial

Deliverables

  • DinoYURT Tutorial

  • Ground floor subsampling

  • Research paper summaries

  • PointSample Python Module


Evaluations

Project Evaluation

  • The proposed project clearly identifies deliverable additions to our VR Software Wiki: Agree. My project will contribute a series of tutorials on LiDAR visualization software, comparisons between Paraview and MinVR, and in-depth information on .LAS files.

  • The proposed project will inform future research, i.e., advancing human knowledge: Agree. My project is targeting a particular research application: deriving biological insights from LiDAR data. So, I believe that my project will help advance the state of ecology research, particularly for the Kellner lab.

  • The proposed project involves large data visualization along the lines of the "Data Types" wiki page and identifies the specific data and software that it will use: Agree. My project specifies concrete data types, including .las files, and seeks to visualize very large LiDAR files.

  • The proposed project has a realistic schedule with explicit and measurable milestones at least each week and mostly every class: Agree. I believe that my project goals are explicit and reasonably ambitious, with milestones every day of class and week.

  • The proposed project includes an in-class activity: Agree. I have included an in-class activity in my project proposal; however, it could be more concrete.

  • The proposed project has resources available with sufficient documentation: Agree. I have access to all of the necessary data and programs to implement my project.


Personal Journal Evaluation

  • Journal activities are explicitly and clearly related to course deliverables: 4, Journal entries are reasonable in length, often with explicit and detailed descriptions.

  • Deliverables are described and attributed in wiki: 4, Within each journal entry, my deliverables are clearly stated and are either relevant to the wiki and/or my project.

  • Report states total amount of time: 4, Total amount of time is clearly stated and hour-by-hour accounts are included.

  • Total time is appropriate: 4, A reasonable amount of time is spent researching, programming, and adding wiki entries; however, more time could be spent working.

Journal / Wiki Contributions Peer Review

  • Name: Ronald Baker

  • Wiki Additions:

    • TinkerCAD Tutorial:

      • Well formatted, informative, and detailed; lots of pictures; includes difficulty levels for different types of users.

      • Overall: Excellent tutorial!

    • Blender / LiDAR Tutorial:

      • Brief and well written; includes step-by-step videos to complement the tutorial (very nice!)

      • Overall: Excellent!

    • Paraview .las to .vtk Tutorial:

      • Includes a detailed tutorial but no standalone wiki page; nevertheless, it serves its purpose.

      • Overall: Could be improved by adding a standalone wiki page.

    • Example LiDAR data

    • Photoshop 3D Modeling tutorial:

      • Very detailed and pretty! Each step includes a high-resolution picture to complement the text; very accessible!

      • Overall: Amazing tutorial!

    • Veloview LiDAR Tutorial:

      • Relatively technical and somewhat daunting for beginners but still well written; could use some more pictures.

      • Overall: Good tutorial on a very technical topic.

    • .pcap files added to Wiki

  • Journal Evaluation:

    • Includes project objectives, milestones, deliverables; also, Ronald's journal contains links to his past presentations and a list of software he evaluated over the semester.

    • Well formatted and includes weekly / daily updates; could include more entries.

    • Links to videos, tutorials, and other awesome content!

    • Total Time: 112 Hours

    • Overall: Well kept journal! Reasonably detailed and excellently formatted!

  • Conclusion: Ronald has made many great contributions to the wiki and has kept a detailed journal! Excellent job!

Gallery

Presentations

Archives