Eric Wu


  • First day in class. Joined Slack, created journal page and Oscar account

  • Read about the Cave, the Yurt, and the possible applications of a 3D environment

  • Downloaded ParaView. Downloaded example dataset and tested.

Time spent: 1.5 hours


Time spent: 45 minutes


I was impressed by how effectively VR can render datasets, making them more comprehensible than standard bar graphs and 2-dimensional charts.

Studied journals from last year to catch up after a late start

Looking for datasets with file type “.vtk”

Read page on volume rendering.

Played around with ParaView/tutorial.vtk

Watched a tutorial video on ParaView's file formats and basic usage.

Studied VTK file format and its data attributes/cell types
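For reference, a legacy-format .vtk file is plain text: a short header, then the dataset structure and any attributes. A minimal sketch (the three points here are made up) looks like:

```
# vtk DataFile Version 3.0
Example point data
ASCII
DATASET POLYDATA
POINTS 3 float
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
VERTICES 1 4
3 0 1 2
```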

Time spent: 2 hours


I researched different uses and considered the following ideas:

  1. Use VR to model carbon emissions and other pollutants in the different layers of Earth’s atmosphere.

  2. Use VR to model data of joint degeneration over time.

However, I am most interested in modeling data from a depth-sensing camera.

I looked into different cameras/3D scanning technologies. Time-of-Flight cameras seem to be the most effective.

Reading about the 2D/3D coordinate systems used by Time-of-Flight technology (specifically the Azure Kinect).

Created the following project proposal:

Eric Wu

February 8, 2020

Computer Science 1951K

Project #1 Proposal

For my first project, I would like to visualize data from a depth-perceiving camera into a comprehensible model.

I plan to borrow an Azure Kinect V2 camera from Brown University's Robotics Laboratory. This camera uses Time-of-Flight technology, sending light between the device and its surroundings and timing its return to capture depth.

I want to visualize this data, modeled in X, Y, Z coordinates, on a software platform: ideally, ParaView. Transferring data from the Azure Kinect into a format that ParaView can read will pose challenges.

Several questions arise: What objects is the Azure Kinect intended to detect? And how can an object's 3-dimensional data be brought into a modeling platform?

I will try to address these questions throughout the following timeline:

Week of 2/10/2020:

1. Obtain camera

2. Learn how to use camera

3. Learn how to stream data from the camera onto a laptop.

4. Have access to and download required SDK software.

5. Successfully use SDK software to extract data from the Azure Kinect

6. During this process, I will document the utility and features of both the Azure Kinect Development Kit and the SDK streaming software needed to obtain the camera’s data.

Week of 2/17/2020

1. Convert the data to a format compatible with ParaView or, failing that, with other modelling applications

2. With data from the camera now in a workable file format, I will transfer it into ParaView for modeling.

3. Document compatibility issues between the SDK streaming software and the modelling software

Week of 2/24/2020

1. Load the data into the ParaView application

2. Manipulate the data to form an effective representational model

3. The Slicer application may be required, depending on the time available and the content of the extracted data

Time spent: 6 hours


Troubleshot the Azure Kinect camera, which uses Time-of-Flight technology to perceive depth.

The camera's software requires Windows to operate.

Time spent: 1 hour


Tried to go to the computer lab (Sun Lab) but was denied access. Contacted the IT department and was given login access.

Time spent: 30 minutes


Went to a computer lab again (the VR lab) and was unable to download the Microsoft Azure SDK for the camera.

Rented a computer from the IT desk at Page-Robinson Hall.

Downloaded the Microsoft Azure SDK software and created a subscription account to use their product

Tried to connect the camera; however, the computer reported an error. Began troubleshooting to launch the SDK software.

Time spent: 3 hours


Uninstalled and reinstalled the Microsoft SDK several times, but the application still wouldn't launch. I contacted Microsoft customer support and received instructions via phone. The software still wouldn't launch.

Time spent: 3 hours


Microsoft SDK now works. I successfully connected the Azure Kinect to the Dell laptop.

Time spent: 2 hours


I met with TA Ross to discuss options for extracting the data from the Azure Kinect to the Dell laptop. The camera seems to work; however, further code is required to extract the x, y, and z coordinates it detects. With Ross's help, we installed Visual Studio, which allows code from the Microsoft website to be built into a program that extracts the camera's data.
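The geometry behind recovering x, y, z from a depth pixel can be sketched with a simple pinhole camera model. This is only an illustration: the intrinsics (fx, fy, cx, cy) below are hypothetical, and the real conversion is done by the SDK's k4a_calibration_2d_to_3d, which also handles lens distortion.

```cpp
struct Point3 { float x, y, z; };

// Back-project a depth pixel (u, v) with depth in millimeters into a
// 3D camera-space point, using hypothetical pinhole intrinsics:
// fx, fy = focal lengths in pixels; cx, cy = principal point.
Point3 deproject(int u, int v, float depth_mm,
                 float fx, float fy, float cx, float cy) {
    Point3 p;
    p.z = depth_mm;                    // straight from the depth image
    p.x = (u - cx) * depth_mm / fx;    // horizontal offset scaled by depth
    p.y = (v - cy) * depth_mm / fy;    // vertical offset scaled by depth
    return p;
}
```

A pixel at the principal point maps straight ahead of the camera; pixels farther from the center map to proportionally larger x/y offsets at the same depth.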

Time spent: 6 hours


After receiving help from Ross, I am looking online for code to extract an image's depth data.

Time spent: 1 hour


The Dell laptop I am working on was requested back by the Information Services desk. I extended my rental period to continue the project.

Time spent: 15 minutes


I am continuing my search for the correct code to extract an image's depth data, primarily looking at the Microsoft website for information.

Time spent: 45 minutes


Found this information on the depth data from the Azure camera:

"Depth image type DEPTH16. Each pixel of DEPTH16 data is two bytes of little endian unsigned depth data. The unit of the data is in millimeters from the origin of the camera. Stride indicates the length of each line in bytes and should be used to determine the start location of each line of the image in memory."
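Reading one DEPTH16 pixel out of a raw buffer follows directly from that description; a minimal sketch (the buffer layout in the comments is made up for illustration):

```cpp
#include <cstdint>
#include <cstddef>

// Read one DEPTH16 pixel (in millimeters) from a raw buffer.
// stride_bytes is the length of one image row in bytes; it can be
// larger than width * 2 if rows are padded, so always step by stride.
uint16_t depth_at(const uint8_t* buf, size_t stride_bytes, int x, int y) {
    const uint8_t* p = buf + static_cast<size_t>(y) * stride_bytes
                           + static_cast<size_t>(x) * 2;
    // two bytes per pixel, little endian: low byte first
    return static_cast<uint16_t>(p[0] | (p[1] << 8));
}
```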

Amy Kingzett from Microsoft support gave me a follow-up call. We troubleshot different ways of opening the Microsoft Kinect SDK software. After installing and reinstalling again, the software still did not launch. Amy gave me a couple of useful resources, and I scheduled a follow-up call with another Microsoft support representative in case the SDK software continues to have technical problems.

This seems best:

Time spent: 3 hours


Looked at the MeshLab software for viewing the point-cloud data produced by the Azure camera

Viewed Microsoft SDK data transformation examples on GitHub.

They provided a function for writing a depth image to a file in transformation_helpers.cpp
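As a rough idea of what such a writer produces, here is a minimal ASCII .ply sketch with x, y, z only; the SDK's helper writes a fuller version, and the struct and function names here are made up:

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Vertex { float x, y, z; };

// Minimal ASCII PLY writer: a header declaring the vertex count and
// three float properties, followed by one "x y z" line per point.
std::string write_ply(const std::vector<Vertex>& pts) {
    std::ostringstream os;
    os << "ply\n"
       << "format ascii 1.0\n"
       << "element vertex " << pts.size() << "\n"
       << "property float x\nproperty float y\nproperty float z\n"
       << "end_header\n";
    for (const Vertex& v : pts)
        os << v.x << " " << v.y << " " << v.z << "\n";
    return os.str();
}
```

A file in this shape opens directly in MeshLab as a point cloud.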

Downloaded MeshLab on the Dell laptop.

Met with TA Ross, who helped me put together code to extract x, y, z coordinates from the Azure camera.

This primarily consisted of editing the Visual Studio code: using example templates to concatenate strings in C++, finding code that correctly puts strings into a buffer, deleting the color-capture code to make the software run smoother, and calling a function to print the width, height, and depth of objects perceived by the Azure Kinect. Buffer errors were fixed with the strncpy_s function, and breakpoints were set within transformation_helpers_write_point_cloud. Afterwards, the output file was written in .ply format, located, and opened.

Time spent: 5 hours


Data can now be transferred from the Azure Kinect camera straight onto the laptop. I really appreciate Ross's help and the Microsoft coding templates throughout this process.


Worked on code to extract x, y, z coordinates.

Time: 4 hours


The code is finished, and the Kinect's data should be exportable into DinoYurt. The rented Dell laptop will need to be returned at some point, so I will transfer Visual Studio, the Microsoft SDK, and my code onto a desktop in the computer lab. The files needed for export are zipped up and ready to go.

Time: 2 hours


From here on out, I should be able to scp the .out data files for visualization using PuTTY. I will configure a data file so it can easily be run in the Yurt.

relevance - 5

contributed to wiki - 3 (written but unpublished)

time mentioned - 2 (not always, but the very clear descriptions make the time spent evident)

enough time - 4 (evident work)

Eric Project Evaluation by David 02/07/20

Looking forward to a description and presentation on Tuesday! Holler in slack if questions!