Total: 213 hours

HOURS journal

before | after | goal
------ | ----- | ----
1 | 5 | Goal 0: example goal showing "novice" score before and "expert mentor" after
1 | 5 | Goal 1: articulate VR visualization software tool goals, requirements, and capabilities
1 | 5 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research
2 | 5 | Goal 3: execute tool evaluation strategies
2 | 5 | Goal 4: build visualization software packages
1 | 4 | Goal 5: comparatively analyze software tools based on evaluation
1 | 3 | Goal 6: be familiar with a number of VR software tools and hardware
3 | 4 | Goal 7: think critically about software
3 | 5 | Goal 8: communicate ideas more clearly
1 | 5 | Goal 9: be able to start on a VR research project

HW 1/30

total: 3 hours

Nine Separate Changes

Takes 10 minutes:

1. The "HelloWorld Unity tutorial" cannot be found; the old link no longer resolves. (I fixed the link.)

2. The "Intro to WebVR" link on the homepage does not respond when clicked.

3. Maybe single out the 1 or 2 best VR hardware and software options. Ordinary users would just go and pick the most popular ones.

Takes 1 hour:

1. Make a chart to compare the pros and cons of each software package.

2. Possibly visualize last year's sales numbers for each VR headset (we probably want to buy the most popular ones).

3. On the Unity page there is a metric, "Accessibility: the estimated time for someone to create Hello World in VR". I don't think this is a good comparison: after following a tutorial, I can create Hello World very fast while still knowing nothing about Unity.

Takes 10 hours:

1. Find the most useful packages for Unity. Some of them (e.g. Lux Water, which I tried) don't seem usable right now(?). I hope we can suggest useful (and cheap) packages that newcomers can use in Unity.

2. A list of useful Unity shortcuts (per operating system).

3. Some introduction to Unity's learning curve / expected learning time / learning experience would be helpful.

HW 2/02

total: 6 hours

Quest 2 Setup:

Finished Quest 2 setup, logged in to a Paperspace virtual machine, and played Google Earth VR (found my home back in Beijing).

Read Past Projects:

1.Shreya D'Souza's project on visualizing brain tumor progression in response to chemotherapy

2.Beatrice's project on VR tools to aid Historical Artifact Archiving

3.Paul Molnar's project on Underwater 3D Cave Exploration

Read Research Projects:

1.CoVAR: a collaborative virtual and augmented reality system for remote collaboration

2.Multi-User Framework for Collaboration and Co-Creation in Virtual Reality

Software Choice:


1.Unity

2.Unreal Engine

Both are very popular 3D engines (for game development). However, Unreal Engine is much harder to use, though it produces better graphics. Furthermore, having tried the tutorials of both Unity and Unreal, I think Unity is easier for a newcomer to learn and to quickly build projects with. I will probably use Unity for my projects.

Project Ideas:

1.Masterpiece layers: many great artworks have multiple layers. I want to separate them one by one to see the effects.

2.Visualize population density change in Manhattan with respect to time. 

3.Visualize the change of RGB color in 3D space. For example, the origin represents RGB(0, 0, 0) and the far upper-right corner represents RGB(255, 255, 255).

4.Implement NeRF (Neural Radiance Fields) in VR. One user can input a set of images, and our system would synthesize and recreate the 3D model in VR.


HW 2/07

total: 6 hours


Installed DinoVR and played with it for 20 mins. (Waiting for the class activity next Tuesday.)

Solidify Project Ideas:

VR Nerf:

idea: Implement NeRF (Neural Radiance Fields) in VR. One user can input a set of images, and our system would synthesize and recreate the 3D model in VR, so that another user can play with it

Things to do: 1)Implement the NeRF paper 2)Connect the NeRF output with VR 3)VR visualization of our result

Class Activity: One student can take a couple of photos of something they saw, and our system recreates it in VR so that other students can manipulate it.

Deliverables: VR Development Software -> Xcode -> NeuralNetworks. We are demonstrating how more complicated algorithms can be used in VR and how to connect neural networks with VR.

Metrics: 1)The efficiency of our process 2)How detailed the reconstructed result is

Software: Unity 3D / Python / TensorFlow

Data: input by the user and generated (recreated) by our system. 
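
Step 1 above (implementing the NeRF paper) centers on NeRF's discrete volume rendering: colors and densities sampled along a camera ray are alpha-composited into a single pixel. A minimal numpy sketch of that compositing step (the toy sample values are made up for illustration):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, as in the NeRF paper.

    densities: (N,) volume density sigma_i at each sample
    colors:    (N, 3) RGB color c_i at each sample
    deltas:    (N,) distance between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)      # opacity of each segment
    trans = np.cumprod(1.0 - alphas + 1e-10)        # accumulated transparency
    trans = np.concatenate([[1.0], trans[:-1]])     # transmittance T_i (T_1 = 1)
    weights = alphas * trans                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # final pixel RGB

# toy ray: empty space, then an opaque red sample, then a green one behind it
sigma = np.array([0.0, 10.0, 10.0])
rgb = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
delta = np.array([0.5, 0.5, 0.5])
pixel = composite_ray(sigma, rgb, delta)
```

In the toy ray, the dense red sample occludes the green one behind it, so the composited pixel comes out almost pure red.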

Masterpiece layers

idea: Many great artworks have multiple layers. I want to separate them one by one to see the effects.

Things to do: 1)Find a dataset of works that contain multiple layers, with data for each layer 2)Separate the layers and create the Z-axis 3)Visualize the layers in 3D in VR

Class Activity: Students can play with the layers of the masterworks and recreate the works. Including some (but not all) layers might reveal special perspectives on the artworks.

Deliverables: It can go under Applications of VR -> VR in Art History (we don't have this subset yet).

Metrics: 1)Does this new method make the artwork clearer to view? 2)Can we still sense the original work?

Software: Unity 3D

Data: Still finding it

3D Colors:

idea: Visualize the change of RGB color in 3D space. For example, the origin represents RGB(0, 0, 0) and the far upper-right corner represents RGB(255, 255, 255).

Things to do: 1)Prepare the dataset with 256 × 256 × 256 RGB values 2)Convert the values to RGB colors in the VR headset 3)Allow user control over color changes

Class Activity: Students can play with the different colors and see their transitions. Furthermore, they can combine different colors and see the result (by simply adding the values).

Deliverables: VR visualization software: while we are not using any new software, we are exploring how to write simple algorithms that can be visualized (and adapt to changes from user input).

Metrics: Can we clearly see the transition of colors?

Software: Unity 3D

Data: I can produce it myself, but I need a good way to convert values into colors in VR.
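
Step 1 of the to-do list above asks for the full set of RGB values, which is 256 × 256 × 256 ≈ 16.7M points — far too many to render as individual objects in a headset, so the cube has to be subsampled. A hedged numpy sketch of one way to generate the data so that a point's position doubles as its color (9 points per axis is an assumption for illustration, not a measured headset limit):

```python
import numpy as np

def rgb_cube(points_per_axis=9):
    """Subsampled RGB cube where a point's position doubles as its color.

    9 points per axis keeps 9^3 = 729 points instead of the full
    256^3 ~= 16.7M. Returns an (n, 3) array in [0, 1]: use each row
    as both the point's position and its RGB color.
    """
    axis = np.linspace(0.0, 255.0, points_per_axis)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    points = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    return points / 255.0  # origin -> black (0,0,0), far corner -> white (1,1,1)

cube = rgb_cube()
```

Because position and color use the identical mapping, walking through the cube in VR is exactly walking through RGB space.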

[Screenshot] Google Earth VR: Important Place (Temple of Heaven, Beijing)

[Screenshot] Google Web: Important Place

[Video] Screen recording of visiting the Temple of Heaven in Google Earth VR (Screen Recording 2023-02-06 at 3.45.05 PM.mov)

[Video] Screen recording of visiting my home back in Beijing

HW 2/09

total: 4 hours


Done playing with DinoVR. 


Project (1 sentence): NeRF on VR

Deliverables (Wiki Contributions):

1)Create new page: Applications of VR -> VR in Machine Learning 

Explicitly discuss how VR can consume neural network outputs, and the data format neural networks need to produce for VR.

2)Add to the Unity page: VR Development Software -> Unity Photogrammetry (a plugin to visualize the Point Cloud datatype)

Explicitly discuss how to use Point Cloud as an input datatype in VR.

In Class Activity:

before class (estimated time: 10 mins for students, 3 hrs for me):

Students can take various images of a single object from different perspectives and send them to Yuanbo. Yuanbo will use NeRF to recreate a 3D model in VR to be shown during class time.

In class (estimated time: 10 mins):

1)Students can compare whether the resulting 3D model in VR looks like the original object

2)Students can go inside the object, as reconstructed in VR. As we cannot go inside the object in the real world, VR (might) help us imagine what the inside of the object looks like.

Collaborative Functionalities:

One user can take various images of an object, and another user can view the recreated result. 

Data Format:

The data produced by NeRF should be in Point Cloud format, with each point available for further predictions.
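
One common concrete container for that Point Cloud data is the ASCII PLY format, which most point cloud readers (including Unity plugins) accept. A minimal sketch of a PLY writer, assuming points are (x, y, z, r, g, b) tuples; the filename is illustrative:

```python
def write_ply(path, points):
    """Write (x, y, z, r, g, b) rows as an ASCII PLY point cloud.

    x, y, z are floats; r, g, b are 0-255 ints -- the vertex layout
    most point cloud readers expect.
    """
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {int(r)} {int(g)} {int(b)}\n")

# a single red point at the origin
write_ply("cloud.ply", [(0.0, 0.0, 0.0, 255, 0, 0)])
```

The header declares the vertex count and per-vertex properties up front, so a reader knows how to parse each data row that follows.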


Metrics:

1)Result authenticity. Students should compare the visualized result in VR with the original object in the real world, rating its authenticity from 1 to 5.

2)Object interior prediction. Students can also explore what is inside the object in VR. They should evaluate whether the result aligns with their expectations.


NeRF: the NeRF paper on arXiv

Point Cloud: point cloud is a popular technique for representing 3D models; tutorials can be found on many websites. I'll use PyRender and its export to point clouds.

Unity3D: the official Unity tutorial, and also the Unity Point Cloud plugin

1)Read NeRF paper and get familiar with it 

2)Prepare the 3 mins presentation

1)Implement input image processing to be used by NeRF

1)Implement the fully-connected neural network for NeRF

2)Do tutorial on Unity 

1)Optimize the training method; prepare a workable NeRF on the computer

2)Do tutorial on Unity Point Cloud Plugin

3)Use PyRender to export NeRF-produced data into Point Cloud format

1)Use the Unity Point Cloud Plugin to read point cloud data on the computer

2)Make a working version of the system 

1)Finish VR data visualization, prepare demos/ class activities

1)Contribute to the wiki with data: connecting Neural Networks with VR
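
The "fully-connected neural network for NeRF" milestone above depends on one detail worth noting: the MLP is not fed raw coordinates but a positional encoding γ(p) of sinusoids at increasing frequencies, which is what lets it represent high-frequency detail. A numpy sketch of that encoding (L=10 is the paper's value for positions):

```python
import numpy as np

def positional_encoding(p, L=10):
    """NeRF's gamma(p): map each coordinate to 2*L sinusoids
    sin(2^k * pi * p), cos(2^k * pi * p) for k = 0..L-1.

    p: (..., 3) array of coordinates, assumed normalized to [-1, 1].
    Returns (..., 3 * 2 * L) encoded features.
    """
    freqs = 2.0 ** np.arange(L) * np.pi    # 2^k * pi for k = 0..L-1
    angles = p[..., None] * freqs          # (..., 3, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)

x = np.array([[0.5, -0.2, 0.1]])
feat = positional_encoding(x)              # shape (1, 60)
```

With L=10, each 3D point becomes a 60-dimensional feature vector before entering the MLP.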

HW 2/14

total: 5 hours

Preparing project1 and its presentation

HW 2/18

total: 27 hours

Works on NeRF (15h)

(1)NeRF training 

Investigated three NeRF training methods:

a)Implementation by the original team of the NeRF paper

b)NeRF Studio 

c)Nvidia's Instant NeRF

Among them, I chose Nvidia's Instant NeRF for the following reasons: 1)it can train a new model in seconds, compared to about 4 hours for the other methods, which makes the class activity feasible 2)it has a GUI that allows a VR view

(2)Data preparation 

I would like to train NeRF on my own data. My input should be a video or a set of images. To this end, I explored several strategies:

a)using the COLMAP python package. However, as I tried it on three different virtual machines, it seems COLMAP has some compatibility issues with their systems...

b)using Record3D, an iOS app. This app is very easy to use: users only need their phone.

Works on Data Type Transformation (2h)

Explored Blender following this guide 

My goal is to export the NeRF-generated model to Blender and then transform it into readable Point Cloud data for Unity.

Works on Unity (10h)

Learnt basic Unity tutorial on creating a VR game

Learnt Unity locomotion and continuous movement following these videos.

Learnt Unity Mesh usage following this tutorial 

HW 2/23 - HW 3/7

total: 20 hours

Works on NeRF (2h)

Prepared data; trained 3D objects to be visualized in Unity; mesh export

Works on Photogrammetry (5h)

Explore Apple Photogrammetry

Works on Unity (13h)

Create an art gallery in Unity to visualize results

Unity techniques including: locomotion, mesh import, 3D asset manipulation

Get the App here! 

Tutorial for the App here!

HW 3/9 - HW 3/14

total: 13 hours

Testing on Texture Mapping (3h)

Learnt how to do texture mapping, so the meshes can have color :)

Contributing to Wiki (10h)

Writing 5 different wiki pages including:

1)Introduction to recreating real world data in VR

2)Comparison between photogrammetry and NeRF

3)Tutorial on Nvidia Instant NeRF

4)Comparison between techniques to export mesh to VR

5)Texture mapping in Unity

HW 3/16

total: 5 hours

Work on Journals + Write Reflection (2h)

Prepare Wednesday Presentation (3h)

Link to presentation

HW 3/21 - 3/23

total: 6 hours

Reading the Seven Scenarios paper (2h)

Prepare Project 2 Proposal + Slides (3h)

Link to presentation

HW 3/23 - 4/04

total: 12 hours

Preparations of Project 2

(1)Narrowed down the question: (3h)

I've talked with some researchers in East Asian Studies (including a current PhD student in Chinese Literature at Columbia University), and I have narrowed the target of Project 2 to "Visualizing Emotions of Chinese Poetry in VR"

There are two major interests of researchers:

1)What Chinese characters are used to express a specific emotion 

2)How different poets use different Chinese characters to express the same emotion.

(2)Find the dataset: (4h)

A good and accurate English translation of the Chinese poetry is the key. Google Translate or ChatGPT definitely doesn't work, as it often misunderstands the works. I chose to use Arthur Waley's (one of the most famous Chinese literature scholars in the world) translations of Li Po and Bai Juyi as two datasets. Each of them contains 100+ poems.

(3)Similarity Matrix (5h)

To separate the poems into groups expressing similar emotions, I'm using Paul Ekman's 1992 research that groups emotions into six categories: "anger", "disgust", "fear", "happiness", "sadness", "surprise". I'm calling the GPT-3.5 API to do this work.

HW 4/04 - 4/13

total: 8 hours

key word: dataset preparation and coding

Preparations of the dataset: (2h)

Copied and sorted the dataset into a JSON file

Emotion analysis on the dataset: (6h)

Implemented a GPT-3.5 bot to classify the emotions in the datasets and export them to another JSON file
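
Once the GPT-3.5 bot has labeled each poem, the export step reduces to grouping poems by Ekman emotion before writing the second JSON file. A hedged sketch of that grouping; the field names ("title", "text", "emotion") are an assumed schema for illustration, not the actual file layout:

```python
import json
from collections import defaultdict

# Ekman's six basic emotion categories (1992)
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def group_by_emotion(labeled_poems):
    """Group GPT-labeled poems by Ekman emotion.

    labeled_poems: list of {"title": ..., "text": ..., "emotion": ...}
    (a hypothetical schema); poems with unrecognized labels are dropped.
    """
    groups = defaultdict(list)
    for poem in labeled_poems:
        if poem.get("emotion") in EMOTIONS:
            groups[poem["emotion"]].append(poem)
    return {e: groups[e] for e in EMOTIONS}  # keep all six keys, even if empty

poems = [
    {"title": "Quiet Night Thoughts", "text": "...", "emotion": "sadness"},
    {"title": "Unknown", "text": "...", "emotion": "confusion"},  # dropped
]
grouped = group_by_emotion(poems)
out = json.dumps(grouped, ensure_ascii=False, indent=2)
```

Keeping all six keys (even when empty) means the VR scene can map each emotion to a fixed corridor without special-casing missing groups.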

HW 4/13 - 4/20

total: 25 hours

key word: VR fundamentals design

Design and Construct the Overall Structure: (8h)

Experimented with various structures and decided on the six-corridor structure

Constructed the whole six-corridor structure and the altar in the middle

Fonts Editing: (6h)

Imported CJK (Chinese, Japanese, Korean) fonts based on Unicode into Unity, and resolved problems with rare Chinese characters
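
The rare-character problem comes from Unicode itself: common Chinese characters live in the CJK Unified Ideographs block, while rare ones sit in the Extension blocks (Extension B and beyond are on a supplementary plane) that default font atlases usually omit. A small sketch for checking which block a character falls in — the block ranges are from the Unicode standard; the function name is my own:

```python
def cjk_block(ch):
    """Name the Unicode block a character falls in.

    Characters in Extension B and beyond (supplementary plane 2) are
    the rare ones a default font atlas typically lacks.
    """
    cp = ord(ch)
    if 0x4E00 <= cp <= 0x9FFF:
        return "CJK Unified Ideographs"   # common characters
    if 0x3400 <= cp <= 0x4DBF:
        return "CJK Extension A"
    if 0x20000 <= cp <= 0x2A6DF:
        return "CJK Extension B"          # rare, supplementary plane
    return "other"

common = cjk_block("月")  # "moon", a common character
```

Scanning the poem corpus with a check like this tells you in advance which code points the imported font must cover.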

Poetry Filling: (6h)

Filled in the poems based on their classified emotions.

Multiplayer Implementation: (5h)

Implemented multiplayer control using Photon PUN, following YouTube videos.

HW 4/20 - 4/27

total: 5 hours

key word: Finishing up

Made small adjustments to VRPoetry (3h)

Prepare for in class activities & handout: (2h)

HW 4/27 - 5/04

total:  10 hours

Write Wiki Contributions (10h)

Prepare slides and fill out the journal (1h)

HW 5/04 - 5/17

total:  38 hours

Making Poster, preparing flash talk (3hrs)

Reading papers on VR reading research (15hrs)

Exploring/implementing reading techniques in VR (20hrs)

HW 5/19

total:  10 hours

Adding Wiki contributions with a current progress update