Proposal Presentation for Project 1
Milestone Presentation for Project 1
End Presentation for Project 1
Project 2 Proposal <ADD LINK>
Presentation for Project 2 Proposal <ADD LINK>
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
ALL OTHER WIKI CONTRIBUTIONS
[Introduction to Visualizing Real World Data in VR]
[Comparison Between Photogrammetry and NeRF]
[Nvidia Instant NeRF Tutorial]
[Comparison Between Mesh Visualization Strategies]
CONTRIBUTION N [short description] <ADD LINK>
Total: 102 hours
1 | 5 | Goal 0: example goal showing "novice" score before and "expert mentor" after
1 | 5 | Goal 1: articulate VR visualization software tool goals, requirements, and capabilities;
1 | 5 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research;
2 | 5 | Goal 3: execute tool evaluation strategies;
2 | 5 | Goal 4: build visualization software packages;
1 | 4 | Goal 5: comparatively analyze software tools based on evaluation;
1 | 3 | Goal 6: be familiar with a number of VR software tools and hardware;
3 | 4 | Goal 7: think critically about software;
3 | 5 | Goal 8: communicate ideas more clearly;
1 | 5 | Goal 9: be able to start on a VR research project
total: 3 hours
Nine Separate Changes
Takes 10 minutes:
1.The "HelloWorld Unity tutorial" cannot be found. The old link doesn't appear. (completed the link)
2."Into to WebVR" link on homepage does not respond when clicking
3.Maybe can circle out 1 or 2 best VR hardware and software. For normal users, we would just go and find the most popular ones to use.
Takes 1 hour:
1.Make a chart to compare the pros and cons of each software package
2.Visualize the number of units each VR headset sold last year. (we probably want to buy the most popular ones)
3.The Unity page has the metric "Accessibility: the estimated time for someone to create Hello World in VR". I don't think it's a good comparison: after some practice I can create Hello World very quickly while still knowing almost nothing about Unity.
Takes 10 hours:
1.Find the most useful packages for Unity. Some of them (e.g. Lux Water, which I tried) do not seem usable right now(?). I hope we can suggest useful (and cheap) packages that newcomers can use in Unity.
2.A list of useful Unity shortcuts (broken down by platform).
3.Some notes on Unity's learning curve, expected learning time, and overall learning experience would be helpful
total: 6 hours
Quest 2 Setup:
Finished Quest 2 setup, logged in to a Paperspace virtual machine, and played Google Earth VR (found my home back in Beijing on it)
Read Past Projects:
1.Shreya D'Souza's project on visualising brain tumour progression in response to chemotherapy
2.Beatrice's project on VR tools to aid Historical Artifact Archiving
3.Paul Molnar's project on Underwater 3D Cave Exploration
Read Research Projects:
1.CoVAR: a collaborative virtual and augmented reality system for remote collaboration
2.Multi-User Framework for Collaboration and Co-Creation in Virtual Reality
Both are very popular 3D engines (for game development). However, Unreal Engine is much harder to use, though it produces better graphics. Having tried both engines' tutorials, I think Unity is easier for a newcomer to pick up and quickly build projects with. I will probably use Unity for my projects.
1.Masterpiece layers: many great artworks have multiple layers. I want to separate them one by one to see the effects.
2.Visualize population density change in Manhattan with respect to time.
3.Visualize the change of RGB color in 3D space. For example, the origin represents RGB(0,0,0) and the far upper-right corner represents RGB(255,255,255).
4.Implement NeRF (Neural Radiance Fields) in VR. One user inputs a set of images, and our system synthesizes and recreates the 3D model in VR.
total: 6 hours
Installed DinoVR and played for 20 mins. (Waiting for the class activity next Tuesday)
Solidify Project Ideas:
idea: Implement NeRF (Neural Radiance Fields) in VR. One user can input a set of images, our system synthesizes and recreates the 3D model in VR, and another user can then play with it
Things to do: 1)Implement the NeRF paper 2)Connect the NeRF output with VR 3)VR visualization of our result
Class Activity: One student can take a couple of photos of something they saw, and our system recreates it in VR so that other students can manipulate it.
Deliverables: VR Development Software -> Xcode -> NeuralNetworks. We are demonstrating how more complicated algorithms can be used in VR and how to connect neural networks with VR.
Metrics: 1)The efficiency of our process 2)How detailed the reconstructed result is
Software:Unity 3D/ Python/ Tensorflow
Data: input by the user and generated (recreated) by our system.
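As background on the "Implement the NeRF paper" step: the core of NeRF's image synthesis is volume rendering along camera rays, where each sample's contribution is weighted by its opacity and accumulated transmittance. A minimal numpy sketch of per-ray compositing, purely as an illustration (not part of any system here):

```python
import numpy as np

def composite(rgb, sigma, deltas):
    """Composite per-sample colors along one camera ray (NeRF-style).

    rgb:    (N, 3) sampled colors along the ray
    sigma:  (N,)   volume densities at the samples
    deltas: (N,)   distances between adjacent samples
    """
    alpha = 1.0 - np.exp(-sigma * deltas)  # opacity of each ray segment
    # T_i: probability the ray reaches sample i without being absorbed
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel color

# A ray passing through empty space, then hitting a solid red sample:
rgb = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
sigma = np.array([0.0, 1e9])   # empty, then effectively opaque
deltas = np.array([0.1, 0.1])
print(composite(rgb, sigma, deltas))  # -> [1. 0. 0.]
```

Training a NeRF amounts to fitting the network that produces `rgb` and `sigma` so that these composited pixels match the input photos.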
idea:Many great artworks have multiple layers. I want to separate them one by one to see the effects.
Things to do: 1)Find a dataset of works that contain multiple layers, with each layer's data 2)Separate the layers and create the Z-axis 3)Visualize the layers in 3D in VR
Class Activity: Students can play with the layers of the masterworks and recreate them. Including some (but not all) layers might reveal special perspectives on the artworks.
Deliverables: It can go under Applications of VR -> VR in Art History. (we don't have this subset yet)
Metrics: 1)Does this new method make it easier to view the artwork? 2)Can we still perceive the original work?
Data:Still finding it
idea: Visualize the change of RGB color in 3D space. For example, the origin represents RGB(0,0,0) and the far upper-right corner represents RGB(255,255,255).
Things to do: 1)Prepare the dataset with 256^3 (= 256x256x256) values 2)Convert the values to RGB colors in the VR headset 3)Allow user control over color changes
Class Activity: Students can play with the different colors and see their transitions. Furthermore, they can combine different colors and see the result (by simply adding the values)
Deliverables: VR visualization software: while we are not using any new software, we are exploring how to write simple algorithms that can be visualized (and adapt to user input)
Metrics:Can we clearly see the transition of colors?
Data:Can produce it myself. But need a good way to convert values into colors in VR.
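For producing the data myself: a quick numpy sketch of how the color-cube dataset could be generated (the function name and subsampling step are my own illustration). Note the full 256^3 grid is roughly 16.7 million points, far too many for a headset, so subsampling is realistic:

```python
import numpy as np

def rgb_cube(step=32):
    """Grid of points in a unit cube; each point's position IS its color.

    step=1 would give the full 256^3 grid (~16.7M points), so we
    subsample every `step` values per channel.
    """
    vals = np.arange(0, 256, step, dtype=np.float32)
    r, g, b = np.meshgrid(vals, vals, vals, indexing="ij")
    pts = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    return pts / 255.0  # positions in [0,1]^3 double as RGB colors

pts = rgb_cube(step=32)
print(pts.shape)  # (512, 3): an 8x8x8 subsampled cube
```

Each row can be fed to the headset as both a 3D position and a vertex color, which is exactly the "position equals color" mapping the idea describes.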
(Temple of Heaven, Beijing)
Screen recording of visiting the Temple of Heaven in Google Earth VR
Screen recording of visiting my home back in Beijing
total: 4 hours
Done playing with DinoVR.
Project (1 sentence): NeRF in VR
Deliverables (Wiki Contributions):
1)Create new page: Applications of VR -> VR in Machine Learning
Explicitly discuss how VR can render neural networks' results, and the data format required to use neural networks with VR.
2)Add to Unity page. VR Development Software -> Unity Photogrammetry (a plugin to visualize the Point Cloud datatype)
Explicitly discuss how to use Point Cloud as an input datatype in VR.
In Class Activity:
before class (estimated time: 10 mins for students, 3 hrs for me):
students can take various images of a single object from different perspectives and send them to Yuanbo. Yuanbo will use NeRF to recreate a 3D model in VR to be shown during class time.
In class (estimated time: 10 mins):
1)Students can compare whether the resulting 3D model in VR looks like the original object
2)Students can go inside the object, as predicted by the model. Since we cannot go inside the object in the real world, VR (might) help us imagine what the inside of the object looks like.
One user can take various images of an object, and another user can view the recreated result.
The data produced by NeRF should be in Point Cloud format, with each point available for further predictions.
1)Result authenticity. The students should compare the visualized result in VR with the original object in the real world, rating its authenticity from 1 to 5
2)Inside-the-object prediction. Students can also explore what is inside the object in VR. They should evaluate whether the result aligns with their expectations.
Point Cloud: the point cloud is a popular way of representing 3D models, and tutorials can be found on many websites. I'll use PyRender and its point-cloud export.
Unity3D: the official Unity tutorial, plus the Unity Point Cloud plugin
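Since the plan is to hand point-cloud data from Python to the Unity plugin, a minimal sketch of writing points to an ASCII PLY file may be useful; PLY is a standard format that point-cloud tools commonly read (the file name and random test data here are illustrative):

```python
import numpy as np

def write_ply(path, points, colors):
    """Write an ASCII PLY point cloud (float positions + 8-bit RGB)."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {int(r)} {int(g)} {int(b)}\n")

# Illustrative data: 100 random points, colored by position
pts = np.random.rand(100, 3).astype(np.float32)
cols = (pts * 255).astype(np.uint8)
write_ply("cloud.ply", pts, cols)
```

The resulting file can then be dropped into Unity's Assets folder for whichever point-cloud importer the project ends up using.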
1)Read NeRF paper and get familiar with it
2)Prepare the 3 mins presentation
1)Implement input image processing to be used on NeRF
1)Implement the fully-connected neural network for NeRF
2)Do tutorial on Unity
1)Optimize the training method; get a working NeRF running on the computer
2)Do tutorial on Unity Point Cloud Plugin
3)Use Pyrender to export NeRF produced data into Point Cloud
1)Use Unity Point Cloud Plugin to read point cloud data on computer
2)Make a working version of the system
1)Finish VR data visualization, prepare demos/ class activities
1)Contribute to wiki: connecting Neural Networks with VR
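One concrete piece of the "fully-connected neural network for NeRF" task in the plan above is the positional encoding applied to input coordinates before they reach the MLP. A small numpy sketch of that encoding, assuming the γ(p) formulation from the original NeRF paper:

```python
import numpy as np

def positional_encoding(p, L=10):
    """NeRF's gamma(p): map each coordinate to sin/cos at L frequencies.

    p: (..., 3) positions; returns (..., 3 * 2 * L) features.
    This lets a plain MLP represent high-frequency detail.
    """
    freqs = 2.0 ** np.arange(L) * np.pi     # 2^k * pi, k = 0..L-1
    angles = p[..., None] * freqs           # (..., 3, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)

x = np.zeros((1, 3))
print(positional_encoding(x, L=10).shape)  # (1, 60)
```

The 60-dimensional encoded position (plus an encoded view direction) is what the fully-connected network actually consumes, rather than the raw xyz values.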
total: 5 hours
total: 27 hours
Works on NeRF (15h)
Investigated three NeRF training methods:
a)Implementation by the original team of the NeRF paper
Among them, I chose Nvidia's Instant NeRF for the following reasons: 1)it can train a new model in seconds, compared to ~4 hours for the other models, which makes the class activity feasible 2)it has a GUI that allows viewing in VR
I would like to train NeRF on my own data. The input should be a video or a set of images. To this end, I explored several strategies:
a)using the COLMAP python package. However, as I tried it on three different virtual machines, COLMAP seems to have compatibility issues with their systems...
b)using Record3D, an iOS app. This app is very easy to use; users only need their phone.
Works on Data Type Transformation (2h)
Explored Blender following this guide
My goal is to export the NeRF-generated model to Blender and then transform it into point cloud data readable by Unity
Works on Unity (10h)
Learnt basic Unity tutorial on creating a VR game
Learnt Unity locomotion and continuous movements following these videos.
Learnt Unity Mesh usage following this tutorial
HW 2/23 - HW 3/7
total: 20 hours
Works on NeRF (2h)
Prepared data; trained 3D objects to be visualized in Unity; exported meshes
Works on Photogrammetry (5h)
Explore Apple Photogrammetry
Works on Unity (13h)
Create an art gallery in Unity to visualize results
Unity techniques including locomotion, mesh import, and 3D asset manipulation
Get the App here!
Tutorial for the App here!
HW 3/9 - HW 3/14
total: 13 hours
Testing on Texture Mapping (3h)
Learnt how to do texture mapping, so the meshes can have color :)
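Under the hood, texture mapping means each vertex carries a UV coordinate that indexes into the texture image. A tiny numpy sketch of nearest-neighbor UV lookup, illustrative only (Unity does this per-fragment on the GPU):

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbor lookup: uv in [0,1]^2 -> texel colors.

    texture: (H, W, 3) image; uv: (N, 2), with v measured upward
    (so v=1 maps to the top row of the image).
    """
    h, w, _ = texture.shape
    u = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    v = np.clip(((1 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return texture[v, u]

tex = np.zeros((2, 2, 3), dtype=np.uint8)
tex[0, 1] = [255, 0, 0]                  # red texel at the top-right
print(sample_texture(tex, np.array([[1.0, 1.0]])))  # looks up the red texel
```

Bilinear filtering and mipmapping refine this same lookup, which is why a mesh only "gets color" once its UVs line up with the texture.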
Contributing to Wiki (10h)
Writing 4 different wiki pages:
1)Introduction to recreating real world data in VR
2)Comparison between photogrammetry and NeRF
3)Tutorial on Nvidia Instant NeRF
4)Comparison between techniques to export mesh to VR
total: 5 hours
HW 3/21 - 3/23
total: 6 hours