Michael Colonna journal

Cumulative: 122 hours

Project 1 proposal can be found here.

Project 2 proposal can be found here.

Poster can be found here.

2/1 (3.5 hrs; cumulative 3.5 hours) – downloaded Paraview, followed Tutorial #1 and VesselVR (30 min). Read the History of VR and What is VR? articles from the wiki. Reviewed literature for VR in academia, in pop culture, and in education. Skimmed "Emotional activity in early immersive design" (2016) and read "What's Real about Virtual Reality?" (1999) (2 hrs). Researched online about data visualization in VR, and then later about education in VR (1 hr).

2/2 (2.5 hrs; cumulative 6 hours) – read over many of the site resources. Read "Effectiveness of virtual reality-based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis" (2014), a meta-analysis finding that VR-based instruction improves student comprehension (1 hr). Thought about and developed 3 project ideas with corresponding activities (1 hr). Read "Virtual Reality Learning Activities for Multimedia Students to Enhance Spatial Ability" (2018) and added it to the VR in Education articles page (30 min).

2/4 (2 hrs; cumulative 8 hours) – followed a basic Blender modeling tutorial (2 hrs).

2/6 (3.5 hrs; cumulative 11.5 hours) – researched geospatial datasets to potentially visualize using ParaView. Found this interesting oceanographic survey that measures the temperature and salinity of the world's oceans over various decades, by latitude and longitude. Investigated ways to bring this dataset into ParaView, discovered it uses the NetCDF file format, and gave up on importing the data (for now) (2 hrs). Spent time thinking about the project, researching various visualization programs, and writing a pre-project plan (1.5 hrs). Briefly considered how to tackle education in VR, especially with regard to machine learning and data science, before deciding that I am not qualified to teach this material. I am still a little unsure how to tackle the project, which I hope to bring up and clarify in class.

Tentative Project Titles:

  1. Analyzing the effectiveness of VR for comprehension of difficult school subjects

    • Activities:

      • Research design of effective learning environments in 3D space

      • Create basic learning environments in VR and corresponding traditional lessons

      • Collect data on effectiveness of each method – both immediately after lessons and a week after lessons

    • In-class:

      • Background survey, learning experience (VR vs. traditional), comprehension test

  2. Comparing the accessibility of VR visualization software using geospatial climate data

    • Activities:

      • Research and learn how to use OpenSpace and OmegaLib (two visualization packages that lack tutorials)

      • Create tutorials for each software using the same geospatial dataset

      • Collect data on effectiveness and accessibility of each program

    • In-class:

      • Background survey, two tutorials (one for each program), post-tutorial survey

  3. Comparing the immersiveness of VR HMDs

    • Activities:

      • Research and catalogue the differences in accessible HMDs (Vive, Oculus, Cardboard, Gear)

      • Design a study in which participants interact with the same virtual environment using different pieces of hardware

      • Develop a survey and collect data on the immersiveness of each piece of hardware

    • In-class:

      • Background survey, study with different pieces of hardware, post-study survey

2/9 (2 hrs; cumulative 13.5 hours) – researched, downloaded, configured, and built OpenSpace. Thought about what data I would like to display in this context and the VR capabilities of the software.

2/10 (4 hrs; cumulative 17.5 hours) – followed introductory OpenSpace tutorials. Investigated VR capabilities of OpenSpace. Spent time downloading and configuring OpenVR so that I can build OpenSpace with it.

2/11 (5.5 hrs; cumulative 23 hours) – wrestled with CMake and OpenSpace to enable VR with OpenVR – giving up (for now) so that I can continue with pre-project activities. Installed and attempted to build OmegaLib, but the severe lack of documentation and upkeep made this very frustrating. Subsequently began revisiting my project idea, considering how to proceed. Did research on web-based VR technologies. Brainstormed ideas for a new project involving technologies like WebVR, VTK.js, and React360.

2/13 (4 hrs; cumulative 27 hours) – followed the React 360 Hello World tutorial. Read the documentation for the software in-depth, taking notes in a journal as I read, attempting to understand all of the components offered. Added React 360 to the VR Development Software page – it's pretty bare bones right now, but I intend to flesh it out (e.g. with accessibility metrics) once I really dig deep and get my hands into the software. Beginning to piece together what React 360 is capable of, and how I might link it to VTK.js. Brainstormed project risks and considerations. Came up with a tentative project schedule with milestones and deliverables.

2/17 (1.5 hrs; cumulative 28.5 hours) – read up on VTK.js API. Started to read the VTK textbook and VTK user guide.

2/18 (3.5 hrs; cumulative 32 hours) – continued to read the VTK user guide to get a sense of how datasets are visualized and how I might best connect VTK.js with React 360. Determined that I may have to write a custom implementation of vtkOBJExporter to export files from VTK.js to .obj, so that I can upload them as Entities in React 360 (since VTK.js does not provide this exporter). Updated the Omegalib and OpenSpace pages with earlier findings. Created the VTK.js page under VTK. Created a draft project proposal (seen below). Created basic slides for Thursday presentation.
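A custom exporter would essentially need to serialize the mesh data into Wavefront .obj text. A minimal, dependency-free sketch of that serialization step (the function name and input shape are my own assumptions, not the vtk.js API):

```javascript
// Hypothetical sketch: turn an array of points and an array of triangle
// index triples into .obj text. Real polydata has more cell types; this
// only handles triangles.
function toObj(points, triangles) {
  const lines = [];
  // One "v x y z" record per point
  for (const [x, y, z] of points) {
    lines.push(`v ${x} ${y} ${z}`);
  }
  // .obj face indices are 1-based, hence the +1
  for (const [a, b, c] of triangles) {
    lines.push(`f ${a + 1} ${b + 1} ${c + 1}`);
  }
  return lines.join('\n') + '\n';
}

// Example: a single triangle
const obj = toObj(
  [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
  [[0, 1, 2]]
);
```

The resulting string could then be written to disk on the server and referenced as an Entity source in React 360.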

2/24 (3 hrs; cumulative 35 hours) – thoroughly designed application interactions, including client-server routing and data passing between client, routes, vtk modules, and back to the client to be displayed in VR. Investigated how to use VTK's vtkOBJExporter in this context, and designed a DataReader class that could implement this logic. Decided to support only .vtk files for now, for the sake of simplicity. Created the initial application framework using the express.js app generator, and installed the required dependencies, including react, react-360, and vtk.js, among many others.

2/26 (1.5 hrs; cumulative 36.5 hours) – set up the framework of the web application.

2/27 (4.5 hrs; cumulative 41 hours) – rebuilt the framework, using Express.js for server routing and Webpack for bundling. Got a simple page running with React-Bootstrap buttons, but when it came time to implement a button to launch the React 360 VR environment, I ran into a host of web development problems (mostly due to deprecated packages). Still stuck on one bug in which my code is not being transpiled properly by Webpack. Will continue to debug tomorrow.

2/28 (3 hrs; cumulative 44 hours) – after wrestling with Babel and Webpack for many hours trying to get my code to compile, I realized that I had to pre-compile the React 360 code separately before adding it to my web application. React 360 projects evidently rely on their own build step, since I could not get the code to compile through my standard pipeline. Here's a video of the UI now.

3/1 (1.5 hrs; cumulative 45.5 hours) – spent time messing around with Entities in React 360. Got a low poly model to show up in my React 360 Hello World app.

3/2 (3 hrs; cumulative 48.5 hours) – spent time implementing some basic interactions with Entities in React 360. Discovered that the 3D coordinates of the intersection between an Entity and the ray cast by the user are hidden from React 360 developers (for some reason). So I decided to spend time working on a panel of buttons with which to control the positions and rotations of Entities – currently a WIP.
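Since the intersection point is hidden, the panel buttons can only nudge the Entity in fixed steps. A sketch of the state-update logic behind such a panel (all names and step sizes are my own assumptions; in React 360 this would live in component state and be passed to the Entity's style/transform):

```javascript
// Hypothetical button-panel update logic. Each button press calls nudge()
// with the axis and direction it controls, and the result replaces the
// component's transform state.
const STEP = 0.1; // metres per position nudge (assumed)
const DEG = 15;   // degrees per rotation nudge (assumed)

function nudge(transform, kind, axis, direction) {
  // transform: { position: [x, y, z], rotation: [rx, ry, rz] }
  // Copy so the previous state stays immutable (React convention).
  const next = {
    position: [...transform.position],
    rotation: [...transform.rotation],
  };
  const i = { x: 0, y: 1, z: 2 }[axis];
  if (kind === 'position') {
    next.position[i] += direction * STEP;
  } else {
    // Keep rotations in [0, 360)
    next.rotation[i] = (next.rotation[i] + direction * DEG + 360) % 360;
  }
  return next;
}

// e.g. a "rotate +Y" button handler might call:
// this.setState({ transform: nudge(this.state.transform, 'rotation', 'y', +1) });
```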

3/4 (3 hrs; cumulative 51.5 hours) – created a tutorial for Visualizing and Interacting with a 3D Model in React 360.

3/6 (3.5 hrs; cumulative 55 hours) – spent time trying to understand how to share states between React 360 root components. Discovered this sample code that deals with this exact problem, which is what I will model my own application after. Afterward, I transitioned into vtk.js. Specifically, I attempted to confront the task of converting .vtk files to .obj files by writing my own implementation of the vtkOBJExporter. As expected, this has not been easy, especially since many of the functions and classes in vtk.js are not up-to-date with the latest library of VTK. Unless I hit an insurmountable wall, I will continue with this for the time being.
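The shared-state pattern boils down to a plain module that both root components import, with a subscribe/notify mechanism. A hedged sketch of that idea (names are my own, loosely modeled on the approach; this is not a React 360 API):

```javascript
// Hypothetical shared store module. Each React 360 root component would
// subscribe on mount and re-render when notified, so a UI panel in one
// root can change what another root displays.
const listeners = new Set();
let state = { modelPath: null };

function getState() {
  return state;
}

function setState(patch) {
  // Merge the patch and notify every subscriber with the new state.
  state = { ...state, ...patch };
  listeners.forEach((fn) => fn(state));
}

function subscribe(fn) {
  listeners.add(fn);
  // Return an unsubscribe function for componentWillUnmount.
  return () => listeners.delete(fn);
}
```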

3/10 (3 hrs; cumulative 58 hours) – created a basic tutorial for vtk.js, going over an example application and the basics of the VTK visualization pipeline.

3/11 (1.5 hrs; cumulative 59.5 hours) – as vtk.js may not be fleshed out enough to provide the exporting capabilities my app requires, I've transitioned to investigating alternative means of converting .vtk files to .obj files. This was an identified project risk, so it was not unexpected. I have pivoted to downloading and building the original VTK C++ library, with plans to call it from my Node server via N-API.

3/12 (1 hr; cumulative 60.5 hours) – cleaned up my vtk.js tutorial and created a feedback form. Created a system diagram for my application so that I can show it in class on Thursday.

3/13 (1 hr; cumulative 61.5 hours) - created slides for my project progress presentation.

3/18 (4 hrs; cumulative 65.5 hours) – created the server routes and managed to enable file passing between client-server. Next step: need to test opening an .obj file from the client. Finally: need to create the C++ script to convert my .vtk file to a .obj file and call it using N-API.

3/19 (1 hr; cumulative 66.5 hours) – successfully tested retrieving an .obj file located on my server and loading it into my pre-compiled React 360 environment (which in itself has no .obj file, only a filepath to that file). This paves the way for the C++ VTK script to create an .obj file, store it on the server, and have the virtual environment pick it up when it loads up.

3/21 class activity – Contributions to the Wiki:

3/23 (1 hr; cumulative 67.5 hours) – began getting acquainted with N-API. Had some trouble compiling the example code, since the build tooling only works with Python 2.7.

3/24 (3 hrs; cumulative 70.5 hours) – successfully compiled my project with additional C++ scripts. Right now, they're simply a "Hello World" output, but it should not be too hard to extend this script. Next steps: include the VTK library and call the right functions to process the .vtk file to an .obj file.

Week of 3/25 – 3/31 (3 hrs; cumulative 73.5 hours) – intermittently worked on the project over break. Wrestled with N-API and getting my project to compile successfully. N-API's build step requires that npm use Python 2.7, which was quite difficult to configure for my project.

4/1 (3 hrs; cumulative 76.5 hours) – began importing the VTK libraries into N-API. Unfortunately, N-API is extremely tricky to deal with. I fiddled with importing the built VTK libraries I need to begin working with the vtkOBJExporter, but my knowledge of C++ is limited as it is, and N-API only adds extra layers of weirdness to importing external libraries. I may have hit an insurmountable wall.

4/2 (2 hrs; cumulative 78.5 hours) – scrapped the C++ add-on entirely, as well as all libraries associated with it. Simply redirected my project to only accept .obj file uploads, which are then displayed in React 360 VR.

4/3 (1 hr; cumulative 80.5 hours) – prepared application, media, and slides for presentation.

4/4 (2 hrs; cumulative 82.5 hours) – spent time researching various topics, including web-based AR with AR.js. Decided that researching this library would be a nice complement to my previous project, and wrote up my second project proposal based around it.

4/5 – 4/11 (no hours) – hiatus for interviews.

4/12 (1.5 hrs; cumulative 84.5 hours) – began researching AR.js and its component libraries, A-Frame and ARToolKit. Read through the AR.js Medium blog to familiarize myself with the technology, and read some intro documentation for A-Frame (upon which AR.js is built).

4/14 (1 hr; cumulative 85.5 hours) – created a Glitch account and started messing around with AR.js. Built some silly Hello World apps.

4/15 (1.5 hrs; cumulative 87 hours) – created an AR.js wiki page. Will flesh out once I have more experience with the library.

4/16 (2 hrs; cumulative 89 hours) – created a basic web app with AR.js that uses a marker to display a simple .obj model (you can check it out on a mobile device here!). Found more WebAR libraries, in particular aframe-ar and aframe-xr. Reconfigured my project proposal to make it a comparison among these three libraries.

4/18 (2 hrs; cumulative 91 hours) – created a small web app with AR.js that allows a user to rotate an object around its y-axis in AR. You can see results here. And you can try it out for yourself here! (glitch.com is awesome!). Here's a link to the marker I used. Planning to make a tutorial based on this app.
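One way the rotation can be wired up is as an A-Frame component whose tick handler advances the entity's y rotation; the component name, speed, and helper function below are my own assumptions, not taken from the app:

```javascript
// Pure rotation step, separated out so the logic is visible without a browser.
function rotateY(rotation, degreesPerSecond, dtMs) {
  return {
    x: rotation.x,
    y: (rotation.y + degreesPerSecond * (dtMs / 1000)) % 360,
    z: rotation.z,
  };
}

// In the app this would run inside an A-Frame component's tick handler.
// Guarded so the sketch also loads outside a browser.
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('spin-y', {
    schema: { speed: { default: 45 } }, // degrees per second (assumed)
    tick(time, dtMs) {
      const current = this.el.getAttribute('rotation');
      this.el.setAttribute('rotation', rotateY(current, this.data.speed, dtMs));
    },
  });
}
```

Attaching `spin-y` to the marker's child entity (e.g. `<a-entity obj-model="..." spin-y>`) would then rotate the model about its y-axis; a user-controlled version would call `rotateY` from a touch handler instead of `tick`.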

4/19 (2.5 hrs; cumulative 93.5 hours) – created two tutorials for AR.js. The first tutorial sets up a simple AR.js app. The second tutorial implements event handling.

4/21 (1 hr; cumulative 94.5 hours) – created slides for Tuesday.

4/22 (1 hr; cumulative 95.5 hours) – did research on aframe-ar, and especially the technology it is built on – WebXR. Got a few of the examples running in Mozilla's experimental WebXR web browser app. You can see me access the app here, and play around with a more advanced example here. Unfortunately, plane tracking doesn't seem to be too powerful. Will be playing around more with this over the next week.

4/23 (2 hrs; cumulative 97.5 hours) – built and ran the WebARonARKit experimental browser from Google. Was able to run some test applications on the browser, including some aframe-ar applications. This helped me understand the power of aframe-ar (over aframe-xr) as it enables A-Frame AR support across these experimental browsers. I created a page for aframe-ar accordingly. I also completed the page for AR.js.

4/28 (4 hrs; cumulative 101.5 hours) – first, experimented with raycasting in aframe-ar, but stopped when I realized that raycasting in A-Frame involves static rays positioned on screen by the developer rather than rays cast from the camera origin. Next, experimented with simple event handling in aframe-ar, creating a simple script that changes an entity's color randomly when clicked. Tried this on WebXR and WebARonARKit and found that both had significant latency issues but were usable nonetheless. Afterward, I wrote a short tutorial on how to create this application for beginners in aframe-ar.
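The click-to-recolor script can be sketched as a small A-Frame component (the component name and helper are my own assumptions about how such a script would be structured):

```javascript
// Generate a random hex color string, which A-Frame materials accept.
function randomColor() {
  const hex = Math.floor(Math.random() * 0x1000000)
    .toString(16)
    .padStart(6, '0');
  return '#' + hex;
}

// Hypothetical component: listen for A-Frame's 'click' event on the entity
// and swap the material color. Guarded so the sketch also loads outside a
// browser.
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('recolor-on-click', {
    init() {
      this.el.addEventListener('click', () => {
        this.el.setAttribute('material', 'color', randomColor());
      });
    },
  });
}
```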

5/1 (4 hrs; cumulative 105.5 hours) – started working on two AR apps, at least one of which I hope to show for my presentation. One of them is an AR "business card" which simply displays images and links to various social media / GitHub. The other is an attempted data visualization of GeoJSON data using three.js, A-Frame, and d3.js.

5/2 (3.5 hrs; cumulative 109 hours) – after fiddling with my data visualization app for hours, I finally got my data to show up with AR.js! Check it out here. This is using a GeoJSON dataset, specifically this one.
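At the heart of the visualization is mapping each GeoJSON coordinate pair onto the flat plane above the marker. The real app uses d3.js for this; below is a dependency-free sketch of the same mapping step (scale, axes conventions, and names are my own assumptions):

```javascript
// Equirectangular-style mapping from GeoJSON [longitude, latitude] to a
// position on a marker-sized plane. GeoJSON stores coordinates as
// [lon, lat], with lon in [-180, 180] and lat in [-90, 90].
function geoToScene([lon, lat], widthMeters = 1) {
  const height = widthMeters / 2; // world map is twice as wide as tall
  return {
    x: (lon / 360) * widthMeters,  // east-west across the marker
    y: 0,                          // sits on the marker plane
    z: -(lat / 180) * height,      // north points away from the viewer
  };
}

// e.g. each feature's coordinates would become an entity position:
// const p = geoToScene(feature.geometry.coordinates);
// entity.setAttribute('position', p);
```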

5/4 (2 hrs; cumulative 111 hours) – created a comparison page between AR.js and A-Frame AR, using my findings from developing the two Hello World applications.

5/5 (3 hrs; cumulative 114 hours) – spent time refactoring my visualization code to make it more generalizable to different datasets. Imported a new dataset, figured out its proper visualization parameters. Then, created a simple visualization control system that allows me to swap between the two datasets.

5/6 (3 hrs; cumulative 117 hours) – created slides for presentation. Finished filling out the A-Frame AR page. Spent a long time trying to implement raycasting functionality in my data viz app. Unfortunately, I was only able to achieve poor, buggy results on desktop, which did not bode well for mobile, so I decided to abandon that functionality.

5/8 (3 hrs; cumulative 120 hours) – created a poster for Friday's presentation. Also, polished the GeoJSON data visualization app to have a usable mobile interface that allows users to switch between the two datasets easily.

5/9 (2 hrs; cumulative 122 hours) – picked up poster board, adjusted my poster file so it fits a 36x48 board, printed poster.

Contributions to the Wiki (2nd half of semester):

CSCI1951T Project 1 Proposal