Semester Learning Goals
| Before | After | Learning Goal |
| --- | --- | --- |
| 2 | 4 | Goal 1: articulate AR/VR visualization software tool goals, requirements, and capabilities |
| 1 | 4 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research |
| 2 | 4 | Goal 3: execute tool evaluation strategies |
| 1 | 3 | Goal 4: build visualization software packages |
| 2 | 5 | Goal 5: comparatively analyze software tools based on evaluation |
| 2 | 4 | Goal 6: be familiar with a number of AR/VR software tools and hardware |
| 3 | 5 | Goal 7: think critically about software |
| 3 | 5 | Goal 8: communicate ideas more clearly |
| 1 | 4 | Goal 9: be able to convey effective interactive data stories with AR/VR visualizations |
Project 1 Proposal - Google Doc
Presentation for Project 1 Proposal - Slides
End Presentation for Project 1 - Slides
Project 2 Proposal - Google Doc
Presentation for Project 2 Proposal - Slides
Project 2 Progress - Slides
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
Project 2 Progress Slides
CONTRIBUTION 1: VR in Architecture/Urban Planning (created)
Created a page for VR and AR applications in architecture and urban planning to highlight diverse software and uses of AR within this field. Compiled research on Agora World (a software tool for visualizing physical sites and buildings + their data in VR).
CONTRIBUTION 2: Scientific Data (added to)
Added water resource data that I researched and intend to use for Project 1 (several datasets showing a range of water data, from water availability to water withdrawal by country and year). Also added earthquake datasets that I researched and used for Project 2 (showing earthquake depth, magnitude, error, location, time, measurement metric, seismographs, and audio).
CONTRIBUTION 3: AR Software for Fluid Simulations (created)
Compiled research for different software to build fluid simulations, particularly those incorporating time-series fluid data, for use in AR environments. Added evaluation criteria for both fluid visualization software, and fluid visualizations themselves (to be used for any project incorporating fluids in AR). Added a comparison table for the researched fluid visualization software.
CONTRIBUTION 4: Quest Controller Input Mapping for Unity - Tutorial and Overview (created)
Tutorial on how to incorporate Quest controller button mapping into Unity projects. Explained how controller mapping works and the different types of controller mapping, and gave a step-by-step tutorial on how to build a simple Unity project with objects that interact with controller input (i.e. button clicks). Also included links to resources for further exploration of more complex and customized controller input mapping (incorporating scripts).
CONTRIBUTION 5: AR for Climate Awareness and Sustainability (created)
Page for different applications of AR and VR technology in climate awareness and sustainability. Compiled research, images, videos, and links for examples of AR applications in a range of related fields, including environmental education and awareness, virtual field trips, wildlife conservation, sustainable fashion, immersive environmental storytelling, and disaster management.
CONTRIBUTION 6: Unity Water/Fluid Data Visualization Tutorial (created)
Tutorial on how to build a water or fluid visualization in Unity. Added explanations of what shaders and materials are in Unity and how they can be used to build water-like textures, appearances, and movements for particular objects. Covered steps on how to add custom materials to objects in Unity, and adjust material settings / underlying shaders to meet specifications. Also added resources for creating custom water shaders (from scratch) using shader graphs for more control and precision. Added a section on data-driven transformations, what they may be used for, and how they can be incorporated into water/fluid visualizations in Unity via C# scripts. Provided an example script for reference.
CONTRIBUTION 7: Unity Timeline for Simple Animations Tutorial (created)
Tutorial on using Unity’s Timeline for simple animations. Added step-by-step instructions on creating and editing animations using the Timeline window. Covered how to add keyframes, adjust animation properties, and control object movement without scripting. Included images for clarity and links to further resources for experimentation, as well as related Unity documentation and tutorials.
CONTRIBUTION 8: Haptic Integration Software Comparison (created)
Compared different software for integrating haptics into AR/VR projects: Unity XR, Meta Haptics Studio, A-Frame haptics, and Unreal Engine haptics. Made a comparative table for each of the 4 software packages on a range of 8 criteria: learning curve, haptic feature depth, cross-platform integration, customizability, real-time modulation, ease of prototyping, documentation and tutorials, and software maintenance/activity. Gave each software a score from 1-5 on each of these criteria, with researched justifications in the table for reference. Listed pros/cons of development in each.
CONTRIBUTION 9: Meta Haptics Studio Interactions Tutorial (created)
Tutorial on Meta Haptics Studio, covering steps to download the software, pair the headset, analyze audio, and create/export custom haptics. Added a detailed walkthrough of a sample haptics project on footsteps, walking through different settings and how they can be modified for specific audio clips (i.e. concrete vs grass vs metal). Added additional resources and links to tutorials for reference.
CONTRIBUTION 10: Haptics Applications and Technologies in AR/VR (created)
List of different types of haptics software, with examples and links for each, an explanation of how they work, and their uses + accompanying videos and resources to learn more about them. Included research on controller-based haptics, mid-air haptics, and wearable haptics. Outlined different applications for each, including in space training, medical training, social education, remote collaboration, and industrial design. Included videos and links with examples.
CONTRIBUTION 11: Mapbox Sample Scenes in Unity (created)
Page outlining how to set up Mapbox and steps to integrate Mapbox with Unity via Mapbox SDK. Listed steps for debugging and navigating errors related to outdated Mapbox components that are no longer compatible with Unity (such as the AR toolkit). Also gave a walkthrough of opening sample Mapbox scenes, configuring the map settings, and playing around with different options provided by the Mapbox SDK. Included links to further resources, tutorials, and documentation for additional guidance and help
Area I grew up in
Dorm at Brown
Place of significance (my high school in Delhi)
Area I grew up in
Dorm at Brown
Place of significance (my high school in Delhi)
Last landmark to current location (dorm)
Last landmark to current location (dorm)
Total: 134.5 hours (Updated 4/23/25)
Created journal
Joined slack group and completed self-introduction
Read biography of Kennedy Gruchalla. Glanced over some of his research and read paper abstracts. Listed questions to ask him in the activity document
Read background papers:
SpatialTouch: Exploring Spatial Data Visualizations in Cross-Reality
Augmented Virtual Reality in Data Visualization
Researched Agora World
Homework Exercise:
Three changes should each require ~10 minutes to complete.
Add description of more related "-reality" terms in the AR/VR comparison page— such as MR (mixed reality) and XR (extended reality)
Adding links/images to the products listed in the Low Budget VR Headsets page
Including city bike (e-bike) data as sample data for VR data in the Scientific Data page
Three changes should each require ~1 hour to complete.
Including sub-sections on "VR in Music", and "VR in Performance Art" in the VR in The Performing Arts page -- i.e. finding and attaching some articles
Including a section in Scientific Data describing where and how to find good data (i.e. how to know if a dataset is appropriate to use) for projects
Including a section on prominent VR installations globally
The final three changes should each require ~10 hours to complete.
Uses of VR in disaster preparedness and emergency management, in the "What is VR for?" Literature Review -- requires finding, reading, and linking multiple related research papers (as there are quite a few)
Creating a page on VR Simulation Software (e.g. for aviation simulations, driving simulations, medical simulations)
Creating an "Opinion" or "Debate" page where people can each write differing viewpoints/perspectives on VR/AR, backed up with data, research papers, etc -- kind of like a discussion page for contentious VR topics (ethics, accessibility)
Complete one of the 10-minute changes.
Completed second change (adding useful links)
Created page for VR in Architecture/Urban Planning, compiled research on Agora World
Completed set up of Meta Quest 3 and connected to paperspace machine
Read through 3 previous projects
Potential pieces of software to explore and evaluate for my research project
Unity
Paraview
Fieldview
ReSpo Vision
Potential Project Ideas
Create 3D visualizations to represent traffic data on maps and map-like views— could be tailored for a specific vehicle, like e-bikes, and represent denser dropoff/pickup locations to make it easier to find and drop bikes. Could visualize energy usage and emission data. E-bikes in particular have vast datasets online (eg: https://www.kaggle.com/datasets/hassanabsar/nyc-citi-bike-ride-share-system-data-2023). Compare different software suited for this task
Compare and contrast different AR/VR software for responsiveness to external stimuli, like lighting, sound, or non-user movement (e.g. pets) — for example, an app that interacts with external light sources to change the user's view would need strong responsiveness, so it could be useful to evaluate which software would best allow for these functionalities in different domains
Represent drought and water inequality data in 3D to give visual references to the lack of water resources available in drought-prone areas. Could be coupled with visualizing data to represent changes in climate, increases in temperature, and frequency of droughts over time.
Completed Google Earth VR/Web Activity - added Google Earth VR and Google Earth Web sections to journal
Completed Google Earth VR vs Web form
AFrame Development Lab
Bezi Lab -- building my own Bezi scene
Installed DinoVR
Read DinoVR paper
Questions:
It was noted that most of the participants were native English language learners and the remaining non-native learners still had a strong proficiency in English. Is it likely that the results may have been different for the same experiment conducted in a different language, or with people with varying proficiencies?
It was also noted that "the reading of numerical data might lead to different outcomes" — I wonder if the outcomes would also differ for other types of reading? Such as close or analytical reading (that may be done in a History or English class), versus scientific reading (like a scientific paper), reading code from a particular programming language, or reading of signage on streets/buildings, etc.
Do other factors, like accompanying animations, images, graphics, or textual alignment, also have an impact on reading speed and feasibility in VR?
Thoughts
The study makes me think about how much of our reading ability is shaped by the physical constraints of 2D text on paper or screens, and how those expectations shift in immersive environments.
The findings also make me wonder about if trade-offs exist in textual design, and when they are (or should be) made. Should we prioritize aesthetics and depth at the cost of cognitive ease, or flatten the experience and stick to a particular font size / layout / paneling to make reading faster? The balance between strain and immersion in this case can be interesting to think about.
The results also suggest that denser visual environments with occlusion slow reading, which makes sense, but I wonder if there's a threshold where complexity could actually enhance engagement? Couldn't some dense visual features help draw and focus attention rather than just act as "occlusions"?
Brainstorm software evaluation metrics
Movement tracking accuracy
Rendering quality
Latency -- delay/lag between user (or external) actions and the software's response
Shadow and lighting consistency for rendered objects (i.e. how realistic/integrated it is in passthrough)
Naturalness/intuitiveness of the interactions (i.e. how "natural" does a particular interaction feel when using this software -- this is more subjective)
Means and extent of collaboration (in its features)
Power consumption
Ease of use + understanding for a novice learner
Refining project ideas:
Idea #1: Vehicle traffic and emission visualization
What I will do:
Building a visualization prototype of an application that overlays AR emission data when pointed at different vehicle types (cars, e-bikes, motorcycles, etc.)
Building a visualization representing the real-life size of emissions (maybe in terms of kilos of coal burnt) for different vehicles -- could be interactive with distance (i.e. lump of coal grows as distance increases)
Evaluating different software for visualizations with dynamically user-controlled inputs (like distance and time)
Class activity:
Each student interacts with a visualization that allows them to choose a vehicle type, distance travelled, and visualization method (like lumps of coal, or cubic meters of CO2) to represent emissions for a given trip -- and actually having that physical comparison show up life-size in AR
Potential Deliverables:
Comparative table evaluating different AR software for visualizing dynamic (i.e. actively changeable by the user) inputs
Wiki page on AR for vehicular emission applications
Idea #2: Comparing AR software for responsiveness to external stimuli
What I will do:
Test a range of AR software/applications designed to respond to changes in surroundings
Document how effective each software is on a range of responsiveness stimuli to come up with a conclusive ranking / report on what each is good and bad at
Build an interactive visualization in each software that engages with stimuli in the external world (like lighting, sound, movement)
Class activity:
Building an AR experience where the class can make changes in their physical space to trigger changes in the AR environment (make loud noises to power an AR light, for example, or switch off the light to change the color palette of the scene) -- test this with different software and ask them which worked best/why
Potential Deliverables:
Comparative table evaluating different AR software on different responsiveness metrics (such as reaction time for light changes, sound changes, tactile changes, etc)
Tutorial on how to use a particular software and build a feature that responds to stimuli of your choice
Wiki page comparing features of the best AR software for light/audio detection
Idea #3: Visualizing water-availability data in areas with limited water access / drought-prone areas (over time)
What I will do:
Build a visualization representing water availability per household in water-scarce areas, showing it in a tub in AR (or taking up the entire room), and showing it change (rise/fall) over time as water availability changes
Implement different water-visualization methods, including water-level projections, drought-impact heatmaps, and water footprints for different activities (showering, flushing, cooking, etc.)
Evaluate different AR software for simulating water-flow and movement
Class activity:
Interactive activity where students choose from a list of water-scarce areas and see the room fill up with the total amount of water available to a household in that area. Students can then choose to compare with similar visualizations of how much water daily activities like showering may take.
Potential Deliverables:
AR prototype showcasing real-time and historical water data overlays
Tutorial on creating water simulations in a particular AR software
Wiki page on AR applications for climate awareness and sustainability
Completed portions of Dino VR activity that I was unable to complete in class due to technical difficulties
Completed Dino VR Feedback Form
Researched potential datasets for the project I am most leaning to right now (Idea #3)
World Bank Data (from 1992 to 2014) - Renewable internal freshwater resources per capita
Worldometers - Water Use by Country (probably not as reliable)
Water Footprint Calculator - Water Footprint Comparisons by Country
FAO (Food and Agriculture Organization of the UN) Aquastat Data - range of very comprehensive water-use data by country and year
Better summarized by the World Population Review
OECD - Water Withdrawal Data (per capita)
Existing 2D Visualizations of the above data (and more)
Our World in Data (most recent revision in 2024) - Water Use and Stress
Our World in Data (most recent revision in 2024) - Clean Water
WorldMapper - No Water Access Per Capita
Essential Need - World Water Data for Safe Drinking
Read and gave feedback + comments on my classmates' (Vishaka and Connor) project ideas in the Activity Doc
Refined a project plan for my chosen idea (Idea #3) — created project proposal (refer to proposals above)
Refined project proposal based on classmate feedback (being more specific in terms of deliverables and additions to the wiki)
Reformatted project proposal to be in a separate document (pdf), rather than in different scattered places across my journal
Added Dino images to class activity document
Researched AR software for water simulations based on time-series data
Compiled research and comparisons into a new wiki page - AR Software for Fluid Simulations
Refined project 1 proposal based on feedback from peers + Professor Laidlaw
Created a standalone Project 1 page (linked at the top and here too)
Completed proposal presentation
Self-evaluation of project:
Rubric scale: strongly disagree / disagree / neither agree nor disagree / agree / strongly agree
The proposed project clearly identifies deliverable additions to our VR Software Wiki -- 4 (deliverables listed in timeline on project page)
The proposed project involves passthrough or “augmented” in VR -- 4 (list of ideas on project page for using AR/passthrough for data viz)
The proposed project involves large scientific data visualization along the lines of the "Scientific Data" wiki page and identifies the specific data type and software that it will use -- 5 (list of scientific datasets to focus on for this project on project page + proposal slides and document)
The proposed project has a realistic schedule with explicit and measurable milestones at least each week and mostly every class -- 4 (detailed schedule on project page + proposal slides and document)
The proposed project explicitly evaluates VR software, preferably in comparison to related software -- 4 (evaluation of AR/2D data visualization via class activity, and evaluation of different software for building fluid simulations to use in AR)
The proposed project includes an in-class activity, which can be formative (early in the project) or evaluative (later in the project) -- 4 (class project activity described on project page + proposal document/slides)
The proposed project has resources available with sufficient documentation -- 4 (resources and software listed on project page)
Updates to the wiki page I previously made: AR Software for Fluid Simulations
Completed software evaluation criteria and research for fluid visualizations
Completed tabular software comparison (will guide my own software choices for my project, but can also be useful for other projects with fluid visualizations)
Activities logging rubric evaluation
key for each criterion:
5 == exceeds expectations
4 == meets expectations
3 == mostly solid with some gaps
2 == shows some progress
1 == some attempt
0 == not found
Criteria:
Journal activities are explicitly and clearly related to course deliverables - 4
deliverables are described and attributed in wiki - 4
report states total amount of time - 4
total time is appropriate - 4
Developed assessment criteria and a usability target for fluid visualizations, particularly to evaluate the final visualization I will produce in Project 1, but also to evaluate existing fluid visualizations and guide choices in developing new ones (beyond just this project). Attached in:
Updated in AR Software for Fluid Simulations
Also updated in Aarav Kumar Project 1
Based on the assessment criteria and usability targets for my visualization, and also based on my software comparison research for fluid simulations, I decided to use Unity with the Meta XR SDK to build and deploy my water-level fluid visualization
Installed and set up Unity account to begin prototyping
Set up Unity project for XR development (installed the Meta XR Core SDK and XR Plugin Management, and configured the project for Android) and set up my Meta Quest device and developer account to allow for Unity app development
Researched tutorials for building water (or other fluid) simulations in Unity - began prototyping a simple water pool.
Continued refining fluid simulation prototype with guidance from tutorials on water texture and effects for Unity's custom water pools
Wrote a C# script to read CSV data on water levels in various countries over time and dynamically change the water level of my pool object accordingly (a rough sketch of this approach is included at the end of these prototyping notes)
Continued testing and debugging my script till water levels were changing for any inputted country from the dataset
Continued refining water movement and fluid dynamics for a more real and interactive appearance
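For reference, a minimal sketch of the kind of script this was; the CSV layout, field order, and names here are placeholders rather than my exact code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: read water-level rows for one country from a CSV TextAsset and
// step the height of a water object through them over time.
public class WaterLevelFromCsv : MonoBehaviour
{
    public TextAsset csvFile;         // CSV rows assumed as: country,year,level
    public Transform waterObject;     // pool/cube whose Y scale encodes the level
    public string country = "India";  // placeholder country name
    public float secondsPerYear = 1f; // playback speed

    private readonly List<float> levels = new List<float>();
    private float timer;
    private int index;

    void Start()
    {
        foreach (string line in csvFile.text.Split('\n'))
        {
            string[] cols = line.Split(',');
            if (cols.Length >= 3 && cols[0].Trim() == country &&
                float.TryParse(cols[2], out float level))
            {
                levels.Add(level); // assumed pre-normalized to scene units
            }
        }
    }

    void Update()
    {
        if (levels.Count == 0) return;
        timer += Time.deltaTime;
        if (timer >= secondsPerYear)
        {
            timer = 0f;
            index = (index + 1) % levels.Count; // advance to the next year
        }
        Vector3 s = waterObject.localScale;
        s.y = Mathf.Lerp(s.y, levels[index], Time.deltaTime); // smooth rise/fall
        waterObject.localScale = s;
    }
}
```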
Paraview and Ben's Volume Rendering installation and setup
When trying to deploy my Unity project to my headset, I realized I had made a crucial error in the render pipeline I was using. As it turns out, I had accidentally created my project on the High Definition Render Pipeline (HDRP), which is incompatible with Android devices like the Meta Quest headset. To avoid having to start all over again, I began researching ways to keep my current visualization and render pipeline and use AR render streaming to display the Unity scene on my headset, as illustrated in the tutorial below:
However, after further research I found that it was mostly recommended to just switch the render pipeline to prevent rendering issues down the line, so I ultimately decided to restart my project on the Universal Render Pipeline (URP), confirming that it is compatible with Meta Quest 3. This set me back on my milestones quite a bit, but it was a necessary pivot for my visualization to deploy to the headset correctly.
In URP, Unity's custom water systems and pools do not work, so I began researching ways to apply realistic water effects to existing GameObjects using shaders and Unity's Shader Graph. Rather than making my own shader, I found two water texture shaders on the Unity Asset Store, with linked tutorials on how to customize and implement them
Water Shader 1: https://assetstore.unity.com/packages/2d/textures-materials/water/simple-water-shader-urp-191449
Video Tutorial / Demo: https://www.youtube.com/watch?v=mWg4CE6ybKE
Water Shader 2: https://assetstore.unity.com/packages/vfx/shaders/urp-stylized-water-shader-proto-series-187485
Video Tutorial / Demo: https://www.youtube.com/watch?v=fHuN7WkrmsI&t=1s
I tested both shaders and tutorials to see which would work better. The material of the first shader was rendering very unnaturally for me, despite trying multiple troubleshooting tips and following the tutorial, so I decided to go with the second one.
The water shader I selected (the second one) was more stylized than I was aiming for, so I manually refined its settings and underlying shader graph for a more realistic appearance, referencing the tutorial for guidance.
Completed the Apple Unity Lab with Sofia and Jakobi in the VCL
Continued refining the shader graph settings to match my assessment criteria and usability target for my visualization (explained in my project page).
Repurposed my older C# script (from the HDRP project) with changes to object references to control water level changes (based on CSV data).
Began adding UI elements (slider and text) to display the year-by-year fluctuation of water levels (I wanted a slider moving to represent changes in time, and the text changing to reflect the year the data is from)
Incorporated the UI elements into my C# script to allow dynamic updates to the slider and text as the water level changes. Followed tutorials for implementing dynamic updates to text and slider elements
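A rough sketch of how the slider and year label get updated from the script (assuming a UI Slider and a TextMeshPro label; the names and year range are placeholders):

```csharp
using TMPro;
using UnityEngine;
using UnityEngine.UI;

// Sketch: keep a year slider and label in sync with the data index
// currently shown by the water-level animation.
public class YearUI : MonoBehaviour
{
    public Slider yearSlider;     // UI slider spanning the data range
    public TMP_Text yearLabel;    // label showing the current year
    public int startYear = 1992;  // placeholder dataset range
    public int endYear = 2014;

    // Called by the water-level controller each time it advances a year.
    public void ShowYear(int yearIndex)
    {
        yearSlider.minValue = 0;
        yearSlider.maxValue = endYear - startYear;
        yearSlider.value = yearIndex;
        yearLabel.text = (startYear + yearIndex).ToString();
    }
}
```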
Created presentation to update class on project 1 progress. Presentation linked here, as well as on project page.
Incorporated Meta Building blocks to enable AR passthrough and AR controller detection when deploying my scene.
Followed tutorial for deploying my Unity app into my headset: https://developers.meta.com/horizon/documentation/unity/unity-env-device-setup/
I found that my visualization was floating in mid-air and not anchoring to anything, which was a problem, as I designed it with the assumption that people would be able to see it scaled on the floor and walk around it to relate to the volume in real space. So, I began researching and experimenting with ways to anchor my visualization.
To anchor my visualization, I tried using spatial anchors, but was unable to get them to work despite following some tutorials and troubleshooting tips. I also tried building custom action items to allow the controllers to interact with the visualization itself (to scale and transform it in AR); however, this also didn't work for me after experimenting for a long time.
To test another avenue for anchoring, I decided to pivot to using Unity XR's "Grabbable" item, to allow users to grab on to the visualization and anchor it themselves. I nested the visualization under the grabbable item, and after playing around a bit, found that configuring the settings of the Meta Controller building block so that it selects the grabbable item by default allowed for the visualization to anchor to the controller. Placing the controller on the floor (or wherever I wanted) then also moved the visualization with it, allowing me to view it and walk around it as expected. However, this setting scaled up my visualization dramatically, making it almost too big to even see.
Simply altering the "scale" property of my visualization in the inspector did not fix my problem, as this changed the way that the water object was changing in position and height to represent shifts in water levels. I realized this was because my script to control the water level relied on global positioning, not the local positioning of the entire visualization parent object, so reconfiguring the script to consider local coordinate transformations and adjusting the settings of the water object's position fixed this issue.
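In sketch form, the fix amounts to writing the water height in the parent's local space rather than in world coordinates (simplified; my actual script also handles the year stepping):

```csharp
using UnityEngine;

// Sketch of the fix: drive the water height in the parent's local space so
// that grabbing or rescaling the whole visualization no longer distorts it.
public class LocalWaterLevel : MonoBehaviour
{
    public Transform waterObject;     // child cube representing the water
    public float maxLocalHeight = 1f; // full-pool height in parent-local units

    public void SetWaterLevel(float normalizedLevel) // 0..1 from the dataset
    {
        Vector3 scale = waterObject.localScale;
        scale.y = normalizedLevel * maxLocalHeight;
        waterObject.localScale = scale;

        Vector3 pos = waterObject.localPosition;
        pos.y = scale.y * 0.5f; // keep the cube's base resting on the pool floor
        waterObject.localPosition = pos;
    }
}
```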
Following this, I tried adding further action items to link to the controllers, allowing them to interact with the canvas elements such as the text and the slider. However, I wasn't able to figure out how to make this work, as my controllers didn't seem to respond to any button I programmed to alter the visualization. I will likely go to Wednesday's office hours with Melvin and Jakobi.
To add an even better sense of scale to the visualization, I decided to add a relatable object to represent the correct sizing of what the visualization should look like, preparing for a case where I run into scaling errors in AR again. I chose to use a scaled chair for this, as this would give a good relative sense of scale to the viewer to suggest how big the visualization actually should be, in the case where it is not the expected size. I found a chair asset on the Unity Asset Store that worked effectively for my model:
I also realized that the app I deployed each time on my headset from Unity seemed to always deploy in the "edit" mode, not the "player" mode, in which the water level changes could actually be seen (as the script for that is only activated on play). I tried various troubleshooting tips to get my app to deploy and run in player mode, but was unsuccessful.
After going to office hours and seeking help from Melvin, I realized that the issue I was facing with my script not running in my deployed app was that it was reading CSV data, which I guess was stored locally and not as part of the scene I was deploying — as a result, because there was no CSV to read in the app, the water level stayed constant. So, I changed my script such that the water data is hardcoded into it directly, and this fixed the issue.
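In other words, something along these lines, with the real dataset values in place of the placeholder numbers below:

```csharp
using System.Collections.Generic;

// Sketch of the workaround: the per-year values live directly in the code,
// so nothing needs to be read from disk inside the built app.
// The numbers below are placeholders, not my actual dataset.
public static class HardcodedWaterData
{
    public static readonly Dictionary<string, float[]> LevelsByCountry =
        new Dictionary<string, float[]>
        {
            { "India",  new float[] { 0.82f, 0.79f, 0.75f, 0.71f } },
            { "Brazil", new float[] { 0.95f, 0.94f, 0.93f, 0.92f } },
        };
}
```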
Continued refining Unity AR model and worked on enabling controller tracking in my app — which had strangely stopped working despite me having been able to enable it earlier. I really struggled with this, especially due to the lack of documentation available on the Meta Building Blocks in Unity. After playing around with the settings for a while and rebuilding relevant parts of my scene (like the controller anchors and camera rig), I was able to figure out how to get both hand and controller tracking to work.
Incorporated ray-cast elements to allow users to point at the water visualization with their controllers and hands
Incorporated an input field text box in the UI canvas (to allow users to input the country name for which they want to see the water data) — just a static object at this point, but the goal is to make this interactive and allow users to type into the field with a virtual keyboard.
Tried (and failed continuously) to incorporate a virtual keyboard that would allow users to type into the text box to specify a country. I was trying to do this with the Virtual Keyboard building block provided by Meta in Unity, but as before, the lack of documentation on its features made it very difficult to navigate and make the keyboard functional. The limited documentation present on the Meta Horizon website (ex1, ex2) seemed to be outdated, as some of the features it described were missing or had different names than on my system, and my virtual keyboard did not appear in my scene like in their example. Additionally, given the newness of the feature, there were no helpful YouTube tutorials explaining the Virtual Keyboard, nor any resolved discussion posts about it on the Unity forums.
Tried enabling a button mapper for my controllers to select the input field with a button (or trigger) and get the system keyboard to show up (rather than having to add my own virtual keyboard), but this also didn't seem to work, and the input field on its own was largely un-interactive.
Having given up on using the virtual keyboard at this point, I decided to explore other avenues of getting the regular system keyboard to show up (without necessarily having to map the keyboard visibility to a button, as I was trying before). With some research, I was able to build a script that enables and displays the system keyboard on app start, and links it to the input field text. Thankfully, this seemed to work, and with some further tweaking of the script, I was able to reflect changes in the keyboard (i.e. what is being typed) in the input field as well, allowing users to interact with it.
Worked on integrating the text in the input field into my water level controller script, so that the water visualization can change dynamically based on the input that is typed in by the user.
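A simplified sketch of the keyboard approach, assuming the rig's system-keyboard option (I believe it is the "Requires System Keyboard" checkbox on OVRManager) is enabled and a TextMeshPro input field holds the country name; the names here are placeholders:

```csharp
using TMPro;
using UnityEngine;

// Sketch: open the Quest system keyboard on app start and mirror what is
// typed into the country-name input field.
public class CountryKeyboard : MonoBehaviour
{
    public TMP_InputField countryField;   // country-name input box in the UI canvas
    private TouchScreenKeyboard keyboard;

    void Start()
    {
        // Pop the system keyboard as soon as the app starts.
        keyboard = TouchScreenKeyboard.Open("", TouchScreenKeyboardType.Default);
    }

    void Update()
    {
        if (keyboard == null) return;

        // Mirror whatever is typed on the system keyboard into the input field.
        countryField.text = keyboard.text;

        if (keyboard.status == TouchScreenKeyboard.Status.Done)
        {
            // Hand the finished country name to the water-level controller here.
            keyboard = null;
        }
    }
}
```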
Removed redundant or unused assets, scripts, and objects cluttering my scene that had been collected over time as I was playing around with different features that I didn't end up using (or that didn't end up working).
Debugged script for slider updates, which had broken since I enabled the text input field to interact with the water level script.
Rearranged UI elements to prevent blocking user's view of the actual water visualization.
Enabled re-centering of the visualization (i.e. when Meta button is pressed on the controllers).
Updated project page with a list of resources and tutorials I used for development and debugging help
Worked on refining the final elements of my visualization for the class activity (making sure the water color and texture looked right, disabling unused scripts, etc.). Unfortunately, while trying to refine my visualization and remove redundant elements, at the very last step I removed a "Grab Interactable" element that I didn't realize was critical to my scene. As soon as I did this, my entire visualization broke: the water cube now appeared as an overlay on my headset, rather than as an object in passthrough. Because Unity has no built-in version history for saved scenes (and I hadn't set up external version control), I couldn't undo the change once it was saved, even though I never closed the project window. As a result, I couldn't go back to my old version, and trying to build another Grab Interactable element simply was not working.
I tried other ways of potentially fixing the issue — fixing any console errors that were showing up, even if they didn't seem relevant, rebuilding the camera rig and passthrough elements, resetting my XR settings on Unity, restarting Unity and my headset to see if the problem was with them.
None of these were working, so I decided to try going back to my old approach of having my visualization be anchored to one of my controllers. I attempted to follow some YouTube tutorials on spatial anchoring, but this also failed to work with my setup, as my controller button mapper was different to what each tutorial was displaying (probably because of an update) and I was only able to anchor small objects in place, not my entire visualization (which was very large by comparison).
I also attempted raycasting to get my visualization to anchor to the ground (via a C# script on the camera), but this also did not work.
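For reference, this is roughly the ray-cast placement I was attempting (a sketch, not my exact script); in hindsight it probably failed because there was no collider for the real floor in passthrough, so the ray had nothing to hit:

```csharp
using UnityEngine;

// Sketch of the ground-anchoring attempt: cast a ray straight down from the
// headset camera and move the visualization's root to the hit point.
public class AnchorToGround : MonoBehaviour
{
    public Transform visualizationRoot; // parent of the whole water visualization
    public Camera headsetCamera;        // AR camera on the rig

    void Update()
    {
        Ray ray = new Ray(headsetCamera.transform.position, Vector3.down);
        if (Physics.Raycast(ray, out RaycastHit hit, 10f))
        {
            visualizationRoot.position = hit.point; // snap to the detected floor
        }
    }
}
```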
I then tried following some tutorials on creating "grabbable" objects in Unity with Meta's building blocks, just to make a grabbable cube (with the hope of then being able to make my visualization grabbable). In doing so, I found that the cube itself was anchored to the ground, so if I could find a way to nest my visualization under the hierarchy of the cube, then hopefully it would also anchor to the ground. After trying this and making some refinements to the scale and position, I was finally able to make my visualization anchor to the ground as it was before. I decided not to touch it anymore so I don't break anything else.
Wrote setup instructions + homework on class timeline, and tested them out myself to see if I could build the app onto my headset without having to go through Unity (which worked).
Made class activity page with instructions on how to complete each activity (both 2D and AR versions). Added GIFs for visual support + guidance.
Explored my own visualization to see which countries had the most interesting data to see in AR, to give suggestions for which countries people may want to pick during the class activity.
Made a Google Form for the class activity to ask for evaluative feedback on the 2D vs AR representation of water data.
Met with classmate Colby to test that the setup was working and the class activity instructions were clear (we tested each other's apps).
After my class activity, I realized I needed to refine interaction with controller input: for some reason, my deployed project's keyboard would turn off if I pressed any buttons on my controller or clicked somewhere else, so it would be useful to have a button mapped to the keyboard. That way, if it ever turns off (i.e. hides), a button press can turn it on again. So, I did some research on how to set up controller button mapping for Quest controllers in Unity, and decided to create a tutorial wiki page on Quest Controller Input Mapping in Unity to help others who may be struggling with this same problem.
Researched controller mapping for Meta Quest devices, and added overview of how controller mapping works in Meta Quest and how it can be accessed in Unity to the Wiki page.
Continued adding to the Quest Controller Input Mapping in Unity wiki page.
Added a detailed tutorial on how to use controller button mapping in Unity, with step-by-step instructions for building a basic project that recognizes button input and incorporates interactive features reacting to Meta Quest controller inputs when deployed. Added further resources to the tutorial for making more advanced mappings (with scripts), as well as links to guides for further exploration and more complex projects incorporating controller inputs.
Self-tested the Unity tutorial and added pictures + videos of the process and final result for visual guidance on how to manage controller input in Unity. Added instructions for setup and configurations as well.
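For a flavor of what the tutorial covers, a minimal sketch using the Meta XR SDK's OVRInput class (assumes an OVRManager/camera rig is already in the scene; the toggled object is a placeholder):

```csharp
using UnityEngine;

// Sketch: react to Quest controller buttons and triggers via OVRInput.
public class ControllerInputDemo : MonoBehaviour
{
    public GameObject target; // placeholder object toggled by the A button

    void Update()
    {
        // A button on the right Touch controller toggles the object.
        if (OVRInput.GetDown(OVRInput.Button.One, OVRInput.Controller.RTouch))
            target.SetActive(!target.activeSelf);

        // The analog trigger (0..1) can drive continuous effects, e.g. scale.
        float trigger = OVRInput.Get(OVRInput.Axis1D.PrimaryIndexTrigger,
                                     OVRInput.Controller.RTouch);
        target.transform.localScale = Vector3.one * (1f + trigger);
    }
}
```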
Created a wiki page for AR for Climate Awareness and Sustainability, as this relates closely to what my project tries to achieve, particularly in visualizing environmental data that represents changes in climate and in sustainable resource levels (renewable internal freshwater resources, for my project). Added examples of different existing AR applications in varying domains of climate action, including environmental education and awareness, virtual field trips, wildlife conservation, and sustainable fashion.
Continued adding to the wiki page I created, AR for Climate Awareness and Sustainability
Added sections on AR applications in disaster prevention and management, as well as immersive environmental storytelling.
Added pictures and reference videos for each application of AR discussed.
Added resources for further research / information for each of the fields
Measured the dimensions of my AR water visualization application to determine the accuracy to real-life scaled volumetric data (i.e. to check if 1m in my visualization actually represents 1m in real space, and to what extent). Repeated the measurements across multiple trials in different lighting / spatial contexts to ensure consistency and reliability.
Created wiki page for Unity Water/Fluid Data Visualization Tutorial
Added explanations of what shaders and materials are in Unity and how they can be used to build water-like textures, appearances, and movements for particular objects.
Covered steps on how to add custom materials to objects in Unity, and adjust material settings / underlying shaders to meet specifications.
Added resources for creating custom water shaders (from scratch) using shader graphs for more control and precision.
Added a section on data-driven transformations, what they may be used for, and how they can be incorporated into water/fluid visualizations in Unity via C# scripts. Provided an example script for reference.
Added water resource data to Scientific Data wiki page (the ones I researched + used for my project)
Created wiki page for Unity Timeline for Simple Animations Tutorial
Guided tutorial on using Unity’s Timeline for simple animations.
Added step-by-step instructions on creating and editing animations using the Timeline window. Covered how to add keyframes, adjust animation properties, and control object movement without scripting.
Included images for clarity and links to further resources for experimentation, as well as related Unity documentation and tutorials.
Updated Project 1 Page with class activity results and evaluation phase (evaluating my project, deliverables, class activity, and more)
Created Project 1 Final Presentation to send to David + Melvin
Completed missed Class Activities (Eunjin and Papa-Yaw)
Completed 7 scenarios reading. Learned that evaluation in information visualization can be categorized into distinct scenarios serving different goals, and reflected on how these scenarios can guide the design, evaluation, and documentation of my initial project 2 ideas.
Project 2 Draft Plan:
Visualizing earthquake data (likely focused on a specific region rather than a broad cross-country comparison) to represent key attributes such as magnitude, geographic impact (scale), depth, and economic consequences. Additionally, the project may incorporate haptic feedback to enhance the experience of earthquake magnitude.
To achieve this, I plan to use Meta’s Haptics Studio and Haptics SDK, which integrate with Unity, to develop custom haptic effects. These effects could be generated based on seismic waveforms, earthquake audio data, or magnitude charts, allowing users to physically sense the intensity of different earthquakes.
I also aim to compare different versions of the visualization to better understand their effectiveness in various scenarios. Instead of simply contrasting a 2D vs. AR version (as in my first project), this project will instead explore
A focused visualization, representing only scale and magnitude to emphasize core earthquake properties.
A comprehensive visualization, combining multiple variables (magnitude, scale, depth, economic impact, and geographic distance) to assess how layering information affects user interpretation and engagement.
Resources:
Haptics in Unity (for Meta Quest)
Potential datasets to explore
https://www.usgs.gov/programs/earthquake-hazards (range of data for worldwide earthquakes with interactive maps)
https://corgis-edu.github.io/corgis/csv/earthquakes/ (CSV data for earthquake magnitude, location, depth, significance -- but only till 2016)
A-Frame Haptics Component:
Based on the seven evaluation scenarios from the paper, I formulated some key questions or ideas for each scenario, relating to my project:
Understanding Environments and Work Practices (UWP)
For my project, I could look at how emergency responders, geologists, and policymakers currently analyze earthquake data, and what challenges they face in interpreting it
What role could AR-enhanced haptic feedback play in improving earthquake preparedness training?
Evaluating Visual Data Analysis and Reasoning (VDAR)
Do users gain meaningful insights from the focused vs. comprehensive visualization, and which approach supports better analysis?
In my evaluation for this project, I could look at if adding haptics improves users’ ability to detect patterns or compare earthquake severities more effectively
Evaluating Communication through Visualization (CTV)
How effectively does the visualization (with or without haptics) communicate the severity of earthquakes to the general public (i.e. our class)?
Does an immersive AR experience lead to greater information retention compared to traditional earthquake data visualizations?
Evaluating Collaborative Data Analysis (CDA)
How well can multiple users interact with the visualization to discuss and interpret earthquake data together?
Does the haptic-enhanced AR experience foster more engagement or agreement in team-based decision-making scenarios?
Evaluating User Performance (UP)
In my class activity, I could compare how quickly and accurately users identify high-risk earthquake zones using different visualization versions.
How does haptic feedback impact response time when assessing earthquake severity?
Evaluating User Experience (UE)
I could conduct usability tests for my class activity to assess whether haptics improve immersion and engagement or introduce cognitive overload.
What aspects of the visualization cause confusion, frustration, or cognitive overload, especially in the comprehensive version?
Evaluating Visualization Algorithms (VA)
Can haptic feedback be generated and synchronized smoothly with the earthquake visual data without performance lag?
To evaluate my deliverable, I could analyze how well my system scales to larger datasets and whether it can render multiple earthquakes without performance lag.
Completed Project 2 Proposal slides to send to Melvin and David
Completed Project 2 Timeline, with milestones, deliverables, and wiki additions for 4/01, 4/03, 4/08, 4/10, 4/15, 4/17, 4/22, 4/24, 4/30, 5/01
Evaluation of Project 2 Plan based on rubric:
Clearly identifies deliverable additions to our VR Software Wiki (4/5) — Deliverables + wiki additions are identified on timeline (with respective dates and locations of where in the Wiki pages/content will be added)
Involves collaboration in VR (1/5) — The rubric encourages collaboration in AR/VR, yet my plan still details a single-user experience with no multi-user interaction or shared data exploration. Possible fixes could be implementing multi-user earthquake analysis, collaborative haptic experiences, or AR-based discussions.
Involves large scientific data visualization along the lines of the "Scientific Data" wiki page and identifies the specific data type and software that it will use (4/5) — Have 2 datasets to explore earthquake severity, scale, radius, distance, depth, location, and economic impact
Has a realistic schedule with explicit and measurable milestones at least each week and mostly every class (4/5) — Timeline included on project page, clearly outlining dates for weekly milestones and deliverables + progress checks
Explicitly evaluates VR software, preferably in comparison to related software (4/5) — Has a comparative component for comparing software for building AR applications with haptics. Additionally, exploring focused vs. comprehensive, and haptic vs non-haptic visualizations allows me to have an evaluative and comparative portion for my class activity.
Includes an in-class activity, which can be formative (early in the project) or evaluative (later in the project) (2/5) — I had a class activity planned, but did not explicitly explain it in my draft so it wasn't clear what my activity is going to involve. This needs to be explicit and fleshed out, as it is a key component of the project.
Has resources available with sufficient documentation (4/5) — Resources for getting started with haptics on Unity with the Meta Haptics SDK, and on Bezi with the A-Frame haptics component.
Revised Project 2 Plan:
Overview: This project will focus on visualizing earthquake data in AR, representing attributes such as magnitude, geographic impact, depth, and economic consequences. It will also incorporate haptic feedback to simulate earthquake intensity, allowing users to physically sense seismic events.
Key Features & Approach:
Custom Haptics: Using Meta’s Haptics Studio + Haptics SDK, seismic waveforms, earthquake audio, or intensity charts for haptic effects.
Comparative Visualizations:
Focused Visualization – Displays only magnitude and geographic scale to highlight essential earthquake properties.
Comprehensive Visualization – Combines magnitude, scale, depth, economic impact, and geographic distance to assess the impact of layering complex data in AR.
In-Class Activity: Students will participate in a hands-on AR earthquake simulation where they experience different earthquake data through haptic and visual feedback. They will compare multiple (2-4) visualizations of the same earthquake areas, but with different representations of the data, particularly comparing:
Haptic vs. non-haptic experiences – Does touch feedback improve understanding?
Focused vs. comprehensive visualization – Which is more effective for data interpretation?
Collaboration in AR: A multi-user AR mode where users can collaboratively analyze earthquake data, compare findings, and discuss seismic risks, and shared haptic experiences where multiple users feel synchronized earthquake intensity to study how haptics influence perception and decision-making.
Data Source: USGS (United States Geological Survey) real-time and historical earthquake data.
Software Used: Unity + Meta Haptics SDK for AR and haptic rendering.
Wiki Contributions:
Comparison page on software and tools for integrating haptic interaction with Meta Quest (Bezi vs Unity vs WebXR)
Assessment criteria + usability target for (my) haptic AR earthquake models
Tutorial on how to integrate haptics into a project in Unity (or additional software) using different SDKs (compare these SDKs)
Wiki page on applications of haptics and sensory stimuli in AR
Wiki page on 3D mapping development tools for visualizing geospatial data (software/tool comparison + analysis)
Created Project 2 Page to compile all proposal documents, resources, and development progress.
Refined project 2 proposal based on feedback from Professor Laidlaw and Jakobi
Evaluation questions (HW) from 7 scenarios paper for my 2nd project:
Question 1 (UE) : How does haptic interaction support (or hinder) user experience and engagement with the earthquake visualization? Does it augment understanding or increase distractions?
Question 2 (CTV, UP): How does changing the number of variables displayed in the visualization impact users' understanding of the model and their understanding of each variable? How does it impact their overall task performance?
Question 3 (UP, UE): What aspects of the visualization cause confusion, frustration, or cognitive overload, as compared to existing visualizations (2D versions)? In which version can tasks be performed more efficiently?
Completed research on potential software to use for visualizing earthquake data — decided to use Unity due to familiarity (from last project) and the range of available SDKs + plugins providing haptic feedback and map integration.
Finalized the dataset I will work with based on the ones I have researched.
Decided to use USGS Earthquake Dataset due to wide data range for global regions, diverse types of data formats (CSV, GeoJSON), ability to specify time, magnitude, and geographic location for earthquake searches, and ease/intuitiveness of webpage UI (meaning it could be useful to compare my AR app with this 2D version)
Experimented with different geographic data regions and time periods to see which provides the most diverse range of earthquakes (in terms of magnitude, depth, etc.) in a reasonable timeframe. Referencing the list of biggest earthquakes this century, I particularly investigated the:
Turkey/Syria region between 2015-2025 (particularly to capture the 2023 earthquakes, up to 7.8 in magnitude)
Myanmar/Thailand region between 2015-2025 (particularly to capture the very recent 2025 earthquakes, up to 7.7 in magnitude)
Japan region between 2010-2020 (particularly to capture the 2011 Tōhoku earthquake and tsunami data, up to 9.1 in magnitude)
Indonesia region between 2004-2015 (particularly to capture the 2004 Indian Ocean earthquakes, up to 9.3 in magnitude)
Analyzing anything more than 10 years at a time was quite difficult on the UI, as it would often just crash by saying "too much data to show", so I was limited to shorter searches in smaller regions, making it more difficult to find regions with the most diverse earthquake data.
After some experimentation, I finalized the location and data I will visualize: the Sumatra region of Indonesia, between 2004-2015.
The reason for this was partially because it contained the biggest earthquake recorded this century, up to 9.3 in magnitude on the Richter scale.
This was also because I felt this data range and period had a lot of diverse groups of earthquakes with a range of magnitudes and depths -- for the other regions, finding earthquakes with lower magnitudes was quite difficult (probably due to a lack of data), and I want to show a range of data in my application to maximise the integration of my haptics component (for a more obvious tactile difference between more extreme magnitudes, whether high or low)
Started wiki comparison page on different software for integrating haptic interaction into AR projects.
Researched software such as A-frame haptics for WebXR, Unity with XR Interaction Toolkit Haptics, Unreal Engine built-in haptics, and Meta Haptics Studio + Haptics SDK (for Unity and Unreal)
Researched tutorials and documentation for each of the above software to include on the wiki page, and summarized their key features.
Developed an extensive pro/con list for each software listed above for easier comparison
Set up Unity project and installed all necessary SDKs and packages for AR development, and changed to Android build mode for compatibility with Meta Quest headsets.
Researched methods of map integration into Unity projects. Watched tutorials and found wiki page for Mapbox integration.
Following some YouTube tutorials for integrating Unity with Mapbox, I set up a Mapbox account, and after spending some time debugging, was able to successfully incorporate a Mapbox map into my Unity project
I found that the imported materials from Mapbox components, however, were not showing up correctly, so I continued with some debugging to figure out how to correctly show a Map in different views (it showed up all pink otherwise).
After debugging this, I experimented with different views for the region I wanted to display (Sumatra, Indonesia), and decided on a static square map with slightly raised terrain in satellite view, to allow for a realistic visualization without too many distractions (like a globe view or road view might provide).
Completed wiki comparison page on different software for integrating haptic interaction into AR projects.
Incorporated further research and resources, and a comparative table for each of the 4 software packages on a range of 8 criteria: learning curve, haptic feature depth, cross-platform integration, customizability, real-time modulation, ease of prototyping, documentation and tutorials, and software maintenance/activity
Gave each software a score from 1-5 on each of these criteria, with researched justifications in the table for reference
Installed Meta Haptics Studio and the Haptics SDK into my project, and began experimenting with the tool to get familiarized with its features. I found the resources and tutorials below particularly useful:
Meta Haptics Studio Release Notes and Tutorial: https://developers.meta.com/horizon/blog/haptics-public-release-enhance-your-immersive-experiences-on-meta-quest/
Meta Haptics SDK for Unity Documentation : https://developers.meta.com/horizon/documentation/unity/unity-haptics-sdk/
Meta Haptics Studio Tutorial: https://www.youtube.com/watch?v=qr-k3swrLQE
Meta Haptics SDK Full Walkthrough: https://www.youtube.com/watch?v=oafaIzdrj_Y
Completed Project 2 Progress presentation to send to Melvin and David -- included at top of journal and on project 2 page
Added Meta Building Blocks to my scene in Unity to allow for Passthrough components, controller tracking, and basic interactions
Completed test build of project onto Meta Quest to test that what I have so far (the Mapbox integration, and Meta Building Blocks for AR interactions) appears as expected
4/8/25 - 2 Hours
Debugged Mapbox map scaling
For some reason, the scale of the Mapbox component in my Unity project would reset to the base scale (1:1:1) any time I ran or deployed my project, which made the map far too large. I tried debugging this by playing around with the "zoom" component of the Mapbox feature, but this was very glitchy and kept re-rendering the map in completely different locations, and it was very difficult to centre.
I also tried reducing the number of tiles within the Mapbox component, but this was not sufficient, as even a single tile at standard scale was over 100 times larger than what would be a reasonable scale to view the component in the VR headset.
I also followed some links for debugging:
I then considered exporting Mapbox component into a static OBJ file, which I could then import into my Unity project and scale as a static object — however, this wouldn’t give me the flexibility to change things like the location, the view of the map, and its detail/height, so I decided to leave this as a last resort
As a final fix, I tried to attach a custom script to the Mapbox component to scale it down by a factor of 100 on runtime, and this worked! I then continued to adjust the view and tile settings for the optimal viewing scale
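The fix was essentially a one-liner attached to the object holding the Mapbox Map component; the 0.01 factor is just the value that worked for my scene, not a general constant:

```csharp
using UnityEngine;

// Sketch of the runtime fix: force the map down to a viewable size after the
// scene starts, since the Mapbox component reset its own scale on run/deploy.
public class MapScaleFix : MonoBehaviour
{
    public float scaleFactor = 0.01f; // scale the map down by a factor of 100

    void Start()
    {
        // Attached to the GameObject that holds the Mapbox "Map" component.
        transform.localScale = Vector3.one * scaleFactor;
    }
}
```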
4/10/25 - 3 Hours
Downloaded Meta Haptics Studio, and installed Haptics SDK onto my Unity project
Followed a tutorial on how to link my headset to Haptics Studio for real time prototyping (LINK)
Followed a tutorial on how to use different features in Meta Haptics Studio, incorporating audio files, editing breakpoints, and adjusting frequency/amplitude for fine tune haptics control (LINK)
Played around with a starter prototype of custom earthquake sounds in Haptics Studio to experiment with how I could integrate it into my project
Filtered my Earthquake dataset to a reasonable size for my class activity (so there are not too many data points on the map, as the original dataset has 3000+ earthquakes)
Filtered to include a range of magnitudes, noting the minimum and maximum in the range (2.7 and 9.1, respectively)
Filtered to include a range of depths, noting the minimum and maximum (0 and 218 km, respectively)
Filtered to include a range of depth errors, noting the minimum and maximum (0 and 60 km, respectively)
Filtered to include a range of longitudes and latitudes, so as to not have overlapping data points on the map
After combining my filtered datasets, I was able to reduce the 3000+ data points to just 25-30, which felt much more reasonable for the scale of the map I was using. The filtering allowed me to ensure a range of depths, magnitudes, and errors could be visualized on my map.
4/12/25 - 4.5 Hours
Since Haptics Studio primarily converts audio input into haptic interactions (i.e. files with the .haptic extension), I researched different resources/tools that store audio clips of earthquakes that I could feed into my Haptics Studio project. Unfortunately, after some research, I couldn't find a comprehensive tool with audio data for different earthquakes, and was only able to find individual audio clips for specific earthquakes on YouTube
Since I was not able to find a tool or database storing earthquake audio, I decided to research ways to convert seismographic earthquake data into audio, given both are captured as “waves”.
I came across a really interesting YouTube video of a sound artist who built a tool to convert seismographs into audio clips
I tried using his tool, but found that the resultant audio was too short for me to use (often under a second long), which made sense as the tool was primarily built to convert seismographs into musical “notes”, rather than extended pieces of audio to be used for haptic interaction.
Since seismographic data was not available for all earthquakes directly on the USGS website, I researched different tools to find such data to feed into the audio converter -- I ultimately found a website by the "Seismological Facility for the Advancement of Geoscience" that contains comprehensive earthquake data, including seismographic records across global network locations.
Then, for each of the earthquakes in my filtered dataset, I had to search this database for corresponding seismographic images.
This took a very long time (2.5 hours) because I had to search for each earthquake's seismograph image manually, and often had to look through several detection locations, as many of them did not have data for a particular earthquake at all (the tool did not automatically filter these network locations out, for some reason).
I was worried I wouldn't find seismographic images for the exact earthquakes in my filtered dataset, but I was ultimately (pleasantly) surprised, as I was able to find images for all but 2 of them (which I decided to omit).
I limited my search to Broadband Seismometers (BHZ and HHZ), as these captured data for both low and high magnitude earthquakes, unlike more specialized seismometers.
4/13/25 - 4.5 Hours
After some research, I found that there was not really a reliable way to reconstruct a piece of audio just from an image of a waveform -- so, to make use of the seismograph images I had found, I decided to instead use a base piece of earthquake audio I found (link) and adjust its amplitude to match the seismographic waveform.
To do this, I developed a Python script (a minimal sketch appears after this list) that processes the waveform images (in PNG format) by:
Converting each image to grayscale and binarizing it to isolate the black waveform line from the background.
Scanning each vertical column of pixels to detect the highest and lowest black pixels, then measuring height from the vertical centerline to determine amplitude at each time step.
Normalizing the height values to range between 0 and 1, and resampling the extracted amplitude vector to match the number of samples in the base audio file.
Applying this resampled amplitude as a gain envelope to the base audio signal using librosa and numpy, ensuring that louder parts of the audio correspond to taller segments of the waveform (e.g., if the waveform was taller at second 3, the audio would be louder at second 3).
Saving the resulting audio clips with filenames linked to the original seismograph images for easy importing.
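A minimal sketch of what that script looks like -- the file names and binarization threshold here are placeholders, not the exact values from my project, and it assumes a dark waveform trace on a light background:

```python
# Waveform image -> amplitude envelope -> shaped earthquake audio (sketch).
import numpy as np
import librosa
import soundfile as sf
from PIL import Image

def image_to_envelope(png_path, n_samples, threshold=128):
    # Grayscale + binarize: True where a pixel belongs to the dark waveform trace.
    gray = np.array(Image.open(png_path).convert("L"))
    trace = gray < threshold

    center = gray.shape[0] / 2.0
    heights = []
    for col in trace.T:                      # scan vertical pixel columns left to right
        rows = np.nonzero(col)[0]
        if rows.size == 0:
            heights.append(0.0)              # no trace in this column
        else:
            # distance of the farthest trace pixel from the vertical centerline
            heights.append(max(abs(rows.min() - center), abs(rows.max() - center)))

    env = np.asarray(heights, dtype=float)
    env /= env.max() if env.max() > 0 else 1.0          # normalize to 0..1
    # Resample the envelope so there is one gain value per audio sample.
    return np.interp(np.linspace(0, len(env) - 1, n_samples),
                     np.arange(len(env)), env)

# Apply the envelope as a gain curve to the base earthquake audio and save it.
audio, sr = librosa.load("base_earthquake.wav", sr=None, mono=True)
envelope = image_to_envelope("seismograph_01.png", len(audio))
sf.write("earthquake_01_shaped.wav", audio * envelope, sr)
```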
I also mapped the earthquake locations from my filtered dataset into my Mapbox component in Unity. I converted earthquake coordinates (latitude, longitude) into Unity world positions using Mapbox's coordinate conversion tools, and placed a sphere at each location to represent an earthquake.
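Roughly how that placement works, assuming the Mapbox Unity SDK's AbstractMap.GeoToWorldPosition; the class and field names are simplified placeholders rather than my exact project code:

```csharp
using UnityEngine;
using Mapbox.Unity.Map;   // AbstractMap
using Mapbox.Utils;       // Vector2d

// Places one sphere per earthquake on the Mapbox map.
// Assumes the lat/lon values have already been parsed from the filtered dataset.
public class QuakePinSpawner : MonoBehaviour
{
    [SerializeField] private AbstractMap map;
    [SerializeField] private float pinScale = 0.02f;   // relative to the map's scale

    public void Spawn(double lat, double lon)
    {
        // Convert geographic coordinates into a Unity world position on the map.
        Vector3 worldPos = map.GeoToWorldPosition(new Vector2d(lat, lon), true);

        GameObject pin = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        pin.transform.SetParent(map.transform, true);   // keep pins with the map
        pin.transform.position = worldPos;
        pin.transform.localScale = Vector3.one * pinScale;
    }
}
```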
4/14/25 - 3.5 hours
To convert the adjusted audio clips into haptic feedback, I imported each of them into Haptics Studio.
For each one, I adjusted the amplitude and frequency so that it reflected the earthquake’s magnitude — for example, a 9.1 earthquake had a peak amplitude of 0.91, while a 2.7 was just 0.27.
I also edited the sensitivity and breakpoint curves to better match the original seismograph’s “feel.” This part was surprisingly intuitive -- Haptics Studio let me audition each clip directly on my Meta Quest controllers, which saved a lot of time since I didn’t need to rebuild the Unity project every time I made a small change.
Once I was happy with each clip, I imported the .haptic file and the corresponding audio into my Unity project.
To trigger the haptic feedback and audio, I wrote a C# script so that when my controller touches an earthquake sphere, it plays both the corresponding audio clip and the haptic file. It took a few tries to make sure the clips were correctly matched and triggered on collision, but in the end, it felt really cool to “feel” each earthquake -- especially when the magnitude difference was noticeable in both audio and touch.
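A simplified sketch of that trigger script, assuming the controller collider is tagged "Controller", the sphere's collider is set as a trigger, and the Haptics SDK's HapticClip/HapticClipPlayer API:

```csharp
using UnityEngine;
using Oculus.Haptics;   // Meta Haptics SDK for Unity

// Attached to each earthquake sphere: when a controller collider enters the
// sphere's trigger, play that quake's shaped audio clip and its .haptic clip.
public class QuakeFeedback : MonoBehaviour
{
    [SerializeField] private AudioSource audioSource;   // holds the shaped audio clip
    [SerializeField] private HapticClip hapticClip;     // imported .haptic asset
    private HapticClipPlayer hapticPlayer;

    void Start()
    {
        hapticPlayer = new HapticClipPlayer(hapticClip);
    }

    void OnTriggerEnter(Collider other)
    {
        // The "Controller" tag is an assumption of this sketch.
        if (!other.CompareTag("Controller")) return;

        audioSource.Play();
        hapticPlayer.Play(Controller.Right);   // could also map to the touching hand
    }
}
```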
4/15/25 - 2 hours
To add the earthquake depth data to my map, I decided to represent depth as physical cylinders on the underside of the map, with their lengths corresponding to the actual earthquake depths. To do this, I converted the real-world depth data (in km) into distances in the Unity Mapbox component, and placed individual cylinders underneath the corresponding earthquake spheres.
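A rough sketch of the cylinder placement -- the km-to-Unity conversion factor is illustrative and has to be tuned to the map's current scale:

```csharp
using UnityEngine;

// Hangs a cylinder below an earthquake pin, with its length proportional
// to the quake's depth in km.
public class DepthBar : MonoBehaviour
{
    [SerializeField] private float unityUnitsPerKm = 0.001f;   // tuned to the map scale

    public void Attach(GameObject quakePin, float depthKm)
    {
        float length = depthKm * unityUnitsPerKm;

        GameObject bar = GameObject.CreatePrimitive(PrimitiveType.Cylinder);
        bar.transform.SetParent(quakePin.transform, false);
        // Unity's default cylinder is 2 units tall, so a Y scale of length/2
        // gives a total height of "length"...
        bar.transform.localScale = new Vector3(0.2f, length / 2f, 0.2f);
        // ...and shifting down by half the length puts the top at the pin.
        bar.transform.localPosition = new Vector3(0f, -length / 2f, 0f);
    }
}
```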
To give numerical references to the data, I also added text over each earthquake sphere representing the earthquake's magnitude, as well as text next to each cylinder to represent the numerical depth (in km).
4/16/25 - 1 hour
I wrote C# scripts adding map movement controls to the map components, so users can explore the visualization more freely. The right joystick allows up/down and left/right movement, while pressing both triggers moves the map forward and backward.
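A simplified sketch of the movement script using OVRInput; the speeds, and which trigger maps to which direction, are assumptions rather than my exact values:

```csharp
using UnityEngine;

// Map movement controls, attached to the map root. The right thumbstick pans
// the map up/down and left/right; the index triggers move it forward/backward.
public class MapMover : MonoBehaviour
{
    [SerializeField] private float panSpeed = 0.5f;
    [SerializeField] private float depthSpeed = 0.5f;

    void Update()
    {
        // Right (secondary) thumbstick: x = left/right, y = up/down.
        Vector2 stick = OVRInput.Get(OVRInput.Axis2D.SecondaryThumbstick);
        transform.position += new Vector3(stick.x, stick.y, 0f) * panSpeed * Time.deltaTime;

        // Index triggers: one pushes the map away, the other pulls it closer.
        float away   = OVRInput.Get(OVRInput.Axis1D.PrimaryIndexTrigger);
        float closer = OVRInput.Get(OVRInput.Axis1D.SecondaryIndexTrigger);
        transform.position += Vector3.forward * (away - closer) * depthSpeed * Time.deltaTime;
    }
}
```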
I also scripted a button-based map style toggle so users can switch between light, dark, and satellite views -- toggling between the options that Mapbox provides in its SDK.
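A sketch of the style toggle; the ImagerySourceType names below are what the SDK version I used exposes, so treat them as assumptions if your Mapbox SDK version differs:

```csharp
using UnityEngine;
using Mapbox.Unity.Map;   // AbstractMap, ImagerySourceType

// Cycles the Mapbox imagery style (light -> dark -> satellite) with the A button.
public class MapStyleToggle : MonoBehaviour
{
    [SerializeField] private AbstractMap map;

    private readonly ImagerySourceType[] styles =
    {
        ImagerySourceType.MapboxLight,
        ImagerySourceType.MapboxDark,
        ImagerySourceType.MapboxSatellite
    };
    private int index = 0;

    void Update()
    {
        if (OVRInput.GetDown(OVRInput.Button.One))   // A button on the right controller
        {
            index = (index + 1) % styles.Length;
            map.ImageLayer.SetLayerSource(styles[index]);
        }
    }
}
```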
4/17/25 - 2.5 hours
I implemented functionality to change the terrain elevation using the left joystick, letting users exaggerate or flatten terrain features to better contextualize earthquake locations.
While testing this, I noticed an issue where the earthquake pins were not updating correctly with terrain changes; I realized the pins needed to be repositioned or re-parented whenever the elevation changed.
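A sketch of the re-snapping approach, re-querying each pin's position (with height) from its stored lat/lon whenever the terrain settings change; the structure here is simplified from my actual script:

```csharp
using UnityEngine;
using Mapbox.Unity.Map;   // AbstractMap
using Mapbox.Utils;       // Vector2d

// Re-snaps an earthquake pin to the map surface after the terrain
// exaggeration changes, by re-querying its world position with height.
public class PinElevationFollower : MonoBehaviour
{
    [SerializeField] private AbstractMap map;
    [SerializeField] private double latitude;
    [SerializeField] private double longitude;

    // Called whenever the terrain/elevation settings are changed.
    public void Resnap()
    {
        transform.position = map.GeoToWorldPosition(new Vector2d(latitude, longitude), true);
    }
}
```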
I also added vertical error bars to show the depth uncertainty of each earthquake, and scripted toggles (mapped to buttons on the Meta Quest controllers) so users can turn these error bars and the text labels on and off during exploration.
After testing the build, I found that many of the error bars obscured the depth bars entirely, and it was difficult to tell where the centers of the error bars were.
To combat this, I added a script to make the error bars more transparent, so that the depth bars underneath are still visible.
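A minimal sketch of that transparency tweak, assuming the error bar material already uses a transparency-capable shader (e.g. a material set to a transparent rendering mode):

```csharp
using UnityEngine;

// Fades an error bar's material so the depth bar behind it stays visible.
public class ErrorBarFade : MonoBehaviour
{
    [SerializeField, Range(0f, 1f)] private float alpha = 0.3f;   // illustrative value

    void Start()
    {
        Renderer rend = GetComponent<Renderer>();
        Color c = rend.material.color;
        rend.material.color = new Color(c.r, c.g, c.b, alpha);
    }
}
```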
4/18/25 - 1 hour
As another feature to toggle, I decided to add dynamically sized earthquake spheres whose radii are proportional to earthquake magnitude, so it is easier to tell visually which earthquakes have larger magnitudes.
I also scripted a toggle so users can switch between fixed-size pins and magnitude-scaled pins, depending on whether they want uniformity or data emphasis.
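A sketch of that size toggle; the button choice and scale factors are illustrative only:

```csharp
using UnityEngine;

// Toggles each earthquake pin between a fixed radius and a radius
// proportional to its magnitude (B button).
public class PinSizeToggle : MonoBehaviour
{
    [SerializeField] private float fixedScale = 0.02f;
    [SerializeField] private float scalePerMagnitude = 0.01f;
    public float magnitude;                       // set when the pin is spawned
    private bool scaledByMagnitude = false;

    void Update()
    {
        if (OVRInput.GetDown(OVRInput.Button.Two))   // B button on the right controller
        {
            scaledByMagnitude = !scaledByMagnitude;
            float s = scaledByMagnitude ? magnitude * scalePerMagnitude : fixedScale;
            transform.localScale = Vector3.one * s;
        }
    }
}
```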
4/19/25 - 1.5 hours
I added a tectonic plate overlay to the map to give users more geological context, and scripted a toggle so they can turn the overlay on and off as needed.
I sourced the tectonic boundary layers from: https://www.arcgis.com/apps/View/index.html?appid=19c87b8bd7b64f2f91cfd867450b509a
and the plate labels from: https://www.arcgis.com/apps/instant/atlas/index.html?appid=0cd1cdee853c413a84bfe4b9a6931f0d&webmap=16127f8086aa49e489aed1414914533e
I also created an instructions canvas within the app itself, which displays basic navigation and interaction info in the headset. This makes the experience more self-contained and accessible without needing external help.
4/20/25 - 3 hours
I added a few minor but helpful features to polish the experience:
a legend explaining the tectonic overlay,
a compass to help with spatial orientation,
a scale along the map’s edge for real-world distance reference.
I also created a second Unity scene with fewer interactive parameters and no toggles. I decided to use this version as a baseline in the in-class activity, to test the impact of more choices compared to preset settings, in terms of both understanding the data and the immersive experience.
Created the class activity page with all instructions for the AR experience and USGS 2D visualizer. I also created questions for the post-activity Google Form, inspired by the NASA questions, as well as suggestions from the class provided during our project progress discussions.
Tested installation of my app from scratch + class activity instructions to make sure everything was working as expected.
4/21/25 - 2.5 hours
Created a tutorial page for Meta Haptics Studio Interactions
Covered steps for downloading the software, pairing the headset, analyzing audio, and creating/exporting custom haptics.
Added a detailed walkthrough of a sample haptics project on footsteps, walking through different settings and how they can be modified for specific audio clips (e.g. concrete vs. grass vs. metal).
Added additional resources and links to tutorials for reference.
4/23/25 - 4 hours
Created a wiki page for Haptics Applications and Technologies in AR/VR
Listed different types of haptics software, with examples and links for each, an explanation of how they work, and their uses + accompanying videos and resources to learn more about them.
Outlined different applications for haptics technologies, including in space training, medical training, social education and remote collaboration, and industrial design. Included videos and links with examples.
Created a wiki page for Mapbox Sample Scenes in Unity
Outlined how to set up Mapbox, with steps to download and integrate it with Unity via the Mapbox SDK
Provided steps for debugging and navigating errors related to outdated Mapbox components that are no longer compatible with Unity (such as the AR toolkit)
Provided steps for setting up a Unity project to be compatible with Mapbox components
Provided walkthrough of opening sample Mapbox scenes, configuring the map settings, and playing around with different options provided by the Mapbox SDK
Provided steps for best practices to minimize bugs and lag when loading maps
Provided links to further resources, tutorials, and documentation for additional guidance and help
4/25/25 - 2 hours
Analyzed class activity results on Google Sheets
Performed data cleaning to remove words from numerical responses
Aggregated data to compare AR (part 1), AR (part 2), and 2D platforms on different criteria
Created charts for comparison of the 3 modes - to include in final project presentation
4/27/25 - 2 hours
Completed final project 2 presentation to send to David and Melvin — gathered images and charts from class activity form responses, included data analysis results, takeaways and challenges, and gathered progress pictures + images of wiki deliverables to discuss in the presentation
Added earthquake and seismograph datasets used in project 2 to scientific datasets wiki page