Aarav's Journal
Semester Learning Goals
Before | After | Goal
------ | ----- | ----
2 | 4 | Goal 1: articulate AR/VR visualization software tool goals, requirements, and capabilities
1 | 4 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research
2 | 4 | Goal 3: execute tool evaluation strategies
1 | 3 | Goal 4: build visualization software packages
2 | 5 | Goal 5: comparatively analyze software tools based on evaluation
2 | 4 | Goal 6: be familiar with a number of AR/VR software tools and hardware
3 | 5 | Goal 7: think critically about software
3 | 5 | Goal 8: communicate ideas more clearly
1 | 4 | Goal 9: be able to convey effective interactive data stories with AR/VR visualizations
PROPOSALS
Project 1 Proposal - Google Doc
Presentation for Project 1 Proposal - Slides
End Presentation for Project 1 - Slides
Project 2 Proposal <ADD LINK>
Presentation for Project 2 Proposal <ADD LINK>
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
ALL OTHER WIKI CONTRIBUTIONS
CONTRIBUTION 1: VR in Architecture/Urban Planning (created)
Created a page for VR and AR applications in architecture and urban planning to highlight diverse software and uses of AR within this field. Compiled research on Agora World (a software for visualizing physical sites and buildings + their data in VR).
CONTRIBUTION 2: Scientific Data (added to)
Added water resource data that I researched and intend to use for Project 1 (several datasets showing a range of water data -- from water availability to water withdrawal by country and year).
CONTRIBUTION 3: AR Software for Fluid Simulations (created)
Compiled research for different software to build fluid simulations, particularly those incorporating time-series fluid data, for use in AR environments. Added evaluation criteria for both fluid visualization software, and fluid visualizations themselves (to be used for any project incorporating fluids in AR). Added a comparison table for the researched fluid visualization software.
CONTRIBUTION 4: Quest Controller Input Mapping for Unity - Tutorial and Overview (created)
Tutorial on how to incorporate Quest controller button mapping into Unity projects. Explained how controller mapping works and the different types of controller mapping, and included a step-by-step tutorial on how to build a simple Unity project with objects that interact with controller input (i.e. button clicks). Also included links to resources for further exploration of more complex and customized controller input mapping (incorporating scripts).
CONTRIBUTION 5: AR for Climate Awareness and Sustainability (created)
Page for different applications of AR and VR technology in climate awareness and sustainability. Compiled research, images, videos, and links for examples of AR applications in a range of related fields, including environmental education and awareness, virtual field trips, wildlife conservation, sustainable fashion, immersive environmental storytelling, and disaster management.
CONTRIBUTION 6: Unity Water/Fluid Data Visualization Tutorial (created)
Tutorial on how to build a water or fluid visualization in Unity. Added explanations of what shaders and materials are in Unity and how they can be used to build water-like textures, appearances, and movements for particular objects. Covered steps on how to add custom materials to objects in Unity and adjust material settings / underlying shaders to meet specifications. Also added resources for creating custom water shaders (from scratch) using shader graphs for more control and precision. Added a section on data-driven transformations, what they may be useful for, and how they can be incorporated into water/fluid visualizations in Unity via C# scripts. Provided an example script for reference.
CONTRIBUTION 7: Unity Timeline for Simple Animations Tutorial (created)
Tutorial on using Unity’s Timeline for simple animations. Added step-by-step instructions on creating and editing animations using the Timeline window. Covered how to add keyframes, adjust animation properties, and control object movement without scripting. Included images for clarity and links to further resources for experimentation, as well as related Unity documentation and tutorials.
CONTRIBUTION N [short description] <ADD LINK>
Google Earth Web
Area I grew up in
Dorm at Brown
Place of significance (my high school in Delhi)
Google Earth VR
Area I grew up in
Dorm at Brown
Place of significance (my high school in Delhi)
Last landmark to current location (dorm)

HOURS SUMMARY
Total: 84.5 hours (Updated 3/19/25)
HOURS JOURNAL
1/26/25 - 4 Hours
Created journal
Joined slack group and completed self-introduction
Read biography of Kennedy Gruchalla. Glanced over some of his research and read paper abstracts. Listed questions to ask him in the activity document
Read background papers:
SpatialTouch: Exploring Spatial Data Visualizations in Cross-Reality
Augmented Virtual Reality in Data Visualization
Researched Agora World
Homework Exercise:
Three changes should each require ~10 minutes to complete.
Add description of more related "-reality" terms in the AR/VR comparison page— such as MR (mixed reality) and XR (extended reality)
Adding links/images to the products listed in the Low Budget VR Headsets page
Including city bike (e-bike) data as sample data for VR data in the Scientific Data page
Three changes should each require ~1 hour to complete.
Including sub-sections on "VR in Music", and "VR in Performance Art" in the VR in The Performing Arts page -- i.e. finding and attaching some articles
Including a section in Scientific Data describing where and how to find good data (i.e. how to know if a dataset is appropriate to use) for projects
Including a section on prominent VR installations globally
The final three changes should each require ~10 hours to complete.
Uses of VR in disaster preparedness and emergency management, in the "What is VR for?" Literature Review -- requires finding, reading, and linking multiple related research papers (as there are quite a few)
Creating a page on VR Simulation Software (e.g. for aviation simulations, driving simulations, medical simulations)
Creating an "Opinion" or "Debate" page where people can each write differing viewpoints/perspectives on VR/AR, backed up with data, research papers, etc -- kind of like a discussion page for contentious VR topics (ethics, accessibility)
Complete one of the 10-minute changes.
Completed second change (adding useful links)
1/29/25 - 5 Hours
Created page for VR in Architecture/Urban Planning, compiled research on Agora World
Completed set up of Meta Quest 3 and connected to paperspace machine
Read through 3 previous projects
Potential pieces of software to explore and evaluate for my research project
Unity
Paraview
Fieldview
ReSpo Vision
Potential Project Ideas
Create 3D visualizations to represent traffic data on maps and map-like views -- could be tailored for a specific vehicle, like e-bikes, and represent denser dropoff/pickup locations to make it easier to find and drop bikes. Could visualize energy usage and emission data. E-bikes in particular have vast datasets online (e.g. https://www.kaggle.com/datasets/hassanabsar/nyc-citi-bike-ride-share-system-data-2023). Compare different software suited for this task
Compare and contrast different AR/VR software for responsiveness to external stimuli, like lighting, sound, or non-user movement (e.g. pets) -- for example, an app that interacts with external light sources to change the view of the user would need strong responsiveness, so it could be useful to evaluate which software would best allow for these functionalities in different domains
Represent drought and water inequality data in 3D to give visual references to the lack of water resources available in drought-prone areas. Could be coupled with visualizing data to represent changes in climate, increases in temperature, and frequency of droughts over time.
2/2/25 - 2 Hours
Completed Google Earth VR/Web Activity - added Google Earth VR and Google Earth Web sections to journal
Completed Google Earth VR vs Web form
AFrame Development Lab
2/3/25 - 4 Hours
Bezi Lab -- building my own Bezi scene
Installed DinoVR
Read DinoVR paper
Questions:
It was noted that most of the participants were native English speakers, and the remaining non-native speakers still had a strong proficiency in English. Is it likely that the results may have been different for the same experiment conducted in a different language, or with people with varying proficiencies?
It was also noted that "the reading of numerical data might lead to different outcomes" — I wonder if the outcomes would also differ for other types of reading? Such as close or analytical reading (that may be done in a History or English class), versus scientific reading (like a scientific paper), reading code from a particular programming language, or reading of signage on streets/buildings, etc.
Do other factors, like accompanying animations, images, graphics, or textual alignment, also have an impact on reading speed and feasibility in VR?
Thoughts
The study makes me think about how much of our reading ability is shaped by the physical constraints of 2D text on paper or screens, and how those expectations shift in immersive environments.
The findings also make me wonder whether trade-offs exist in textual design, and when they are (or should be) made. Should we prioritize aesthetics and depth at the cost of cognitive ease, or flatten the experience and stick to a particular font size / layout / paneling to make reading faster? The balance between strain and immersion in this case is interesting to think about.
The results also suggest that denser visual environments with occlusion slow reading, which makes sense, but I wonder if there's a threshold where complexity could actually enhance engagement? Couldn't some dense visual features help draw and focus attention rather than just act as "occlusions"?
Brainstorm software evaluation metrics
Movement tracking accuracy
Rendering quality
Latency -- delay/lag between user (or external) actions and the software's response
Shadow and lighting consistency for rendered objects (i.e. how realistic/integrated it is in passthrough)
Naturalness/intuitiveness of the interactions (i.e. how "natural" does a particular interaction feel when using this software -- this is more subjective)
Means and extent of collaboration (in its features)
Power consumption
Ease of use + understanding for a novice learner
Refining project ideas:
Idea #1: Vehicle traffic and emission visualization
What I will do:
Building a visualization prototype of an application that overlays AR emission data when pointed at different vehicle types (cars, e-bikes, motorcycles, etc.)
Building a visualization representing the real-life size of emissions (maybe in terms of kilos of coal burnt) for different vehicles -- could be interactive with distance (i.e. lump of coal grows as distance increases)
Evaluating different software for visualizations with dynamically user-controlled inputs (like distance and time)
Class activity:
Each student interacts with a visualization that allows them to choose a vehicle type, distance travelled, and visualization method (like lumps of coal, or cubic meters of CO2) to represent emissions for a given trip -- and actually having that physical comparison show up life-size in AR
Potential Deliverables:
Comparative table evaluating different AR software for visualizing dynamic (i.e. actively changeable by the user) inputs
Wiki page on AR for vehicular emission applications
Idea #2: Comparing AR software for responsiveness to external stimuli
What I will do:
Test a range of AR software/applications designed to respond to changes in surroundings
Document how effective each software is across a range of responsiveness stimuli, to come up with a conclusive ranking / report on what each is good and bad at
Build an interactive visualization in each software that engages with stimuli in the external world (like lighting, sound, movement)
Class activity:
Building an AR experience where the class can make changes in their physical space to trigger changes in the AR environment (make loud noises to power an AR light, for example, or switch off the light to change the color palette of the scene) -- test this with different software and ask them which worked best/why
Potential Deliverables:
Comparative table evaluating different AR software on different responsiveness metrics (such as reaction time for light changes, sound changes, tactile changes, etc)
Tutorial on how to use a particular software and build a feature that responds to stimuli of your choice
Wiki page comparing features of the best AR software for light/audio detection
Idea #3: Visualizing water-availability data in areas with limited water access / drought-prone areas (over time)
What I will do:
Build a visualization representing water availability per household in water-scarce areas, showing it in a tub in AR (or taking up the entire room), and showing it change (rise/fall) over time as water availability changes
Implement different water-visualization methods, including water-level projections, drought-impact heatmaps, and water footprints for different activities (showering, flushing, cooking, etc.)
Evaluate different AR software for simulating water-flow and movement
Class activity:
Interactive activity where students choose from a list of water-scarce areas and see the room fill up with the total amount of water available to a household in that area. Students can then choose to compare with similar visualizations of how much water daily activities like showering may take.
Potential Deliverables:
AR prototype showcasing real-time and historical water data overlays
Tutorial on creating water simulations in a particular AR software
Wiki page on AR applications for climate awareness and sustainability
2/4/25 - 2.5 Hours
Completed portions of Dino VR activity that I was unable to complete in class due to technical difficulties
Completed Dino VR Feedback Form
Researched potential datasets for the project I am currently leaning toward (Idea #3)
World Bank Data (from 1992 to 2014) - Renewable internal freshwater resources per capita
Worldometers - Water Use by Country (probably not as reliable)
Water Footprint Calculator - Water Footprint Comparisons by Country
FAO (Food and Agriculture Organization of the UN) Aquastat Data - range of very comprehensive water-use data by country and year
Better summarized by the World Population Review
OECD - Water Withdrawal Data (per capita)
Existing 2D Visualizations of the above data (and more)
Our World in Data (most recent revision in 2024) - Water Use and Stress
Our World in Data (most recent revision in 2024) - Clean Water
WorldMapper - No Water Access Per Capita
Essential Need - World Water Data for Safe Drinking
2/5/25 - 1.5 Hours
Read and gave feedback + comments on my classmates' (Vishaka and Connor) project ideas in the Activity Doc
Refined a project plan for my chosen idea (Idea #3) — created project proposal (refer to proposals above)
2/6/25 - 1 Hour
Refined project proposal based on classmate feedback (being more specific in terms of deliverables and additions to the wiki)
Reformatted project proposal to be in a separate document (pdf), rather than in different scattered places across my journal
2/8/25 - 2.5 Hours
Added Dino images to class activity document
Researched AR software for water simulations based on time-series data
Compiled research and comparisons into a new wiki page - AR Software for Fluid Simulations
2/9/25 - 2.5 Hours
Refined project 1 proposal based on feedback from peers + Professor Laidlaw
Created a standalone Project 1 page (linked at the top and here too)
Completed proposal presentation
Self-evaluation of project:
Rubric scale: strongly disagree / disagree / neither agree nor disagree / agree / strongly agree
The proposed project clearly identifies deliverable additions to our VR Software Wiki -- 4 (deliverables listed in timeline on project page)
The proposed project involves passthrough or "augmented" in VR -- 4 (list of ideas on project page for using AR/passthrough for data viz)
The proposed project involves large scientific data visualization along the lines of the "Scientific Data" wiki page and identifies the specific data type and software that it will use -- 5 (list of scientific datasets to focus on for this project on project page + proposal slides and document)
The proposed project has a realistic schedule with explicit and measurable milestones at least each week and mostly every class -- 4 (detailed schedule on project page + proposal slides and document)
The proposed project explicitly evaluates VR software, preferably in comparison to related software -- 4 (evaluation of AR/2D data visualization via class activity, and evaluation of different software for building fluid simulations to use in AR)
The proposed project includes an in-class activity, which can be formative (early in the project) or evaluative (later in the project) -- 4 (class project activity described on project page + proposal document/slides)
The proposed project has resources available with sufficient documentation -- 4 (resources and software listed on project page)
2/10/25 - 2.5 Hours
Updates to the wiki page I previously made: AR Software for Fluid Simulations
Completed software evaluation criteria and research for fluid visualizations
Completed tabular software comparison (will guide my own software choices for my project, but can also be useful for other projects with fluid visualizations)
Activities logging rubric evaluation
Key for each criterion:
5 == exceeds expectations
4 == meets expectations
3 == mostly solid with some gaps
2 == shows some progress
1 == some attempt
0 == not found
Criteria:
Journal activities are explicitly and clearly related to course deliverables - 4
deliverables are described and attributed in wiki - 4
report states total amount of time - 4
total time is appropriate - 4
2/12/25 - 2 Hours
Developed assessment criteria and a usability target for fluid visualizations, particularly to evaluate the final visualization I will produce in Project 1, but also to evaluate existing fluid visualizations and guide choices in developing new ones (beyond just this project). Attached in:
Updated in AR Software for Fluid Simulations
Also updated in Aarav Kumar Project 1
Based on the assessment criteria and usability targets for my visualization, and on my software comparison research for fluid simulations, I decided to use Unity with the Meta XR SDK to build and deploy my water level fluid visualization
Installed and set up Unity account to begin prototyping
2/14/25 - 2 Hours
Set up Unity project for XR development (installed Meta XR Core SDK, XR Plugin Management, and configured the project for Android) and set up Meta Quest device and developer account to allow for Unity app development
Researched tutorials for building water (or other fluid) simulations in Unity - began prototyping a simple water pool.
2/18/25 - 3 Hours
Continued refining fluid simulation prototype with guidance from tutorials on water texture and effects for Unity's custom water pools
Wrote a C# script to read CSV data on water levels in various countries over time and dynamically adjust the water level of my pool object accordingly (see the sketch at the end of this entry)
Continued testing and debugging my script until water levels changed correctly for any inputted country from the dataset
Continued refining water movement and fluid dynamics for a more realistic and interactive appearance
Paraview and Ben's Volume Rendering installation and setup
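A minimal sketch of what such a CSV-driven water-level script could look like (the class name, CSV location, and column layout are illustrative assumptions, not my exact project files):

```csharp
// Illustrative sketch, not the exact project script: load per-year water levels
// for one country from a local CSV (rows: country,year,level) and expose a
// method that moves/scales a "water" object to match a given point in time.
using System.Collections.Generic;
using System.IO;
using UnityEngine;

public class WaterLevelController : MonoBehaviour
{
    public Transform water;            // the pool/water GameObject to resize
    public string country = "India";   // which country's data to display
    public float metersPerUnit = 1f;   // mapping from data value to world height

    private readonly List<float> levels = new List<float>();

    void Start()
    {
        // Reads from the project folder -- fine in the editor, but this file is
        // not automatically packaged into a headset build (see the 2/26 entry).
        string path = Path.Combine(Application.dataPath, "Data/waterLevels.csv");
        foreach (string line in File.ReadAllLines(path))
        {
            string[] cols = line.Trim().Split(',');
            if (cols.Length == 3 && cols[0] == country && float.TryParse(cols[2], out float level))
                levels.Add(level);
        }
    }

    // t in [0, 1] sweeps across the years in the dataset.
    public void SetTime(float t)
    {
        if (levels.Count == 0) return;
        float height = levels[Mathf.FloorToInt(t * (levels.Count - 1))] * metersPerUnit;

        // Stretch the water block vertically and keep its base on the floor.
        Vector3 s = water.localScale;
        water.localScale = new Vector3(s.x, height, s.z);
        water.localPosition = new Vector3(water.localPosition.x, height / 2f, water.localPosition.z);
    }
}
```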
2/21/25 - 4 Hours
When trying to deploy my Unity project to my headset, I realized I had made a crucial error in the render pipeline I was using. As it turns out, I had accidentally created my project on the High Definition Render Pipeline (HDRP), which is incompatible with Android devices like the Meta Quest headset. To avoid starting over, I began researching whether I could keep my current visualization and render pipeline and use AR render streaming to display the Unity scene on my headset, as illustrated in the tutorial below:
However, after further research I found that it was mostly recommended to just switch the render pipeline to prevent rendering issues down the line, so I ultimately decided to restart my project on the Universal Render Pipeline (URP), confirming that it is compatible with the Meta Quest 3. This set me back on my milestones quite a bit, but it was a necessary pivot for my visualization to deploy to the headset correctly.
In URP, Unity's custom water systems and pools do not work, so I began researching ways to apply realistic water effects to existing GameObjects using shaders and Unity's Shader Graph. Rather than making my own shader, I found two water texture shaders on the Unity Asset Store, with linked tutorials on how to customize and implement them
Water Shader 1: https://assetstore.unity.com/packages/2d/textures-materials/water/simple-water-shader-urp-191449
Video Tutorial / Demo: https://www.youtube.com/watch?v=mWg4CE6ybKE
Water Shader 2: https://assetstore.unity.com/packages/vfx/shaders/urp-stylized-water-shader-proto-series-187485
Video Tutorial / Demo: https://www.youtube.com/watch?v=fHuN7WkrmsI&t=1s
I tested both shaders and tutorials to see which would work better. The material of the first shader was rendering very unnaturally for me, despite trying multiple troubleshooting tips and following the tutorial, so I decided to go with the second one.
The water shader I selected (the second one) was more stylized than I was aiming for, so I manually refined its settings and underlying shader graph for a more realistic appearance, referencing the tutorial for guidance.
2/22/25 - 3 Hours
Completed the Apple Unity Lab with Sofia and Jakobi in the VCL
Continued refining the shader graph settings to match my assessment criteria and usability target for my visualization (explained in my project page).
Repurposed my older C# script (from the HDRP project) with changes to object references to control water level changes (based on CSV data).
Began adding UI elements (slider and text) to display the year by year fluctuation of water levels (I wanted to have a slider moving to represent changes in time, and the text changing to reflect the year the data is from)
2/23/25 - 2 Hours
Incorporated the UI elements into my C# script to allow dynamic updates to the slider and text as the water level changes (a minimal sketch appears after this entry). Followed tutorials for implementing dynamic updates to text and slider elements
Created presentation to update class on project 1 progress. Presentation linked here, as well as on project page.
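A minimal sketch of how the slider and year label could be driven from a script (field names are illustrative, and a real project might use TextMeshPro instead of UI.Text):

```csharp
// Illustrative sketch: push the current year and a 0..1 progress value to a
// UI slider and text label as the water level animates over time.
using UnityEngine;
using UnityEngine.UI;

public class YearUIBinder : MonoBehaviour
{
    public Slider yearSlider;   // non-interactive slider showing progress through the years
    public Text yearLabel;      // label showing the current year
    public int firstYear = 1992;
    public int lastYear = 2014;

    // Called by the water-level controller whenever it advances in time (t in 0..1).
    public void UpdateUI(float t)
    {
        int year = Mathf.RoundToInt(Mathf.Lerp(firstYear, lastYear, t));
        yearSlider.value = t;              // slider range assumed to be 0..1
        yearLabel.text = year.ToString();
    }
}
```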
2/25/25 - 5.5 Hours
Incorporated Meta Building blocks to enable AR passthrough and AR controller detection when deploying my scene.
Followed tutorial for deploying my Unity app into my headset: https://developers.meta.com/horizon/documentation/unity/unity-env-device-setup/
I found that my visualization was floating in mid-air and not anchoring to anything, which was a problem, as I designed it with the assumption that people would be able to see it scaled on the floor and walk around it to relate to the volume in real space. So, I began researching and experimenting with ways to anchor my visualization.
To anchor my visualization, I tried using spatial anchors, but was unable to get them to work despite following some tutorials and troubleshooting tips. I also tried building custom action items to allow the controllers to interact with the visualization itself (to scale and transform it in AR); however, this also didn't work for me after experimenting for a long time.
To test another avenue for anchoring, I decided to pivot to using Unity XR's "Grabbable" item, to allow users to grab on to the visualization and anchor it themselves. I nested the visualization under the grabbable item, and after playing around a bit, found that configuring the settings of the Meta Controller building block so that it selects the grabbable item by default allowed for the visualization to anchor to the controller. Placing the controller on the floor (or wherever I wanted) then also moved the visualization with it, allowing me to view it and walk around it as expected. However, this setting scaled up my visualization dramatically, making it almost too big to even see.
Simply altering the "scale" property of my visualization in the inspector did not fix the problem, as it changed the way the water object moved and resized to represent shifts in water levels. I realized this was because my water-level script relied on global positioning, not the local positioning of the visualization's parent object, so reconfiguring the script to use local coordinate transformations and adjusting the water object's position settings fixed the issue (see the sketch after this entry).
Following this, I tried adding further action items to link to the controllers, allowing them to interact with the canvas elements such as the text and the slider. However, I wasn't able to figure out how to make this work, as my controllers didn't seem to respond to any button I programmed to alter the visualization. I will likely go to Wednesday's office hours with Melvin and Jakobi.
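A minimal sketch of the local-space fix described above (names and the parent/child layout are illustrative):

```csharp
// Illustrative sketch: position and scale the water object in its parent's
// local space, so scaling the whole visualization (e.g. to anchor it to a
// controller) no longer breaks the water-level animation.
using UnityEngine;

public class LocalWaterPositioner : MonoBehaviour
{
    public Transform water; // child of the visualization's root object

    public void SetLevel(float height)
    {
        // Before: the script set water.position in world coordinates, so any
        // scale/transform applied to the parent threw the heights off.
        // After: work in local coordinates, which scale with the parent.
        Vector3 p = water.localPosition;
        water.localPosition = new Vector3(p.x, height / 2f, p.z);

        Vector3 s = water.localScale;
        water.localScale = new Vector3(s.x, height, s.z);
    }
}
```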
2/26/25 - 3 Hours
To add an even better sense of scale to the visualization, I decided to add a relatable object that conveys the correct size of the visualization, in case I run into scaling errors in AR again. I chose a scaled chair, since it gives the viewer a relative sense of how big the visualization should be if it does not appear at the expected size. I found a chair asset on the Unity Asset Store that worked effectively for my model:
I also realized that the app I deployed each time on my headset from Unity seemed to always deploy in the "edit" mode, not the "player" mode, in which the water level changes could actually be seen (as the script for that is only activated on play). I tried various troubleshooting tips to get my app to deploy and run in player mode, but was unsuccessful.
After going to office hours and seeking help from Melvin, I realized that the issue with my script not running in my deployed app was that it was reading CSV data stored locally on my computer, which was not packaged as part of the build I was deploying -- as a result, because there was no CSV to read in the app, the water level stayed constant. So, I changed my script such that the water data is hardcoded into it directly, and this fixed the issue.
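A minimal sketch of that workaround (the numbers are placeholders, not the real dataset; shipping the CSV as a TextAsset in a Resources folder would likely also survive the build):

```csharp
// Illustrative sketch: embed the per-year values directly in the script so the
// deployed app doesn't depend on a CSV file that isn't packaged into the build.
// The values and start year below are placeholders, not the actual data.
using UnityEngine;

public static class HardcodedWaterData
{
    public const int FirstYear = 1992;

    // year index -> renewable freshwater per capita (placeholder numbers)
    public static readonly float[] Levels = { 1.00f, 0.97f, 0.93f, 0.90f, 0.86f };

    public static float LevelForYear(int year)
    {
        int i = Mathf.Clamp(year - FirstYear, 0, Levels.Length - 1);
        return Levels[i];
    }
}
```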
2/27/25 - 3 Hours
Continued refining the Unity AR model and worked on enabling controller tracking in my app -- which had strangely stopped working even though I had gotten it to work earlier. I really struggled with this, especially due to the lack of documentation available on the Meta Building Blocks in Unity. After playing around with the settings for a while and rebuilding relevant parts of my scene (like the controller anchors and camera rig), I was able to figure out how to get both hand and controller tracking to work.
Incorporated ray-cast elements to allow users to point at the water visualization with their controllers and hands
Incorporated an input field text box in the UI canvas (to allow users to input the country name for which they want to see the water data) — just a static object at this point, but the goal is to make this interactive and allow users to type into the field with a virtual keyboard.
2/28/25 - 3 Hours
Tried (and failed continuously) to incorporate a virtual keyboard that would allow users to type into the text box to specify a country. I was trying to do this with the Virtual Keyboard Building Block provided by Meta in Unity, but as before, the lack of documentation on its features made it very difficult to navigate and make the keyboard functional. The limited documentation on the Meta Horizon website (ex1, ex2) seemed to be outdated, as some of the features it described were missing or had different names than on my system, and my virtual keyboard did not appear in my scene like in their example. Additionally, given the newness of the feature, there were no helpful YouTube tutorials explaining the Virtual Keyboard feature, nor resolved discussion posts on the Unity forums about it.
Tried enabling a button mapper for my controllers to select the input field with a button (or trigger) and get the system keyboard to show up (rather than having to add my own virtual keyboard), but this also didn't seem to work, and the input field on its own was largely non-interactive.
3/1/25 - 3 Hours
Having given up on using the virtual keyboard at this point, I decided to explore other avenues of getting the regular system keyboard to show up (without necessarily having to map the keyboard visibility to a button, as I was trying before). With some research, I was able to build a script that enables and displays the system keyboard on app start and links it to the input field text (a minimal sketch appears at the end of this entry). Thankfully, this seemed to work, and with some further tweaking of the script, I was able to reflect changes in the keyboard (i.e. what is being typed) in the input field as well, allowing users to interact with it.
Worked on integrating the text in the input field into my water level controller script, so that the water visualization can change dynamically based on the input that is typed in by the user.
Removed redundant or unused assets, scripts, and objects cluttering my scene that had been collected over time as I was playing around with different features that I didn't end up using (or that didn't end up working).
Debugged script for slider updates, which had broken since I enabled the text input field to interact with the water level script.
Rearranged UI elements to prevent blocking user's view of the actual water visualization.
Enabled re-centering of the visualization (i.e. when Meta button is pressed on the controllers).
Updated project page with a list of resources and tutorials I used for development and debugging help
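A minimal sketch of the system-keyboard approach from this entry (field names are illustrative; on Quest this also assumes the system keyboard is enabled for the app in the Meta/OVR project settings):

```csharp
// Illustrative sketch: open the on-device system keyboard when the scene starts
// and mirror whatever is typed into a UI input field each frame.
using UnityEngine;
using UnityEngine.UI;

public class SystemKeyboardInput : MonoBehaviour
{
    public InputField countryField;       // UI field showing the typed country name
    private TouchScreenKeyboard keyboard;

    void Start()
    {
        // Open the system keyboard as soon as the app starts.
        keyboard = TouchScreenKeyboard.Open("", TouchScreenKeyboardType.Default);
    }

    void Update()
    {
        if (keyboard == null) return;

        // Mirror the keyboard's current text into the input field.
        countryField.text = keyboard.text;

        if (keyboard.status == TouchScreenKeyboard.Status.Done)
        {
            // Hand the final text off to the water-level controller (illustrative).
            Debug.Log($"Country entered: {keyboard.text}");
        }
    }
}
```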
3/2/25 - 4 Hours
Worked on refining the final elements of my visualization for the class activity (making sure that the water color and texture looked as intended, disabling unused scripts, etc.). Unfortunately, while trying to refine my visualization and remove redundant elements, at the very last step, I removed a "Grab Interactable" element that I didn't realize was critical to my scene. As soon as I did this, my entire visualization broke -- the water cube now appeared as an overlay on my headset, rather than as an object in passthrough. Unfortunately, Unity doesn't offer free version control, and once you save something in Unity, you can't undo your changes even if you never closed the project window. As a result, I couldn't go back to my old version, and trying to build another Grab Interactable element simply was not working.
I tried other ways of potentially fixing the issue — fixing any console errors that were showing up, even if they didn't seem relevant, rebuilding the camera rig and passthrough elements, resetting my XR settings on Unity, restarting Unity and my headset to see if the problem was with them.
None of these worked, so I decided to try going back to my old approach of having my visualization anchored to one of my controllers. I attempted to follow some YouTube tutorials on spatial anchoring, but this also failed to work with my setup, as my controller button mapper was different from what each tutorial showed (probably because of an update), and I was only able to anchor small objects in place, not my entire visualization (which was very large by comparison).
I also attempted raycasting to get my visualization to anchor to the ground (via a C# script on the camera), but this also did not work.
3/3/25 - 5 Hours
I then tried following some tutorials on creating "grabbable" objects in Unity with Meta's Building Blocks, just to make a grabbable cube (with the hope of then being able to make my visualization grabbable). In doing so, I found that the cube itself was anchored to the ground, so if I could find a way to nest my visualization under the cube's hierarchy, then hopefully it would also anchor to the ground. After trying this and making some refinements to the scale and position, I was finally able to make my visualization anchor to the ground as it was before. I decided not to touch it anymore so I wouldn't break anything else.
Wrote setup instructions + homework on class timeline, and tested them out myself to see if I could build the app onto my headset without having to go through Unity (which worked).
Made class activity page with instructions on how to complete each activity (both 2D and AR versions). Added GIFs for visual support + guidance.
Explored my own visualization to see which countries had the most interesting data to see in AR, to give suggestions for which countries people may want to pick during the class activity.
Made a Google Form for the class activity to ask for evaluative feedback on the 2D vs AR representation of water data.
Met with classmate Colby to test that the setup works and the class activity instructions are clear (we tested each other's apps on each other's headsets)
3/5/25 - 2 Hours
After my class activity, I realized something I needed to work on and refine was interaction with my controller input: for some reason, my deployed project's keyboard would turn off if I pressed any buttons on my controller or clicked somewhere else, so it would be useful to have a button mapped to the keyboard -- that way, if it ever turns off (i.e. hides), a button press can turn it on again. So, I did some research on how to set up controller button mapping for Quest controllers in Unity, and decided to create a tutorial wiki page on Quest Controller Input Mapping in Unity to help others who may be struggling with this same problem.
Researched controller mapping for Meta Quest devices, and added overview of how controller mapping works in Meta Quest and how it can be accessed in Unity to the Wiki page.
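A minimal sketch of the kind of button mapping the tutorial covers (it assumes a Meta XR camera rig / OVRManager is already in the scene; the choice of Button.One, i.e. the A/X face button, is illustrative):

```csharp
// Illustrative sketch: re-open the system keyboard (or toggle any feature) when
// a Quest controller button is pressed, using the OVRInput API from the Meta XR SDK.
using UnityEngine;

public class KeyboardToggleButton : MonoBehaviour
{
    private TouchScreenKeyboard keyboard;

    void Update()
    {
        // GetDown fires once on the frame the button is first pressed.
        if (OVRInput.GetDown(OVRInput.Button.One))
        {
            // Re-open the system keyboard if it has been hidden.
            if (keyboard == null || !TouchScreenKeyboard.visible)
                keyboard = TouchScreenKeyboard.Open("", TouchScreenKeyboardType.Default);
        }
    }
}
```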
3/6/25 - 2 Hours
Continued adding to the Quest Controller Input Mapping in Unity wiki page.
Added a detailed tutorial on how to use controller button mapping in Unity, with step-by-step instructions to build a basic project that recognizes button input and incorporates interactive features reacting to Meta Quest controller inputs when deployed. Added further resources to the tutorial for more advanced mappings (with scripts), as well as links to guides for further exploration and more complex projects incorporating controller inputs.
Self-tested the Unity tutorial and added pictures + videos of the process and final result for visual guidance on how to manage controller input in Unity. Added instructions for setup and configurations as well.
Created a wiki page for AR for Climate Awareness and Sustainability, as this relates closely to what my project tries to achieve, particularly in visualizing environmental data that represents changes in climate and in our sustainable resource levels (renewable internal freshwater resources, in my project's case). Added examples of different (existing) AR applications in varying domains of climate action, including environmental education and awareness, virtual field trips, wildlife conservation, and sustainable fashion.
3/8/25 - 1.5 Hours
Continued adding to the wiki page I created, AR for Climate Awareness and Sustainability
Added sections on AR applications in disaster prevention and management, as well as immersive environmental storytelling.
Added pictures and reference videos for each application of AR discussed.
Added resources for further research / information for each of the fields
Measured the dimensions of my AR water visualization application to determine the accuracy to real-life scaled volumetric data (i.e. to check if 1m in my visualization actually represents 1m in real space, and to what extent). Repeated the measurements across multiple trials in different lighting / spatial contexts to ensure consistency and reliability.
3/10/25 - 2.5 Hours
Created wiki page for Unity Water/Fluid Data Visualization Tutorial
Added explanations of what shaders and materials are in Unity and how they can be used to build water-like textures, appearances, and movements for particular objects.
Covered steps on how to add custom materials to objects in Unity and adjust material settings / underlying shaders to meet specifications (a small sketch of adjusting material properties from a script appears after this entry).
Added resources for creating custom water shaders (from scratch) using shader graphs for more control and precision.
Added a section on data-driven transformations, what they may be for, and how they can be incorporated into water/fluid visualizations in Unity via C# scripts. Provided an example script for reference.
Added water resource data to Scientific Data wiki page (the ones I researched + used for my project)
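Related to the material/shader portion of that tutorial, a minimal sketch of tweaking a water material's properties at runtime (the property names "_BaseColor" and "_Smoothness" are standard URP Lit properties; a custom water shader may expose different ones):

```csharp
// Illustrative sketch: adjust a water material's color and smoothness from a
// script so the look can be tuned (or data-driven) without editing the asset.
using UnityEngine;

public class WaterMaterialTweaker : MonoBehaviour
{
    public Renderer waterRenderer;
    [Range(0f, 1f)] public float smoothness = 0.9f;
    public Color shallowColor = new Color(0.2f, 0.6f, 0.8f, 0.7f);

    void Start()
    {
        // .material creates a per-object instance, so other objects sharing the
        // same material are unaffected (use .sharedMaterial to edit the asset itself).
        Material mat = waterRenderer.material;
        mat.SetColor("_BaseColor", shallowColor);
        mat.SetFloat("_Smoothness", smoothness);
    }
}
```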
3/11/25 - 2.5 Hours
Created wiki page for Unity Timeline for Simple Animations Tutorial
Guided tutorial on using Unity’s Timeline for simple animations.
Added step-by-step instructions on creating and editing animations using the Timeline window. Covered how to add keyframes, adjust animation properties, and control object movement without scripting.
Included images for clarity and links to further resources for experimentation, as well as related Unity documentation and tutorials.
Updated Project 1 Page with class activity results and evaluation phase (evaluating my project, deliverables, class activity, and more)
Created Project 1 Final Presentation to send to David + Melvin
Completed missed Class Activities (Eunjin and Papa-Yaw)
3/16/25 - 2 Hours
Completed 7 scenarios reading. Learned that evaluation in information visualization can be categorized into distinct scenarios serving different goals, and reflected on how these scenarios can guide the design, evaluation, and documentation of my initial project 2 ideas.
Project 2 Draft Plan:
Visualizing earthquake data (likely focused on a specific region rather than a broad cross-country comparison) to represent key attributes such as magnitude, geographic impact (scale), depth, and economic consequences. Additionally, the project may incorporate haptic feedback to enhance the experience of earthquake magnitude.
To achieve this, I plan to use Meta’s Haptics Studio and Haptics SDK, which integrate with Unity, to develop custom haptic effects. These effects could be generated based on seismic waveforms, earthquake audio data, or magnitude charts, allowing users to physically sense the intensity of different earthquakes.
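As a simpler baseline for this idea (before bringing in Haptics Studio clips), a minimal sketch that uses the basic OVRInput vibration call to scale controller rumble with earthquake magnitude; the magnitude-to-amplitude mapping is an illustrative assumption:

```csharp
// Illustrative sketch: trigger a short controller rumble whose strength
// reflects an earthquake's magnitude, using OVRInput from the Meta XR SDK.
using System.Collections;
using UnityEngine;

public class QuakeRumble : MonoBehaviour
{
    public void PlayQuake(float magnitude, float durationSeconds = 1.5f)
    {
        // Richter-style magnitudes are logarithmic; map roughly 4..9 onto 0..1 amplitude.
        float amplitude = Mathf.Clamp01((magnitude - 4f) / 5f);
        StartCoroutine(Rumble(amplitude, durationSeconds));
    }

    private IEnumerator Rumble(float amplitude, float duration)
    {
        // Note: vibration from this call only persists briefly (on the order of
        // a couple of seconds), so longer rumbles would need periodic refreshing.
        OVRInput.SetControllerVibration(1f, amplitude, OVRInput.Controller.RTouch);
        yield return new WaitForSeconds(duration);
        OVRInput.SetControllerVibration(0f, 0f, OVRInput.Controller.RTouch); // stop
    }
}
```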
I also aim to compare different versions of the visualization to better understand their effectiveness in various scenarios. Instead of simply contrasting a 2D vs. AR version (as in my first project), this project will explore:
A focused visualization, representing only scale and magnitude to emphasize core earthquake properties.
A comprehensive visualization, combining multiple variables (magnitude, scale, depth, economic impact, and geographic distance) to assess how layering information affects user interpretation and engagement.
Resources:
Haptics in Unity (for Meta Quest)
Potential datasets to explore
https://www.usgs.gov/programs/earthquake-hazards (range of data for worldwide earthquakes with interactive maps)
https://corgis-edu.github.io/corgis/csv/earthquakes/ (CSV data for earthquake magnitude, location, depth, significance -- but only up to 2016)
Based on the seven evaluation scenarios from the paper, I formulated some key questions or ideas for each scenario, relating to my project:
Understanding Environments and Work Practices (UWP)
For my project, I could look at how emergency responders, geologists, and policymakers currently analyze earthquake data, and what challenges they face in interpreting it
What role could AR-enhanced haptic feedback play in improving earthquake preparedness training?
Evaluating Visual Data Analysis and Reasoning (VDAR)
Do users gain meaningful insights from the focused vs. comprehensive visualization, and which approach supports better analysis?
In my evaluation for this project, I could look at whether adding haptics improves users' ability to detect patterns or compare earthquake severities more effectively
Evaluating Communication through Visualization (CTV)
How effectively does the visualization (with or without haptics) communicate the severity of earthquakes to the general public (i.e. our class)?
Does an immersive AR experience lead to greater information retention compared to traditional earthquake data visualizations?
Evaluating Collaborative Data Analysis (CDA)
How well can multiple users interact with the visualization to discuss and interpret earthquake data together?
Does the haptic-enhanced AR experience foster more engagement or agreement in team-based decision-making scenarios?
Evaluating User Performance (UP)
In my class activity, I could compare how quickly and accurately users identify high-risk earthquake zones using different visualization versions.
How does haptic feedback impact response time when assessing earthquake severity?
Evaluating User Experience (UE)
I could conduct usability tests for my class activity to assess whether haptics improve immersion and engagement or introduce cognitive overload.
What aspects of the visualization cause confusion, frustration, or cognitive overload, especially in the comprehensive version?
Evaluating Visualization Algorithms (VA)
Can haptic feedback be generated and synchronized smoothly with the earthquake visual data without performance lag?
To evaluate my deliverable, I could analyze how well my system scales to larger datasets and whether it can render multiple earthquakes without performance lag.
3/19/25 - 2 Hours
Completed Project 2 Proposal slides to send to Melvin and David
Completed Project 2 Timeline, with milestones, deliverables, and wiki additions for 4/01, 4/03, 4/08, 4/10, 4/15, 4/17, 4/22, 4/24, 4/30, 5/01
Evaluation of Project 2 Plan based on rubric:
Deliverables + wiki additions are clearly identified (with respective dates and locations of where in the Wiki pages/content will be added)
Focused on Scientific Data Visualization -- The use of earthquake magnitude, geographic scale, and economic impact aligns well with large-scale scientific visualization.
Comparative Visualization Approach -- Exploring focused vs. comprehensive visualizations allows me to have an evaluative and comparative portion for my class activity
Lack of AR Collaboration Component -- The rubric encourages collaboration in AR/VR, yet my plan still details a single-user experience with no multi-user interaction or shared data exploration. Possible fixes could be implementing multi-user earthquake analysis, collaborative haptic experiences, or AR-based discussions.
Lack of Explicit Explanation for Class Activity -- I had a class activity planned, but did not explicitly explain it in my draft, so it wasn't clear what the activity would involve. This needs to be explicit and fleshed out, as it is a key component of the project.
Revised Project 2 Plan:
Overview: This project will focus on visualizing earthquake data in AR, representing attributes such as magnitude, geographic impact, depth, and economic consequences. It will also incorporate haptic feedback to simulate earthquake intensity, allowing users to physically sense seismic events.
Key Features & Approach:
Custom Haptics: Using Meta's Haptics Studio + Haptics SDK to generate haptic effects from seismic waveforms, earthquake audio, or intensity charts.
Comparative Visualizations:
Focused Visualization – Displays only magnitude and geographic scale to highlight essential earthquake properties.
Comprehensive Visualization – Combines magnitude, scale, depth, economic impact, and geographic distance to assess the impact of layering complex data in AR.
In-Class Activity: Students will participate in a hands-on AR earthquake simulation where they experience different earthquake data through haptic and visual feedback. They will compare multiple (2-4) visualizations, particularly comparing:
Haptic vs. non-haptic experiences – Does touch feedback improve understanding?
Focused vs. comprehensive visualization – Which is more effective for data interpretation?
Collaboration in AR: A multi-user AR mode where users can collaboratively analyze earthquake data, compare findings, and discuss seismic risks, plus shared haptic experiences where multiple users feel synchronized earthquake intensity, to study how haptics influence perception and decision-making.
Data Source: USGS (United States Geological Survey) real-time and historical earthquake data.
Software Used: Unity + Meta Haptics SDK for AR and haptic rendering.
Wiki Contributions:
Comparison page on software and tools for integrating haptic interaction with Meta Quest (Bezi vs Unity vs WebXR)
Assessment criteria + usability target for (my) haptic AR earthquake models
Tutorial on how to integrate haptics into a project in Unity (or additional software) using different SDKs (compare these SDKs)
Wiki page on applications of haptics and sensory stimuli in AR
Wiki page on 3D mapping development tools for visualizing geospatial data (software/tool comparison + analysis)