Vishaka Nirmal's Journal
Project 2
Project 2 Proposal <ADD LINK>
Presentation for Project 2 Proposal <ADD LINK>
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
Wiki contributions
CONTRIBUTION 1 [VR Modeling Software>Spline Design - Page + content added]
CONTRIBUTION 2 [VR Development Software>VR User Experience>AR Interactions - Page + content added]
CONTRIBUTION 3 [VR Modeling Software>Bezi - Page + content added]
CONTRIBUTION 4 [Scientific Data - Content added]
CONTRIBUTION 5 [Interactive Feedback in AR Usability Test Spring 2025 - Page + content added]
Self Evaluations
1/28/2025 - Course goals
| Now | Goal | Course learning goal |
| --- | --- | --- |
| 1 | 3 | Articulate AR/VR visualization software tool goals, requirements, and capabilities |
| 3 | 4 | Construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research |
| 2 | 3 | Execute tool evaluation strategies |
| 1 | 2 | Build visualization software packages |
| 3 | 4 | Comparatively analyze software tools based on evaluation |
| 2 | 4 | Be familiar with a number of AR/VR software tools and hardware |
| 3 | 4 | Think critically about software |
| 4 | 4 | Communicate ideas more clearly |
| 2 | 4 | Build a vocabulary of terms, cues, and best practices for AR/VR experiences |
2/13/2025 - Journal evaluation
Journal activities are explicitly and clearly related to course deliverables - 3, could be more descriptive of some of the work I’ve been doing with research on wiki/elsewhere in prep for the project planning
Deliverables are described and attributed in wiki - 3, have this planned in the project, but unsure if there’s more needed!
Report states total amount of time - 4, added to each journal entry
Total time is appropriate - 4, running tally of total time is added each time I edit
HOURS SUMMARY
Total: 72 hours
Vishaka's Journal
Journal is formatted with most recent at the top, least recent at the bottom ⬇️
Template - # hours
---
3/15/2025 - 1 hour
15 min - Reading the Seven Scenarios paper
45 min - Project 2 drafting
Looking into new passthrough API access
Drafting a new project goal — collaborative digital/physical feedback
Looking into possibilities of using Arduino and physical computing along with the headset (a serial-reading sketch of how this might connect follows this entry)
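To ground the Arduino idea, here's a minimal sketch of how the laptop could listen for events coming off the board and relay them to the headset prototype. It assumes pyserial and an Arduino sketch that prints one event per line; the port name and event string are hypothetical placeholders.

```python
# Minimal sketch: read button/sensor events from an Arduino over USB serial.
# Assumes the Arduino sketch prints one event per line (e.g. "BUTTON_DOWN");
# the port name and event string are hypothetical placeholders.
import serial  # pyserial: pip install pyserial

PORT = "/dev/tty.usbmodem14101"  # hypothetical; check Arduino IDE > Tools > Port
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as conn:
    while True:
        line = conn.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue  # read timed out with no data
        if line == "BUTTON_DOWN":
            # Placeholder: relay the event to the headset experience here,
            # e.g. through a local WebSocket that the prototype polls.
            print("physical press -> trigger digital feedback")
```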
3/13/2025 - .5 hour
30 min - Set up and went through Mia’s activity!
3/11/2025 - 5 hours
30 min - drafting presentation
30 min - gathering images, notes, current contributions
3.5 hours - data analysis of the in-class activity research findings and adding this information to the slideshow (a summary-script sketch follows this entry)
30 min - self evaluation finalization, sent email with presentation + evaluation to teaching team
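Here's a rough sketch of the kind of analysis script used for the in-class activity findings, summarizing Likert ratings from the Google Form's CSV export per prototype. The column names are hypothetical stand-ins for the actual form questions.

```python
# Sketch: summarize Likert-scale ratings from the Google Form CSV export,
# grouped by which prototype each participant used. Column names below are
# hypothetical; adjust them to match the real form questions.
import pandas as pd

df = pd.read_csv("responses.csv")  # exported from the linked Google Sheet

likert_cols = [
    "Ease of selecting a model (1-5)",       # hypothetical question
    "Ease of learning about a model (1-5)",  # hypothetical question
]

# Mean, spread, and response count per prototype condition.
summary = df.groupby("Prototype")[likert_cols].agg(["mean", "std", "count"])
print(summary.round(2))
```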
3/10/2025 - .5 hours
30 min - Sofia’s in-class activity - went through each of the three models, then filled out the Google Form and notified Sofia about completion of the activity
3/9/2025 - 2.5 hours
45 min - downloading SideQuest, getting developer settings set up on the Quest
30 min - Aarav’s in-class activity, submitted the Google Form, notified Aarav about completion of the activity
30 min - Connor’s in-class activity, went through each of the 3 versions and provided feedback through the Google Form, notified Connor about completion of the activity
15 min - Started Mia’s activity, went through the deforestation website, but then had trouble getting the WebXR link to load — messaged to get some troubleshooting tips!
30 min - Colby’s in-class activity - went through each activity, and then filled out the Google Form. Notified Colby about completion of the activity.
2/26/2025 - 2 hours
30 minutes - finished the Google Form, last touches to slides
1 hour - tried working with hover again, still not working, adjusted to pointer down trigger
30 min - final testing through setup, prototype tasks, survey answering, adding slides link to course timeline
2/25/2025 - 8 hours
30 min - Gathering facts about models, generating audio content to use in prototypes
1.5 hours - Setting up rotation for the loon model on the UI platform
Working with state machine
Troubleshooting with joystick movement triggers
Then added the same rotations for the other models
2 hours - setting up selection/deselection for all models on the UI platform
2 hours - setting up audio interaction for the learn more task for all models on the UI platform
Issues with hovering: hovering triggered on non-active elements
30 mins - adjusting freeform model to use new audio elements
1.5 hours - More issues with hovering??
Hover works well on laptop, but not well with the controllers
Seems to work at times, but having trouble with this overall. Tried out multiple iterations of states/triggers, but nothing worked well
2/24/2025 - 2.5 hours
30 min - Figuring out input mapping
1 hour - Created a slide deck for the in-class activity usability testing plan, with links and instructions
30 min - Created a Google Form for feedback during the in-class activity session
30 min - Finished Ben’s volume rendering activity, pasted screenshots onto the board and finished the feedback form
2/23/2025 - 8 hours
1 hour - Designing screen interfaces in Figma to then copy over to Bezi. (Note that this is an interesting process and I want to document this in the Bezi page)
2 hours - Translating screen interfaces to Bezi
Interestingly, this process is easier to prototype and test while working, since the triggers (hover/pointer down) are available on computer too
1 hour - Compiled learnings from Bezi interaction prototyping, added to the Bezi page
1 hour - Created a usability testing page under the AR Interactions page
Added content from my usability plan to this page
1 hour - Created new slide deck for progress update
Added information about current milestones hit, prototype progress, and more detail on usability plan
2 hours - Drafting usability test plans
Fleshed out the setup plan
Ideally going to have two groups that each explore just one prototype, so I have more time to discuss/gather feedback
Read through the NASA Task Load Index resource and the research on evaluating biometrics visualizations to look for more questions to ask
Drafted specific feedback questions and ways to gather feedback (open-ended questions vs. Likert-scale ratings), based on the NASA Task Load Index resource; a scoring sketch follows this entry
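For reference, a small sketch of how the task load scores could be computed, using the unweighted "Raw TLX" variant, where overall workload is simply the mean of the six subscale ratings (each 0-100). The sample participant ratings are made up.

```python
# Sketch: score NASA Task Load Index responses with the unweighted
# "Raw TLX" variant: overall workload = mean of the six 0-100 subscales.
SUBSCALES = [
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
]

def raw_tlx(ratings: dict) -> float:
    """Overall workload as the mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Hypothetical ratings from one participant's feedback form:
participant = {
    "mental_demand": 55, "physical_demand": 20, "temporal_demand": 35,
    "performance": 25, "effort": 50, "frustration": 30,
}
print(f"Raw TLX workload: {raw_tlx(participant):.1f}")  # 35.8
```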
2/20/2025 - 4.5 hours
4.5 hours - Refining freeform interaction prototypes
Working with state machine, had to do a lot of troubleshooting with how colliders worked and getting different states to work the way I wanted. It was also time consuming to prototype some of the repeated actions, since the states are hard to copy over.
Tried out prototyping with audio; decided to use spatial audio instead of point audio, and hooked it up to a physical interaction ("touch")
Had some bugs to figure out, specifically with the body and hand tracking, since I'm using the body rig and controller colliders to trigger some interactions. There's not a great way for this to be super accurate
Current working prototype plan of interactions to test/compare
selection of a model (movement vs click)
looking at a model (movement around vs adjust with controllers)
learning more about a model ("touch" vs hovering?)
2/18/2025 - 6 hours
30 min - Chose three models of different sizes and types, based on size, model type, and usage on the Nature Lab's website
5.5 hours - Working on freeform prototype
Created a first working version where click selects a model, hides others
Got familiar with state manager
Then adjusted to use colliders instead, having a user step closer to the model to select it.
Tested on the Meta Quest; found some improvement opportunities: using a smaller collider box and adjusting the text visuals
Added in instructional pieces that appear when one item is selected
2/16/2025 - 2 hours
2 hours - Getting ready to develop in Bezi
Before developing, I mapped out a user flow for how people will interact with this prototype. I plan to have 3 models, taking inspiration from Colby’s project, where I’ll use a variety of sized items that users can explore. In the traditional prototype, users will get a quick overview of how to use controls to manipulate the model, while in the spatial prototype, users will get instructions on how to move around the model.
I created a quick hand drawn sketch, then translated this to rough wireframes in Figma as a starting point.
Watched this video for Bezi; seems like colliders and the body rig are going to be more useful for this project. With this and the size limitations of Spline, I've finalized my software choice!
2/16/2025 - .5 hour
10 min - Heard back from the RISD Nature Lab, looked through documentation they had on the digital use of their models
20 min - Summarized findings, and added this model and relevant information to the [Scientific data] page.
2/14/2025 - .5 hour
10 min - Installed ParaView and completed the setup process (had to reach out about issues with Paperspace earlier, but all solved!); a scripting smoke-test sketch follows this entry
10 min - Installed volume viewer and completed setup process
10 min - Received documentation back from the RISD Nature Lab, sifted through the information to find out what might be useful to add to the wiki
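The smoke test mentioned above: a minimal pvpython script (pvpython ships with ParaView) that renders a sphere and saves a screenshot, just to confirm the install works end to end. The output filename is arbitrary.

```python
# Minimal ParaView install check, run with pvpython (bundled with ParaView).
# Builds a sphere source, renders it, and writes a screenshot.
from paraview.simple import Sphere, Show, Render, SaveScreenshot

sphere = Sphere(Radius=1.0, ThetaResolution=32, PhiResolution=32)
Show(sphere)   # add the source to the active render view
Render()       # draw the scene
SaveScreenshot("paraview_check.png")  # arbitrary output filename
```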
2/12/2025 - 3.5 hours
30 min - Sent email to the RISD Nature Lab about current usage of their 3D models, and how they see digital vs. physical model use
1.5 hours - Setting up models in Spline & Bezi
Began importing one model into Bezi, was able to figure out importing easily.
Bezi setup steps
Created a Bezi file set up for AR
Downloaded the model from Sketchfab as an .obj
This came as a zip file with both the model and the textures as .jpeg files
In Bezi, went to Object>Upload 3D model
Uploaded .obj file
This imported at a super huge size; needed to scale it down to be visible for the body rig camera (a pre-scaling sketch follows these setup steps)
With the new model selected, click on the Material; next to the color picker, there's a square that allows you to upload an image for the texture. The models are already mapped for these textures, so it's very easy to get the textures on!
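The pre-scaling sketch referenced above: rather than scaling inside Bezi every time, the .obj could be scaled once on disk with trimesh before uploading. The filenames and target size are assumptions to tune by eye.

```python
# Sketch: pre-scale an oversized .obj before uploading to Bezi so it
# imports near body-rig scale instead of "super huge". Requires trimesh
# (pip install trimesh); filenames and target size are hypothetical.
import trimesh

mesh = trimesh.load("dog_skull.obj", force="mesh")
largest = max(mesh.extents)         # bounding-box size along x/y/z
target = 0.5                        # desired largest dimension, in meters
mesh.apply_scale(target / largest)  # uniform scale down to the target
mesh.export("dog_skull_scaled.obj")
```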
Learned about interaction design tools within Bezi
Watched this video, which gave a good overview of how I could get specific objects to be interactive
Began setting up the same model in Spline
Spline setup steps
Created an empty Spline design file
Downloaded the model from Sketchfab as an .obj
This came as a zip file with both the model and the textures as .jpeg files
In Spline, went to Menu>Import>3D Model
Dog skull file was too big! Max import size is 60 MB — the model is 73 MB (a quick size-check sketch follows these setup steps)
Tried again with a Nautilus Shell model (19 MB)
Again, this imported very large and needed to be scaled down to camera size
With the model selected, open the Material tab on the right, change the type to Image, then upload the image. Again, the model is mapped for these textures so it's super easy to add!
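The size-check sketch referenced above, for catching Spline's import cap before an upload fails. The 60 MB limit comes from the error hit earlier; the zip filename is hypothetical.

```python
# Sketch: check each file in a downloaded Sketchfab zip against Spline's
# 60 MB import cap before trying to upload. Filename is hypothetical.
import zipfile

LIMIT_MB = 60

with zipfile.ZipFile("nautilus_shell.zip") as zf:
    for info in zf.infolist():
        size_mb = info.file_size / (1024 * 1024)
        verdict = "OK" if size_mb <= LIMIT_MB else "too big for Spline"
        print(f"{info.filename}: {size_mb:.1f} MB ({verdict})")
```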
Looked into interactivity with Spline
Read through this article, which provides a quick intro to interaction, and is very similar to Figma animations/triggers
Also found this list of events that can be used to trigger animations
Created table for comparison of both software setups, documented in project proposal document: https://docs.google.com/document/d/1ldf-Hqjz0hVtMqVJUE2AOSJ20En8BqrCIso0aAUHGQU/edit?tab=t.0
1.5 hours - Adding content to new pages in the wiki
Created new Bezi page under VR modeling software (with links to the Lab page)
Added description, setup steps, interaction details
Added to the Spline page I made earlier
Added setup steps and interaction details
2/11/2025 - .5 hour
20 min - Self grading journal
Journal activities are explicitly and clearly related to course deliverables - 3, could be more descriptive of some of the work I’ve been doing with research on wiki/elsewhere in prep for the project planning
Deliverables are described and attributed in wiki - 3, have this planned in the project, but unsure if there’s more needed!
Report states total amount of time - 4, added to each journal entry
Total time is appropriate - 4, running tally of total time is added each time I edit
10 min - Decided on in-class activity date and added this to the wiki - 2/27. Class feedback mentioned I could try for an earlier date, but I’m hoping to schedule this right before my travel plans so that the data analysis can be remote!
2/9/2025 - 3 hours
2 hours
Working on first milestones for Project 1
Found sources for AR interaction methods, created content for new AR interaction page
Brainstormed potential interactions to develop in project plan
Adjusted project plan slightly to focus on individual interactive feedback vs world/physical setting interactive feedback
Planning to build out two models to test (one with audio/visual/maybe haptic feedback, and the other with more instruction-based information)
Mapped out the usability testing plan more thoroughly, adding details on how I might test this
1 hour
Finalized slide deck for project proposal, sent to TA!
Refined project goal, milestones, and added info about in class activity
2/8/2025 - 2.5 hours
2.5 hours
Revising project plan, adding more details to the usability plan; possibility of using Unity for the haptic feedback interactive portion
Starting slide deck
2/7/2025 - 1 hour
1 hour
Posting DinoVR screenshots
Creating a separate Google Doc + spreadsheet for the Project 1 proposal, beginning to draft milestones, goals, and evaluation metrics
2/5/2025 - 1.5 hours
.5 hour
Gave feedback on the class activity board for Eunjin and Aarav.
1 hour
Refining deliverables and additions to wiki for project plan
2/4/2025 - 2 hours
.5 hour
Finished up the DinoVR feedback form
Added project idea to class activity board
1.5 hours
Drafting project plan, mapping out project milestones
2/2/2025 - 3.5 hours
.5 hour - Read through paper, outlined questions for Johannes Novotny
The study that we read was based on text readability within virtual reality. I’m interested to know if there are any changes you’d expect to see in an augmented reality context. Also, since augmented reality allows passthrough and seeing a feed of your surroundings, do you think text that exists in the real world should also follow these standards? (Thinking specifically of times when I need to grab a code from my Mac while using the headset, and it feels distorted/hard to read)
2.5 hours - Researching data sets, drafting project goals, refining ideas
Possible scientific data sets to use (I think these could be used across all project ideas)
Network data (?): https://sites.research.google/gr/open-buildings/ - Dataset of building footprints, could use for a city visualization of building data (a quick loading sketch follows this list)
Polygonal Model: https://sketchfab.com/RISDNaturelab - 3D scanned models of collections within the RISD nature lab, could use this for collaboratively viewing natural references
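For the building-footprints idea, a first-look sketch at loading one downloaded Open Buildings CSV tile with pandas and filtering to confident, larger footprints. The column names follow the dataset docs as I understand them; the filename and thresholds are assumptions to verify against the actual download.

```python
# Sketch: first look at an Open Buildings CSV tile. Column names are my
# understanding of the dataset format; filename/thresholds are hypothetical.
import pandas as pd

cols = ["latitude", "longitude", "area_in_meters", "confidence"]
df = pd.read_csv("open_buildings_tile.csv.gz", usecols=cols)

# Keep high-confidence detections with footprints over 100 m^2.
buildings = df[(df["confidence"] >= 0.75) & (df["area_in_meters"] > 100)]
print(f"kept {len(buildings):,} of {len(df):,} buildings")
print(buildings.head())
```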
Project idea 1: Research how physical objects might be beneficial for collaborative AR data visualization
Project activities
Using low-code tools, create an AR visualization of a dataset
Design/prototype a physical object that integrates with a collaborative design software that will view the data (not entirely sure how this could work; I've hooked up Arduino/Unity before, but unsure if this would be possible in collaborative software)
Develop a usability testing plan for testing AR vs AR + physical object
Class activity
Hands-on usability session where students will experiment with the physical object in collaborative space
Would have a questionnaire or discussion afterwards
Potential deliverables
Document information on collaborative data visualization tools, which ones work best with collaboration [Add to specific software pages]
How to integrate a physical prototype (probably Arduino) with the software [Add to specific software page]
Usability report of the interactions with the physical/digital world [Add to VR User Experience page]
Project idea 2: Research the usability of an AR data visualization tool in a collaborative context (likely ParaView)
Project activities
Design scenarios for collaborative interaction with the dataset
Conduct user testing to gather qualitative and quantitative feedback.
Develop a set of usability metrics specific to AR collaboration.
Class activity
Conduct a usability testing session, where everyone would have specific collaborative tasks to complete.
Potential deliverables
A usability report of the conducted testing session [Add to specific software page]
Set of visual cues/metrics that assist with designing for augmented data viz collaboration [Add to VR User Experience page]
Project Idea 3: Research into different interaction techniques for collaborative AR data visualization
Project Activities:
Research current uses of interaction methods such as audio and haptic feedback in AR
Learn about how other interaction methods can be implemented into collaborative software for viewing
Conduct user testing to analyze the effectiveness of different interaction methods with specific data sets
Class Activity:
Conduct a usability testing session, where part of the class interacts with a model giving them audio feedback, while the other half gets haptic feedback. Then gather feedback about the experiences, and discuss the differences in visualization analysis.
Potential Deliverables:
Documentation about integrating different interaction methods (audio/voice) with visualization software.
A usability report of the conducted testing session
Brainstorming software evaluation metrics
Interaction methods (list of whether it uses controllers, touch, haptics, movement) [Add to specific software page]
Collaboration techniques (does it allow for direct manipulation, in-sync edits) [Add to VR User Experience page]
.5 hour - adding Spline to the VR modeling software section
2/1/2025 - 1 hour
1 hour - Bezi Lab, DinoVR download
Created Bezi file, viewable here: https://bezi.com/play/6bc57c47-1c74-446c-b393-53dd11d6cb9c
Downloaded DinoVR on the Paperspace machine, ready to go, just need app IDs!
1/31/2025 - .5 hour
.5 hour - Finish activities from class
Wrote out rest of project ideas in class activity board
Google earth demo: https://drive.google.com/drive/folders/1XGUgJKq4p7nYrTCqQcAG_CA3vGQ-kcH4?usp=sharing
1/28/2025 - 5.5 hours
1.5 hours - Set up of Meta Quest, accounts, VM -- Definitely took longer than expected! I had some trouble with my Meta account, which slowed this process down.
1.5 hours - Looked through previous projects and software, spent a lot of time learning more about the available software tools and the learning curves outlined on the wiki
Projects
Software
Bezi: Collaborative 3D modeling software with a plugin for Unity
ParaView: Scientific visualization modeling, no code needed
Tilt Brush: VR painting tool with audio visualization features
Vizible: Remote collaboration in VR
2 hours - Project idea brainstorming
How might physical objects be beneficial for collaborative AR data viz?
Based on a research project that developed a tool for tangible AR visualizations, this project would dive into how physical objects would assist/detract from AR collaborative data visualizations. https://dl.acm.org/doi/10.1145/3313831.3376613
Conduct an evaluation for the current AR data viz software to be used with physical objects -- what's possible right now?
How can we determine the usability of specific AR data viz tools? (Thinking of this as a project 1 + 2 progression)
Initial survey of the usability of a specific data visualization tool in AR (maybe ParaView)
Secondary work to design/prototype potential usability best practices based on findings, possibly using something like Bezi/ShapesXR to prototype new UI
How well do current no/low-code tools allow for collaborative data viz? Evaluating the ease of these tools for data + collaboration (likely thinking of Bezi/ShapesXR)
How can we create a set of visual cues that assist with augmented data viz collaboration? Evaluate multiple tools to track the best visual cues — movement, size, etc. (coming up with a list similar to the ones mentioned in Kenny's talk)
.5 hour - Rating + adding learning goals
1/26/2025 - 4 Hours
1.5 hours - set up journal, intro on Slack, going through syllabus and course timeline, learning about guest speaker
1.5 hours - finding missing pieces of current documentation, mapping out different changes to make
10 minute change
Adding information about mobile product AR to the Related Technology page — Examples of Amazon's in-app AR
Adding information about the new Meta Orion glasses to the Hardware section — AR glasses prototype unveiled in 2024 with a heads-up display
Add Microsoft Mixed Reality Lab link to the Corporate Research Labs section of “Large Displays, Labs, and Papers” ✅
1 hour change
Adding more information and resources to the “User Experience in VR” page — Could use more external links to resources for designing experiences in VR
Add more personas to the “User Exemplars” page — Right now, it just has medical professionals and combat specialists
Add a new section in “Applications of VR” — for VR in textile/clothing production, specifically for adding CLO 3D
10 hour change
Create a new page for VR modeling software “Spline (3D design tool)” — Try out and add information about collaborative 3D design tool, Spline. This time could be spent learning how to use the tool, seeing what integrations are available, and seeing how collaboration works in the 3D tool.
Create a new page in Applications of VR for “VR in Design” — Add in context about the design field, applications of current VR/AR in the design field, and maybe add some more information about collaborative tools that are prevalent in digital design (like Figma, Sketch, etc)
Create a new page for “VR Evaluation tools” — Researching and outlining usability heuristics that can be applied to VR, taking from resources like Nielsen Norman, Interaction Design Foundation, etc
1 hour - scoping out project ideas & reading articles
Learning about past project scopes, looking into various data sets and development software
Read: SpatialTouch: Exploring Spatial Data Visualizations in Cross-Reality.