Content added by Vishaka for Spring 2025 Project 1, on 2/23/2025
Evaluate how individual interactive feedback versus physical-setting feedback patterns influence users' perception of AR visualizations.
Understand user engagement with models that have different interaction patterns
Learn about what types of feedback help or detract from physical immersion in AR
HMDs
Links to software environments to test out (Bezi)
Split participants into 2 groups
Group A: Load model that has interactive feedback (audio/visual/haptic)
Group B: Load model that has onboarding information about surroundings (visual/physical)
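The two-group split above can be sketched as a small script. This is a minimal illustration, assuming random assignment into equal-sized groups; the participant IDs and function name are placeholders, not part of the study materials.

```python
import random

def assign_groups(participants, seed=None):
    """Randomly split participants into two equal-sized groups, A and B.

    Group A explores the model with interactive feedback;
    Group B gets the onboarding/surroundings version.
    """
    rng = random.Random(seed)          # seeded for a reproducible split
    shuffled = participants[:]         # copy so the input list is untouched
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return {"A": shuffled[:mid], "B": shuffled[mid:]}

# Example with hypothetical participant IDs
groups = assign_groups(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

Seeding the shuffle makes the assignment reproducible if the split ever needs to be audited.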
(30 minutes, aiming for 25 with a 5-minute buffer)
5 minutes - Introduce agenda
Split participants into 2 groups: one explores a model with interface feedback, the other a model with kinesthetic feedback
5 minutes - Setup instructions and time to set up headsets
5+ minutes - Exploration time (notes and pictures will be taken during this time)
10 minutes - Feedback time (notes and pictures will be taken during this time)
Find a way to select one of the three items in this experience.
Adjust your view so the object is facing away from you (180 degrees)
Find a way to learn more about the item in front of you
Find a way to deselect the item, and select a new one.
Goal: Understand user engagement with models that have different interaction patterns
Which nature model did you initially select? [Single select]
Did you explore any others? [Open-ended]
Overall, how would you rate your familiarity with each nature model after this activity? [Likert low familiarity-high familiarity]
How would you rate your success with each task? [Likert low performance-high performance]
Goal: Learn about what types of feedback help or detract from physical immersion in AR
How would you rate your mental load with each task? [Likert low load-high load]
Were any of the tasks particularly complex or simple? [Open-ended]
How would you rate your physical activity load with each task? [Likert low load-high load]
Were any of the tasks particularly demanding or easily achievable? [Open-ended]
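The findings below compare average Likert ratings per task and per group, which can be computed with a short script. This is a sketch only; the task names and scores here are hypothetical placeholders, not the actual study responses.

```python
from statistics import mean

# Hypothetical 5-point Likert responses (1 = low load, 5 = high load),
# one list per task; these numbers are illustrative, not real study data.
responses = {
    "select item": [2, 1, 3, 2],
    "rotate view": [3, 4, 3, 2],
    "learn more":  [4, 5, 4, 3],
    "switch item": [3, 3, 2, 4],
}

# Average load per task, to see which steps felt most demanding.
averages = {task: mean(scores) for task, scores in responses.items()}
```

The same aggregation works for the familiarity and success questions by swapping in those response lists.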
This slide deck goes over my main findings, but they are also summarized below.
More users initially chose to interact with the model in the middle
Not all users interacted with every model in this experience, which could be attributed to deeper immersion in one specific model
Success rates varied across tasks: touching a model for more information and moving to a different model were harder, since the prototype took up a large space.
Physical activity was still low to medium, but noticeably higher than with the UI prototype.
Most users initially interacted with the leftmost model, aligning with common UI patterns: in on-screen interfaces, the top-left corner draws the most attention.
All users viewed all of the models in this experience, which could be attributed to the low physical load of this prototype
Users' familiarity ratings for each model averaged higher overall
There were mixed success rates on certain tasks, such as getting more information (since this was a click-and-hold task)
Overall, mental and physical load were low
The animations of models growing as you approach them felt jarring
There needed to be affordances for hand/body colliders
There were bugs with the audio
Positive feedback for both experiences!
Spatial prototype:
The audio was compelling
Felt more interactive
Not enough space for the prototype
Was straightforward to interact with
There was higher physical exertion
Instructions were hard to read without a background
Hard to anticipate audio length
Wanted to adjust scale
UI prototype:
Models felt far for viewing
Still some issues with reading text
Refined look of UI made it easy to navigate
It's easy to forget controls
Click + hold felt unintuitive
Easy to switch between models
Wanted to look at models closer