By Lee Hayes
Although there is ample research on UI displays for VR experiences in terms of color, opacity, and type size, few studies examine the perceived usefulness of UI for data analysis. For this project I wanted to explore how UI and visual affordances impact information acquisition, the time it takes to answer a series of questions, and accuracy.
Prior to my series of original questions, I asked my 9 participants a few grounding questions. Overall, my participants were familiar with using laptops for various tasks and had some familiarity with VR.
I asked participants about video games and laptop use to gauge how familiar they might be with some of the UI I was developing. The clickable icons were inspired by "tooltips" in video games, and the on-hover interaction was inspired by buttons users commonly find on a laptop's interface. Asking about VR video games hints at how much my participants have interacted with haptic responses; this bears less on visual affordances than on gauging how much haptic response is too much.
Participants were then presented with either Version 1 (V1) or Version 2 (V2) of a link to a space in Shapes XR. Both spaces offered a Viewpoint one and a Viewpoint two to choose between. Viewpoint one was the same “on hover” glowing UI in both spaces, while viewpoint two differed: V1’s viewpoint two offered clickable icons that would reveal the necessary information, and V2’s viewpoint two had constant graphics hovering above.
Participants had to choose an initial viewpoint to answer a series of questions, then swap to the next viewpoint and answer another series of questions. For each round of questions, participants had 5 minutes to answer. Unanswered questions were to be left blank, and participants who finished early recorded their time.
Above are some example questions from the original questionnaire. They range from information that can be captured at a glance to information that requires a calculator, along with short-form questions. At the end of both sections, I asked participants to rate their confidence in their answers, and their confidence scores closely tracked their accuracy. You can find the form here to see the rest of the questions I developed.
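As a rough illustration of what “closely tracked” means, the sketch below computes a Pearson correlation between self-reported confidence and accuracy. The numbers and the 1–5 confidence scale are hypothetical placeholders, not my study data.

```python
from scipy.stats import pearsonr

# Hypothetical per-participant data: self-reported confidence (1-5 scale)
# and the fraction of questions answered correctly in that section.
confidence = [4, 5, 3, 2, 4, 5, 3, 4, 2]
accuracy = [0.8, 0.9, 0.6, 0.4, 0.7, 1.0, 0.5, 0.8, 0.3]

# A high positive r would indicate confidence scores track accuracy well.
r, p_value = pearsonr(confidence, accuracy)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```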
After participants used the VR space to answer a series of questions, I had them fill out a final questionnaire on UI intuitiveness and immersion. From this questionnaire I gathered that despite some difficulty navigating the space, participants found the interactions to be quite immersive and intuitive. Overall, the visual feedback made participants feel more immersed than haptic feedback like vibrations and sounds.
The spaces were designed to offer a gradient of user input and interaction. The viewpoint with constant graphics offers the least interaction but the most immediate information. The “on hover” viewpoint offers both passive and actionable UI elements, letting participants acquire information with minimal effort. The clickable UI offers only actionable elements with haptics (vibrations and sound), requiring participants to perform more work but providing more immersion in the space.
It was important to test which modality allowed for the highest level of accuracy within the time limit, alongside a sense of immersion.
Below is a graphic representation of each participant’s accuracy in answering questions for both sections. The left column of numbers is the question number, and the top row lists the individual participants. The color indicates which version the participant was using and whether they answered the question correctly: an incorrectly answered question is shown as a black circle, and an unanswered question is left as a blank space.
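For readers who want to reproduce this kind of grid, here is a minimal sketch in Python with matplotlib. The results data, color choices, and participant labels are hypothetical stand-ins, not my actual study results.

```python
import matplotlib.pyplot as plt

# Hypothetical results grid: one row per question, one column per participant.
# "V1"/"V2" marks a correct answer under that version, "X" an incorrect answer,
# and None an unanswered question (left blank in the chart).
results = [
    ["V1", "V2", "V1", "X",  "V2"],
    ["X",  "V2", None, "V1", "V2"],
    ["V1", None, "V1", "V1", "X"],
]

colors = {"V1": "tab:blue", "V2": "tab:orange", "X": "black"}

fig, ax = plt.subplots()
for q, row in enumerate(results):
    for p, answer in enumerate(row):
        if answer is None:
            continue  # unanswered questions stay blank
        ax.scatter(p, q, s=300, color=colors[answer])

ax.set_xticks(range(len(results[0])))
ax.set_xticklabels([f"P{p + 1}" for p in range(len(results[0]))])
ax.set_yticks(range(len(results)))
ax.set_yticklabels([f"Q{q + 1}" for q in range(len(results))])
ax.invert_yaxis()  # question 1 at the top, like the original chart
ax.set_xlabel("Participant")
ax.set_ylabel("Question")
plt.show()
```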