VR Collaboration Evaluation for Scientific Data Visualization
Created by Dave Song, Spring 2023
About Evaluating VR Collaboration for Scientific Data Visualization
Evaluating collaborative scientific data visualization requires a different approach. The focus of scientific visualization is on finding meaningful, profound insights in a dataset that were not attainable with other visualization media. Therefore, in addition to the conventional VR collaboration evaluation, this form also introduces insight-based evaluation of the system.
For the purposes of this evaluation, we use the definition introduced by Saraiya, North, and Duca, who define an insight as “an individual observation about the data by the participant, a unit of discovery.” [1]
Collaborative insights fall into three categories:
Trivial collaborative insight: a simple observation (e.g., a data summary) that can be delivered and discussed among the users.
Intermediate collaborative insight: a recognized pattern, overall trend, or particular detail that can be shared, discussed, annotated, and evaluated among peers.
Significant collaborative insight: an insight, reached through collaborative effort (annotating, color coding, scaling, and other collaborative tools), that can support, refute, or generate a hypothesis, and that can be shared and discussed.
<Before Evaluation Activity>
Form a group.
Designate one person in each group as the timekeeper.
If a group has more than three members in addition to the timekeeper, divide it into smaller groups, each tracked by its own timekeeper.
All VR participants will observe the scientific data in the VR environment using the think-aloud protocol — “concurrent verbalization of thoughts”.
Each time an insight is made, the timekeeper records a short title and note with a timestamp.
A trivial collaborative insight earns 1-2 points, depending on its significance
An intermediate insight earns 3 points
A significant insight earns 4-5 points
Throughout the experiment, each group’s timekeeper records each insight and the time it took to reach it; a minimal logging sketch follows. Scoring of each insight should be done after the evaluation activity.
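If a group prefers a digital log over pen and paper, the following is a minimal sketch in Python of one way the timekeeper could record timestamped insights. The InsightRecord and InsightLog names and their fields are illustrative assumptions, not part of any prescribed tooling.

from dataclasses import dataclass
import time

@dataclass
class InsightRecord:
    # One insight noted by the timekeeper during the session.
    title: str
    note: str
    elapsed_s: float   # seconds since the session started
    points: int = 0    # significance score (1-5), assigned after the activity

class InsightLog:
    # Running log kept by each group's timekeeper.
    def __init__(self):
        self._start = time.monotonic()
        self.records = []

    def record(self, title, note):
        # Per the protocol: a short title and note with a timestamp.
        elapsed = time.monotonic() - self._start
        self.records.append(InsightRecord(title, note, elapsed))

During the activity the timekeeper only calls record(); the points field stays at its default until the group scores each insight afterward.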
<After Evaluation Activity>
[Group Evaluation]
Insight Evaluation
From the timestamps and the list of insights generated during the activity, evaluate each insight as a group and decide its category.
Again, a trivial collaborative insight earns 1-2 points
An intermediate insight earns 3 points
A significant insight earns 4-5 points
For each insight, agree on a significance score within its category’s range.
Compute the group’s average insight score.
Compute the average time it took the group to find an insight (a sketch of this arithmetic follows).
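Assuming the hypothetical InsightLog sketched earlier, the post-activity arithmetic might look like this:

def average_insight_score(records):
    # Mean significance score across all scored insights (1-5 points each).
    return sum(r.points for r in records) / len(records)

def average_discovery_time(records):
    # Mean time between consecutive insights, in seconds; the first
    # interval runs from the session start (t = 0) to the first insight.
    times = sorted(r.elapsed_s for r in records)
    intervals = [t - prev for prev, t in zip([0.0] + times[:-1], times)]
    return sum(intervals) / len(intervals)

Because the intervals telescope, the second function reduces to the time of the last insight divided by the number of insights, so a group tallying by hand can simply divide total elapsed time by the insight count.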
[Individual Evaluation]
Collaboration Task Load
Likert Scale of 1-10
Mental Demand
How much mental and perceptual activity was required to collaborate (verbally and nonverbally) with other users in the same environment? Was the collaboration process easy or demanding, simple or complex?
Physical Demand
Was the VR collaboration physically easy or demanding, slack or strenuous compared to in-person collaboration?
Effort
How hard did you have to work to communicate and collaborate with other users in the space?
Frustration Level
How irritated, stressed, and annoyed versus content, relaxed, and complacent did you feel while communicating and collaborating with other users in the environment? Did the plans and agenda you set go as you wished?
System Usability for Collaboration
Likert Scale of 1-10
I think that I would like to use this system frequently for future VR collaboration.
I thought the system was easy to use to collaborate with others.
I found the various functions in this system were well integrated to support the flow of natural collaboration and communication.
I would imagine that most people would learn to use this system very quickly.
I found the system very cumbersome to use.
I felt very confident using the system for effective collaboration and communication.
Interaction Evaluation
Likert Scale of 1-10
I could collaborate with others in the system without noticeable latency.
I could collaborate with others without noticeable bugs.
Verbal communication (volume, quality, latency) met what the team required for successful collaboration. (If the software supports 3D audio, the score starts from 7.)
Body tracking for the avatars was accurate. From avatar motion tracking, I could tell exactly what other users were doing in the environment.
Through the system’s hand-tracking feature, I was able to tell what other users were doing. It tracked the full range of hand dexterity.
Facial expressions were tracked and displayed through the avatar representations of my collaborators.
Additional Qualitative Questions
Which features supported a natural flow of collaboration resembling successful in-person collaboration?
Which features (or the lack thereof) kept the system from supporting successful collaboration in VR?
Which visualization or annotation tool did you find most helpful for finding insights?
How often did you know where your collaborators were located?
During the collaboration, how often did you know what your partner could see?
How often did you know what your partner was directly looking at?
Source:
1: Saraiya, P., North, C., & Duca, K. (2005). An insight-based methodology for evaluating bioinformatics visualizations. IEEE Transactions on Visualization and Computer Graphics, 11(4), 443–456. https://doi.org/10.1109/tvcg.2005.53
For the Google Form template, click here.
To use the template, please make a copy and edit it.