Evaluating Collaboration in XR
Created by Dave Song, Spring 2023
Evaluating Collaboration in VR
As mentioned in the course description, CSCI1951T aims to explore, evaluate, and compare multi-user VR software and applications. While some students chose to explore existing projects or develop their own VR projects that better aligned with their interests, I wanted to focus on evaluating existing VR systems and software. Being interested in user experience research and UI design, I chose to study the collaborative user experience in VR and the factors that influence its quality. During the second half of the semester, I conducted a half-semester research project examining current studies on evaluating VR and XR collaboration. Additionally, I developed a standard evaluation tool for the VR collaboration experience.
But why should you care about evaluating VR collaboration quality?
Currently, the course wiki lacks a standard rubric for evaluating the quality of VR collaboration across VR systems. As a result, each student activity uses different questions and quantitative scales when evaluating the collaboration experience.
Having evaluation results on different scales makes it hard to cross-compare the evaluations of multiple collaborative VR systems.
Since no single student can compare and evaluate every collaborative workspace and data visualization software, the wiki needs to provide a structured evaluation tool that can be applied to different VR systems. Having a standard evaluation rubric or tool can make the software comparison and evaluation process easier and more coherent for future developers and students.
Research Report and Key Insights (Related Work)
Developing a standardized evaluation tool for collaborative VR systems required basic knowledge of, and insights into, current practices for evaluating VR systems and applications. The first part of the project was to gather insights from related research papers. Some suggested having certain features, while others suggested a holistic framework for evaluating VR systems. Listed below are the related works, along with key takeaways from each one concerning the evaluation of VR collaboration quality.
Evaluation of a Multi-User VR System for Collaborative Layout Planning Process
Source: http://essay.utwente.nl/76831/1/Tolman_MA_EEMCS (002).pdf
Evaluation of VR factory layout planning software
Research Question: “What are the affordances and limitations of a multi-user virtual reality system for supporting layout experts, project managers, and project members with expertise in logistics, human factors and maintenance in their collaborative task of planning and evaluating factory layouts of vehicle manufacturers as measured through usability inspection and collaborative joint attention?”
Highlights importance of extensive user evaluation for more effective and pleasant collaborative experience in VR environment
For Evaluation, the paper introduces
Design Heuristics
SUS: System Usability Scale
NASA Task Load Index
Comparative analysis
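For reference, the SUS mentioned above condenses ten 1-5 Likert items into a single 0-100 score. A minimal sketch of the standard scoring procedure (illustrative; not taken from the cited paper):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100).

    `responses` is a list of 10 answers on a 1-5 Likert scale, in
    questionnaire order: odd-numbered items are positively worded,
    even-numbered items are negatively worded.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:           # odd-numbered item: contributes r - 1
            total += r - 1
        else:                    # even-numbered item: contributes 5 - r
            total += 5 - r
    return total * 2.5           # scale the 0-40 sum to 0-100

# All-neutral answers (3s) yield the midpoint score.
print(sus_score([3] * 10))  # 50.0
```

A shared scoring function like this is exactly what makes results cross-comparable: every evaluator maps their raw Likert answers onto the same 0-100 scale.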
A lack of comparable evaluation results can result in slower improvements in VR collaboration performance and quality.
Evaluation of Mixed-Space Collaboration
Source:
Evaluation of “mixed-space collaboration,” defined as a collaborative process in which users can move between AR and VR in a collaborative environment.
Categories of Collaboration in XR systems.
Standard Collaboration
Multi-perspective collaboration
Multi-Space Collaboration
Mixed-space collaboration
Transitional Collaboration
Focus Evaluation Type: Mixed-Space Collaboration (users can work separately in different environments)
For Evaluation
Performance measures
Subjective measures: questionnaires and surveys
Using a Fully Expressive Avatar to Collaborate in Virtual Reality: Evaluation of Task Performance, Presence, and Attraction
Source: https://www.frontiersin.org/articles/10.3389/frvir.2021.641296/full
Highly Expressive Avatar (HEA): high levels of nonverbal expression (e.g., body movement, hand gestures, and facial expressions)
An HEA-mediated VR system resulted in greater social presence and attraction, as well as better task performance
An Ontology for Evaluation of Remote Collaboration Using AR
Source: https://dl.eusset.eu/bitstream/20.500.12015/4168/1/ecscw2021-p04.pdf
Collaboration comes in different types and forms
Characterization of AR collaboration is important for a comprehensive evaluation of the process
Most of the time, collaboration in AR depends on context-heavy information. Applying conventional usability evaluation can fall short when it comes to analyzing collaboration in AR environments.
Towards Advanced Evaluation of Collaborative XR Spaces
So far, mostly controlled evaluations have been conducted. However, there is a need for real-world evaluation and data collection.
With wearable sensors, collaboration in VR can be evaluated without lab control or interrupting users to answer survey and quantitative questions.
The target areas of evaluation are fundamentally the same as the NASA Task Load Index, but measured with sensors
Cognitive effort
Pupil diameter: detecting sudden changes in pupillary diameter
Heart rate variability: measured with photoplethysmography (PPG) sensors
Physical effort
Muscle activity and physical stress can be measured with electromyography (EMG)
Flow
The notion that an XR system should not minimize every effort to zero, but instead match the effort required to the task so as to produce the optimal flow state for users.
An Insight-Based Methodology for Evaluating Bioinformatics Visualizations
The main goal of VR scientific data visualization should be to generate new insights
Defined insight as “an individual observation about the data by the participant, a unit of discovery”
Introduces different types of insights
From these works, I identified some of the key factors that influence the quality and performance of VR collaboration.
Mental demand required for meaningful collaboration
Cognitive effort
Physical Demand for VR collaboration
Frustration from the UI and system
Effort to accomplishment ratio
Complexity of features
Learning curve
Avatar quality
body movement tracking
hand gesture tracking
Features targeting certain types of collaboration
VR Collaboration Evaluation Forms
Collaboration Experience Evaluation Forms
: Collaboration and the data handled inside collaborative VR systems can be highly context-based. Therefore, evaluating the quality of VR collaboration for the particular collaboration workflow of a particular group of researchers or experts is essential to providing a development-feedback cycle.
VR Collaborative Feature Evaluation Form
: It is also important to have a standard metric for evaluating collaborative features. Built upon the comparison matrix developed previously, the collaborative feature evaluation form focuses on identifying and displaying features that support the collaborative experience in VR.
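One way a standard form enables cross-comparison is by aggregating every evaluator's ratings over the same fixed criteria. The sketch below is illustrative only: the criterion names are drawn from the key factors identified above, and the 1-5 rating scale is an assumption, not a fixed part of the forms:

```python
# Criteria taken from the key factors listed earlier in this report;
# the 1-5 Likert scale is an illustrative assumption.
CRITERIA = ["mental_demand", "physical_demand", "frustration",
            "learning_curve", "avatar_quality"]

def aggregate(responses):
    """Average each criterion across evaluators, so different
    VR systems can be compared on the same scale."""
    summary = {}
    for criterion in CRITERIA:
        scores = [r[criterion] for r in responses]
        summary[criterion] = sum(scores) / len(scores)
    return summary

# Two hypothetical evaluators rating the same collaborative VR system.
responses = [
    {"mental_demand": 2, "physical_demand": 1, "frustration": 2,
     "learning_curve": 3, "avatar_quality": 4},
    {"mental_demand": 4, "physical_demand": 3, "frustration": 2,
     "learning_curve": 3, "avatar_quality": 2},
]
print(aggregate(responses))
```

Because every system's summary uses the same criteria and scale, two systems evaluated by different students still produce directly comparable numbers, which is the gap in the course wiki this project aims to fill.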