Evaluating Collaboration in VR
As mentioned in the course description, CSCI1951T aims to explore, evaluate, and compare multi-user VR software and applications. While some students chose to explore existing projects or develop their own VR projects that better aligned with their interests, I wanted to focus on evaluating existing VR systems and software. Because I am interested in user experience research and UI design, I chose to study collaborative user experience in VR and the factors that influence the quality of that experience. During the second half of the semester, I conducted a half-semester research project examining current studies on VR and XR collaboration evaluation. Additionally, I developed a standard evaluation tool for the VR collaboration experience.
But why care about evaluating VR collaboration quality?
Currently, the course wiki lacks a standard rubric for evaluating the quality of VR collaboration across VR systems. As a result, each student activity uses different questions and quantitative scales when evaluating the collaboration experience.
Evaluation results reported on different scales are hard to cross-compare across multiple collaborative VR systems.
Since a single student cannot compare and evaluate every collaborative workspace and data visualization application, the wiki needs a structured evaluation tool that can be applied to different VR systems. A standard evaluation rubric or tool can make the software comparison and evaluation process easier and more coherent for future developers and students.
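As a purely hypothetical illustration of what a shared rubric could look like, the sketch below scores a system on a fixed set of Likert-scale dimensions so that ratings from different systems land on the same scale. The dimension names and the 1-5 scale are invented for this sketch; they are not the actual tool developed in this project.

```python
# Hypothetical sketch of a shared VR-collaboration rubric.
# The dimension names and the 1-5 Likert scale are illustrative only.

RUBRIC = {
    "communication": "How easily could you talk/gesture to teammates? (1-5)",
    "awareness": "How aware were you of what teammates were doing? (1-5)",
    "coordination": "How smoothly could you divide and join work? (1-5)",
    "presence": "How strongly did you feel co-present with others? (1-5)",
}

def score_system(ratings: dict) -> float:
    """Validate ratings against the shared rubric and average them,
    so every evaluated system ends up on one comparable 1-5 scale."""
    for dim, value in ratings.items():
        if dim not in RUBRIC:
            raise KeyError(f"unknown rubric dimension: {dim}")
        if not 1 <= value <= 5:
            raise ValueError(f"{dim} rating must be 1-5, got {value}")
    return sum(ratings.values()) / len(ratings)

# Two systems rated on the same rubric become directly comparable.
print(score_system({"communication": 4, "awareness": 3,
                    "coordination": 5, "presence": 4}))  # 4.0
```

Because every system is scored against the same fixed dimensions, results can be cross-compared across student activities instead of living on incompatible scales.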
Research Report and Key Insights (Related Work)
Developing a standardized evaluation tool for collaborative VR systems required background knowledge of current practices in evaluating VR systems and applications. The first part of the project was to gather insights from related research papers: some suggest evaluating specific features, while others propose holistic frameworks for evaluating VR systems. Listed below are the related works, along with key takeaways from each on evaluating VR collaboration quality.
Click here to access the Evaluating VR Systems page, where more in-depth summaries and analysis are available.
Evaluation of a Multi-User VR System for Collaborative Layout Planning Process
Source: http://essay.utwente.nl/76831/1/Tolman_MA_EEMCS (002).pdf
Evaluation of VR factory layout planning software
Research Question: “What are the affordances and limitations of a multi-user virtual reality system for supporting layout experts, project managers, and project members with expertise in logistics, human factors and maintenance in their collaborative task of planning and evaluating factory layouts of vehicle manufacturers as measured through usability inspection and collaborative joint attention?”
Highlights the importance of extensive user evaluation for a more effective and pleasant collaborative experience in a VR environment
For Evaluation, the paper introduces
A lack of comparable evaluation results can result in slower improvements in VR collaboration performance and quality.
Evaluation of Mixed-Space Collaboration
Evaluates “mixed-space collaboration,” defined as a collaborative process in which users can move between AR and VR within a shared collaborative environment.
Categories of Collaboration in XR systems.
Focus evaluation type: mixed-space collaboration (users can work separately in different environments)
Using Fully Expressive Avatar to Collaborate in Virtual Reality: Evaluation of Task Performance, Presence, and Attraction
An Ontology for Evaluation of Remote Collaboration Using AR
There are different types of collaboration, with important differences between them
Characterizing AR collaboration is important for a comprehensive evaluation of the collaborative process
Most of the time, collaboration in VR depends on context-heavy information, so applying conventional usability evaluation can fall short when analyzing collaboration in AR environments.
Towards Advanced Evaluation of Collaborative XR spaces
So far, mostly controlled evaluations have been conducted; there is a need for real-world evaluation and data collection.
With wearable sensors, collaboration in VR can be evaluated without laboratory control or interrupting users to answer surveys and quantitative questionnaires.
The target areas of evaluation are fundamentally the same as the NASA Task Load Index, but measured with sensors:
Pupil diameter: detecting sudden changes in pupillary diameter
Heart rate variability: measured with photoplethysmography (PPG) sensors
Muscle activity and physical stress: measured with electromyography (EMG)
The notion that the XR system should not minimize every effort to zero, but rather match the effort required to produce users' optimal flow state.
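As a concrete illustration of the sensor-derived measures listed above, heart-rate variability from PPG data is commonly summarized with RMSSD (root mean square of successive differences between consecutive heartbeats). The sketch below assumes inter-beat intervals have already been extracted from PPG peak detection; the interval values are made up for illustration.

```python
import math

def rmssd(ibi_ms):
    """RMSSD: root mean square of successive differences between
    consecutive inter-beat intervals, a standard short-term HRV metric.
    `ibi_ms` is a list of inter-beat intervals in milliseconds, e.g.
    obtained from peak detection on a PPG signal."""
    if len(ibi_ms) < 2:
        raise ValueError("need at least two inter-beat intervals")
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Made-up example intervals (ms); in the sensor-based evaluations above,
# drops in HRV are typically read as a sign of increased stress/load.
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # 14.36
```

A metric like this can be logged continuously during a collaborative session, which is what lets evaluation happen without pausing users to fill out questionnaires.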
An Insight Based Methodology For Evaluating Bioinformatics Visualizations
The main goal of VR scientific data visualization should be to generate new insights
Defined insight as “an individual observation about the data by the participant, a unit of discovery”
Introduces different types of insights