Fully Expressive Avatar in VR Collaboration
Spring 2023, Dave Song
Using a Fully Expressive Avatar to Collaborate in Virtual Reality: Evaluation of Task Performance, Presence, and Attraction
Motivation: Interested in developing standard evaluation metrics for VR collaboration, I also wanted to explore the avatar as a major factor in the quality of UX, and whether the highly expressive avatar developed in the paper made any significant difference.
<Abstract>
Avatar-mediated collaboration in a virtual environment
Study of highly expressive avatar:
their definition of a highly expressive avatar: one that conveys high levels of nonverbal expression by tracking behaviors such as body movement, hand gestures, and facial expression.
With the avatar-mediated VR system, participants were asked to perform tasks.
Highly expressive avatar
more social presence and attraction
better task performance
compared to low-expressive counterparts → a VR system can benefit from a high level of nonverbal expressiveness.
<Introduction>
avatar realism: the measure of avatar quality
appearance
behavioral realism
People tend to communicate more through nonverbal behavior during social interaction compared to verbal channels.
Matsumoto et al., 2012
participants played a charades game in the shared virtual environment (they call it SVE).
rationale behind choosing charades: it engages nonverbal behavior to complete a collaborative challenge
The main contribution of the research
fully expressive avatar control system
supports eye-gaze
mouth rendering combined with tracking natural nonverbal behavior
using multiple LEAP motion tracking cameras
evaluation of the effects of different levels of expression through an avatar on communication and collaboration.
<Key Points from Related Work Section>
quality of an SVE depends on whether it can support social interaction, which requires representing users' appearance, behavior, and motion
nonverbal cues
quality of embodiment → higher social presence: “can lead to higher social presence ratings compared to face-to-face interactions”
Previous research is limited in that it omitted or did not fully support nonverbal behaviors when examining collaboration between users
<Avatar control system and representation>
conventionally three tracking points (HMD and two controllers) → mostly just floating avatars
Use of RGB-D sensor and VR device as a potential solution
Additionally, hand gestures can be tracked using a Leap Motion Controller (LMC).
Eye gaze and facial expression
RGB-D sensor to track facial expressions and eye gaze
<Technical Setup>
Body tracking was done using four Kinect v2 devices located at the four corners of the tracking area
Hand tracking was done using a multi-LMC system, with five LMCs installed on the HMD
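The notes don't describe how the readings from the five LMCs are merged. One plausible approach (an illustrative assumption, not the paper's documented method) is a confidence-weighted average of each sensor's hand position estimate:

```python
def fuse_hand_positions(estimates):
    """Confidence-weighted average of per-sensor hand position estimates.

    `estimates` is a list of ((x, y, z), confidence) tuples, one per LMC.
    This fusion rule is a hypothetical sketch; the paper's actual
    multi-LMC merging strategy may differ.
    """
    total = sum(conf for _, conf in estimates)
    if total == 0:
        return None  # no sensor currently sees the hand
    # Weight each sensor's position by its confidence, per axis.
    return tuple(
        sum(pos[axis] * conf for pos, conf in estimates) / total
        for axis in range(3)
    )
```

With multiple head-mounted sensors, weighting by per-sensor confidence lets whichever camera has the clearest view of the hand dominate the fused estimate.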
Eye gaze direction: generated random gaze shift to mimic natural eye movement.
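The notes only say the system generated random gaze shifts; the exact model isn't given. A minimal sketch of saccade-like random gaze shifts, with all parameter names and ranges as illustrative assumptions, might look like:

```python
import random

def next_gaze_shift(max_yaw=15.0, max_pitch=10.0, min_hold=0.5, max_hold=2.0):
    """Pick a random gaze offset (degrees) and a fixation duration (seconds).

    The angle ranges and hold times here are hypothetical defaults,
    not values from the paper.
    """
    yaw = random.uniform(-max_yaw, max_yaw)        # horizontal gaze shift
    pitch = random.uniform(-max_pitch, max_pitch)  # vertical gaze shift
    hold = random.uniform(min_hold, max_hold)      # time to hold before next shift
    return yaw, pitch, hold
```

An avatar animation loop would apply each (yaw, pitch) offset to the eyes, wait `hold` seconds, then sample the next shift, mimicking natural fixate-then-saccade eye movement.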
<Measurements>
objective and subjective data were collected. Participants completed questionnaires after every two sessions, once they had performed charades as both performer and guesser.
Results of HEA (highly expressive avatar) vs. LEA (low-expressive avatar)
Co-presence: “feeling that the user is with other entities”
did not show any significant difference between HEA and LEA
Social Presence: “feeling of the user, which makes people feel connected with others through the telecommunication system”
a significant difference between the two
Interpersonal Attraction: “the measurement of liking and attraction”
HEA also made a significant difference