On flat, two-dimensional displays, data visualizations are often accompanied by a significant amount of text, including titles, labels, descriptions, headings, and figures. The same holds in 3D environments, even though the text itself remains 2D, because making sense of the visualization remains critically important for the people using it. Text provides the context and information necessary for understanding data visualizations, orienting and grounding users so they can grasp what the symbolic information is trying to convey.
Part of the problem with text is that it must be legible and understandable within the visualization to be useful. In a 2D environment this is largely trivial: a matter of positioning the text well and following known best practices for contrast and font choice. In a less-controlled 3D environment such as VR, there is added complexity because the data itself can occlude the text, but control over the text's background and surroundings can preserve legibility. AR environments pose the greatest challenge, since any background is possible, making it difficult to identify best practices for creating text labels that remain consistently legible.
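As a concrete reference point for the contrast claim above, the sketch below computes the WCAG 2.x contrast ratio between a text color and its background: in VR the background can be chosen to keep this ratio above the common 4.5:1 threshold, while in AR passthrough the background luminance is outside the application's control. This is standard WCAG math rather than anything specified in this project, and the class and method names are illustrative.

```csharp
using UnityEngine;

// WCAG 2.x contrast ratio between two colors. In VR the background color is
// controllable, so this ratio can be held above the common 4.5:1 threshold;
// in AR passthrough the "background" is the real world, so the ratio varies
// with whatever happens to be behind the label. Names here are illustrative.
public static class ContrastUtil
{
    // Linearize an sRGB channel (0-1) per the WCAG definition.
    private static float Linearize(float c) =>
        c <= 0.03928f ? c / 12.92f : Mathf.Pow((c + 0.055f) / 1.055f, 2.4f);

    // Relative luminance of a color.
    public static float Luminance(Color c) =>
        0.2126f * Linearize(c.r) + 0.7152f * Linearize(c.g) + 0.0722f * Linearize(c.b);

    // Contrast ratio, always >= 1, with the lighter color in the numerator.
    public static float ContrastRatio(Color a, Color b)
    {
        float la = Luminance(a), lb = Luminance(b);
        float lighter = Mathf.Max(la, lb), darker = Mathf.Min(la, lb);
        return (lighter + 0.05f) / (darker + 0.05f);
    }
}
```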
4/01: Collecting best practices in literature, building wiki page synthesizing information and common use
4/03: Demo visualizations for feasibility, further literature review on common practices
4/08: Assessment of potential software for development and deployment, prioritizing interactivity and headset API access for passthrough control
4/10: Set up development software, generate initial text displays in headset
4/15: Further development; all displays and text working and demoable in a single package
4/17: Interaction added to experiment program, completed minimum viable walkthrough to collect reasonable data
4/22: Process created for extracting quantitative data from the headset, testing and polishing experiment program
4/24: Creating survey evaluations, piloting ICA with individuals to confirm functionality and stability
4/29: ICA! Running experiment in class
In-class activity: Individually, walk through the experiment program in your Quest headset. You will see several text displays and will be asked to read the displayed sentences out loud to the best of your ability. Afterward, you will complete a survey indicating your preferences for the text displays in different contexts.
What evaluative info the activity will collect: quantitative data on reading speed, based on how quickly users move between sentences (a minimal logging sketch follows the timeline below), plus quantitative and qualitative data on user preferences and the reasoning behind them
5/01: Final presentation and data analysis from ICA
5/10: Poster day, public demo
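To make the reading-speed measurement concrete, here is a minimal Unity (C#) sketch, assuming the experiment is built in Unity as considered below: it timestamps each advance to the next sentence and writes a CSV to the headset's persistent storage. The component and method names (ReadingTimeLogger, AdvanceSentence) are hypothetical and would be wired to whatever controller or UI event moves the participant forward.

```csharp
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Minimal sketch: records how long a participant spends on each sentence by
// timestamping every advance, then writes a CSV to persistent storage.
// "AdvanceSentence" is a hypothetical hook; call it from whatever event
// moves the participant to the next text display.
public class ReadingTimeLogger : MonoBehaviour
{
    private readonly List<string> rows = new List<string> { "sentenceIndex,secondsOnSentence" };
    private int sentenceIndex = 0;
    private float sentenceStartTime;

    private void Start()
    {
        sentenceStartTime = Time.time;
    }

    // Call this from the handler that shows the next sentence.
    public void AdvanceSentence()
    {
        float elapsed = Time.time - sentenceStartTime;
        rows.Add($"{sentenceIndex},{elapsed:F3}");
        sentenceIndex++;
        sentenceStartTime = Time.time;
    }

    // Write the log when the session ends.
    private void OnApplicationQuit()
    {
        string path = Path.Combine(Application.persistentDataPath, "reading_times.csv");
        File.WriteAllLines(path, rows);
    }
}
```

On a Quest build, Application.persistentDataPath typically resolves to the app's files directory under Android/data on the device, so the CSV can be copied off the headset over USB (e.g. with adb pull) for analysis.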
This list gives options; the goal is to complete 3-5 of these (or more!).
Documentation of using and modulating passthrough media with the Quest 3
Collection of resources on displaying text well in VR and AR; comparison page between AR and VR
Tutorial on using Unity for scientific data collection, generating user reports
Tutorial on extracting quantitative user data from Quest
Best practices on transitioning between AR and VR (environment dimming, removing color); see the transition sketch after this list
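As a starting point for the dimming transition above, here is a minimal Unity (C#) sketch. It assumes the Meta XR SDK's OVRPassthroughLayer component with a 0-1 textureOpacity property; that property name, and the existence of matching color controls for a "remove color" step, are assumptions about the SDK surface rather than something verified here.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of an AR-to-VR transition: fade the passthrough feed out
// (and back in) over a short duration. Assumes the Meta XR SDK's
// OVRPassthroughLayer component exposes a 0-1 textureOpacity property; the
// SDK's color controls (for a "remove color" step) could be lerped the same
// way, but those names are not verified here.
public class PassthroughTransition : MonoBehaviour
{
    [SerializeField] private OVRPassthroughLayer passthrough; // assumed SDK component
    [SerializeField] private float duration = 1.5f;

    public void EnterVR() => StartCoroutine(Fade(1f, 0f)); // dim the real world away
    public void EnterAR() => StartCoroutine(Fade(0f, 1f)); // bring passthrough back

    private IEnumerator Fade(float from, float to)
    {
        float t = 0f;
        while (t < duration)
        {
            t += Time.deltaTime;
            passthrough.textureOpacity = Mathf.Lerp(from, to, t / duration);
            yield return null;
        }
        passthrough.textureOpacity = to;
    }
}
```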
Software:
Unity
Most flexibility, interactivity; should be viable for extracting scientific data
"Building block" development structure provided by Meta should be effective for getting started quickly
Vizard
Very Python-forward; less familiar to me
Seems to have both AR support and data collection tools out of the box, built for research purposes explicitly
Others?
It is not clear that other tools could provide the data needed for this project while also offering passthrough access and interactivity.
Data:
Unclear. Need some demo data for testing occlusion under different scenarios.
May want to randomly generate/simulate this under different conditions (a simple generation sketch follows below), or use visualizations that already exist.
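For the generated option, a minimal Unity (C#) sketch like the following could stand in for real data: it scatters primitive spheres in front of the user so text labels can be tested against different occlusion densities. The component name and parameters are placeholders, not a committed design.

```csharp
using UnityEngine;

// Minimal sketch for generating placeholder "data" to occlude text labels:
// scatter a configurable number of primitive spheres inside a region in
// front of the user. pointCount and spread stand in for different occlusion
// conditions; real or pre-made visualizations could be swapped in later.
public class DemoScatterPlot : MonoBehaviour
{
    [SerializeField] private int pointCount = 200;
    [SerializeField] private Vector3 center = new Vector3(0f, 1.5f, 2f);
    [SerializeField] private float spread = 0.5f;    // how widely points scatter
    [SerializeField] private float pointSize = 0.03f;

    private void Start()
    {
        for (int i = 0; i < pointCount; i++)
        {
            GameObject point = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            point.transform.SetParent(transform);
            point.transform.position = center + Random.insideUnitSphere * spread;
            point.transform.localScale = Vector3.one * pointSize;
        }
    }
}
```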
Papers:
Wiki: