Connor Flick's Journal
PROPOSALS
Project 1 Proposal: Integrative Approach to Manhattan Congestion Pricing Data
Presentation for Project 1 Proposal: Presentation Link
Project 1 ICA: In-class Activity Link
End Presentation for Project 1: Presentation Link
Project 2 Proposal <ADD LINK>
Presentation for Project 2 Proposal: Presentation Link
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
ALL OTHER WIKI CONTRIBUTIONS
CONTRIBUTION 1: Added genomics datasets to Scientific Data, updated links on course year homepage, reorganized Medical File Types into Scientific Data
CONTRIBUTION 2: Full page detailing pros/cons of various AR development tools
CONTRIBUTION 3: Full page overview of Meta Quest 3
CONTRIBUTION 4: Added traffic data to scientific data sources
CONTRIBUTION 5: List of Data Visualization Tools in Blender
.....
CONTRIBUTION N [short description] <ADD LINK>
Learning Goals
before | after | goal
------ | ----- | ----
3 | 4 | Goal 1: articulate AR/VR visualization software tool goals, requirements, and capabilities
2 | 4 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and apps
2 | 5 | Goal 3: execute tool evaluation strategies
3 | 4 | Goal 4: build visualization software packages
1 | 4 | Goal 5: comparatively analyze software tools based on evaluation
3 | 5 | Goal 6: be familiar with a number of AR/VR software tools and hardware
4 | 5 | Goal 7: think critically about software
3 | 5 | Goal 8: communicate ideas more clearly
2 | 4 | Goal 9: establish coherent, understandable process documentation standards
4 | 5 | Goal 10: have fun :D
HOURS SUMMARY
Total: 78.5 hours (as of 03/19/2025)
Hours Journal
Week 1
1/24/2025 - 2 Hours
Made this journal!
Got familiarized with Google Sites
Introduced self on Slack
Read through some project ideas and the wiki at large
1/25/2025 - 3 Hours
Read Kenny Gruchalla's bio and brainstormed 3 questions to ask before Tuesday, added to class board doc
Continued reading through parts of website, identified 3 changes within 3 timescales to make to wiki
Implemented 3 of the identified changes below
3 Changes, 3 Times
10 Minutes:
Microsoft open data sets do not link directly to datasets, need updates - Completed
Page on medical file types should be reorganized into scientific data for clarity - Completed
Several links on Spring 2025 home page go to the wrong place/year, should be corrected - Completed
1 Hour:
Social network data should include sources beyond Twitter (Bluesky, Instagram, Facebook, etc)
VR evaluation tools should include more methods beyond TLX and SUS
Images throughout wiki require alt-text
10 Hours:
"What is VR for?" and "Applications of VR" should be consolidated for clarity, large number of pages makes this difficult
Dedicated sections for ethics, contemporary issues in VR/AR spaces
Prototype some more accessible, sustainable, customizable wiki format beyond Google Sites, such as MediaWiki, Wiki.js, HackMD (full transition likely 10+ hour endeavor)
Week 2
1/28/2025 - 4 Hours
Did Quest 3 Setup
Searched & looked over some past projects
Potential software to evaluate:
VRChat (multimodal collaborative tool?)
WebXR frameworks (focus on cross-device collab?)
Architecture/product rendering?
Potential project ideas:
Collaborative approach to Manhattan Congestion Pricing data (high data dimensionality, available data, could benefit from route/traffic/time overlays, connects with new domains not in wiki)
Something to do with the Bloomberg analysis of male YouTubers before the election (high data dim, vr for social sciences, embedding videos could make this dynamic?)
Some architectural collaborative AR/VR thing with CAD models. You could have VR for entering the space and AR for having a top-down contextual view. Maybe floorplan overlays too
Disease/medical visualization within a body context -- seeing a person on a table and being able to overlay problems and see it in different contexts. External/nervous/circulatory/skeletal etc. Might be good to use published case studies for this?
This could also intersect with medical imagery if relevant (X-rays or MRIs?)
Visualizing migration flows + genomic data + location data from human evolution(?) (new domain, data types)
Added learning goals to journal
Week 3
02/03/2025 - 5.5 Hours
Completed Bezi Lab (https://bezi.com/play/ffad2484-b393-4af3-ae75-13b70bc262b7)
Completed Google Earth VR Tutorial (see below)
Installed DinoVR and Read Associated Paper
Expanded project ideas:
Collaborative approach to Manhattan Congestion Pricing data
list three things you will do during the project:
Investigate and develop methods for collaborative passthrough visualization
Explore methods for dimensional visualization of time-based data
Create an informational quiz to evaluate AR view against Web formats
list one class activity we might do for the project:
In small groups using either the AR or Web software, for three affected routes, determine how much of a decrease there was in commute time, if any. Report how you came to this conclusion and how long this took.
list potential deliverables:
Expansion and addition to previous work on traffic visualization in AR/VR
Comparison of current visualization software supporting passthrough on Quest 3
Documentation of challenges and methods to approaching data visualization spread over space/time dimensions
Something to do with the Bloomberg analysis of male YouTubers before the election
list three things you will do during the project:
Connect AR visualization software (or visualization broadly) to embedded media
Evaluate interference between collaboration tools and dynamic media
Leverage network visualization tools to establish and evaluate connection graphs between media figureheads
list one class activity we might do for the project:
With a partner, explore an AR space that contains multiple short videos. Then, with a partner, watch multiple short videos as part of a playlist. Discuss your ability to control these videos and whether you felt that you could effectively view them with someone else present.
list potential deliverables:
Tutorials on adding embedded media to various visualization software packages
Elaboration on network visualization in VR/AR and comparison in doing so between VR vs. AR
Adding application section for AR in journalism/media.
Disease/medical visualization within a body context -- seeing a person on a table and being able to overlay problems and see it in different contexts. External/nervous/circulatory/skeletal etc. Might be good to use published case studies for this?
list three things you will do during the project:
Research diagnosis and visualization efficacy of medical professionals using VR/AR
Establish methods for associating biological visual data with physical parts of the body
Explore collaborative annotation methods
list one class activity we might do for the project:
Examine two visualizations on the body and annotate them. With someone else, review and explain your annotations and make edits. Report on your ability to view the same data, work together, and make edits to someone else's annotation.
list potential deliverables:
Create comparison between state of medical visualization industry in VR and AR.
Investigate usage of trackers/markers to stabilize virtual objects within a set space
Provide tutorial for using connectivity between HMDs to facilitate collaboration
Visualizing migration flows + genomic data + location data from human evolution(?) (new domain, data types)
list three things you will do during the project:
Collect novel historical- and demographic-based genomic data
Explore visualization methods and approaches for extremely large datasets (3D dimensionality reduction?)
Connect and adapt specialized software analysis and visualization packages to AR contexts
list one class activity we might do for the project:
View a web-based and AR-based view of two different dimensionality reductions, in 2D and 3D respectively. Attempt to establish any groupings or trends you may see in the datasets.
list potential deliverables:
Expansion of biological, genomic scientific datasets
Exploration and documentation of common scientific analysis and visualization methods in practice
Provide tutorials on adapting visualization software to leverage passthrough
Google Earth VR:
Home!
Dorm!
Unfamous Place!

VR video from second to third location
Google Earth Web:
Home!
Dorm!
Unfamous Place!

Web video from second to third location
02/05/2025 - 2 Hours
Added feedback to peer project ideas
Asked for assistance on Slack with my own project ideas :)
Outlined milestones and project outline for potential project:
Collaborative approach to Manhattan Congestion Pricing data
list three things you will do during the project:
Investigate and develop methods for collaborative passthrough visualization
Explore methods for dimensional visualization of time-based data
Create an informational quiz to evaluate AR view against Web formats
list one class activity we might do for the project:
In small groups using either the AR or Web software, for three affected routes, determine how much of a decrease there was in commute time, if any. Report how you came to this conclusion and how long this took.
list potential deliverables:
Expansion and addition to previous work on traffic visualization in AR/VR
Comparison of current visualization software supporting passthrough on Quest 3
Documentation of challenges and methods to approaching data visualization spread over space/time dimensions
Identify and format dataset to exist in reasonable, consumable format
Collect, compare, and modify software packages that may work for complex dataset
Produce prototype that facilitates collaborative analysis while displaying data over routes
Potential milestones:
2/11: Compiled list of visualization software, initial formatting of data
2/13: Comparison list of software after initial investigation, final use of data, tests for collaboration
2/20: Import of data into the software, first test visualization
2/25: Further building of visualization, adding additional models, pilot versions for comparison
2/27: Finalizing visualization, documentation of process, building evaluative tools
3/04: In-class activity! Collect data using evaluative tool, comparing AR and Web visualizations of the dataset.
In-class activity: In small groups using either the AR or Web software, for three affected routes, determine how much of a decrease there was in commute time, if any. Report how you came to this conclusion and how long this took. Additionally, complete the SUS for both modes.
What evaluative info that activity will collect: Qualitative data about reasoning in AR vs. Web, quantitative data on whether that reasoning is accurate, comparative usability data from SUS metrics
3/06: Build first-pass analysis of data, documenting methods, overarching conclusions, next steps
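The comparative usability numbers from the ICA would come from standard SUS scoring (odd-numbered items contribute response − 1, even-numbered items contribute 5 − response, and the sum is scaled by 2.5). A minimal sketch of how that scoring could be automated for the analysis step:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) contribute (response - 1);
    even-numbered items contribute (5 - response); the sum is scaled
    by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# All-neutral responses (all 3s) yield the midpoint score of 50.
print(sus_score([3] * 10))  # → 50.0
```

Running this per participant for the AR and Web conditions would give the two score distributions to compare.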
Completed feedback form 02/04 DinoVR tutorial
Week 4
02/09/2025 - 3 Hours
Created brief project presentation for class on 2/11
Uploaded DinoVR photos to class board
Determined unresolved gaps in wiki for deliverables in project plan
02/10/2025 - 3 Hours
Project self-assessment:
Rating scale
1 -- strongly disagree
2 -- disagree
3 -- neither agree nor disagree
4 -- agree
5 -- strongly agree
The proposed project clearly identifies deliverable additions to our VR Software Wiki - 5; multiple possible additions listed
Involves passthrough or “augmented” in VR - 4; relies on AR/passthrough for one visualization, but details aren't super finalized
The proposed project involves large scientific data visualization along the lines of the "Scientific Data" wiki page and identifies the specific data type and software that it will use - 5; data identified
The proposed project has a realistic schedule with explicit and measurable milestones at least each week and mostly every class - 4; some milestones looser than others, typically come with deliverables but some intentional room added
The proposed project explicitly evaluates VR software, preferably in comparison to related software - 5; comparison between AR/web, evaluation metrics clear
The proposed project includes an in-class activity, which can be formative (early in the project) or evaluative (later in the project) - 4; it exists, needs some fleshing out
The proposed project has resources available with sufficient documentation - 4; some software identified but nothing solid quite yet, part of research section of project
Started list of potential software to drive visualization
02/11/2025 - 3.5 Hours
Journal Self-Evaluation:
Key:
5 == exceeds expectations
4 == meets expectations
3 == mostly solid with some gaps
2 == shows some progress
1 == some attempt
0 == not found
Criteria:
Journal activities are explicitly and clearly related to course deliverables
4 -- Journal activities reflect deliverables requested in the course timeline and project milestones.
deliverables are described and attributed in wiki
3 -- more deliverables completed would be nice, but those present are described and linked.
report states total amount of time
5 -- there it is at the top!
total time is appropriate
4 -- Length of activities align with time taken
Continued examining potential visualization software w/ initial notes:
Two options broadly: Unity or WebXR
Also some dark-horse options for development: Vizard, Bezi, Sketchfab, Niantic Studio, Vuforia
Most of these are more plug-and-play with restrictions on interactivity
Some of these do support collaboration out of the box to varying degrees
A few (particularly Vizard) actually include documentation referencing passthrough on Quest, which is helpful!
In any case, the actual design of the visualization will likely need to be done "by hand" to some degree
Likely visualizing the data and porting it into some software to view
Deck.gl is interesting here as a data vis plugin that has a bit of information on how to do WebXR
Could use this as a driver of data, might minimize the steps involved
How to do this might be interesting... Blender with data vis plugins?
WebXR frameworks seem like the stronger choice to get a prototype out as quickly as possible, even if constrained on interactivity
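If the Blender route wins out, one way to do the visualization "by hand" is to precompute bar geometry from the formatted dataset before any modelling, then feed the results to Blender's primitive-add operators. A rough sketch of that data-prep step (the `x`/`y`/`count` column names are hypothetical, standing in for route coordinates and traffic volume):

```python
import csv
import io

def bars_from_csv(csv_text, max_height=2.0):
    """Map per-route traffic counts to (x, y, height) tuples for 3D bars.

    Heights are normalized against the largest count so the tallest bar
    is max_height scene units; x/y positions come straight from the data.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    peak = max(float(r["count"]) for r in rows)
    return [(float(r["x"]), float(r["y"]),
             max_height * float(r["count"]) / peak)
            for r in rows]

sample = "x,y,count\n0,0,100\n1,0,50\n2,1,25\n"
print(bars_from_csv(sample))
# → [(0.0, 0.0, 2.0), (1.0, 0.0, 1.0), (2.0, 1.0, 0.5)]
```

Inside Blender, each tuple could then drive a scaled cube (e.g. via `bpy.ops.mesh.primitive_cube_add`) before exporting the scene for the headset.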
Held place for ICA in Course Timeline
02/12/2025 - 2.5 Hours
Started AR Dev tool comparison wiki page
Week 5
02/18/2025 - 3.5 Hours
Set up ParaView and Volume Rendering for in-class activity
Continued working on AR Dev tool comparison wiki page
02/20/2025 - 1 Hour
APV Unity Lab
Week 6
02/23/2025 - 7.5 Hours
Finalized visualization programs
Began loading models, test visualizations
Experimenting with Blender
Presentation showing progress on project
02/27/2025 - 4 Hours
Distilled dataset into usable form
Finalized Blender workflow
Began producing models to integrate into final project
Week 7
03/02/2025 - 5.5 Hours
Finished making models of dataset in Blender
Ported and placed all models into Bezi
Finalized, tested, touched-up Bezi models
03/03/2025 - 7 Hours
Polished Bezi models, attempted to improve performance
Research for evaluation tools for desktop services and AR tools
Generated instructions and surveys for ICA
Research for wiki pages
Setting up SideQuest, prepping for peers' ICAs
Week 8
03/09/2025 - 5.5 Hours
Created and published broad overview of AR development software
Began research for future wiki contributions
Quest 3 AR overview, additions to traffic datasets, overview of evaluation metrics, data visualizations in Blender, modelling with good performance on HMDs, potential miscellaneous edits
03/10/2025 - 5.5 Hours
Created wiki page providing overview for Meta Quest 3
Created wiki page providing list of Blender tools for data visualization
Prepped for 3/11/2025 ICA
Started final project presentation, data analysis
03/11/2025 - 4.5 Hours
Completed analysis of data from project 1 ICA
Completed final project 1 presentation, self-evaluation
Small touch-ups on various wiki contributions
Week 9
03/17/2025 - 3 Hours
Project 2 Ideas
Implement AR options in DinoVR
Seven scenarios: CDA, CTV, UE
DinoVR is notable for being a VR visualization tool that supports interactive collaboration between users. By adding AR support, the ability for people to collaborate (CDA) and to communicate information and findings (CTV) could be increased. Additionally, the actual usability of the system may improve (UE), as users may more easily co-locate and contextualize the data in their shared physical environment, rather than in a blank space.
Potential Deliverables:
Documentation on accessing cameras, passthrough directly with Quest 3
Documentation of networking and share views as implemented in DinoVR
Comparison of other collaborative AR tools (Needle) to DinoVR
Evaluating text displays in AR environments
Seven scenarios: UE, UP
This project would examine how text is displayed in AR contexts in multiple forms, providing guidance on how labels and text might be provided across environments. The result of this project would be a head-to-head assessment of several versions of text display, including novel ones that might take advantage of passthrough technology, relying on controlled experiments measuring reading speed (UP) along with preferences and perceived comfort in reading (UE).
Potential Deliverables:
Documentation of using and modulating passthrough media with the Quest 3
Collection of resources on displaying text well in VR and AR; comparison between AR and VR
Tutorial on using Unity for scientific data collection
Tutorial on extracting data from Quest 3
Disease/medical visualization within a body context -- seeing a person on a table and being able to overlay problems and see it in different contexts
Seven scenarios: CTV, CDA, UE, VDAR
Participants would view two different sets of medical data, examining the data as it's placed on a body in AR and viewing similar data in a context-free AR space. Particular attention would be paid to how users understand how the body context might better convey the nature of the data (CTV) and how findings might be generated (VDAR), including in a group (CDA). Participants would also assess the usability of both applications to determine if there is a trade-off between increased context, physical demand, and complexity (UE).
Potential Deliverables:
Expansion of medical data on scientific data page
Create comparison between state of medical visualization industry in VR and AR.
Investigate usage of anchors to stabilize virtual objects within a set space, including with direct camera access
Provide tutorial for using connectivity between HMDs to facilitate collaboration
Evaluating use of embedded media into AR data visualization environments, including occlusion and spatial audio
Seven scenarios: UE, CTV
Many visualizations have embedded media in some form, including pictures, videos, and audio. This may be because the media itself is the data being visualized (e.g., showing an example video in content analysis) or because it enhances and explains the visualization more clearly (e.g., an audio presentation). How this media should be included, and what the best practices for doing so are, is unclear. This project would evaluate best practices for embedding media into visualizations to make them clearer (CTV) and more comfortable and accessible to use (UE).
Potential Deliverables:
Evaluation and comparison of media tools in AR and VR
Documentation on importing video to AR apps
Documentation on using and disabling spatial audio, best practices for doing so
Best practices for using video clips in AR (environment dimming, curved screens, etc)
03/18/2025 - 1.5 Hours
Created project 2 proposal presentation
03/19/2025 - 1.5 Hours
Project 2 draft proposal:
Optimizing Text Displays in AR Environments
list three things you will do during the project:
Investigate and collect current practices for text display in AR
Document processes for using Unity and Quest HMDs as tools for scientific data collection
Create a research-quality program to evaluate user performance with text displays
list one class activity we might do for the project:
Individually, participants will work through the experiment program, reading the set of standardized sentences out loud at a comfortable pace. Participants view several variations on how text is displayed, examining aspects of contrast, color, backgrounds, and outlines in a head-to-head manner. After, participants will complete a survey indicating their text preferences along several factors.
list potential deliverables:
Documentation of using and modulating passthrough media with the Quest 3
Collection of resources on displaying text well in VR and AR; comparison between AR and VR
Tutorial on using Unity for scientific data collection, generating user reports
Tutorial on extracting quantitative user data from Quest
Best practices on transitioning between AR and VR (environment dimming, removing color)
Potential milestones:
4/01: Collecting best practices in literature, building wiki page synthesizing information and common use
4/03: Demo visualizations for feasibility, further literature review on common practices
4/08: Assessment of potential software for development and deployment, prioritizing interactivity and headset API access for passthrough control
4/10: Set up development software, generate initial text displays in headset
4/15: Further development, all displays and text work and are demoable in a single package
4/17: Interaction added to experiment program, completed minimum viable walkthrough to collect reasonable data
4/22: Process created for extracting quantitative data from the headset, testing and polishing experiment program
4/24: Creating survey evaluations, piloting ICA with individuals to confirm functionality and stability
4/29: ICA! Running experiment in class
In-class activity: Individually, walk through the experiment program in your Quest headset. You will see several text displays and will be asked to read the sentences displayed out loud to the best of your ability. After, you will complete a survey indicating your text display preferences along several factors.
What evaluative info that activity will collect: Quantitative data on reading speed based on how quickly users move between sentences, quantitative and qualitative data on user preferences and associated reasonings
5/01: Final presentation and data analysis from ICA
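The reading-speed numbers for the analysis could be derived from the timestamps at which participants advance past each sentence in the experiment program. A hedged sketch of that computation (the log format -- one advance timestamp per sentence, measured in seconds from the start of the trial -- is an assumption about how the Quest data extraction would be set up):

```python
def words_per_minute(sentences, advance_times):
    """Estimate per-sentence reading speed from advance timestamps.

    advance_times[i] is the time in seconds at which the participant
    moved past sentence i; the first sentence is timed from t=0.
    """
    speeds = []
    start = 0.0
    for sentence, end in zip(sentences, advance_times):
        words = len(sentence.split())
        speeds.append(words / (end - start) * 60.0)
        start = end
    return speeds

# Two sentences, advanced at t=2s and t=3s respectively.
print(words_per_minute(["the quick brown fox", "jumps over"], [2.0, 3.0]))
# → [120.0, 120.0]
```

Averaging these per display condition would give the quantitative comparison across text display variants.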
Project 2 proposal self-evaluation:
Rating scale
1 -- strongly disagree
2 -- disagree
3 -- neither agree nor disagree
4 -- agree
5 -- strongly agree
The proposed project clearly identifies deliverable additions to our VR Software Wiki - 5; multiple possible additions listed, specific pages named
Involves passthrough or “augmented” in VR - 5; explicitly evaluates AR
The proposed project involves large scientific data visualization along the lines of the "Scientific Data" wiki page and identifies the specific data type and software that it will use - 2; this project largely deals with text, which would likely be related to labels for data visualization. Adding occlusion as a factor, such as in the DinoVR paper, might help this
The proposed project has a realistic schedule with explicit and measurable milestones at least each week and mostly every class - 5; milestones clearly defined, each one appears to have some actionable item attached
The proposed project explicitly evaluates VR software, preferably in comparison to related software - 4; evaluates text displays, which varies significantly between software based on development practice
The proposed project includes an in-class activity, which can be formative (early in the project) or evaluative (later in the project) - 5; it exists, evaluative, project very clearly seems to build to this
The proposed project has resources available with sufficient documentation - 4; Unity mentioned as potential development software, some early work identified with literature review, but this may be fleshed out further