Tongyu Zhou's Journal

ALL OTHER WIKI CONTRIBUTIONS

CONTRIBUTION 1: "Testing in VR"

  • This is a subpage under VR Development Software > Tutorials > Unity3D. It contains instructions on several ways to test your Unity project in VR, depending on your operating system and the devices you have available. I also include a brief summary of my overall experience with the different methods.

CONTRIBUTION 2: "Unity Spectrogram Visualization"

  • This was originally a subpage under VR Visualization Software > Tutorials but was moved to VR Development Software > Tutorials > Unity3D since that felt more appropriate. It contains a tutorial on how to import any audio file into Unity and visualize it in real time as animated spectrograms.

CONTRIBUTION 3: "Mirror"

  • This is a subpage under "Multiplayer VR Development in Unity" describing the Mirror plugin

CONTRIBUTION 4: "Fishnet"

  • This is a subpage under "Multiplayer VR Development in Unity" describing the Fishnet plugin, which nicely complements the page about Mirror above

CONTRIBUTION 5: "VR in Audio Visualization"

  • This is a subpage under "Applications of VR" examining current trends in VR audio viz as well as my own takeaways from creating the Unity visualizer + the class activity

CONTRIBUTION 6: "EVERTims"

  • This is a subpage under "3D Acoustic Simulations" under VR Visualization Software that summarizes the EVERTims software

CONTRIBUTION 7: "Soundvision"

  • This is a subpage under "3D Acoustic Simulations" under VR Visualization Software that summarizes the Soundvision software

CONTRIBUTION 8: "Unity Room Acoustics Visualization"

  • This is a subpage under VR Development Software > Tutorials > Unity3D that contains a tutorial describing how I create the heatmap on the ground for audio amplitudes in my room acoustics demo.

CONTRIBUTION 9: "Lessons from Designing VR Audio Visualizations in Collaborative Spaces"

  • This is a subpage under "VR in Audio Visualization" where I summarize the 2 Unity applications I built this semester for VR audio vis and summarize their challenges and takeaways

SOFTWARE CONTRIBUTIONS


  1. Spectrogram Visualizer (.apk)

  2. Room Acoustics Visualizer (.apk)

Proposed Wiki Changes (Week 1)

  • The first three changes should each require ~10 minutes to complete.

      1. Replace raster-based graphs or charts with vector-based counterparts. For example, the "Features" charts on the "VR Visualization Software" page are PNGs. Replacing these images would make their text searchable, which improves the browsing experience.

(see below for example)

Features.pdf

      2. Add images displaying each piece of hardware on the "VR Hardware" page.

      3. Comb through pages to make sure all links work. For example, the links on the "VR Visualization Software" page do not.

  • The next three changes should each require ~1 hour to complete.

      1. Design a landing survey for new users (setting up + linking everything probably takes longer than 1 hour) that identifies their personal interests, prior experience, and time availability. This is similar to the "Where do I start?" section on the home page, but is more personalized since it accounts for multiple parameters. Based on their survey responses, the site then recommends several Wiki pages that better suit the user's needs.

      2. Fill out the "Applications of VR" and "Student Research" pages. It seems a little strange that the other sections have content when you click on their headers but these do not.

      3. Create a comparison table evaluating each VR software against each other for the "VR Development Software" landing page.

  • The final three changes should each require ~10 hours to complete.

      1. Research and provide up-to-date recommendations for visualization, development, and modeling software. For a new user, it may still be difficult to decide which systems to start with based on descriptions alone. Providing summaries of the popular frameworks developers are currently using (popularity usually also means more documentation, Stack Overflow entries, and demos) and what they are creating with each one can help with this decision process.

      2. Add a "VR Sketching" section to "Applications of VR": This would include a discussion on how operating in virtual environments can alter traditional 2D workflows and a review of current software for AR sketching, both commercial and for research.

      3. Research and add an entry on how to make a Unity extension: this would be helpful for folks trying to add functionality that is not already supported

HOURS SUMMARY

Total: 141.5 hours

HOURS journal

1/28/22 - 2.5 Hours

  • Join slack, set up journal, read through the course homepage

  • Read through the previous year's VR software wiki

  • Identify 9 potential changes to the site

1/29/22 - 2 Hours

2/1/22 - 3.5 Hours

2/3/22 - 2/4/22 - 5 Hours

  • Finished Google Earth assignment (see above)

    • Left (significant but not famous): My apartment when I studied abroad in Budapest, Hungary -- it had the most interesting wooden elevator that would squeak a lot whenever someone rode it up. I was spooked more than a couple of times.

    • Middle: The street I grew up on in Brooklyn, NY

    • Right: My current house in Providence

  • More brainstorming for project ideas:

    • Mapping locations of (indoor) Wifi signatures and floor plans to predict optimal paths for navigation tools

      • potential software:

      • list three things you will do during the project: explore tools that enable mapping 2d coordinate data to floor plans to create 3d virtual space, explore collaborative annotation tools for virtual spaces, look into incorporating path finding algorithms into virtual spaces

      • list one class activity we might do for the project: users race through a virtual indoor space and try to find another person in the shortest time possible based only on information from past Wifi signatures. After each run, these users leave behind their own Wifi signatures. Theoretically, the time it takes should decrease as more optimal paths are discovered/"voted" on.

      • list potential deliverables:

    • Visualizing virtual cities by creating 3d models from lidar point cloud data, then exploring different software for interacting with (applying lighting, adding sound cues, etc.) and collaboratively editing these models (stretch goal: if there are captures of the city over time, how can we visualize all this changing information w/o sacrificing rendering speed?)

      • potential software: Unity + plugins (PiXYZ (although it's paid, it does offer a 14-day trial), Point Cloud Free Viewer), Open3D

      • list three things you will do during the project: explore tools for (large) point cloud visualization (standalone and also after being converted to meshes), view these visualizations in vr with different rendering styles, explore tools for collaboratively editing this 3d model in vr

      • list one class activity we might do for the project: users all load the same 3d model of the city and make one edit to an object of their choice, then go around and present/discuss what the experience was like and what could be improved // or use sound cues to find each other in vr

      • list potential deliverables:

      • task: visualize point cloud data for cities in different software and edit these models (converting to meshes, applying lighting or sound cues); milestone: write up a page comparing Unity plugins for point cloud viewing

    • Visualizing real-time (with a few hours of lag) changing air quality of countries over time

      • potential software:

      • list three things you will do during the project: explore tools for visualizing air quality in 3d (heatmaps to identify areas of potential concern?), explore possible plugins for google earth vr (?) to incorporate this data, look into automatic api calling/querying so data is updated in real-time

      • list one class activity we might do for the project:

      • list potential deliverables:

  • Download Unity + SteamVR plugin

2/7/22 - 1 Hour

  • Set up DinoVR

  • Watched tutorials on point cloud visualization in Unity

2/9/22 - 3 Hours

2/12/22 - 3 Hours

  • 12/31/2021 – first 15 minutes, sampled at 16 kHz

  • Selected dolphin sound snippet – 30 seconds, sampled at 32 kHz


2/13/22 - 6 Hours

2/15/22 - 3 Hours

  • Get audio into the Unity XR scene so that it can be controlled through the VR controllers

    • This involves adding custom input actions to the XRI Default Interactions (currently added a toggle for turning audio on/off): https://www.youtube.com/watch?v=jOn0YWoNFVY

    • Then, after adding an AudioSource with the desired .wav file, write a script with play/pause/toggle logic based on the InputActionReference from the XRI Default Interactions (see the sketch below)
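A minimal sketch of what such a script could look like, assuming a custom "toggle audio" action has been added to the XRI Default Input Actions and is assigned to the InputActionReference field in the Inspector (the class and field names here are illustrative):

using UnityEngine;
using UnityEngine.InputSystem;

public class AudioToggle : MonoBehaviour
{
    [SerializeField] private InputActionReference toggleAction; // the custom XRI action (assumed name)
    [SerializeField] private AudioSource audioSource;           // the AudioSource holding the .wav clip

    private void OnEnable()
    {
        toggleAction.action.performed += OnToggle;
        toggleAction.action.Enable();
    }

    private void OnDisable()
    {
        toggleAction.action.performed -= OnToggle;
    }

    private void OnToggle(InputAction.CallbackContext ctx)
    {
        // Pause if currently playing, otherwise resume from the paused position
        if (audioSource.isPlaying) audioSource.Pause();
        else audioSource.UnPause();
    }
}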

2/16/22 - 6 Hours

  • Implemented play/pause audio functionality based on hover interactions with a cassette player (will probably change this object later)

  • Explored custom input actions more

  • A user can now do this on the Quest 2 after running the .apk file (Note: I noticed that there wasn't really documentation on how to test in Unity -- I spent some time hashing this out based on my current setup, so I thought it would be helpful to write it up: I will be creating an entry under Tutorials > Unity3D for testing and building projects)

  • 2/17 Milestone: Created a video for demonstration (toggling on/off the dolphin audio snippet from earlier):

  • Journal self-review:

    • Journal activities are explicitly and clearly related to course deliverables: 4

    • Deliverables are described and attributed in wiki: 4 (my current deliverables for the project include the 2D spectrograms and the video showcasing audio playing in VR. These are documented above. While I did not mention this in my proposal, I noticed that the wiki could use a page on testing with Unity XR, so I also added that as another deliverable).

    • Report states total amount of time: 4

    • Total time is appropriate: 3.5 (I have 35 hours total. Going by an avg rate of 10 hrs/week and the fact that we are halfway through week 4, I think this is appropriate, but my goal is to get a couple more hours in by the end of the week)

  • Maia's Review:

  • 1.) 4 - super interesting topics and research!

  • 2.) 3 - good wiki deliverables described, not yet implemented; journal may be a little verbose in the beginning

  • 3.) 5 - all times are kept track of

  • 4.) 4 - agree, amount of time is roughly appropriate

2/18/22 - 4 Hours

  • Look into generating a spectrogram from audio -> spectrum data

    • Spectrogram = procedurally generated meshes that correspond to the played data

      • Mesh gen tutorial, procedural gen tutorial -> DONE! (see blue mesh below -- this is just generated with Perlin noise)

      • Now, the plan is to set the height of this terrain to amplitude and the z-axis to frequency once I have the output of the FFT

    • Getting FFT data:

      • There's a neat function that Unity provides called GetSpectrumData that takes the currently playing audio source and populates a power-of-two-sized array with spectrum values -> need to dig into how to appropriately scale these values (the four lines in red/green/blue represent different log scales) to warp the terrain into the desired spectrogram (see the sketch below)
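A rough sketch of how this could be wired up (the array size and FFT window are assumptions; the real scaling still needs to be worked out):

using UnityEngine;

public class SpectrumSampler : MonoBehaviour
{
    [SerializeField] private AudioSource audioSource;
    private readonly float[] spectrum = new float[512]; // must be a power of two (64..8192)

    private void Update()
    {
        // Fills 'spectrum' with relative amplitudes for the currently playing audio.
        // Each bin spans (output sample rate / 2) / spectrum.Length Hz,
        // e.g. 48000 / 2 / 512 = 46.875 Hz per bin.
        audioSource.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
    }
}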

2/19/22 - 4 Hours

  • Finished generating dynamic spectrogram with live audio

    • Output of GetSpectrumData = the relative amplitudes split into n (user-specified) bins, where there are (audio sample rate) / 2 / n Hz per bin

    • To animate, I call ^ every frame on the currently playing audio and update the y values of the mesh row closest to the camera with the new amplitudes, then shift the previous y values back (see the sketch after this list)

    • Next goal: add color based on amplitude to make viewing easier + find a more suitable scale
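A minimal sketch of this scrolling update, assuming the mesh is a Cols x Rows grid generated beforehand with row 0 sitting closest to the camera; the grid size and height scale are illustrative:

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class ScrollingSpectrogram : MonoBehaviour
{
    [SerializeField] private AudioSource audioSource;
    private const int Cols = 512;            // frequency bins per row
    private const int Rows = 128;            // history length along the time axis
    private const float HeightScale = 1000f; // amplitude -> height multiplier

    private readonly float[] spectrum = new float[Cols];
    private Mesh mesh;
    private Vector3[] vertices;

    private void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh; // assumes a Cols x Rows grid of vertices
        vertices = mesh.vertices;
    }

    private void Update()
    {
        audioSource.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Shift every row one step back (away from the camera)
        for (int r = Rows - 1; r > 0; r--)
            for (int c = 0; c < Cols; c++)
                vertices[r * Cols + c].y = vertices[(r - 1) * Cols + c].y;

        // Write the newest amplitudes into the front row
        for (int c = 0; c < Cols; c++)
            vertices[c].y = spectrum[c] * HeightScale;

        mesh.vertices = vertices;
        mesh.RecalculateNormals();
    }
}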

2/20/22 - 5.5 Hours

  • Added color to vertices of mesh based on height/amplitude

    • Scale height values to be between 0 and 1 using Mathf.InverseLerp and map that to a color gradient -> set the mesh vertex colors to those rescaled height values (see the sketch after this list)

    • Window > Package Manager > make sure Universal RP (render pipeline) and Shader Graph are installed

      • Check Edit > Project Settings > Graphics and make sure there is a UniversalRenderPipelineAsset. If not, navigate to the Packages section > right click > follow the dialog below to create a pipeline asset, then drag that into the empty field
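A minimal sketch of the vertex coloring step, assuming the material's shader graph reads vertex colors and that a blue-to-red Gradient is assigned in the Inspector (names and height bounds are illustrative):

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class HeightColorMapper : MonoBehaviour
{
    [SerializeField] private Gradient gradient;     // e.g. blue (quiet) to red (loud)
    [SerializeField] private float minHeight = 0f;
    [SerializeField] private float maxHeight = 10f;

    private Mesh mesh;

    private void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
    }

    private void LateUpdate()
    {
        Vector3[] vertices = mesh.vertices;
        Color[] colors = new Color[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            // Remap the height into [0, 1], then look up the gradient color
            float t = Mathf.InverseLerp(minHeight, maxHeight, vertices[i].y);
            colors[i] = gradient.Evaluate(t);
        }

        mesh.colors = colors;
    }
}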

  • Create a shader graph: In Projects, right click > Create > Shader > Blank shader graph

    • Open the new shader graph in the editor > Active Targets -> Universal

    • Drag this new shader into the shader field of the object material

  • Play around with scale -- using 1000 as a multiplier seems alright

  • Next goals: think about what reasonable decibel <--> color mappings are, and how we can take advantage of the VR space

  • Write up tutorial summarizing how I did this in Unity and add to the Wiki

2/26/22 - 8 Hours

  • Started annotation implementation (spent around 3-4 hours to get typing to work...) -- it seems kind of clunky to do this in VR -- perhaps annotation is not the best idea...

  • Found out that controller buttons don't work in Unity debug mode -> need to build for controller input to register

  • Rethinking the annotation of audio vis -- I feel like this activity could perhaps be less useful in the VR space / typing in VR feels more awkward -> instead, focus on the visualization aspect:

    • VR controller rays can intersect the spectrogram to show amplitude/frequency/time at that point --> but running into issues with regenerating the collision box of the spectrogram mesh on each frame update (very VERY slow...)

      • Try only regenerating the collision when audio/animation is paused? --> I'm using collider.sharedMesh = mesh to update but somehow this still takes 2-3 seconds on each pause to re-render and the collision doesn't look right....

    • Controllable audio playback

  • Added controls -- which ones are most helpful?:

    • [A] on right controller to toggle play/pause

    • [Y/X] on left controller to increase/decrease volume

2/27/22 - 8 Hours

  • Added more controls:

    • [Hold trigger and drag] to rotate visualization -> to do this, I extracted the deviceRotation quaternion of the right-hand controller and mapped it to (x, y) directions to rotate the spectrogram. Specifically, the deviceRotation (x, y, z, w) was mapped to (-100y, 100x) in mouse position coordinates. I then compute a delta value that is the current position minus the previous position and perform a transform as follows:

private void UpdateTransform(Vector3 delta)
{
    // Yaw: rotate around the object's up axis by the horizontal drag amount.
    // The sign flips when the object is upside down so dragging stays intuitive.
    if (Vector3.Dot(transform.up, Vector3.up) >= 0)
    {
        transform.Rotate(transform.up, -Vector3.Dot(delta, Camera.main.transform.right), Space.World);
    }
    else
    {
        transform.Rotate(transform.up, Vector3.Dot(delta, Camera.main.transform.right), Space.World);
    }

    // Pitch: rotate around the camera's right axis by the vertical drag amount
    transform.Rotate(Camera.main.transform.right, Vector3.Dot(delta, Camera.main.transform.up), Space.World);
}

    • Going back to the idea of rays intersecting the spectrogram to show more information (see the sketch after this list). To do this, instead of generating colliders for the entire mesh, I just generated one for the coordinate-system plane (i.e., at y = 0)

      • Then, to detect the collision point, I created a reticle with a XR Simple Interactable and then got the world space coordinates of that reticle

      • On those coordinates, I then displayed the frequency and relative amplitude of whatever the user is pointing at

      • Since I am not generating new colliders each time, this function works even when the animation is still running!
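A minimal sketch of the lookup, assuming the reticle is positioned by the ray hit on the y = 0 collider and that the spectrogram's local x axis indexes frequency bins (the grid size, Hz-per-bin value, and how the text is displayed are all assumptions):

using UnityEngine;

public class SpectrogramProbe : MonoBehaviour
{
    [SerializeField] private Transform reticle;      // placed by the XR Simple Interactable's ray hit
    [SerializeField] private Transform spectrogram;  // the spectrogram mesh's transform

    private const int Cols = 512;                    // frequency bins along local x
    private const float HzPerBin = 46.875f;          // e.g. 48000 / 2 / 512

    private void Update()
    {
        // Convert the reticle's world position into the spectrogram's local space
        Vector3 local = spectrogram.InverseTransformPoint(reticle.position);

        // Local x indexes the frequency bin; clamp so we stay inside the grid
        int bin = Mathf.Clamp(Mathf.RoundToInt(local.x), 0, Cols - 1);
        float frequency = bin * HzPerBin;

        // The amplitude at that bin can be read from the same spectrum array used to build
        // the mesh, and both values shown on a world-space label parented to the reticle
        Debug.Log($"~{frequency:F0} Hz at reticle");
    }
}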

  • To understand whether these visualizations are effective at all in VR, I also made corresponding desktop controls for the above VR interactions (see the sketch after this list)

    • [SPACE] key to toggle play/pause

    • [up/down] arrow keys to increase/decrease volume

    • [mouse click & drag] to rotate visualization
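A minimal sketch of these desktop controls, using the legacy Input class for brevity (the same Input System actions from the VR setup could be bound to keyboard/mouse instead; the volume step is an assumption):

using UnityEngine;

public class DesktopControls : MonoBehaviour
{
    [SerializeField] private AudioSource audioSource;
    [SerializeField] private float volumeStep = 0.1f;

    private void Update()
    {
        // [SPACE] toggles play/pause
        if (Input.GetKeyDown(KeyCode.Space))
        {
            if (audioSource.isPlaying) audioSource.Pause();
            else audioSource.UnPause();
        }

        // [up/down] arrow keys adjust volume
        if (Input.GetKeyDown(KeyCode.UpArrow))
            audioSource.volume = Mathf.Clamp01(audioSource.volume + volumeStep);
        if (Input.GetKeyDown(KeyCode.DownArrow))
            audioSource.volume = Mathf.Clamp01(audioSource.volume - volumeStep);

        // [mouse click & drag] would feed mouse deltas into the same UpdateTransform(delta)
        // rotation logic used for the controller drag above
    }
}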

  • Made progress presentation

  • Added current git code onto the sound vis tutorial

3/2/22 - 5 Hours

  • Research multiplayer plugins / support for Unity XR

  • Read over the existing multiplayer VR development in Unity section on the current Wiki

  • Something of interest could be this BaaS + DGS comparison sheet, although it doesn't really talk about ease of use with Unity

  • Possible top solutions (for Unity):

    • Photon PUN: easy to set up, goes through their server but you can code it as if it had a peer-to-peer structure, but difficult to scale

    • Mirror: relatively easy to set up, good for developers who want a dedicated server (though not necessarily -- it also supports peer-to-peer co-op), has a large community

    • Unity Player Networking (or MLAPI): higher learning curve than the rest but supported natively by Unity so there's a potential long-term investment

    • FishNetworking: made by a developer who built a lot of useful extensions for Mirror; supports everything that Mirror does with clearer variable declarations --> very new (2/24/22 release) but seems to have gotten very high praise from the dev community -- definitely worth checking out

  • Other contenders considered: ENET

  • Add entries in the Wiki for Mirror and Fishnet under multiplayer VR dev in Unity

  • Decide on a solution to go with: probably Photon PUN since at the end of the day, it is the easiest solution. I don't need to support a ton of users + scalability isn't really an issue at this point since this is not intended to be a scalable project. However, for future multiplayer projects that actually require a substantial userbase, I'm inclined to check out the new Fishnet (especially as more people start using it)

  • Added ocean skybox to Unity scene:

3/4/22 - 6 Hours

  • Added multiplayer capability with Photon PUN -> can now have up to 10 users in the same space with their headsets/controllers visualized

  • self = blue hand prefab while others = yellow hand prefab

  • To prevent visual clutter, only show your own intersection ray (the red/white line)

Unity emulator view -- the user is the blue hand connected to the red ray

Quest 2 view -- again, the user is the blue hand connected to the red ray

  • Synced spectrogram rotations using a Photon Transform View

  • Synced interactions (pause/volume change) with RPCs (see the sketch below)
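A minimal sketch of the RPC pattern for the pause sync, assuming the object carries a PhotonView component (the method and field names are illustrative):

using Photon.Pun;
using UnityEngine;

public class SyncedPlayback : MonoBehaviourPun
{
    [SerializeField] private AudioSource audioSource;

    // Called locally when a user presses the toggle button
    public void RequestToggle()
    {
        // Send the RPC to everyone (including ourselves) so all clients stay in sync
        photonView.RPC(nameof(TogglePlayback), RpcTarget.All);
    }

    [PunRPC]
    private void TogglePlayback()
    {
        if (audioSource.isPlaying) audioSource.Pause();
        else audioSource.UnPause();
    }
}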

3/8/22 - 2.5 Hours

  • Tested the multiplayer functionality to make sure it works

  • Ended up de-syncing the pause/play button (primarily because it could be pretty annoying if someone keeps pressing play when you want to pause to examine something and vice-versa)

3/15/22 - 6 Hours

3/20/22 - 3/21/22 - 5 Hours

  • Read the seven scenarios paper

    • 1. Understanding data analysis: understanding environments and work practices (UWP), evaluating visual data analysis and reasoning (VDAR), evaluating communication through visualization (CTV), and evaluating collaborative data analysis (CDA).

    • 2. Understanding data visualization: evaluating user performance (UP), evaluating user experience (UE), and evaluating visualization algorithms (VA).

    • My project requires some domain expertise in signal processing to do useful data analysis, so it probably falls into 1) for experts and 2) for everyone else

  • Brainstorm ideas for project 2 -- probably still sticking with the visualization of phonons in 3d room with real-time audio input since that seems the most interesting to me

  • Create tentative project 2 plan

Project 2: Collaborative Auralization in VR Architectural Acoustics

  • Building off of my project 1, where I previously visualized the spectrograms of audio files in real-time, I want to now factor VR space and user-inputted audio into the immersive experience. Depending on the reflective/refractive properties of materials in the scene and the locations of different sound sources (e.g., speakers, different users talking, etc.), I want to identify and implement visualizations that can help users understand the architectural acoustics of a virtual room. Guided by the Seven Scenarios paper, I specifically want to focus on the collaborative data analysis (CDA) of sound as well as evaluate the user experience (UE) of this process.

  • Collaborative data analysis:

    • "Evaluations in this group study how an information visualization tool supports collaborative analysis and/or collaborative decision-making processes." To understand the level of support provided by the visualization tool, the paper focuses on identifying the organization it will be embedded in, the team or group that will be using it, or the system itself (pg 1528)

      • The ideal team would be architects designing indoor spaces with specific audio-related requirements in mind (auditorium where sound needs to be transmitted vs. study rooms that try to insulate sound), but for the sake of class activities the realistic group that will be using it is the class

    • Possible relevant evaluation questions (taken from the paper):

      • Does the tool satisfactorily support or stimulate group analysis or sensemaking?

      • Is social exchange around and communication about the data facilitated?

      • What is the process of collaborative analysis?

  • Evaluating the user experience:

    • "The goal is to understand to what extent the visualization supports the intended tasks as seen from the participants’ eyes and to probe for requirements and needs."

    • Possible tasks for the project:

      • Out of a collection of possible materials, which one is the best/worst for sound propagation? (turn the audio visualization on/off to examine both)

    • Collect information about perceived effectiveness, perceived efficiency, perceived correctness (pg 1530)

      • How accurate do you think the visualizations were in portraying your spoken voice in the scene? 1-10 -- perceived correctness

      • How effective do you think the visualizations were in increasing your immersion of the VR space? 1-10 -- perceived effectiveness

WIP timeline:

    • 4/05: Try out existing 3D auralization software (EVERTims, Odeon, etc.), which are single-user, to identify what can be most effective in a collaborative VR environment

      • Contributions include Wiki entry of comparisons of these software

    • 4/07: Have a list of different realistic materials walls/floors can be made of and look into how they affect sound propagation

    • 4/12: Set up collaborative VR scene with transmittable audio

    • 4/14: Populate VR scene with toggleable materials for the walls/floors

    • 4/19: Explore possible ways to visualize sound propagation

    • 4/21: Determine most effective way and refine that

    • 4/26: Depending on progress, prepare class activity where users are placed in rooms with different material and asked to determine which is the best/worst in terms of sound propagation in different scenarios

    • 4/28: Write up a report based on results of the class activity and compare to what's actually being used in realistic environments

      • Contribution includes the Wiki entry with these comparisons and takeaways

    • 5/03: Prepare final presentation

    • 5/05: (Leaving some slack here in case one of the previous milestones spills over)

3/23/22 - 3 Hours

  • Finished final proposal + evaluated it based on the previous rubric

  • Revised timeline

    • 4/05: Installed and finished trying existing 3D auralization software (EVERTims, Odeon, etc.), which are single-user, to identify what can be most effective in a collaborative VR environment

    • 4/07: A completed Wiki entry comparing these software packages in terms of 1) time to set up, 2) functionality, 3) usability, and 4) potential for collaboration

      • Deliverable: the documented Wiki entry

    • 4/12: A VR scene populated with sound sources (like speakers, instruments, etc.)

      • Deliverable: video showcasing this scene

    • 4/14: A fully built collaborative VR scene with the above + transmittable user audio

      • Deliverable: video showcasing this scene

    • 4/19: After exploring possible ways to visualize sound propagation in enclosed rooms (this will be inspired by the software I explored in 4/5 as well as vis papers), a completed implementation for architectural acoustic visualization

      • Deliverable: A tutorial in the Wiki describing how I constructed these visualizations

    • 4/21: An extended version of the architectural acoustic visualizations to rooms with different materials. The users should be able to toggle between them to see different visualizations

      • Deliverable: video showcasing the completed visualizations

    • 4/26: A class activity where users are placed in rooms with different materials and asked to determine which is the best/worst in terms of sound propagation in different scenarios based on the visualization

      • Deliverable: the class activity and the corresponding surveys (see above)

    • 4/28: A report based on results of the class activity that compares their feedback to what's actually being used in realistic environments

      • Deliverable: a Wiki entry with these comparisons and takeaways

    • 5/03: Prepare final presentation

    • 5/05: (Leaving some slack here in case one of the previous milestones spills over)

3/30/22 - 5 Hours

Install and evaluate two 3D auralization software packages for room acoustics analysis.

  • EVERTims: "Open source framework for real-time auralization in architectural acoustics and virtual reality"

    • VR enabled

    • MacOS + Linux support

    • realtime visualization of sound trajectories in a 3D space

    • uses ray tracing to visualize would-be acoustics via an original client + a Blender add-on + a JUCE auralization engine

  • L-Acoustics Soundvision

    • not VR enabled

    • Windows + MacOS support

    • one of the first 3D, real-time acoustical simulation programs; seems largely commercial, only supports importing the company's own speakers, and does not export easily to external software

    • Download: https://www.l-acoustics.com/products/soundvision/ (takes 2 minutes)

    • static visualization of system sound pressure level (SPL) in a 3D space

Example SPL mapping on a scene with a simple rectangular box representing the audience and 1 sound source (an L-Acoustics K1)

  • Created a page for 3D Acoustic Simulations (in VR) and added short descriptions to introduce the software I will later review

3/31/22 - 1.5 Hours

4/3/22 - 3 Hours

  • Finished page for EVERTims

    • Had to fix some Blender compatibility issues and ended up having to downgrade to 2.8

4/10/22 - 3.5 Hours

  • Create new Unity project with:

    • New model room

    • Re-used old code to integrate Photon PUN again

    • Add controller locomotion to player avatar

    • Populate with 3D sound source and test that walking closer = louder

      • linear sound decay (see the sketch after this list)

    • Added skybox
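A minimal sketch of the 3D sound source setup with linear decay (the distance values are assumptions, and a clip is assumed to be assigned on the AudioSource):

using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class SpatialSoundSource : MonoBehaviour
{
    private void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                      // fully 3D so distance/direction matter
        source.rolloffMode = AudioRolloffMode.Linear;  // linear decay instead of the default logarithmic curve
        source.minDistance = 1f;                       // full volume within 1 m
        source.maxDistance = 15f;                      // inaudible beyond 15 m
        source.loop = true;
        source.Play();
    }
}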

4/11/22 - 1 Hour

  • Added multiplayer voice with Photon Voice

4/12/22 - 3 Hours

  • Add amplitude area of effect indicator for each sound source to show how close a user would have to be to hear the audio

    • Made this indicator transparent so that the user can still see objects inside it

    • Unity shaders do back-face culling by default, meaning only the front faces are rendered, so you cannot actually see this amplitude indicator if you're standing inside it -> as a hack, I duplicated the mesh and flipped the normals (see the sketch after this list)

    • Area of effect can be controlled by user:

      • While hovering over the sound source, press B button on right controller to expand amplitude area, press A to decrease amplitude area
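A minimal sketch of the flipped-normals hack, assuming it runs once on the duplicated indicator mesh:

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class FlipNormals : MonoBehaviour
{
    private void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Point every normal inward
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse the triangle winding so back-face culling keeps the (now inward-facing) faces
        int[] triangles = mesh.triangles;
        for (int i = 0; i < triangles.Length; i += 3)
        {
            int temp = triangles[i];
            triangles[i] = triangles[i + 1];
            triangles[i + 1] = temp;
        }
        mesh.triangles = triangles;
    }
}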

4/17/22 - 2 Hours

  • Added smaller dynamic circle inside amplitude sphere to show frequencies:

    • Mesh created

  • Create progress report presentation

  • Record VR demo video

4/21/22 - 4 Hours

  • Showed frequency visualization to some folks -- the overall sentiment seems to be that it's not super useful compared to the amplitude sphere; they mentioned that they wanted a visualization of the overall amplitude distribution for the room

    • Possibly create a heatmap on the ground that maps to the sound level at each position?

  • Made objects grabbable + respond to physics so that users can move sound sources around

    • Need to manually adjust colliders to match meshes

4/22/22 - 6 Hours

  • Make sure all objects are properly synced over multiplayer

  • Finished creating the heatmap -- this involved:

    • Generating a mesh to match the size/shape of the floor

    • Getting the positions of the audio sources and mapping them to vertices in the mesh

    • Building a gradient shader that interpolates between different colors based on some max distances and renders the appropriate color at each vertex (see the sketch after this list)

  • This results in something like (note that red = louder, blue = softer):
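A minimal sketch of the per-vertex coloring behind the heatmap, assuming the floor mesh's vertex colors are read by the gradient shader and that distance to the nearest source stands in for loudness (the max distance and names are illustrative):

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class FloorHeatmap : MonoBehaviour
{
    [SerializeField] private Transform[] audioSources; // positions of the sound sources in the room
    [SerializeField] private float maxDistance = 15f;  // distance at which the heatmap fades to "soft"
    [SerializeField] private Gradient gradient;        // red (loud/near) to blue (soft/far)

    private Mesh mesh;

    private void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
    }

    private void Update()
    {
        Vector3[] vertices = mesh.vertices;
        Color[] colors = new Color[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            // Distance from this floor vertex (in world space) to the nearest source
            Vector3 world = transform.TransformPoint(vertices[i]);
            float nearest = float.MaxValue;
            foreach (Transform source in audioSources)
                nearest = Mathf.Min(nearest, Vector3.Distance(world, source.position));

            // 0 = right at a source (red), 1 = at or beyond maxDistance (blue)
            colors[i] = gradient.Evaluate(Mathf.Clamp01(nearest / maxDistance));
        }

        mesh.colors = colors;
    }
}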

4/23/22: 4 Hours

  • More testing and debugging:

    • fixed an issue where amplitude changes were not syncing by adding RPC calls

  • Added some paintings in the room for decor

  • Recorded new demo:

4/24/22: 4 Hours

(I was at CHI this week, so did not work on the project much)

5/5/22: 5 Hours

5/11/22: 5 Hours

  • Edited the Wiki entry from ^ to include a couple more takeaways

  • Created poster for final demos (left a couple of placeholders to fill in for project 1 stuff)

  • Created flash talk slide

  • Rehearsed presentation/flash talks

5/13/22: 2.5 Hours

  • Completed the rest of the poster

  • Rehearsed poster demo pitches