Complete all of this section before starting any milestone. It should take roughly 2–3 hours the first time, mostly waiting on downloads.
| Tool | Version | Install / Verify |
|---|---|---|
| Python | 3.10 or 3.11 | https://python.org — verify: python --version |
| pip packages | latest | pip install numpy scipy scikit-learn umap-learn matplotlib |
| Unity Hub | latest | https://unity.com/download |
| Unity Editor | 2022.3 LTS | Install via Unity Hub → Installs → Add |
| Android Build Support | via Unity Hub | Unity Hub → Installs → your version → Add Modules → Android Build Support + NDK + SDK |
| Meta XR All-in-One SDK | latest | Unity Asset Store: search 'Meta XR All-in-One SDK' |
| VS Code | latest | https://code.visualstudio.com — add Python + C# extensions |
| ADB | Android SDK | winget install Google.PlatformTools |
GloVe (Global Vectors for Word Representation) is the embedding source for this project. The 300-dimension vectors used here ship inside the 862 MB glove.6B.zip download.
# 1. Go to: https://nlp.stanford.edu/projects/glove/
# 2. Download: glove.6B.zip (862 MB)
# 3. Unzip to get: glove.6B.300d.txt
# (other sizes: 50d, 100d, 200d also in the zip — only use 300d)
# Verify the file:
wc -l data/glove.6B.300d.txt
# Expected output: 400000 (400K words)
# Check the first line:
head -1 data/glove.6B.300d.txt
# Expected: 'the 0.418 0.24968 -0.41242 ...' (word followed by 300 floats)
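Because the format is plain text (a word followed by its floats), loading it needs no special library — a minimal sketch; the `load_glove` helper below is illustrative, not one of the project scripts:

```python
# Minimal GloVe loader sketch — plain-text parsing, no special library needed.
import numpy as np

def load_glove(path, vocab=None):
    """Parse GloVe's format: one 'word f1 f2 ... f300' line per word."""
    vectors = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            if vocab is not None and parts[0] not in vocab:
                continue  # skip words the project doesn't use — saves memory
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# vecs = load_glove('data/glove.6B.300d.txt', vocab={'joy', 'doctor', 'justice'})
# assert all(v.shape == (300,) for v in vecs.values())
```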
mkdir semantic_fields
cd semantic_fields
mkdir data scripts output unity_export evaluation
# Place your GloVe file:
# Move/copy glove.6B.300d.txt into semantic_fields/data/
# If you completed Project 1, you can also copy:
#   output/X_3d_umap.npy
#   output/X_2d.npy        (2D coordinates — needed by plot_2d.py)
#   output/words.npy
#   output/cats.npy
#   output/embeddings.json
# from the Project 1 output folder — the field pipeline uses them directly.
# Final structure:
semantic_fields/
data/
glove.6B.300d.txt <- GloVe file
scripts/
word_categories.py <- Reused from Project 1
build_scalar_field.py <- New: KDE density grids
compute_gradient.py <- New: gradient vector fields
export_field_json.py <- New: export field.json for Unity
plot_2d.py <- Updated: flow field version
output/ <- Generated files go here
X_3d_umap.npy <- From Project 1 or regenerated
X_2d.npy <- From Project 1 (2D coordinates for plot_2d.py)
words.npy
cats.npy
scalar_fields.npy <- New: per-category density grids
gradient_fields.npy <- New: per-category gradient vectors
field.json <- New: Unity field data
plot_2d_flow_field.png <- New: 2D condition plot
unity_export/
evaluation/
This project extends the Project 1 Unity scene. If you have the Project 1 project open, you can add the field rendering system directly on top of the existing EmbeddingCloud and OVRCameraRig setup. If starting fresh:
Open Unity Hub. Click New Project
Select template: 3D (Core). Do NOT pick URP or HDRP
Project Name: SemanticFields
Unity version: 2022.3 LTS
Click Create Project and wait (~3 min)
In Unity: File → Build Settings → select Android → click Switch Platform. Wait ~5 min
Create folders in the Project panel (Assets root): Scripts, Data, Materials, Prefabs, Scenes
File → Save Scene As → Assets/Scenes/SemanticField.unity
Window → Package Manager. Install each of the following:

| Package | Where to find it | Notes |
|---|---|---|
| TextMeshPro | Unity Registry (search) | Click Import TMP Essentials when prompted after install |
| XR Plugin Management | Unity Registry (search) | Required for all XR features |
| OpenXR Plugin | Unity Registry (search) | The cross-platform XR backend |
| Meta XR All-in-One SDK | My Assets tab (after Asset Store import) | Brings in OVRCameraRig, Passthrough, Hand Tracking, etc. |
| Newtonsoft Json (optional) | Unity Registry (search 'Json') | Easier JSON parsing than JsonUtility; optional but helpful |
After importing Meta XR SDK
A 'Meta XR Configuration' window will pop up. Click 'Fix All' to apply recommended project settings automatically.
This sets Android API level, removes stereo rendering overrides, and enables required permissions.
You may need to restart Unity after this step.
| Property | Details |
|---|---|
| Source | Stanford NLP — https://nlp.stanford.edu/projects/glove/ |
| File | glove.6B.300d.txt |
| Vocabulary | 400,000 words, trained on 6 billion tokens from Wikipedia + Gigaword |
| Dimensions | 300 (standard; good balance of coverage vs. computation) |
| Why GloVe over word2vec | Pre-trained file format is simpler (plain text); no special loading library needed |
| Why GloVe over BERT | Static embeddings are easier to visualize; one vector per word, not context-dependent |
| Why 300d over 50d | More semantic structure preserved; richer spatial geometry in 3D after reduction |
Four semantic categories were chosen to maximize visual cluster separation while testing meaningful NLP concepts:
| Category | Words | Color |
|---|---|---|
| emotions | joy, anger, fear, sadness, love, hate, happiness, grief, anxiety, hope, disgust, surprise, pride, shame, envy, guilt | Red #EE4F4F |
| professions | doctor, nurse, teacher, engineer, lawyer, pilot, chef, scientist, artist, soldier, farmer, banker, writer, judge, architect, accountant | Blue #45A1E8 |
| moral_concepts | justice, fairness, freedom, authority, power, truth, honor, virtue, loyalty, courage, mercy, duty, rights, equality, liberty, conscience | Green #52C784 |
| nature | mountain, river, forest, ocean, sky, earth, fire, wind, rain, snow, desert, valley, storm, sun, moon, thunder | Yellow #F9C22E |
Project 1 placed each word at a point in 3D space derived from UMAP. Project 2 asks: what does the space between the words look like? The answer is computed as a continuous scalar field using kernel density estimation — at every point in the 3D grid, we compute how strongly each category is present. The gradient of that field gives a vector at every point showing the direction meaning changes most quickly.
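In miniature, the computation looks like this (an illustrative sketch with random stand-in points — the actual pipeline scripts follow in later steps):

```python
# Core idea in miniature: KDE density field + gradient for one category.
import numpy as np
from scipy.stats import gaussian_kde

pts = np.random.rand(16, 3)        # stand-in for one category's UMAP points
kde = gaussian_kde(pts.T)          # f(p) ≈ (1/N) Σ K_h(p − x_i)

t = np.linspace(0, 1, 20)
gx, gy, gz = np.meshgrid(t, t, t, indexing='ij')
density = kde(np.stack([gx.ravel(), gy.ravel(), gz.ravel()])).reshape(20, 20, 20)

# The gradient gives, at every cell, the direction of steepest density increase
dFx, dFy, dFz = np.gradient(density)
```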
This produces three new visual elements not present in Project 1:
Fog clouds (FieldRenderer) — transparent colored voxels where density exceeds a threshold, showing where each category is concentrated in space
Flow lines (FlowLineRenderer) — streamlines integrated along the gradient field, showing the direction semantic meaning increases most strongly from each location
Probe panel (FieldProbe) — real-time percentage readout of category composition wherever the controller is pointing, read directly from the underlying scalar field values
The study compares the 2D flow field plot against the VR semantic field on tasks that specifically test field understanding — tasks that were not possible to ask in Project 1 because the point cloud contained no field information:
Flow direction interpretation: do the arrows/lines converge toward regions or split between competing ones?
Boundary detection: where do two category regions meet, and does the transition feel gradual or sharp?
Field composition sampling (VR only): point the probe at a boundary — what does the percentage panel show?
Mystery dot placement: use field structure rather than colored background to reason about which category each gray dot belongs to
Cluster Identification tests spatial understanding at a coarse level — can users see the macro structure? This is where AR's ability to show 3D depth should most clearly help, since 2D plots project the third dimension flat and clusters overlap more.
Similarity Judgment tests fine-grained neighborhood relationships. In AR, users can physically walk toward two words and judge which feels physically closer. In 2D, they rely on pixel distance. AR has a clear advantage here because human spatial judgment is calibrated for real distances, not screen coordinates.
Spatial Reasoning is qualitative and tests whether users build an accurate mental model of the geometry. AR users should be able to reference the physical layout (e.g., 'power is behind and to the left of authority') in a way 2D users cannot.
Create the evaluation Google Form now so it is ready for pilot testing. Go to forms.google.com and create a new form titled: Word Embedding Visualization Study. Add the following sections:
Section 1: Consent & Background
Background — VR experience, data visualization comfort, familiarity with word embeddings
Section 2: Task Results (2D Flow Field Plot)
2D Flow Field Plot — cluster clarity rating 1–5, flow arrow direction (converge vs split), gradual vs sharp transitions, most self-contained category, most mixed category, mystery dot grid (Dot A/B/C × 4 categories), open description of field structure
Section 3: Task Results (VR Semantic Field)
same questions as Section 2 plus: walk to a boundary and describe the probe reading, did walking change understanding of transitions, discomfort rating
Section 4: Comparison Questions
Comparison — which format made flow direction easier, which showed boundaries better, which helped mystery dot placement, did probe add information contours couldn't, usefulness ratings 1–5 for each condition
Section 5: Open Feedback
Open Feedback — what surprised you, what was confusing, does continuous field add insight over discrete points
Save the form link
After creating the form, click the link icon to get a shareable URL. Save this in your evaluation/ folder as survey_link.txt. You will share this with participants.
Reused unchanged from Project 1. Contains the canonical word list, category assignments, and color mappings used by all other scripts. See Project 1 documentation for full code.
# scripts/word_categories.py
# ─────────────────────────────────────────────────────────────
# Canonical word list and color assignments for the project.
# ALL other scripts import from here — never hardcode words elsewhere.
# ─────────────────────────────────────────────────────────────
CATEGORIES = {
'emotions': [
'joy', 'anger', 'fear', 'sadness', 'love', 'hate',
'happiness', 'grief', 'anxiety', 'hope', 'disgust',
'surprise', 'pride', 'shame', 'envy', 'guilt'
],
'professions': [
'doctor', 'nurse', 'teacher', 'engineer', 'lawyer',
'pilot', 'chef', 'scientist', 'artist', 'soldier',
'farmer', 'banker', 'writer', 'judge', 'architect', 'accountant'
],
'moral_concepts': [
'justice', 'fairness', 'freedom', 'authority', 'power',
'truth', 'honor', 'virtue', 'loyalty', 'courage',
'mercy', 'duty', 'rights', 'equality', 'liberty', 'conscience'
],
'nature': [
'mountain', 'river', 'forest', 'ocean', 'sky',
'earth', 'fire', 'wind', 'rain', 'snow',
'desert', 'valley', 'storm', 'sun', 'moon', 'thunder'
],
}
# Flat ordered list of all words (used for matrix construction)
ALL_WORDS = [w for cat in CATEGORIES.values() for w in cat]
# Maps each word to its category for fast lookup
WORD_TO_CATEGORY = {w: cat for cat, words in CATEGORIES.items() for w in words}
# RGB colors (0.0–1.0) for Unity; hex for matplotlib
CATEGORY_COLORS_UNITY = {
'emotions': (0.93, 0.31, 0.31), # Red
'professions': (0.27, 0.63, 0.91), # Blue
'moral_concepts':(0.32, 0.78, 0.52), # Green
'nature': (0.98, 0.76, 0.15), # Yellow
}
CATEGORY_COLORS_HEX = {
'emotions': '#EE4F4F',
'professions': '#45A1E8',
'moral_concepts':'#52C784',
'nature': '#F9C22E',
}
if __name__ == '__main__':
print(f'Total words: {len(ALL_WORDS)}')
for cat, words in CATEGORIES.items():
print(f' {cat}: {len(words)} words')
# Verify it works:
python scripts/word_categories.py
# Expected:
# Total words: 64
# emotions: 16 words
# professions: 16 words
# moral_concepts: 16 words
# nature: 16 words
Builds a per-category 3D KDE density grid from the UMAP coordinates. The output is a (4, G, G, G) array — one density grid per category over a G×G×G uniform grid, where each value is the estimated probability density of that category at that grid location, normalized to [0, 1].
# scripts/build_scalar_field.py
# ─────────────────────────────────────────────────────────────
# Builds per-category KDE scalar fields over a 3D grid
# from the existing UMAP coordinates.
# Output: output/scalar_fields.npy — shape (4, G, G, G)
# Runtime: ~30 seconds
# ─────────────────────────────────────────────────────────────
import numpy as np
from scipy.stats import gaussian_kde
CATEGORIES = ['emotions', 'professions', 'moral_concepts', 'nature']
GRID_RES = 20 # 20x20x20 grid — increase for smoother clouds, decrease for performance
def build_fields(grid_res=GRID_RES):
X = np.load('output/X_3d_umap.npy') # (64, 3) normalized [0,1]
cats = np.load('output/cats.npy')
# Build uniform 3D grid
t = np.linspace(0, 1, grid_res)
gx, gy, gz = np.meshgrid(t, t, t, indexing='ij')
grid_pts = np.stack([gx.ravel(), gy.ravel(), gz.ravel()]) # (3, G^3)
fields = np.zeros((len(CATEGORIES), grid_res, grid_res, grid_res))
for ci, cat in enumerate(CATEGORIES):
mask = cats == cat
pts = X[mask].T # (3, N_cat) — KDE expects (dims, samples)
if pts.shape[1] < 2:
print(f' Skipping {cat} — too few points')
continue
# bw_method controls smoothness:
# 0.3-0.4 = tighter peaks, more distinct clusters
# 0.5-0.6 = wider, more overlapping clouds
kde = gaussian_kde(pts, bw_method=0.5)
density = kde(grid_pts).reshape(grid_res, grid_res, grid_res)
density /= density.max() # normalize to [0,1]
fields[ci] = density
print(f' {cat}: peak density = {density.max():.3f}')
np.save('output/scalar_fields.npy', fields)
print(f'Saved: output/scalar_fields.npy — shape {fields.shape}')
return fields
if __name__ == '__main__':
build_fields()
python scripts/build_scalar_field.py
# Expected:
# emotions: peak density = 1.000
# professions: peak density = 1.000
# moral_concepts: peak density = 1.000
# nature: peak density = 1.000
# Saved: output/scalar_fields.npy — shape (4, 20, 20, 20)
Key parameter: GRID_RES = 20 is the recommended setting. At 30 the Quest 3 struggles with voxel count (~27,000 objects). At 15 the clouds become visibly blocky. 20 is the balance point. The bw_method parameter controls how spread out each category's density hill is — increase to 0.6 for more overlap between regions, decrease to 0.3 for tighter, more distinct clouds.
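To preview the effect before editing the script, you can sweep bw_method directly and measure how much the category regions overlap — an optional check, where the 0.25 cutoff mirrors FieldRenderer's default density threshold:

```python
# Optional: how does bw_method change overlap between category regions?
import numpy as np
from scipy.stats import gaussian_kde

X = np.load('output/X_3d_umap.npy')
cats = np.load('output/cats.npy')
names = ['emotions', 'professions', 'moral_concepts', 'nature']
t = np.linspace(0, 1, 20)
grid = np.stack(np.meshgrid(t, t, t, indexing='ij')).reshape(3, -1)

for bw in (0.3, 0.5, 0.6):
    dens = []
    for cat in names:
        kde = gaussian_kde(X[cats == cat].T, bw_method=bw)
        d = kde(grid)
        dens.append(d / d.max())
    above = np.stack(dens) > 0.25
    frac = (above.sum(axis=0) >= 2).mean()
    print(f'bw={bw}: {frac:.1%} of cells lie in 2+ category regions')
```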
Computes the gradient of each scalar field using numpy.gradient — standard central-difference numerical differentiation. The gradient at each grid point is a 3D vector pointing in the direction of steepest density increase for that category; it is the same central-difference scheme used in CFD and weather-simulation tools.
# scripts/compute_gradient.py
# ─────────────────────────────────────────────────────────────
# Computes the gradient vector field for each category's scalar field.
# The gradient at each grid point = direction of steepest semantic increase.
# Output: output/gradient_fields.npy — shape (4, 3, G, G, G)
# axis 1 = (dx, dy, dz) components
# ─────────────────────────────────────────────────────────────
import numpy as np
def compute_gradients():
fields = np.load('output/scalar_fields.npy') # (4, G, G, G)
n_cats, G = fields.shape[0], fields.shape[1]
grads = np.zeros((n_cats, 3, G, G, G))
for ci in range(n_cats):
# np.gradient returns [dF/dx, dF/dy, dF/dz]
gx, gy, gz = np.gradient(fields[ci])
grads[ci, 0] = gx
grads[ci, 1] = gy
grads[ci, 2] = gz
np.save('output/gradient_fields.npy', grads)
print(f'Saved: output/gradient_fields.npy — shape {grads.shape}')
return grads
if __name__ == '__main__':
compute_gradients()
python scripts/compute_gradient.py
# Expected:
# Saved: output/gradient_fields.npy — shape (4, 3, 20, 20, 20)
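As a quick sanity check, the gradient should nearly vanish at each category's density peak (exactly zero only for interior maxima, since boundary cells use one-sided differences) — optional:

```python
# Optional check: |gradient| at each category's density peak should be ~0.
import numpy as np

fields = np.load('output/scalar_fields.npy')    # (4, G, G, G)
grads = np.load('output/gradient_fields.npy')   # (4, 3, G, G, G)
for ci in range(fields.shape[0]):
    ix, iy, iz = np.unravel_index(fields[ci].argmax(), fields[ci].shape)
    g = grads[ci, :, ix, iy, iz]
    print(f'category {ci}: |grad| at peak = {np.linalg.norm(g):.4f}')
```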
Exports all scalar field grids and gradient vectors into a single field.json file for Unity. All arrays are flattened to 1D lists for JSON serialization. Unity's FieldLoader.cs reads this file at startup and provides per-point lookup methods.
# scripts/export_field_json.py
# ─────────────────────────────────────────────────────────────
# Exports scalar fields + gradient vectors into field.json for Unity.
# ─────────────────────────────────────────────────────────────
import numpy as np
import json
CATEGORIES = ['emotions', 'professions', 'moral_concepts', 'nature']
COLORS = {
'emotions': [0.93, 0.31, 0.31],
'professions': [0.27, 0.63, 0.91],
'moral_concepts': [0.32, 0.78, 0.52],
'nature': [0.98, 0.76, 0.15],
}
def export():
fields = np.load('output/scalar_fields.npy') # (4, G, G, G)
grads = np.load('output/gradient_fields.npy') # (4, 3, G, G, G)
G = fields.shape[1]
out = {'grid_resolution': G, 'categories': []}
for ci, cat in enumerate(CATEGORIES):
out['categories'].append({
'name': cat,
'color': COLORS[cat],
'density': fields[ci].ravel().tolist(),
'grad_x': grads[ci, 0].ravel().tolist(),
'grad_y': grads[ci, 1].ravel().tolist(),
'grad_z': grads[ci, 2].ravel().tolist(),
})
with open('output/field.json', 'w') as f:
json.dump(out, f)
print(f'Saved: output/field.json ({G}^3 grid, {len(CATEGORIES)} categories)')
if __name__ == '__main__':
export()
python scripts/export_field_json.py
# Expected:
# Saved: output/field.json (20^3 grid, 4 categories)
# Then copy to Unity:
# Windows:
Copy-Item output\field.json 'C:\path\to\SemanticFields\Assets\Data\field.json'
# macOS/Linux:
cp output/field.json /path/to/SemanticFields/Assets/Data/field.json
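Before copying, it's worth confirming the JSON has the expected structure — an optional verification:

```python
# Optional: verify field.json before importing it into Unity.
import json

with open('output/field.json') as f:
    data = json.load(f)
G = data['grid_resolution']
assert len(data['categories']) == 4
for cat in data['categories']:
    assert len(cat['density']) == G ** 3, cat['name']
    assert len(cat['grad_x']) == len(cat['grad_y']) == len(cat['grad_z']) == G ** 3
print(f"field.json OK: {G}^3 grid, {len(data['categories'])} categories")
```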
Generates the 2D condition plot used in the study. Unlike the Project 1 scatter plot, this version adds colored contour rings (equivalent to the 3D fog clouds), black gradient flow arrows (equivalent to the 3D streamlines), and removes the colored background so mystery dot placement requires reading field structure rather than color regions.
# scripts/plot_2d.py
# ─────────────────────────────────────────────────────────────
# Generates 2D semantic flow field plot for the baseline condition.
# Shows: word dots, KDE contour rings, gradient flow arrows, mystery dots.
# Run AFTER build_scalar_field.py and compute_gradient.py
# ─────────────────────────────────────────────────────────────
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import os
from scipy.stats import gaussian_kde
from word_categories import CATEGORY_COLORS_HEX
MYSTERY_WORDS = ["guilt", "soldier", "power"]
def build_category_fields(X, cats, grid_size=220):
categories = list(CATEGORY_COLORS_HEX.keys())
x_min, y_min = X.min(axis=0)
x_max, y_max = X.max(axis=0)
xs = np.linspace(x_min, x_max, grid_size)
ys = np.linspace(y_min, y_max, grid_size)
xx, yy = np.meshgrid(xs, ys)
grid = np.vstack([xx.ravel(), yy.ravel()])
fields = {}
for cat in categories:
pts = X[np.array(cats) == cat]
if len(pts) < 2:
continue
kde = gaussian_kde(pts.T)
zz = kde(grid).reshape(grid_size, grid_size)
fields[cat] = zz
return xx, yy, fields
def compute_vector_field(xx, yy, fields):
categories = list(fields.keys())
stack = np.stack([fields[c] for c in categories], axis=-1)
stack = stack / (stack.max(axis=(0, 1)) + 1e-8)
scalar = np.max(stack, axis=-1)
dy, dx = np.gradient(scalar)
return dx, dy, scalar, stack
def plot_2d_with_flow(X, words, cats, title, filename,
show_labels=True, mystery_words=None):
if mystery_words is None:
mystery_words = []
fig, ax = plt.subplots(figsize=(14, 11), dpi=150)
ax.set_facecolor("#FFFFFF")
xx, yy, fields = build_category_fields(X, cats)
u, v, scalar, stack = compute_vector_field(xx, yy, fields)
categories = list(fields.keys())
# ── Flow lines ──────────────────────────────────────────────
ax.streamplot(xx, yy, u, v,
color="black", linewidth=0.7, density=1.2,
arrowsize=0.8, zorder=2, minlength=0.2)
# ── Contour rings (no filled background) ────────────────────
for i, cat in enumerate(categories):
ax.contour(xx, yy, stack[..., i],
levels=3, colors=[CATEGORY_COLORS_HEX[cat]],
alpha=0.75, linewidths=1.4, zorder=1)
# ── Word dots ────────────────────────────────────────────────
dot_labels = dict(zip(mystery_words, ["Dot A", "Dot B", "Dot C"]))
for i, (word, cat) in enumerate(zip(words, cats)):
if word in mystery_words:
ax.scatter(X[i,0], X[i,1], c="#BBBBBB", s=140,
edgecolors="#333333", linewidths=1.2, zorder=5)
if show_labels:
ax.annotate(dot_labels[word], (X[i,0], X[i,1]),
fontsize=10, fontweight="bold",
xytext=(6,6), textcoords="offset points")
else:
ax.scatter(X[i,0], X[i,1], c=CATEGORY_COLORS_HEX[cat],
s=85, edgecolors="white", linewidths=0.7, zorder=4)
if show_labels:
ax.annotate(word, (X[i,0], X[i,1]),
fontsize=8.5, xytext=(5,5),
textcoords="offset points", alpha=0.9)
# ── Legend ───────────────────────────────────────────────────
legend = [mpatches.Patch(color=v, label=k.replace("_"," ").title())
for k, v in CATEGORY_COLORS_HEX.items()]
legend.append(mpatches.Patch(color="#BBBBBB", label="Mystery Words (A, B, C)"))
ax.legend(handles=legend, loc="upper left", framealpha=0.95,
fontsize=10, title="Semantic Flow Field")
ax.set_title(title, fontsize=14, fontweight="bold", pad=14)
ax.set_xlabel("Embedding Dimension 1")
ax.set_ylabel("Embedding Dimension 2")
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.grid(True, linestyle="--", alpha=0.2)
plt.tight_layout()
os.makedirs("output", exist_ok=True)
plt.savefig(f"output/{filename}", bbox_inches="tight", dpi=150)
plt.close()
print(f"Saved: output/{filename}")
if __name__ == "__main__":
X = np.load("output/X_2d.npy")
words = list(np.load("output/words.npy"))
cats = list(np.load("output/cats.npy"))
plot_2d_with_flow(X, words, cats,
title="Semantic Field Space with Vector Flow (Streamlines)",
filename="plot_2d_flow_field.png",
show_labels=True, mystery_words=MYSTERY_WORDS)
plot_2d_with_flow(X, words, cats,
title="Semantic Field Space with Vector Flow (No Labels)",
filename="plot_2d_flow_field_no_labels.png",
show_labels=False, mystery_words=MYSTERY_WORDS)
print("Done → output/plot_2d_flow_field.png")
python scripts/plot_2d.py
# Saves:
# output/plot_2d_flow_field.png (labeled — for reference)
# output/plot_2d_flow_field_no_labels.png (unlabeled — for study use)
Create Assets/Scripts/FieldLoader.cs. Reads field.json at startup and provides SampleDensity and SampleGradient methods used by all other field components.
// Assets/Scripts/FieldLoader.cs
// ─────────────────────────────────────────────────────────────
// Reads field.json and provides grid lookup for density + gradient.
// ─────────────────────────────────────────────────────────────
using UnityEngine;
using System.Collections.Generic;
[System.Serializable]
public class FieldCategory {
public string name;
public float[] color; // [r, g, b]
public float[] density; // G^3 flattened, ix*G*G + iy*G + iz
public float[] grad_x, grad_y, grad_z;
public Color UnityColor => new Color(color[0], color[1], color[2], 1f);
}
[System.Serializable]
public class FieldData {
public int grid_resolution;
public List<FieldCategory> categories;
}
public class FieldLoader : MonoBehaviour
{
[Header("Data")]
public TextAsset fieldJson;
public FieldData Data { get; private set; }
public int G { get; private set; }
void Awake()
{
if (fieldJson == null)
{ Debug.LogError("FieldLoader: no JSON asset assigned!"); return; }
Data = JsonUtility.FromJson<FieldData>(fieldJson.text);
G = Data.grid_resolution;
Debug.Log($"FieldLoader: {Data.categories.Count} categories, grid={G}^3");
}
public float SampleDensity(int catIdx, Vector3 pos)
{
int ix = Mathf.Clamp(Mathf.FloorToInt(pos.x * G), 0, G - 1);
int iy = Mathf.Clamp(Mathf.FloorToInt(pos.y * G), 0, G - 1);
int iz = Mathf.Clamp(Mathf.FloorToInt(pos.z * G), 0, G - 1);
return Data.categories[catIdx].density[ix * G * G + iy * G + iz];
}
public Vector3 SampleGradient(int catIdx, Vector3 pos)
{
int ix = Mathf.Clamp(Mathf.FloorToInt(pos.x * G), 0, G - 1);
int iy = Mathf.Clamp(Mathf.FloorToInt(pos.y * G), 0, G - 1);
int iz = Mathf.Clamp(Mathf.FloorToInt(pos.z * G), 0, G - 1);
int idx = ix * G * G + iy * G + iz;
var c = Data.categories[catIdx];
return new Vector3(c.grad_x[idx], c.grad_y[idx], c.grad_z[idx]);
}
public (int catIdx, float strength) DominantCategory(Vector3 pos)
{
int best = 0;
float bestVal = 0f;
for (int i = 0; i < Data.categories.Count; i++) {
float d = SampleDensity(i, pos);
if (d > bestVal) { bestVal = d; best = i; }
}
return (best, bestVal);
}
}
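SampleDensity assumes the flattened arrays are in C (row-major) order with index ix*G*G + iy*G + iz — exactly what NumPy's ravel() produces in export_field_json.py. A quick Python check confirms the convention holds:

```python
# Confirms Unity's index convention matches NumPy's default ravel order.
import numpy as np

fields = np.load('output/scalar_fields.npy')   # (4, G, G, G)
G = fields.shape[1]
flat = fields[0].ravel()                       # C order — what the exporter writes
ix, iy, iz = 3, 7, 11                          # arbitrary interior cell
assert flat[ix * G * G + iy * G + iz] == fields[0, ix, iy, iz]
print('index convention matches: ix*G*G + iy*G + iz')
```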
Create Assets/Scripts/FieldRenderer.cs. Spawns transparent colored spheres at all grid cells where density exceeds the threshold, creating the fog cloud appearance.
// Assets/Scripts/FieldRenderer.cs
// ─────────────────────────────────────────────────────────────
// Instantiates transparent colored voxels at dense grid points
// to create semantic fog cloud volumes.
// Y button on left controller cycles density threshold live.
// ─────────────────────────────────────────────────────────────
using UnityEngine;
using System.Collections.Generic;
public class FieldRenderer : MonoBehaviour
{
[Header("References")]
public FieldLoader fieldLoader;
[Header("Settings")]
public float cloudSizeMeters = 2.5f;
public float densityThreshold = 0.25f;
public float voxelAlphaScale = 0.15f;
[Header("Voxel")]
public GameObject voxelPrefab;
private List<GameObject> _voxels = new();
void Start() => RenderField();
public void RenderField()
{
foreach (var v in _voxels) Destroy(v);
_voxels.Clear();
if (fieldLoader == null || fieldLoader.Data == null) return;
int G = fieldLoader.G;
float step = cloudSizeMeters / G;
for (int ci = 0; ci < fieldLoader.Data.categories.Count; ci++)
{
var cat = fieldLoader.Data.categories[ci];
Color col = cat.UnityColor;
for (int ix = 0; ix < G; ix++)
for (int iy = 0; iy < G; iy++)
for (int iz = 0; iz < G; iz++)
{
float d = cat.density[ix * G * G + iy * G + iz];
if (d < densityThreshold) continue;
Vector3 normPos = new Vector3(ix, iy, iz) / G;
Vector3 basePos = (normPos - Vector3.one * 0.5f) * cloudSizeMeters;
// Jitter to break up grid pattern
Vector3 jitter = UnityEngine.Random.insideUnitSphere * step * 0.3f;
var go = Instantiate(voxelPrefab, transform);
// localPosition keeps voxels relative to parent (EmbeddingCloud)
go.transform.localPosition = basePos + jitter;
// Scale variation makes overlapping voxels merge into solid mass
float scale = UnityEngine.Random.Range(0.7f, 1.1f);
go.transform.localScale = Vector3.one * step * scale;
var rend = go.GetComponent<Renderer>();
if (rend != null) {
var mpb = new MaterialPropertyBlock();
Color c = col;
c.a = Mathf.Clamp01(d * voxelAlphaScale);
mpb.SetColor("_Color", c);
rend.SetPropertyBlock(mpb);
}
_voxels.Add(go);
}
}
Debug.Log($"FieldRenderer: {_voxels.Count} voxels rendered.");
}
void Update()
{
// Y button on left controller: cycle threshold for live tuning
if (OVRInput.GetDown(OVRInput.Button.Four, OVRInput.Controller.LTouch))
{
densityThreshold = densityThreshold > 0.25f ? 0.1f : densityThreshold + 0.05f;
RenderField();
Debug.Log($"Density threshold: {densityThreshold:F2}");
}
}
}
Create Assets/Scripts/FlowLineRenderer.cs. Seeds streamlines equally across all four categories and integrates paths along the gradient field using fixed-step Euler integration with direction smoothing to eliminate zigzag artifacts.
// Assets/Scripts/FlowLineRenderer.cs
// ─────────────────────────────────────────────────────────────
// Integrates smoothed streamlines through the gradient field.
// Equal seeds per category guarantee all four colors appear.
// Tapered LineRenderer + capsule arrowhead shows flow direction.
// ─────────────────────────────────────────────────────────────
using UnityEngine;
using System.Collections.Generic;
public class FlowLineRenderer : MonoBehaviour
{
[Header("References")]
public FieldLoader fieldLoader;
[Header("Streamline Settings")]
public int numStreamlines = 20;
public int stepsPerLine = 40;
public float stepSize = 0.006f;
public float cloudSizeMeters = 2.5f;
public float minDensityToStart = 0.03f;
[Header("Line Appearance")]
public Material streamlineMaterial;
public float lineWidth = 0.008f;
[Header("Arrow")]
public GameObject arrowPrefab; // Small capsule, Particles/Standard Unlit material
private List<LineRenderer> _lines = new();
void Start() => GenerateStreamlines();
public void GenerateStreamlines()
{
foreach (var lr in _lines) Destroy(lr.gameObject);
_lines.Clear();
if (fieldLoader?.Data == null) return;
var seeds = FindSeeds();
foreach (var seed in seeds)
{
var pts = IntegrateStreamline(seed.pos, seed.catIdx);
if (pts.Count < 4) continue;
var go = new GameObject($"Streamline_{seed.catIdx}");
go.transform.SetParent(transform);
var lr = go.AddComponent<LineRenderer>();
lr.material = streamlineMaterial;
// Thin end = seed, thick end = destination (width ramp shows direction)
lr.startWidth = lineWidth * 0.2f;
lr.endWidth = lineWidth;
lr.positionCount = pts.Count;
lr.useWorldSpace = true;
Color col = fieldLoader.Data.categories[seed.catIdx].UnityColor;
lr.startColor = new Color(col.r, col.g, col.b, 0.05f);
lr.endColor = new Color(col.r, col.g, col.b, 0.9f);
lr.SetPositions(pts.ToArray());
_lines.Add(lr);
PlaceArrow(pts, col, go.transform);
}
Debug.Log($"FlowLineRenderer: {_lines.Count} streamlines generated.");
}
// Equal seeds per category — guarantees all 4 colors always appear
List<(Vector3 pos, int catIdx)> FindSeeds()
{
var seeds = new List<(Vector3, int)>();
int perCat = numStreamlines / fieldLoader.Data.categories.Count;
for (int ci = 0; ci < fieldLoader.Data.categories.Count; ci++)
{
int placed = 0;
int attempts = 0;
while (placed < perCat && attempts < perCat * 30)
{
attempts++;
Vector3 rnd = new Vector3(
UnityEngine.Random.Range(0.15f, 0.85f),
UnityEngine.Random.Range(0.15f, 0.85f),
UnityEngine.Random.Range(0.15f, 0.85f));
if (fieldLoader.SampleDensity(ci, rnd) < minDensityToStart) continue;
if (fieldLoader.SampleGradient(ci, rnd).magnitude < 0.002f) continue;
seeds.Add((rnd, ci));
placed++;
}
}
return seeds;
}
List<Vector3> IntegrateStreamline(Vector3 startNorm, int catIdx)
{
var pts = new List<Vector3>();
Vector3 pos = startNorm;
Vector3 prevDir = Vector3.zero;
for (int i = 0; i < stepsPerLine; i++)
{
if (pos.x < 0.05f || pos.x > 0.95f ||
pos.y < 0.05f || pos.y > 0.95f ||
pos.z < 0.05f || pos.z > 0.95f) break;
if (fieldLoader.SampleDensity(catIdx, pos) < 0.03f) break;
// TransformPoint: converts local normalized pos to world space
// correctly accounts for SemanticField parent transform
Vector3 worldPos = transform.TransformPoint(
(pos - Vector3.one * 0.5f) * cloudSizeMeters);
pts.Add(worldPos);
Vector3 grad = fieldLoader.SampleGradient(catIdx, pos);
if (grad.magnitude < 0.0001f) break;
Vector3 dir = grad.normalized;
// Direction smoothing: 25% new, 75% previous — eliminates zigzag
if (prevDir != Vector3.zero)
dir = Vector3.Lerp(prevDir, dir, 0.25f).normalized;
prevDir = dir;
pos += dir * stepSize;
pos.x = Mathf.Clamp01(pos.x);
pos.y = Mathf.Clamp01(pos.y);
pos.z = Mathf.Clamp01(pos.z);
}
return pts;
}
void PlaceArrow(List<Vector3> pts, Color col, Transform parent)
{
if (arrowPrefab == null || pts.Count < 2) return;
int last = pts.Count - 1;
Vector3 dir = (pts[last] - pts[last - 1]).normalized;
if (dir == Vector3.zero) return;
var arrow = Instantiate(arrowPrefab, parent);
arrow.transform.position = pts[last];
// Capsule Y-axis must align with flow direction:
// LookRotation points Z toward dir, rotate 90° X to shift to Y
arrow.transform.rotation = Quaternion.LookRotation(dir, Vector3.up)
* Quaternion.Euler(90f, 0f, 0f);
arrow.transform.localScale = Vector3.one * 0.015f;
var rend = arrow.GetComponent<Renderer>();
if (rend != null) {
var mpb = new MaterialPropertyBlock();
mpb.SetColor("_Color", new Color(col.r, col.g, col.b, 0.85f));
rend.SetPropertyBlock(mpb);
}
}
}
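If streamlines misbehave on device, the integration loop is easy to prototype offline. Below is a minimal NumPy sketch of the same scheme — fixed-step Euler with 25%/75% direction smoothing and nearest-cell sampling; it simplifies the C# version (no world-space transform, no 5% boundary margin), and the `integrate` helper is illustrative only:

```python
# Offline prototype of FlowLineRenderer's integration scheme.
import numpy as np

fields = np.load('output/scalar_fields.npy')    # (4, G, G, G)
grads = np.load('output/gradient_fields.npy')   # (4, 3, G, G, G)
G = fields.shape[1]

def cell(p):
    # Nearest-lower grid cell, clamped — mirrors FieldLoader's lookup
    return tuple(np.clip((p * G).astype(int), 0, G - 1))

def integrate(pos, ci, steps=40, h=0.006):
    pts, prev = [], None
    for _ in range(steps):
        if fields[(ci, *cell(pos))] < 0.03:     # left the dense region
            break
        g = grads[(ci, slice(None), *cell(pos))]
        n = np.linalg.norm(g)
        if n < 1e-4:                            # flat spot — stop
            break
        d = g / n
        if prev is not None:                    # 25% new / 75% old kills zigzag
            d = 0.75 * prev + 0.25 * d
            d /= np.linalg.norm(d)
        prev = d
        pts.append(pos.copy())
        pos = np.clip(pos + h * d, 0.0, 1.0)
    return np.array(pts)

line = integrate(np.array([0.5, 0.5, 0.5]), ci=0)
print(f'{len(line)} points integrated')
```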
Create Assets/Scripts/FieldProbe.cs. Samples the scalar field at the point the right controller is aimed at and displays live category composition percentages on the HUD panel.
// Assets/Scripts/FieldProbe.cs
// ─────────────────────────────────────────────────────────────
// Samples field composition at the controller aim point every frame.
// Displays live category % on the HUD-locked InfoCanvas.
// ─────────────────────────────────────────────────────────────
using UnityEngine;
using TMPro;
public class FieldProbe : MonoBehaviour
{
[Header("References")]
public FieldLoader fieldLoader;
public TextMeshProUGUI infoPanel;
public LineRenderer laserLine;
[Header("Settings")]
public float probeDepth = 3.0f;
public float cloudSize = 2.5f;
private static readonly string[] CAT_NAMES =
{ "Emotions", "Professions", "Moral Concepts", "Nature" };
private static readonly string[] CAT_COLORS =
{ "#FF6B6B", "#60B8FF", "#6DDF8A", "#FFD166" };
private OVRCameraRig _rig;
void Start()
{
_rig = FindObjectOfType<OVRCameraRig>();
}
void Update()
{
bool usingController =
OVRInput.GetConnectedControllers() != OVRInput.Controller.None;
Vector3 origin, direction;
if (usingController && _rig != null) {
origin = _rig.rightHandAnchor.position;
direction = _rig.rightHandAnchor.forward;
} else if (usingController) {
origin = OVRInput.GetLocalControllerPosition(OVRInput.Controller.RTouch);
direction = OVRInput.GetLocalControllerRotation(OVRInput.Controller.RTouch)
* Vector3.forward;
} else {
origin = Camera.main.transform.position;
direction = Camera.main.transform.forward;
}
Vector3 sampleWorld = origin + direction * probeDepth;
// Convert world pos → normalized field coordinates [0,1]
// Accounts for SemanticField's world position
Vector3 fieldCenter = transform.position;
Vector3 normPos = (sampleWorld - fieldCenter) / cloudSize + Vector3.one * 0.5f;
normPos = new Vector3(
Mathf.Clamp01(normPos.x),
Mathf.Clamp01(normPos.y),
Mathf.Clamp01(normPos.z));
UpdateInfoPanel(normPos);
UpdateLaser(origin, direction, sampleWorld);
}
void UpdateInfoPanel(Vector3 normPos)
{
if (infoPanel == null || fieldLoader?.Data == null) return;
float total = 0f;
float[] densities = new float[fieldLoader.Data.categories.Count];
for (int i = 0; i < densities.Length; i++) {
densities[i] = fieldLoader.SampleDensity(i, normPos);
total += densities[i];
}
if (total < 0.01f) {
infoPanel.text = "<size=16>Point into\nthe field</size>";
return;
}
var sb = new System.Text.StringBuilder();
sb.AppendLine("<b><size=20>Semantic Composition</size></b>");
for (int i = 0; i < densities.Length; i++) {
float pct = densities[i] / total * 100f;
if (pct < 2f) continue;
int bars = Mathf.RoundToInt(pct / 5f);
string bar = new string('|', bars);
sb.AppendLine(
$"<color={CAT_COLORS[i]}><size=18>{CAT_NAMES[i],15}: {pct:0}% {bar}</size></color>");
}
infoPanel.text = sb.ToString().TrimEnd();
}
void UpdateLaser(Vector3 origin, Vector3 direction, Vector3 samplePoint)
{
if (laserLine == null) return;
laserLine.enabled = true;
laserLine.SetPosition(0, origin);
laserLine.SetPosition(1, samplePoint);
}
}
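The percentage math the probe displays is simple to reproduce offline, which helps verify what the panel should read at a known point — a small sketch (the `composition` helper is illustrative):

```python
# Offline version of FieldProbe's readout: relative category densities
# at a normalized field coordinate in [0,1]^3.
import numpy as np

CATS = ['emotions', 'professions', 'moral_concepts', 'nature']
fields = np.load('output/scalar_fields.npy')    # (4, G, G, G)
G = fields.shape[1]

def composition(p):
    i, j, k = np.clip((np.asarray(p) * G).astype(int), 0, G - 1)
    d = fields[:, i, j, k]
    total = d.sum()
    if total < 0.01:
        return 'outside the field'
    return {c: round(100 * v / total) for c, v in zip(CATS, d)}

print(composition([0.5, 0.5, 0.5]))   # e.g. {'emotions': 40, ...}
```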
FieldVoxel prefab (for FieldRenderer):
Hierarchy → right-click → 3D Object → Sphere → rename FieldVoxel
Add Standard material with Rendering Mode → Transparent, Albedo → white
Drag into Assets/Prefabs → save as FieldVoxel prefab
Delete from Hierarchy
Arrow prefab (for FlowLineRenderer):
Hierarchy → right-click → 3D Object → Capsule → rename FlowArrow
Scale to X=0.3, Y=1, Z=0.3 (elongated)
Add a Particles/Standard Unlit material → save as ArrowMat
Drag into Assets/Prefabs → save as FlowArrow prefab
Delete from Hierarchy
StreamlineMat material:
Assets/Materials → right-click → Create Material → StreamlineMat
Shader: Particles/Standard Unlit → Color: white
The complete scene hierarchy. SemanticField must be a child of EmbeddingCloud at local position (0, 0, 0) so field voxels and word spheres share the same coordinate origin.
OVRCameraRig
TrackingSpace
CenterEyeAnchor
InfoCanvas ← HUD-locked: child of CenterEyeAnchor
Image (background)
InfoText (TextMeshProUGUI)
EmbeddingCloud ← Position (0, 1, 2), CloudSize 2.5
(64 word spheres)
SemanticField ← LOCAL position (0, 0, 0) — critical
FieldLoader script ← drag field.json into Field Json slot
FieldRenderer script ← drag FieldVoxel prefab, set thresholds
FlowLineRenderer script ← drag StreamlineMat and FlowArrow prefab
FieldProbe script ← drag InfoText and LaserPointer
LaserPointer ← LineRenderer for controller ray
LegendAnchor ← CategoryLegend script
EventSystem
Directional Light
InfoCanvas setup for HUD lock:
Drag InfoCanvas to be a child of CenterEyeAnchor (inside OVRCameraRig → TrackingSpace)
Set Transform Position: X=-0.22, Y=0.12, Z=0.5
Set Transform Rotation: X=0, Y=0, Z=0
Set Transform Scale: X=0.001, Y=0.001, Z=0.001
Canvas Render Mode: World Space
FieldRenderer on SemanticField:
Field Loader: drag SemanticField (self — FieldLoader is on same object)
Cloud Size Meters: 2.5
Density Threshold: 0.25
Voxel Alpha Scale: 0.15
Voxel Prefab: Assets/Prefabs/FieldVoxel
FlowLineRenderer on SemanticField:
Field Loader: drag SemanticField
Num Streamlines: 20
Steps Per Line: 40
Step Size: 0.006
Cloud Size Meters: 2.5
Min Density To Start: 0.03
Streamline Material: Assets/Materials/StreamlineMat
Line Width: 0.008
Arrow Prefab: Assets/Prefabs/FlowArrow
FieldProbe on SemanticField:
Field Loader: drag SemanticField
Info Panel: drag InfoText from InfoCanvas
Laser Line: drag LaserPointer's LineRenderer component
Probe Depth: 3.0
Cloud Size: 2.5
Press Play. Console should show:
FieldLoader: 4 categories, grid=20^3
FieldRenderer: XXX voxels rendered (expect 500–2500)
FlowLineRenderer: 20 streamlines generated
In Game view: colored transparent fog clouds should fill the scene. Moving the mouse (editor gaze) should update the InfoCanvas with percentage bars. Streamlines with tapered ends and small capsule arrowheads should be visible inside the clouds.
If clouds appear offset from word spheres: confirm SemanticField is a child of EmbeddingCloud at local position (0, 0, 0). If voxel count causes lag: increase Density Threshold to 0.3 or reduce GRID_RES to 15 in build_scalar_field.py and rerun the pipeline.
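To choose a threshold without trial-and-error in the headset, count the voxels each candidate value would spawn — optional:

```python
# Optional tuning aid: voxel count at several density thresholds.
import numpy as np

fields = np.load('output/scalar_fields.npy')   # (4, G, G, G)
for thr in (0.15, 0.20, 0.25, 0.30, 0.35):
    print(f'threshold {thr:.2f}: {(fields > thr).sum()} voxels')
```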
# Verify Quest is connected:
adb devices
# In Unity: File → Build Settings → Build
# Save as SemanticField.apk
# Install (fresh):
adb install "...\SemanticField.apk"
# Reinstall over existing (faster):
adb install -r "...\SemanticField.apk"
Find the app under Unknown Sources in the Quest App Library.
Google Form: https://forms.gle/iKkNZDx9vS72wT1D7
2D Graph: https://docs.google.com/document/d/13fCe2tB3B_TiJphriHMx_r5szUiOQ_4wawR3DiX5EDU/edit?usp=sharing
Apk: APK
Responses: https://docs.google.com/spreadsheets/d/1WX1X6jW_S9Hi3ZCdeR_Yc4tIiTVe0SwCGb2c8pKNrJQ/edit?usp=sharing
VR reveals semantic continuity, not just clustering
→ Users consistently saw smoother transitions and overlap regions that are hidden in 2D.
Flow fields add directional meaning, but are not self-explanatory
→ Many users interpreted “flow” inconsistently without guidance.
Different visualizations support different kinds of understanding
→ 2D = clarity & separability, VR = spatial intuition & structure.
Flow Lines
Helpful for understanding direction toward categories, but:
- Meaning not always intuitive
- Some confusion about what gradients represent
- Sparse or inconsistent visibility in VR

Did VR Improve Understanding?
Yes for spatial intuition and relationships; mixed for clarity of categories.
Helped users:
- See overlap regions better
- Understand local neighborhood structure
But:
- Harder to interpret globally than 2D

Confusing / Hard to Interpret
- "What exactly do flow lines represent?" (common issue)
- VR sparsity made structure harder to read at times
- Percentage / composition UI sometimes unclear or hard to interact with
- "Not sure what flow means semantically" repeated across users

Other Comments (Common Themes)
- VR felt more immersive but more complex
- 2D felt simpler and easier to interpret quickly
- Flow + probe interaction seen as interesting but not fully self-explanatory
- Overall: "useful but needs clearer explanation of semantics"