Human perception has limits in detecting subtle visual changes
Our visual system is optimized for detecting large, meaningful differences in everyday environments, but struggles when differences become very small.
Lack of clear reference points
When two stimuli are very similar, there are fewer distinguishing features, making it difficult to reliably compare them without external cues.
Visual noise and ambiguity
Lighting, shadows, and surrounding context can interfere with perception, especially for attributes like shade and size.
Limits of perceptual sensitivity (JND)
The smallest detectable difference between two stimuli is known as the Just Noticeable Difference (JND). Below this threshold, differences become indistinguishable.
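A standard way to formalize this threshold is Weber's Law (see the Weber and Fechner references below): to a first approximation, the JND is a constant fraction of the baseline stimulus magnitude. A minimal statement in LaTeX:

```latex
% Weber's Law: the just noticeable difference \Delta I scales with baseline intensity I
\[
  \frac{\Delta I}{I} = k
  \quad\Longleftrightarrow\quad
  \Delta I = k\,I
\]
% k is the attribute-specific Weber fraction; e.g., with an illustrative k = 0.05,
% a 100 g weight must change by about 5 g before the difference is detectable.
```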
Dependence on relative comparison
Humans rely heavily on side-by-side comparison. Without stable positioning, perception becomes less accurate.
Immersion increases engagement but not always accuracy
VR environments provide a strong sense of presence, which can improve user engagement but does not necessarily improve fine perceptual discrimination.
3D spatial context
Depth cues and stereoscopic vision can help with:
Spatial understanding
Quantity/density perception
However, they may not help with:
Subtle size differences
Fine shade distinctions
Dynamic viewpoint and interaction
Users can:
Move around objects
Change viewing angles
This can improve understanding in some cases, but also introduces variability.
Environmental variability
Lighting, shadows, and rendering quality in VR can introduce visual noise, making it harder to detect very small differences.
Fechner, G. T. (1860). Elements of Psychophysics
Introduced foundational ideas behind measuring perceptual thresholds and JND.
Weber, E. H. (1834). De Pulsu, Resorptione, Auditu et Tactu
Established Weber’s Law, showing that detectable differences scale with stimulus magnitude.
Macmillan, N. A., & Creelman, C. D. (2004). Detection Theory: A User's Guide
Explains methods for measuring perceptual thresholds and decision-making under uncertainty.
Kingdom, F. A. A., & Prins, N. (2016). Psychophysics: A Practical Introduction
Covers experimental methods for studying perception, including JND tasks.
Stevens, S. S. (1957). On the Psychophysical Law
Expanded on Weber and Fechner by showing that perception scales nonlinearly with stimulus intensity (written out as a formula after this list). Important for understanding why small differences become harder to detect.
Miller, G. A. (1956). The Magical Number Seven, Plus or Minus Two
Explores limits of human perception and cognition, relevant for understanding how users process multiple stimuli in comparison tasks.
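As noted in the Stevens entry above, perceived magnitude often grows nonlinearly with physical intensity. His power law can be written compactly, with k a scaling constant and n an exponent that depends on the attribute being judged:

```latex
% Stevens' power law: perceived magnitude \psi as a function of stimulus intensity \phi
\[
  \psi(\phi) = k\,\phi^{\,n}
\]
% For compressive attributes (n < 1, e.g., brightness), equal physical increments
% produce ever-smaller perceived changes as intensity grows, which is one reason
% small differences become harder to detect at higher magnitudes.
```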
One of the earliest studies of JND involved participants comparing two weights and identifying which was heavier. Weber found that the smallest detectable difference depends on the initial weight, leading to what is now known as Weber’s Law. This experiment established that perception is based on relative, not absolute, differences.
Gustav Fechner formalized methods for measuring perception by systematically varying stimuli and recording detection thresholds. His work introduced controlled experimental designs that are still used in modern JND studies.
In controlled lab settings, participants are shown two colors or shades and asked to identify differences. These experiments revealed that lighting conditions, contrast, and background significantly influence perception, which is directly relevant to this project's VR vs 2D comparison.
Participants are shown two groups of dots and must determine which contains more. These studies show that humans are good at estimating quantity at a glance, but struggle when differences are small—especially without spatial cues. This directly inspired the quantity portion of this project.
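As an illustration of how such stimuli can be constructed, the sketch below generates two random dot clouds whose counts differ by a controlled ratio. This is a hypothetical example, not the procedure from the studies above; the function name and parameters are invented for clarity.

```python
import random

def make_dot_pair(base_count: int, ratio: float, field: float = 1.0):
    """Generate two clouds of random dot positions whose counts differ
    by a controlled fraction (e.g., ratio=0.15 -> 15% more dots on one side)."""
    counts = [base_count, round(base_count * (1 + ratio))]
    random.shuffle(counts)  # randomize which side has more dots
    clouds = [
        [(random.uniform(0, field), random.uniform(0, field)) for _ in range(n)]
        for n in counts
    ]
    return counts, clouds

counts, clouds = make_dot_pair(base_count=40, ratio=0.15)
print(counts)  # e.g., [46, 40]; the participant judges which side has more
```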
Modern visualization research uses JND to evaluate how well users can detect differences in charts (e.g., bar height, color gradients). These studies highlight that design choices strongly impact perceptual accuracy, reinforcing the importance of controlled visual environments.
To investigate how virtual reality affects the perception of Just Noticeable Differences (JND), I designed an in-class activity comparing user performance across 2D and VR visualizations.
Participants were asked to complete a series of pairwise comparison tasks, where they identified which of two options differed along a specific visual attribute. The tasks focused on three categories:
Size (e.g., circles with slightly different radii)
Shade (grayscale differences)
Quantity (number of dots in a region)
The activity was conducted across two formats:
2D condition: Participants completed a structured set of image-based comparisons through a Google Form
VR condition: Participants interacted with a Unity-based environment on a Meta Quest 3 headset, where they viewed similar comparisons in a 3D space
In both conditions, the differences between stimuli were gradually reduced across trials, approaching the perceptual threshold where differences become difficult to detect (the JND).
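One simple way to implement such a schedule is a geometric shrink of the difference across trials. The sketch below is illustrative only; the actual step sizes and trial counts used in the activity may have differed, and the names are hypothetical.

```python
def difference_schedule(start: float, factor: float, n_trials: int):
    """Yield a stimulus difference for each trial, shrinking geometrically
    so that later trials approach the perceptual threshold (JND)."""
    diff = start
    for _ in range(n_trials):
        yield diff
        diff *= factor

# e.g., grayscale differences starting at 20% and shrinking by 30% per trial
for trial, delta in enumerate(difference_schedule(0.20, 0.7, 6), start=1):
    print(f"trial {trial}: shade difference = {delta:.3f}")
```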
Participants recorded their answers in both formats, allowing accuracy to be compared across the two conditions.
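Scoring those answers reduces to tallying correct responses per condition and task. The records below are placeholders to show the shape of the computation, not actual data from the activity.

```python
from collections import defaultdict

# Hypothetical response records: (condition, task, answered correctly?)
responses = [
    ("2D", "size", True), ("2D", "shade", True), ("2D", "quantity", False),
    ("VR", "size", False), ("VR", "shade", True), ("VR", "quantity", True),
]

totals = defaultdict(lambda: [0, 0])  # (condition, task) -> [correct, total]
for condition, task, correct in responses:
    totals[(condition, task)][0] += int(correct)
    totals[(condition, task)][1] += 1

for (condition, task), (hits, n) in sorted(totals.items()):
    print(f"{condition} / {task}: {hits}/{n} = {hits / n:.0%}")
```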
The goal of this activity was to evaluate whether VR improves users’ ability to detect subtle visual differences compared to traditional 2D visualizations.
Please download or have access to the following:
Steps to use the VR simulation:
1. Make sure developer mode is turned on: in the Meta Horizon app on your phone, go to Devices -> your device -> Headset Settings -> Developer Mode -> turn on
2. While the headset is connected to your computer via USB-C, it will ask to allow USB debugging; check "Always allow," then tap OK
3. Download the Meta Quest Developer Hub on your computer and log in to your Meta account that is linked to your Meta Quest 3
4. Drag and drop the APK file onto the Meta Quest Developer Hub app, under "Connected Device: Meta Quest 3"
5. ScaleVR should now be accessible on the headset under Applications
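If the drag-and-drop step fails, the same install can be done from the command line with adb. This is a minimal sketch assuming adb (from the Android platform-tools) is on your PATH and USB debugging was allowed in step 2; the APK file name is a placeholder.

```python
import subprocess

APK_PATH = "ScaleVR.apk"  # placeholder: use the actual apk file name/path

# `adb install -r` installs (or reinstalls) the app on the connected headset
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```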
The post-activity responses suggest a clear tradeoff between 2D precision and VR immersion/spatial understanding. Across the size, shade, and quantity tasks, participants generally performed more accurately in the 2D condition, especially when the visual differences became subtle. This is consistent with the poster results, where the accuracy charts show that 2D performance stayed more stable for fine-grained differences, while VR accuracy tended to decline more as the trials approached the JND threshold.
For size comparisons, 2D appeared to be the strongest format. Participants noted that the circles were compact, stable, and directly side-by-side, making it easier to compare relative size. Because the images were presented on a plain white background with no movement or perspective changes, users could rely on direct visual comparison rather than needing to reposition themselves. Several responses specifically mentioned that 2D was easier because the objects “held still,” had fewer distractions, and allowed participants to use peripheral vision to compare surrounding shapes.
For shade comparisons, 2D also performed better overall. Participants repeatedly mentioned that the lack of shadows and lighting variation made grayscale differences easier to detect. In VR, lighting and shading introduced ambiguity: shadows could make one object appear darker even if the intended shade difference was small. Some participants also noted that headset blur or fuzzy object edges may have affected shade perception. This suggests that VR can introduce visual noise that interferes with tasks requiring high perceptual precision.
For quantity/density comparisons, VR showed more potential. Several participants said that being able to move around, view objects from different angles, and physically approach the dot groups made quantity judgments feel easier or more intuitive. The larger field of view and room-like separation also helped some users perceive density more spatially. This suggests that VR may be more useful when the task benefits from depth, scale, or embodied exploration rather than flat, precise comparison.
Qualitative feedback supports the main conclusion: VR increased engagement and spatial intuition, but 2D was more reliable for fine visual discrimination. Participants described 2D as clearer, more controlled, and easier for direct comparison, while VR was more immersive and useful for viewing density or quantity. However, VR also introduced extra sources of variability, including navigation time, viewing angle differences, shadows, lighting, and headset resolution. As a result, VR did not consistently improve JND detection and may have made subtle size and shade differences harder to judge.
Overall, the results indicate that VR is not automatically better for perceptual tasks. Instead, its usefulness depends on the type of visual difference being tested. For small, precise differences like size and shade, 2D visualizations may be more accurate because they reduce visual noise and standardize viewing conditions. For spatial or density-based tasks, VR may provide advantages by allowing users to move, inspect, and view stimuli from multiple perspectives. This highlights the central finding of the project: immersion can improve engagement, but precision often depends on simplicity and control.