Single Viewer vs. Multi-Viewer Glasses-Free 3D Displays: Why It’s Not a War
Or, Why We Should Aim for Both Solutions, at the Same Time.

Glasses-free 3D displays have come a long way since the Nintendo 3DS era, which remains, to date, the most successful mass-market glasses-free 3D device. Today, we have two main approaches: single-viewer displays (like those from Acer, Lenovo, or Sony, among others) and multi-viewer displays (such as Looking Glass Factory or SeeCubic, among others). Both provide an autostereoscopic experience without glasses or a headset, but they have some key differences.
The Optical Core
Both single-viewer and multi-viewer glasses-free 3D displays rely on the same fundamental optical principle: redirecting light from the pixels to specific viewing angles. There are many ways to do this, but to simplify, let’s assume the common use of a lenticular array located in an optical stack that sits in front of the screen. This optical stack (sometimes called a filter) bends the light[1] so that each eye receives a slightly different image, creating the illusion of depth.
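To make the light-bending concrete, here is a toy Python sketch of Snell’s law (the same refraction that makes a pencil appear bent in water). The refractive index value is an illustrative assumption for acrylic, a common lens material, and not any specific product’s parameter:

```python
import math

def refraction_angle(theta_in_deg, n1=1.0, n2=1.49):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).

    n1=1.0 is air; n2=1.49 is a typical value for acrylic
    (assumed, illustrative). Returns the refracted angle in
    degrees, or None on total internal reflection.
    """
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection (only possible when n1 > n2)
    return math.degrees(math.asin(s))

# Light entering the denser lens bends toward the normal:
print(refraction_angle(30.0))  # roughly 19.6 degrees
```

This covers only a single refraction at a flat interface; a real lenticular array uses curved lenslets whose geometry maps each underlying subpixel column to a distinct viewing direction.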
From an optical designer’s perspective, the key question isn’t whether to steer light, but where to steer it. The "sweet spot" (the zone where the 3D effect works) is simply a matter of engineering priorities:
Single-viewer displays focus on precision, using eye tracking to dynamically shift the sweet spot and ensure a crisp 3D experience for one person. The spot is narrow, and only a left and a right view are necessary. This is good for high-resolution views, but it requires fast, accurate tracking and a lenticular design made to handle small deviations[2].
Multi-viewer displays sacrifice some per-viewer resolution to widen the sweet spot into a "sweet area," allowing groups to see 3D simultaneously, albeit often with softer imagery. More views means less resolution per view[3], but there is no need for eye tracking, and a shared experience becomes possible.
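The resolution tradeoff can be sketched with back-of-the-envelope arithmetic. The panel size and view counts below are illustrative assumptions (a 4K panel, two views for an eye-tracked display, a few dozen for a multi-viewer one), and the even horizontal split is a simplification of how real lenticular mappings distribute pixels:

```python
def per_view_resolution(panel_w, panel_h, num_views):
    """Simplified model: the lenticular array divides the panel's
    horizontal pixels evenly among the views (illustrative only)."""
    return panel_w // num_views, panel_h

# Illustrative 4K panel (3840 x 2160):
two_views = per_view_resolution(3840, 2160, 2)    # eye-tracked: 1920 x 2160 per eye
many_views = per_view_resolution(3840, 2160, 45)  # multi-viewer: 85 x 2160 per view
print(two_views, many_views)
```

In practice, designs often slant the lenticular array relative to the pixel grid to spread the resolution loss across both axes, so a straight horizontal division like this overstates the penalty in one direction.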
This may come as a surprise, but the underlying hardware is often nearly identical; the magic lies in how the optics (and also the software) are tuned. I want to emphasize that neither approach is inherently "better". They serve different needs, much like a sedan vs. an SUV: both can transport people, but one is optimized for efficiency while the other prioritizes space. Similarly, you could work on an 85-inch display, but you likely prefer a 32-inch one for daily tasks, saving the big screen for meeting rooms or signage.
This shared foundation is why I argue that displays should strive to toggle between modes—the optics are already capable. The challenge isn’t reinventing the wheel, but refining how we steer it.
The Hot Take: Why Not Both?
Here’s where I diverge from the usual discourse: both types of displays should support the other mode.
Yes, I know: this is hard, very hard. A single-viewer display would need a wider sweet spot for multiple viewers, requiring software adjustments[4]. And viewing distances change drastically: imagine trying to use a 15.6-inch laptop from 2 meters (6 feet) away. A multi-viewer display might need eye tracking to work optimally for a single user, adding complexity (and likely a camera).
But I still believe this versatility is necessary. I believe that a great breakthrough in glasses-free 3D is the ability to switch seamlessly between 2D and 3D, making the display behave like a "normal" screen when needed. Think about high-refresh-rate monitors: just because they can run at 240Hz doesn’t mean they always do. The flexibility is what matters.
Let’s revisit the car analogy: Some hypercars aren’t even street-legal. They’re low-volume, ultra-expensive machines built to push boundaries, free to ignore practicality because they exist for the extremes. But when car manufacturers want to go mainstream, even their highest-performance cars must comply with regulations and user expectations[5].
The goal isn’t to force users to adapt to a new way of working—it’s to assure them that their new display can do everything the old one did, and then some. History has shown that asking people to fundamentally change their habits is a recipe for rejection. The best innovations don’t disrupt; they expand possibilities without sacrificing familiarity.
And let's be honest: telling a solo user to hunt for a 'sweet spot', or preventing a team from viewing a screen together, violates the most fundamental expectations we have for displays. We've spent decades interacting with screens that simply work wherever you sit, however many people gather around. Any glasses-free 3D solution that breaks these core assumptions feels immediately alien, like a car that only drives in reverse.
Conclusion: It’s About Use Case, Not Superiority
I’ve often seen single-viewer and multi-viewer displays framed as irreconcilable opposites. I don’t see why they should be. They simply serve different purposes. That said, a display that can adapt to both scenarios would be far more compelling. The future of glasses-free 3D isn’t in choosing one over the other, but in making the technology flexible enough to handle both, even if it is designed to excel at one use case.
[1] Using Snell’s law, like a pencil appearing to bend in water; you can read an explanation here.
[2] If the optical design doesn’t take into account that the eye tracker will have some inaccuracy or delay, the perceived 3D effect will be substantially worse than what the display is capable of.
[3] Assuming the same pixel resolution for the underlying panel.
[4] Or replacing the optical stack, but that prevents good optical bonding with the panel and results in a worse image. In any case, a software adjustment also seems likely.
[5] Even if it is in the form of an optional, very expensive add-on.


