
RidgeMinecraft

This is a lot closer than most people think. Meta was working on varifocal HMDs and showed them off about 5 years ago, saying that we'd see them implemented in a consumer application in 5-ish years. That time should be coming fairly soon. I tried some liquid-crystal-based multifocal lenses while I was at AWE this year; they seem pretty production-ready at this point.


DatGameh

Interesting! Though, about this varifocal tech: based on how you describe it, does the lens simulate the depth for you, or does it actually project objects at variable depths so that your eyes do the focusing themselves?


Lujho

The lens reacts to where you’re looking and adjusts the accommodation distance accordingly. It’s not a true light field.


DatGameh

I see, that's a bummer. Still cool nonetheless! I imagine something like that is useful for someone with far- or nearsightedness, but it seems like it could be a bit jarring in real time, when the lens and your eyes both try to do the focusing at the same time.


mike11F7S54KJ3

Multiple papers on event-based eye tracking show ~10,000 fps tracking. [https://ieeexplore.ieee.org/document/9389490](https://ieeexplore.ieee.org/document/9389490) It should technically feel natural, especially with liquid-crystal-based focusing.


RidgeMinecraft

Essentially, using eye tracking, the lens adjusts the focal depth of the entire image. With good enough eye tracking (which we have by now), this should feel roughly the same as CREAL's tech.


yewnyx

The problem is that the intense brightness dropoff in digital varifocal forces the screen to run much, much brighter, which causes heat issues. The mechanical version doesn't have that issue, but it has delicate moving parts. Since that time the focus has shifted completely over to standalone VR. The Quest Pro being delayed for a long time due to troubled thermal design sort of completes the picture. Digital varifocal being power-hungry and heat-generating pretty much clinches its nonviability on any plausible consumer headset without further research and innovation.
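Back-of-envelope version of that brightness problem, with completely made-up transmission numbers just to show how the losses multiply:

```python
# Purely illustrative: why stacked switchable optics push panel brightness up.
# Every number below is an assumed example value, not a measured spec.
pancake_efficiency = 0.10   # assumed fraction of light surviving the pancake optics
per_lc_layer = 0.90         # assumed transmission of one switchable LC layer
layers = 6                  # assumed number of layers in the stack

light_reaching_eye = pancake_efficiency * per_lc_layer ** layers
target_nits_at_eye = 100
panel_nits_needed = target_nits_at_eye / light_reaching_eye

print(f"{light_reaching_eye:.3f} of the panel light reaches the eye")
print(f"so the panel must run at ~{panel_nits_needed:.0f} nits for {target_nits_at_eye} nits perceived")
```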


RidgeMinecraft

Meta's choice to switch to standalone VR was likely made long before that announcement. Additionally, multiple strings referencing digital varifocal have recently been added to Meta's operating system for the Quest.


LeeeonY

Step 1: Create a mechanical structure that can adjust its focus and is small enough to fit in a headset. Another way of looking at it: you want a pair of glasses that can change their prescription while worn on the face, without looking ridiculously bulky. Step 2: Eye tracking. We need to know where the player is looking. The tech is already there, but as an active user of it, I feel its accuracy may not be good enough for the job. Step 3: Once we know which pixel the user is looking at, we get the depth (distance) info in the virtual space from the underlying 3D program (either via raycast or by saving that info before rendering), then adjust the mechanical parts from step 1 to set the focal distance to that depth; roughly the loop sketched below. I'm generally quite optimistic when it comes to VR tech, but TBH this one doesn't sound like something we'll see in the next decade. Rapidly changing focal distance in a headset is definitely not an easy task, and the profit that solving it might generate isn't enough to justify the cost: the vergence-accommodation conflict has been known about for as long as VR has existed, but how often do you see VR users reporting it as a dealbreaker that stops them from using VR?
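For what it's worth, here's a minimal sketch of that per-frame loop in Python. Every function here (get_gaze_ray, raycast_depth, set_lens_focal_distance) is a hypothetical stand-in for whatever the eye tracker, engine, and lens driver would actually expose, not a real API:

```python
# Rough per-frame loop for the three steps above. Every function below is a
# hypothetical stand-in, not a real headset or engine API.

def get_gaze_ray():
    """Stand-in for the eye tracker: returns (origin, direction) in world space."""
    return (0.0, 1.6, 0.0), (0.0, 0.0, -1.0)

def raycast_depth(origin, direction):
    """Stand-in for the engine raycast / z-buffer lookup; distance in meters."""
    return 2.5  # pretend the user is looking at something 2.5 m away

def set_lens_focal_distance(distance_m):
    """Stand-in for the varifocal lens driver (mechanical or LC stack)."""
    print(f"focal distance -> {distance_m:.2f} m")

def varifocal_frame(prev_distance, smoothing=0.3, min_d=0.25, max_d=20.0):
    origin, direction = get_gaze_ray()        # step 2: where is the user looking?
    depth = raycast_depth(origin, direction)  # step 3: how far away is that point?
    depth = max(min_d, min(max_d, depth))     # clamp to what the optics can actually do
    # low-pass filter so a noisy gaze estimate doesn't make the lens hunt back and forth
    target = prev_distance + smoothing * (depth - prev_distance)
    set_lens_focal_distance(target)           # step 1: drive the adjustable optics
    return target

distance = 1.0
for _ in range(3):  # three simulated frames
    distance = varifocal_frame(distance)
```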


Lujho

It doesn’t need to be mechanical, Meta have had solid state varifocal display prototypes for years now. There was just a rumour that the next pro model might actually ship with them.


DatGameh

Woah, is that true? What kind of tech does it use? I'm curious to read about this!


Lujho

It uses waveguides, I believe. This video has a section on their varifocal prototypes starting at 9:30. It goes through their mechanical solutions first before moving on to their solid-state ones. https://youtu.be/x6AOwDttBsc?si=Ltt_03VT9eeZB84I


DatGameh

Ooo, interesting! I'm surprised at how long and in-depth it gets. Will need to watch this when I have the chance. Thanks!


wescotte

You might find [this video](https://www.youtube.com/watch?v=LQwMAl9bGNY) interesting. It's a talk by Meta's Director of Display Systems Research and goes pretty deep on the various methods they've tried to solve a specific problem (vergence-accommodation conflict) which is very related to varifocal headsets. If you're already familiar with the vergence-accommodation conflict, you might want to [jump ahead to 27 minutes](https://youtu.be/LQwMAl9bGNY?t=1614), as that is where they start showing the actual prototypes and lab tests.


DatGameh

Light field displays don't require eye tracking or mechanical parts to simulate this. CREAL's light field works by projecting light at differing depths, so your eyes do all the focusing work for you. They already have working demos, so surely alternative techs for variable focal depth shouldn't be this complex or expensive?


Chriscic

Light fields use a massive amount of data. I believe the practicality of that approach is orders of magnitude lower than somehow changing the focal length of a lens.


mike11F7S54KJ3

They produce roughly the same picture approximately 16 times from different angles, so it's like overlaid 640x480 images. The centre may be sharp and overly bright, while the edges may be dark and pixelated...
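Rough numbers, purely illustrative (the panel resolution here is an assumption, not any real headset's spec):

```python
# Back-of-envelope arithmetic for the light field trade-off described above.
panel_w, panel_h = 2560, 2560   # assumed per-eye panel resolution
views_x, views_y = 4, 4         # ~16 sub-views

per_view_w = panel_w // views_x     # 640
per_view_h = panel_h // views_y     # 640
num_views = views_x * views_y       # 16

print(f"each of the {num_views} views gets only {per_view_w}x{per_view_h} pixels")
print(f"and the scene has to be rendered {num_views} times per eye, every frame")
```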


RookiePrime

No way to know for sure, but I'm guessing we'll be waiting until light fields are usable in VR, and that could be a long time. We've seen a few different R&D solutions shown off, but they haven't resulted in things that people could buy and use, so they're probably prohibitively expensive and/or fragile. Or they don't really work, and they were shown off more to court investor funds to continue researching them until they can work. If you're waiting to get into VR just because of the lack of dynamic focal depth, I think you should give it a try as it is now and see how you feel using it. I only really notice the lack of ability to refocus my eyes when looking at things really, really close to my eyes. It doesn't cause nausea for me, and if there is eyestrain from it, it's very minimal.


Different_Ad9336

I don't see why, with eye tracking and AI tech, we can't convincingly emulate the look and feel of standard human focal depth purely on the software side in the next couple of years.


RookiePrime

It's a physical issue; software wouldn't be enough. No matter what you display on the screen, the light coming off the screen has to pass through the lenses, and the lenses are what set your eyes to the 2m-ish focal distance. That's why Facebook's prototype solution was an optical stack of multiple liquid crystal lenses of different focal lengths. Eye tracking would see where you're looking, infer depth data from the program you're using, and charge whatever combination of the lenses adds up to the correct focal distance.
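To make that last step concrete, here's a rough sketch of the selection problem, assuming (purely for illustration) a stack of four switchable elements with fixed powers in diopters, and ignoring the base lens and sign conventions; thin-lens powers roughly add when elements are stacked:

```python
from itertools import combinations

# Illustrative only: four switchable elements with made-up powers (in diopters).
# Real stacks, their base lens, and their switching behaviour will differ.
ELEMENT_POWERS = [0.25, 0.5, 1.0, 2.0]

def best_combination(target_distance_m):
    target_power = 1.0 / target_distance_m  # diopters = 1 / distance in meters
    best_error, best_combo = float("inf"), ()
    # brute-force every on/off combination of the stack
    for n in range(len(ELEMENT_POWERS) + 1):
        for combo in combinations(ELEMENT_POWERS, n):
            error = abs(sum(combo) - target_power)
            if error < best_error:
                best_error, best_combo = error, combo
    return best_combo, best_error

combo, error = best_combination(0.5)  # looking at something 0.5 m away -> wants 2.0 D
print(combo, f"residual error = {error:.2f} D")
```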


Different_Ad9336

Go ahead and look up how large an area your direct and then peripheral vision can accurately and sharply focus on. Now apply that to areas of the viewing plane as your eyes move around. Displacement based on focus range could very easily be implemented from the vantage point of the person wearing the headset purely in software. I could actually link you to articles related to the subjects I'm merely hinting at, but I'm entirely sure you wouldn't be able to properly grasp or interpret any of it.


james_pic

Link the articles. I dare ya.


Different_Ad9336

I dare you to find them yourself.


james_pic

What you've given us is so vague it's hard to know what to even look for, and my gut feeling is that you've misunderstood articles that don't support your point at all, so I'm good thanks.


Different_Ad9336

You’re good but you’re also aloof.


james_pic

Look, if what you're claiming is true, I'd be excited to read about it. But I'm not going off on some wild goose chase without so much as a clue where to start.


RookiePrime

I'm curious to hear about this, actually. This is the first I've heard of a way to accomplish dynamic focal depth with existing lens tech -- everything I've seen is to the contrary, that we would either need an optical stack without lenses or one in which the lens system itself accommodates by switching focal depth. It would be a big deal if companies could solve this problem without having to re-engineer VR hardware and manufacturing.


Different_Ad9336

The only real-world references I have to prove that you can trick the brain into seeing these illusions are the 3D effects in IMAX theatres and, even more impressive, the Star Wars rides and such at Disney theme parks.


RookiePrime

I'm not totally sure I know which effects you're referring to. The theme park one would be Pepper's Ghost, I'm guessing? And for IMAX, you're referring to 3D movies where you wear 3D glasses? If the theme park one is Pepper's Ghost, I think that's sorta what waveguide and birdbath optics are doing in the AR space. Field of view is too narrow on those, though, once you make them small enough to wear comfortably. Maybe that'll change in the future, though. As for 3D glasses (if that's what we're talking about), those cause a lot of eyestrain, so that might be a non-starter as a replacement for existing optics. Beyond that, I dunno. Maybe that would be a viable alternative. I guess I figure that's such an obvious one that if it was viable, Oculus (before Facebook acquired them) would've tried that first.


james_pic

I don't believe the 3D glasses solve focal distance, in any case. You're still focusing on the screen, and you'll still get vergence-accommodation conflict. Stacked waveguides would in theory partially achieve this effect, but the cost and light loss are steep enough even with one waveguide.


DatGameh

I suppose the thing I want to avoid is buying VR goggles frequently, given how quickly the world of VR is developing. I like the idea of having a "mature" product that will stay relevant for the next 3-5 years before replacing it, but it feels like VR isn't in that stage quite yet. Dynamic focal depth is the milestone I'd like to wait for before getting one.


RookiePrime

I hear that. I've had my Valve Index since 2019, and it certainly feels old and dusty now compared to the new crop of headsets. VR has been evolving rapidly for years, and it's hard to say when it'll slow down enough that a headset won't feel outdated within a year or two of release. Well, I hope you won't have to wait too long. It'd be real cool to see a revolutionary tech upgrade in headsets like dynamic focal depth; the ones that come out today aren't fundamentally different from the ones that came out in 2016.


Blaexe

For the first time there are actual Varifocal references in the Quest firmware. [https://www.uploadvr.com/firmware-finding-quest-pro-2-varifocal-lenses/](https://www.uploadvr.com/firmware-finding-quest-pro-2-varifocal-lenses/) So yes, it's possible that the next (very expensive) Quest Pro will have that feature. Maybe 2026/27. For mainstream headsets? Probably quite a bit longer.


DatGameh

I see...! Now this is something I very much look forward to :) Thanks for the news!


JorgTheElder

None of us over about 45 really care. We can't focus on things up close anyway, so we're not bothered by the vergence-accommodation conflict. 😁


emertonom

The technical term for the issue you're experiencing, where you look at something at a different depth but the focal distance is the same, is "vergence-accommodation conflict." There are three common approaches to dealing with it. The first is to ignore it and hope people don't notice. This works for a surprising number of folks. The second is varifocal displays with eye tracking. The idea is that they track both eyes, and by looking at the angle between them, they can tell how far away the thing is that you're looking at; then they can adjust the distance of the focal plane of the display to match what you're expecting. This has come a long way, thanks to liquid crystal electrically variable lenses; but it's still a few years off. I'd expect to see this before 2030 but probably not before 2026. The third is light field displays. This is a fundamentally different technique, in which the entire light field is reconstructed. This doesn't have to respond when your eyes change focus, so it's a little bit more natural in that sense; but the trade-off is that it requires rendering many, MANY more pixels than a single-focal-depth display. It's basically the inverse of what the Lytro camera did. Each element of the display is a lens that combines light from multiple pixels to account for different angles. This tech is really nice in a sense, because it's displaying the whole image--you can naturally shift your focus around as fast as you like, and the display doesn't have to respond rapidly to adjust to that change. It's particularly well-suited to AR for that reason. But the huge rendering cost makes it basically a non-starter for now. We might get this eventually, when GPU power so vastly outstrips our graphics needs that resolution and FOV area are non-issues, but for the foreseeable future this is going to be an extremely niche tech at best. I wouldn't expect to see this in consumer headsets for at least a decade, and probably longer.
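As a toy example of the vergence-based distance estimate in the second approach (the IPD and angle here are just example numbers, and a real tracker would work from two gaze rays rather than a single angle):

```python
import math

# Toy version of "the angle between the eyes tells you the distance".
def vergence_distance(ipd_m, vergence_angle_rad):
    # Two eyes ipd_m apart fixating a point straight ahead at distance d:
    # each eye rotates inward by half the vergence angle, so
    # tan(angle / 2) = (ipd / 2) / d  =>  d = ipd / (2 * tan(angle / 2))
    return ipd_m / (2.0 * math.tan(vergence_angle_rad / 2.0))

print(vergence_distance(0.063, math.radians(3.6)))  # ~1.0 m for a 63 mm IPD
```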


_hlvnhlv

Depending on how good or bad you want it to be, it could be literally months, or years... So, the thing is that Facebook already had varifocal headsets like 5 years ago or something. They were basically using "pancake lenses with polarized lenses": you had lenses that could be "enabled or disabled" with an electric charge, thus modifying the focal distance. The issues are that you need eye tracking, you need to "live update" the compositor to fix the distortions and chromatic aberration, and you need things like access to the game's z-buffer to know what you are looking at. But the real issue is that we don't know how those lenses behave: maybe the efficiency is even worse than what pancake optics already offer and you have to blast the lenses with thousands of nits, maybe they switch very slowly, who knows. But there is a major catch... This is not really a varifocal image: yes, the exact point you are looking at is at the correct distance, but so is literally everything else... In reality you would need to modify the focus and also blur everything else accordingly; otherwise it's not really perfect. This is why I think it could be decades in the future.
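To illustrate the "blur everything else" point, here's the standard thin-lens defocus approximation in a few lines of Python; the pupil diameter is an assumed typical value, and this is not anyone's actual renderer:

```python
# Minimal sketch of the "blur everything that isn't at the fixation depth" idea:
# angular blur is roughly pupil diameter times the defocus in diopters.
PUPIL_DIAMETER_M = 0.004  # assumed ~4 mm pupil

def blur_angle_rad(pixel_depth_m, fixation_depth_m, pupil_m=PUPIL_DIAMETER_M):
    defocus_diopters = abs(1.0 / pixel_depth_m - 1.0 / fixation_depth_m)
    return pupil_m * defocus_diopters  # angular diameter of the blur circle

# Fixating on something 0.5 m away, a pixel at 3 m should look noticeably soft:
print(blur_angle_rad(3.0, 0.5))  # ~0.0067 rad, roughly 0.4 degrees of blur
```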


DatGameh

Good point. Another person mentioned a video that described the tech working, and it does sound like that's how it works. I'd be surprised if images where everything suddenly gets closer or farther based on where you look don't affect most people.


mike11F7S54KJ3

CREAL is not the solution you want for high-res VR. VR focal depth requires efficient, fast, mass-producible eye tracking. Event-sensor-based eye tracking is recent and works. Focal blur is handled by Valve & Meta with ImagineOptix multi-layer liquid crystal tech, so the image can stay at infinite focus but what you see changes based on image depth information and where you look. Focal blur via the multi-layered liquid crystal tech has better latency than post-processing/GPU blur.


royaltrux

I know it's not there, but it "feels" like it is. I feel like I'm focusing on different distances and can do it at will, just as if I were sitting in a car and could focus on the windshield or on something far away. VR is like that.


DatGameh

I see what you mean (I've tried VR), but having that depth adds another dimension, you know? Imagine focusing on something close or far, and everything else blurring out of focus. No eye tracking or sensing necessary - just a natural result of the display being projected as a light field.


royaltrux

That's how it is. As far as "focus" is concerned, it's like real life (it feels/acts like it). If I'm sitting in a virtual airplane cockpit and I focus on an oil smear on the canopy, the terrain in front of me is out of focus and blurry; if I relax and look at the terrain further away, the oil smear is blurry. The headset doesn't know what I'm looking at, so this is a pretty organic thing that's happening. It sure seems/feels real to me in the focus department. (Windows MR, Quest 2 and 3, some experience with Rift and Vive)


Most_Way_9754

It's not real, but eye tracking + foveated rendering sort of simulates the effect OP is after. A used Quest 2 doesn't set you back much, and you can enjoy the VR experience today (without the focal depth effect) while you wait for the light field tech to be integrated into an actual product.


Different_Ad9336

Very close, because eye tracking is becoming standard in higher-end headsets. I'd give it maybe 3 years, and it will likely become a common feature.


JorgTheElder

Eye-tracking doesn't help without tech to change the focal-plane.


Different_Ad9336

Yeah, that is where AI, frame interpolation, and HDR come into play. Don't take my word for it; go ahead and look up information about eye tracking, AI, and software tricks that allow for artificial simulation of depth of field, as well as other depth perception tricks that were previously unobtainable via basic VR recoding or referencing. These processes that were once pipe dreams of internal


james_pic

These things can create bokeh, but if you want to change how far away your eyes' lenses need to focus (and fix the vergence-accommodation conflict), you need optics.


JorgTheElder

> Yeah, that is where AI, frame interpolation, and HDR come into play.

That cannot help with the vergence-accommodation conflict. The focal length has to physically change.


Different_Ad9336

No it doesn't. If what you're claiming were absolutely true, then focal depth as an illusion wouldn't work in the 3D animation at IMAX theaters or Disney park rides.


JorgTheElder

You don't know what you are talking about. The vergence-accommodation conflict is only an issue when things are close to you. It is not a problem on a big screen far away, but it is a huge issue for people under 40 in a VR headset. The distance your eyes need to accommodate for *(the focal plane)* needs to change to match the distance that convergence tells your brain the object you are looking at is. It can't be done without the optics changing to match what you are looking at.


XRCdev

We had dual focal planes in the Magic Leap 1. It was limited, but still effective as an indicator of improvement.


DatGameh

Interesting, and for a headset that small, too. A little pricey, but hey, it's only a matter of time before it "trickles down" to cheaper headsets.


SledgeH4mmer

Hopefully never. Accommodation is the last thing I want to do in a VR headset. People begin to naturally lose their ability to accommodate in their 40's anyway. Yet they don't walk around saying the real world isn't immersive enough.


DatGameh

That's fair... but then again, why not enjoy accommodation while we can? Worst case, you could make the light field project onto a single focal depth like displays do today.


SledgeH4mmer

But now you're wasting a huge amount of resources to achieve something that doesn't add much. I'd much rather have a wider field of view, better pixel density, etc.


FrontwaysLarryVR

If you're interested in VR *now*, to the point that you're waiting on very niche features to come out, I'd honestly recommend just jumping in. Nausea in VR can happen, but it generally happens if the framerate is too low or if your brain hasn't adjusted yet. We're literally brains piloting a mech suit made of bones and meat, so becoming a brain piloting a mech that's also piloting a virtual body is not something we evolved to do naturally; you have to build up to it. For some it causes nausea, some get dissociation for a while, but it goes away for basically everyone. Just take breaks regularly if you get nauseous, and you'll be fine. I have a strong feeling that varifocal displays will do very little to reduce motion sickness for those who get it. It'll be amazing and maybe help a little bit, but generally it's not gonna revolutionize things for the nausea crowd, I don't think.