
Why did they decide they could sacrifice frame-rate when the first generation of PC VR - after extensive study - decided it was the one thing you should never sacrifice?

Did somebody in sales overrule the engineers?



> first generation of PC VR

What do you mean by this?

I'm only asking for clarification, because when I hear "first generation of PC VR" - I think "REND386".

I doubt, though, that many other people do, because for some reason, all of that old history and the lessons from it have seemingly had to be "re-discovered" with VR technology after the Rift came out.


The Oculus CV1 and the Vive are commonly considered to be "first gen" headsets in the modern VR community, with anything older than that (including relatively recent hardware like the Oculus DK1) treated as essentially experiments leading up to the development of gen 1.

Though as with anything involving multiple "generations" of products, the exact definition is pretty fuzzy.


Yeah - I did wonder about clarifying that as I typed it - there's obviously a long history to VR (I'm a child of the 80s so I'm not completely ignorant of the twists and turns of computing history)



I remember it well on BBC Tomorrow's World.

I'm always tempted to watch "The Lawnmower Man" although I suspect it hasn't aged terribly well.


60 fps is what mobile VR runs at, and that's on integrated graphics only.


And you'll never get anywhere close to the realism of PC VR on mobile, nor will you have that experience for a long time. Just because it's "possible" to do VR with much lower graphics power doesn't mean you'll have a similar experience. The trade-off is almost linear.


Realism is good, but not essential. Only some productivity uses need a high polygon count, and those depend less on textures (because you are conveying physical form), while gaming depends on what the desired look is. I don't think enjoyability is a linear function of polygon count and texture quality.

And realism will come. We still have a good 4x increase in transistor density left in silicon before we have to reach for the next clever trick.


Yeah there are a lot of applications I can see that wouldn't need anything more than a flat square, maybe with a video overlaid on it. Not everything needs high-poly normal-mapped graphics with ambient occlusion and 4x AA and 10x AF.

I was fixing a fence last weekend, and man it would have been nice to have an AR overlay of where the next board should be placed and where to put the screw when I can't see the board it's sinking into. That doesn't need realistic graphics, it just needs to be mobile.

For VR, I mean Fruit Ninja was a really popular game for a long time.


Fruit Ninja would be a good example of something that needs 90Hz to be playable for any length of time without causing discomfort.


Although the tech to achieve both is somewhat similar, AR and VR solve completely different problems.

The core value of VR is to replace our real life. Graphics that convince you of this new reality are a hard requirement.

The core value of AR (or MR?) is to augment our real life. Being able to surface information about the world around you is the hard requirement. For example, while cooking, if it can tell you what the next ingredient in the recipe is and how much of it you need, it doesn't matter if displaying that information on top of the item is a bit laggy. The stepwise jump in utility is still there.


These aren't AR headsets though, they're VR headsets. Microsoft seems to have decided they're going to use the term "Mixed Reality" to refer to both.


Oh I didn't know. That's pretty confusing since I remember mixed reality being just a rebranding of AR...


"The core value of VR is to replace our real life"

That's ... debatable. It's no more that than a regular screen.

VR displays track head movement in 3D space and make that positioning information available for rendering a 3D scene.

This is the basic requirement for AR as well. Pokemon-level AR just overlays the 3D scene on an image of the real world. ARKit-level AR uses whatever data inputs are available to guesstimate the 3D geometry surrounding the user and makes that 3D information available to the program. I suppose the third step is when this data is fed to some machine learning algorithm to 1. detect features in the data and 2. label them (e.g. 'carrot').
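
To make the "use that positioning information when rendering" step concrete, here's a minimal sketch, not tied to any particular headset SDK. readHeadPose() is a hypothetical stand-in for whatever pose the VR runtime (OpenXR, WebXR, a vendor SDK) reports each frame; here it just returns a fixed pose so the sketch runs standalone.

    type Vec3 = [number, number, number];
    type Quat = [number, number, number, number]; // x, y, z, w

    interface HeadPose {
      position: Vec3;    // headset position in tracking space (metres)
      orientation: Quat; // headset orientation as a unit quaternion
    }

    // Placeholder: a real app would query the tracking runtime each frame.
    function readHeadPose(): HeadPose {
      return { position: [0, 1.7, 0], orientation: [0, 0, 0, 1] };
    }

    // View matrix = inverse of the head transform: rotate by the conjugate
    // quaternion, then apply the negated, rotated translation. Column-major 4x4.
    function viewMatrixFromPose(
      { position: [px, py, pz], orientation: [qx, qy, qz, qw] }: HeadPose
    ): number[] {
      const [x, y, z, w] = [-qx, -qy, -qz, qw]; // conjugate = inverse rotation
      const r = [
        1 - 2 * (y * y + z * z), 2 * (x * y + z * w),     2 * (x * z - y * w),     // column 0
        2 * (x * y - z * w),     1 - 2 * (x * x + z * z), 2 * (y * z + x * w),     // column 1
        2 * (x * z + y * w),     2 * (y * z - x * w),     1 - 2 * (x * x + y * y), // column 2
      ];
      const tx = -(r[0] * px + r[3] * py + r[6] * pz);
      const ty = -(r[1] * px + r[4] * py + r[7] * pz);
      const tz = -(r[2] * px + r[5] * py + r[8] * pz);
      return [
        r[0], r[1], r[2], 0,
        r[3], r[4], r[5], 0,
        r[6], r[7], r[8], 0,
        tx,   ty,   tz,   1,
      ];
    }

    // Each frame: read the tracked pose, rebuild the view matrix, render with it.
    const view = viewMatrixFromPose(readHeadPose());

AR-style overlays build on the same pose data; the difference is that the estimated geometry of the surroundings (ARKit-level) also feeds into where the virtual content is placed.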

Now, what to do with all this data is the business of the developer of the application.

It's the application and its users who decide what the value proposition is. It's not stamped on the underlying technology. The technology is the substrate and the facilitator for the end-user applications, but it alone does not provide any value. Owners of a Vive in the current ecosystem, where VR applications are scarce and unpolished, can probably understand this intuitively.

The potential of VR and AR to provide value differs, but the value proposition depends wholly on the applications developed for those substrates.

Sorry about the long rant. I just wanted to point out that it's non-value-adding and detrimental to innovation to slap labels and qualities on things that don't possess them.

Nobody[0] wants just a Ferrari engine. Everybody wants the whole car.

0: In the general consumer context, I know somebody would want it


Microsoft is messed up in many ways.



