
Larger training sets, essentially. I'm a neural net novice, but the general perspective seems to be that 8 GB vs 12 GB is not a big deal: either you're training something that will have to be split across multiple GPUs anyway, or there's probably a fair amount of efficiency to be gained in your internal representations that shrinks RAM usage.

One thing not mentioned in this gaming-oriented press release is that the Pascal architecture adds support for really fast 16-bit (half-precision) arithmetic; this is almost certainly more appealing to machine learning folks.
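A quick illustration of why half precision is attractive for ML, independent of Pascal's faster FP16 arithmetic: storing tensors in FP16 halves their memory footprint. This is just a NumPy storage-size sketch, not a claim about any specific GPU's throughput.

```python
import numpy as np

# A 1024x1024 activation map in single precision (FP32)...
fp32 = np.zeros((1024, 1024), dtype=np.float32)

# ...and the same values stored in half precision (FP16).
fp16 = fp32.astype(np.float16)

# FP16 uses exactly half the bytes, so the same VRAM holds
# twice as many parameters/activations.
print(fp32.nbytes, fp16.nbytes)
```

On hardware with native half-precision arithmetic, the same trick can also roughly double arithmetic throughput, at the cost of reduced numeric range and precision.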

On the VR side, my guess is the 1080 won't be a minimum required target for some time; the enthusiast market is still quite small. That said, it can't come too soon: better rendering combined with butter-smooth frame rates has a visceral impact in VR unlike anything on-screen improvements deliver.



8 vs 12 GB is a really big difference, especially with state-of-the-art deep learning architectures. The problem isn't the size of the dataset but the size of the model: large convolutional neural networks often need the whole 12 GB of a Titan card's VRAM to themselves, and distributing a model over multiple cards/machines adds a lot of complexity.
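A back-of-envelope sketch of why model size, not dataset size, is what eats VRAM. The parameter count below is roughly VGG-16-sized (~138M parameters) purely as an example, and the 3x multiplier for training state is a rough rule of thumb, not an exact figure:

```python
# Rough VRAM estimate for training a large CNN (illustrative numbers only).
BYTES_PER_FP32 = 4

def weights_gb(num_params, bytes_per_param=BYTES_PER_FP32):
    """GB needed just to hold the model weights."""
    return num_params * bytes_per_param / 1024**3

params = 138_000_000          # roughly VGG-16-sized
w = weights_gb(params)        # ~0.5 GB of weights alone

# Training also keeps gradients and optimizer state (e.g. momentum)
# resident, so a ~3x multiplier is a common rule of thumb -- and
# activations, which scale with batch size, often dominate on top of that.
training = 3 * w
print(f"weights: {w:.2f} GB, training state (rough): {training:.2f} GB")
```

With large batch sizes the activations stored for backpropagation push this well past the weights-only figure, which is how a single model ends up filling a 12 GB card.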



