
That's just because they don't have competition.

Back in the dark days of 2015 we used to spend a day or two just getting TensorFlow working on a GPU because of all the install issues, driver issues, etc. Theano was no better, but it was academic research code, so we didn't expect better.

Once PyTorch started gaining ground, it forced TensorFlow to adapt - Keras was written to hide TensorFlow's awfulness. Then Google realized it was an unrecoverable situation of technical debt and started building JAX.

With AMD, Intel, Tenstorrent, and several other AI chip specialists coming out with PyTorch compatibility, NVIDIA will eventually have to adapt. They still have the advantage of 15 years of CUDA code already written, but PyTorch as an abstraction layer can make the switch easier.
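A rough sketch of what that abstraction buys you in practice (assuming a recent PyTorch build; the exact backend names and availability checks vary by version and vendor):

    import torch

    # Pick whatever accelerator the local PyTorch build supports.
    # ROCm builds expose AMD GPUs through the "cuda" device string,
    # and recent releases expose Intel GPUs as "xpu".
    if torch.cuda.is_available():                              # NVIDIA, or AMD via ROCm
        device = torch.device("cuda")
    elif hasattr(torch, "xpu") and torch.xpu.is_available():   # Intel
        device = torch.device("xpu")
    else:
        device = torch.device("cpu")

    x = torch.randn(4096, 4096, device=device)
    y = x @ x.T                                                # same code on every backend

The model code never names the vendor; that's the whole pitch of the abstraction layer.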



The problem is that NVidia is a single company participating in multiple interdependent markets: the market for hardware specifications, the market for driver implementations, and the market for userland software. This is called "vertical integration".

Because of copyright, NVidia gets an explicit government-enforced monopoly over the driver implementation market. Sure, third-party projects like nouveau get to "compete", but NVidia is given free rein to cripple that competition, simply by refusing to share the necessary hardware (and firmware) specs, and by compelling experienced engineers (anyone who works on NVidia's driver implementation) to sign NDAs, legally enforcing the secrecy of those specs.

On top of this, NVidia gets to be anti-competitive with the driver-compatibility of its userland software, including CUDA, GSync, DLSS, etc.

When a company's market participation is vertically integrated, that participation becomes anticompetitive. The only way we can resolve this problem is by dissolving the company into multiple market-specific companies.


PyTorch and CUDA solve completely different problems. CUDA is a general purpose programming environment. PyTorch is for machine learning. PyTorch won't ever displace CUDA because there are things other than machine learning models that GPUs are good at accelerating.


Yeah, the amount of tunnel vision from AI/ML users thinking that Nvidia exists solely for their use is funny to watch. Try writing anything other than ML in pytorch. You can't? You can in CUDA. There's a much bigger world than ML out there.


> Try writing anything other than ML in pytorch. You can't?

Of course you can! It's a library of vectorized math operations. You don't need to do gradient descent on the graph either.
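For instance (a toy sketch, not from any real codebase): a finite-difference heat-diffusion step is pure vectorized math on the GPU, with autograd switched off entirely.

    import torch

    # Explicit finite-difference step for the 2-D heat equation
    # u_t = alpha * (u_xx + u_yy), periodic boundaries via roll.
    # No model, no loss, no gradients.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    u = torch.rand(1024, 1024, device=device)
    alpha, dt = 0.1, 0.01

    with torch.no_grad():
        for _ in range(100):
            lap = (torch.roll(u, 1, 0) + torch.roll(u, -1, 0)
                   + torch.roll(u, 1, 1) + torch.roll(u, -1, 1) - 4 * u)
            u = u + alpha * dt * lap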


I tried that; TensorFlow is actually better for general-purpose compute, and JAX is a lot better still. PyTorch seems to omit most of the non-ML basic building blocks, while TF gives you most of the XLA API.
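To illustrate the kind of non-ML work meant here (a sketch; jax.numpy and jax.jit are the real APIs, the integrand is just a made-up example):

    import jax
    import jax.numpy as jnp

    # Plain numerical work, jit-compiled through XLA; nothing ML about it.
    @jax.jit
    def integrate(a, b):
        x = jnp.linspace(a, b, 100_000)        # grid size is static at trace time
        y = jnp.exp(-x * x)                    # integrand exp(-x^2)
        return jnp.sum((y[1:] + y[:-1]) * (x[1] - x[0]) / 2.0)   # trapezoid rule

    print(integrate(0.0, 5.0))                 # ~ sqrt(pi)/2 ≈ 0.8862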


Nvidia's stock price isn't at an all-time high because of all the people writing fluid dynamics in CUDA.

Nor is it because of all the tensorflow models people are writing, to be honest.


Of course it's all of the mining, but that's not using PyTorch either. It's using CUDA.


GPU mining went waaay down since Ethereum went PoS (Proof of Stake) almost 2 years ago. Does BTC even use GPUs for mining? I am pretty sure they use ASICs.


What is being mined using CUDA?


And similarly from people who consider Nvidia to be the "Gaming GPU" company, not understanding why it's so big now.


Pytorch and Jax have a numpy like API, so you can use PyTorch for other things too.
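A sketch of what "numpy like" means here (the three calls below are near drop-in replacements for one another; assumes NumPy, PyTorch, and JAX are all installed):

    import numpy as np
    import torch
    import jax.numpy as jnp

    a = [[1.0, 2.0], [3.0, 4.0]]
    b = [1.0, 0.0]

    # The same linear-algebra call in all three APIs:
    np.linalg.solve(np.array(a), np.array(b))
    torch.linalg.solve(torch.tensor(a), torch.tensor(b))
    jnp.linalg.solve(jnp.array(a), jnp.array(b))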


It was an example.


> With AMD, Intel, Tenstorrent, and several other AI chip specialists coming with pytorch compatibility, NVIDIA will eventually have to adapt.

I don't see how Nvidia has to do anything, since PyTorch works just fine on their GPUs, thanks to CUDA. If anything, they're still one of the best platforms to run it on, and that's precisely because CUDA is competitive.

I hate stuff that only works on certain GPUs as much as the next person, but sadly competition has only really started to catch up to CUDA very recently.


> I don't see how Nvidia has to do anything since PyTorch works just fine on their GPUs

It was an example. The example was: competition from pytorch meant that tensorflow had to improve their DX to keep up.


That example was earlier in the comment and isn't what I quoted.



