Hacker News | bpodgursky's comments

You guys can hate him, but Alex Karp of Palantir had the most honest take on this recently which was basically:

"Yes, I would love to pause AI development, but unless we get China to do the same, we're f***ed, and there's no advantage to unilaterally disarming" (not exact, but basically this)

You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.


In China, I wonder if the same narrative is playing out: no new junior devs, threats of obsolescence, etc. Or do they collectively see the future differently?

Most reporting I've seen rhymes with this, from last year https://www.theguardian.com/technology/2025/jun/05/english-s...

They absolutely see the future differently because their society is already set up for success in an AI world. If what these predictions say become true, free market capitalism will collapse. What would be left?

The reason you think it's honest is that you already believed it.

What does he gain by saying "Yes I'd love to shut all of this down"?

He gets to pretend to be impartial ("sure, I'd love to shut this down and lose the billions coming my way"),

and by pretending that he has no option but to "go against his own will" and continue it, he gets to make it sound nuclear-bomb-level important.

This is hyping 101.


It's rhetoric - he gains your support. He doesn't want to shut it down.

You should really step back and think, maybe you've spun a very complicated web of beliefs on top of your own eyes.

Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think.


This is what I think:

- Alex Karp genuinely believes China is a threat

- I think China is an economic threat, especially for tech

- An AI arms race is itself threatening; it is not like the nuclear deterrent

- Geopolitical tensions are very convenient for Alex Karp

- America has a history of exaggerating geopolitical threats

- Tech is very credulous with politics


"This late-night infomercial guy doesn't genuinely believe the product is amazing, he's just saying shit to sell it"

"Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think".

I mean, dude, a corporate head having a conflict of interest in saying insincere shit to promote the stuff his company makes is not some conspiracy thinking about everybody playing "8 dimensional chess".

It's the very basic baseline case.


If he was making nuclear weapons, would you question the statement, "I would be happy if everyone stopped making nuclear weapons, but if China is making them, we can't unilaterally disarm"? I'm very willing to believe a CEO who says that! They are human beings!

Whether or not you believe it, a lot of people view advanced AI the same way.


>They are human beings!

Barely.

Even in the nuclear weapons CEO case, if they actually believed that, they'd be in another line of business, not making millions of nuclear weapons.


Yeah but it’s in his interest to encourage an arms race with China.

OK, but the other view equally compatible with the evidence is that he is scared of getting rolled by an AI-dominant China and that's why he's building tools for the dept of defense.

Like I said, you can believe whatever you want about good-faith motives, but he didn't have to say he wanted to pause AI; he could have been bright-and-cheery bullish. There was no real advantage to laying his cards out on his qualms.


Answer the question.

If the USA pauses AI development, do you think China will?


"I would love to stop getting your money, but consider the children/China/some disaster/various scenarios. That's why you should continue to shower me with billions".

oh please. people said that about the moon, and nuclear weapons too. and yet it's the one side who has a track record of using new technology to intimidate.

HN has become so Marxist they hate the country they live in

Oh no! What a pity. In hindsight, mind you, countries that treat their population like disposable labor units to be wrung dry and then discarded, tend to see the population develop opinions about that eventually.

>disposable labor units

This is the source of migration allowance but I bet you stand there with a "Refugees welcome" sign.


xAI is infamous for not caring about alignment/safety though. OpenAI always paid a lot more lip service.

I don't want to be rude but like, maybe you should pre-register some statement like "LLMs will not be able to do X" in some concrete domain, because I suspect your goalposts are shifting without you noticing.

We're talking about significant contributions to theoretical physics. You can nitpick but honestly go back to your expectations 4 years ago and think — would I be pretty surprised and impressed if an AI could do this? The answer is obviously yes, I don't really care whether you have a selective memory of that time.


I don't know enough about theoretical physics: what makes it a significant contribution there?

It's a nontrivial calculation valid for a class of forces (e.g. QCD), and apparently a serious simplification of a specific calculation that hadn't been completed before. But for what it's worth, I spent a good part of my physics career working in nucleon structure and don't recall ever running across the term "single minus amplitudes". That doesn't necessarily mean much, as there's a very broad space in which work like this takes place, and some of it gets extremely arcane and technical.

One way I gauge the significance of a theory paper is the measured quantities and physical processes it would contribute to. I see none discussed here, which should tell you how deep into the math it is. I personally would not have stopped to read it on my arXiv catch-up:

https://arxiv.org/list/hep-th/new

Maybe to characterize it better, physicists were not holding their breath waiting for this to get done.


Thank you!

Not every contribution has immediate impact.

That doesn't answer the question. That statement just admits "maybe", which isn't helpful or insightful for answering it.

I never said LLMs will not be able to do X. I gave my summary of the article and my anecdotal experiences with LLMs. I have no LLM ideology. We will see what tomorrow brings.

> We're talking about significant contributions to theoretical physics.

Whoever wrote the prompts and guided ChatGPT made significant contributions to theoretical physics. ChatGPT is just a tool they used to get there. I'm sure AI-bloviators and pelican bike-enjoyers are all quite impressed, but the humans should be getting the research credit for using their tools correctly. Let's not pretend the calculator doing its job as a calculator at the behest of the researcher is actually a researcher as well.


If this worked for 12 hours to derive the simplified formula along with its proof, then it guided itself and made significant contributions by any useful definition of the word, hence OpenAI having an author credit.

> hence Open AI having an author credit.

How much precedent is there for machines or tools getting an author credit in research? Genuine question, I don't actually know. Would we give an author credit to e.g. a chimpanzee if it happened to circle the right page of a textbook while working with researchers, leading them to a eureka moment?


>How much precedent is there for machines or tools getting an author credit in research?

For a datum of one, the mathematician Doron Zeilberger gives credit to his computer Shalosh B. Ekhad on select papers.

https://medium.com/@miodragpetkovic_24196/the-computer-a-mys...

https://sites.math.rutgers.edu/~zeilberg/akherim/EkhadCredit...

https://sites.math.rutgers.edu/~zeilberg/pj.html


Interesting (and an interesting name for the computer too), thanks!

Not exactly the same thing, but I know of at least two professors that would try to list their cats as co-authors:

https://en.wikipedia.org/wiki/F._D._C._Willard

https://en.wikipedia.org/wiki/Yuri_Knorozov


That is great, thank you!

I have seen stuff like "you can use my program if you will make me a co-author".

That usually comes with some support from them, too.


it's called ethics and research integrity. not crediting GPT would be a form of misrepresentation

Would it? I think there's a difference between "the researchers used ChatGPT" and "one of the researchers literally is ChatGPT." The former is the truth, and the latter is the misrepresentation in my eyes.

I have no problem with the former and agree that authors/researchers must note when they use AI in their research.


now you are debating exactly how GPT should be credited. idk, I'm sure the field will make up some guidance

for this particular paper it seems the humans were stuck, and only AI thinking unblocked them


> now you are debating exactly how GPT should be credited. idk, I'm sure the field will make up some guidance

In your eyes maybe there's no difference. In my eyes, big difference. Tools are not people, let's not further the myth of AGI or the silly marketing trend of anthropomorphizing LLMs.


>How much precedent is there for machines or tools getting an author credit in research?

Well, what do you think? Do the authors (or a single symbolic one) of pytorch or numpy or insert <very useful software> typically get credits on papers that utilize them heavily? Well, clearly these prominent institutions thought GPT's contribution significant enough to warrant an OpenAI credit.

>Would we give an author credit to e.g. a chimpanzee if it happened to circle the right page of a text book while working with researchers, leading them to a eureka moment?

Cool story. Good thing that's not what happened, so maybe we can do away with all these pointless non sequiturs, yeah? If you want to have a good-faith argument, you're welcome to it, but if you're going to go on these nonsensical tangents, it's best we end this here.


> Well, what do you think? Do the authors (or a single symbolic one) of pytorch or numpy or insert <very useful software> typically get credits on papers that utilize them heavily?

I don't know! That's why I asked.

> Well, clearly these prominent institutions thought GPT's contribution significant enough to warrant an OpenAI credit.

Contribution is a fitting word, I think, and well chosen. I'm sure OpenAI's contribution was quite large, quite green and quite full of Benjamins.

> Cool Story. Good thing that's not what happened so maybe we can do away with all these pointless non sequiturs yeah ? If you want to have a good faith argument, you're welcome to it, but if you're going to go on these nonsensical tangents, it's best we end this here.

It was a genuine question. What's the difference between a chimpanzee and a computer? Neither are humans and neither should be credited as authors on a research paper, unless the institution receives a fat stack of cash I guess. But alas Jane Goodall wasn't exactly flush with money and sycophants in the way OpenAI currently is.


>I don't know! That's why I asked.

If you don't read enough papers to immediately realize it is an extremely rare occurrence, then what are you even doing? Why are you making comments as if you have the slightest clue what you're talking about? Including insinuating the credit was what...the result of bribery?

You clearly have no idea what you're talking about. You've decided to accuse prominent researchers of essentially academic fraud, with no proof, because you got butthurt about a credit. You think your opinion on what should and shouldn't get credited matters? Okay.

I've wasted enough time talking to you. Good Day.


Do I need to be credentialed to ask questions or point out the troubling trend of AI grift maxxers like yourself helping Sam Altman and his cronies further the myth of AGI by pretending a machine is a researcher deserving of a research credit? This is marketing, pure and simple. Close the simonw substack for a second and take an objective view of the situation.

If a helicopter drops someone off on the top of Mount Everest, it's reasonable to say that the helicopter did the work and is not just a tool they used to hike up the mountain.

Who piloted the helicopter in this scenario, a human or chatgpt? You'd say the pilot dropped them off in a helicopter. The helicopter didn't fly itself there.

“They have chosen cunning instead of belief. Their prison is only in their minds, yet they are in that prison; and so afraid of being taken in that they cannot be taken out.”

― C.S. Lewis, The Last Battle


"For me, it is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring."

— Carl Sagan


I read the Narnia series many times as a kid and this one stuck with me; I didn't prompt for it.

I have no real way to demonstrate that I'm telling the truth, but I am ¯\_(ツ)_/¯


Sorry for the assumption. For what it's worth, I read one of Sagan's books last year, but pulled the quote from Goodreads :P

Anthropic took the day off to do a $30B raise at a $380B valuation.

Most ridiculous valuation in the history of markets. Can't wait to watch these companies crash and burn when people give up on the slot machine.

As usual don't take financial advice from HN folks!

not as if you could get in on it even if you wanted to

WeWork almost IPO'd at $50bn. It was also a nice crash and burn.

Why? They had a $10+ billion ARR run rate in 2025, tripled from 2024. I mean, 30x is a lot, but also not insane at that growth rate, right?
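Back-of-envelope on that multiple, using the figures in the thread ($380B valuation, ~$10B ARR); the continued-growth rate below is purely a hypothetical assumption, not anything from the raise:

```python
# Trailing revenue multiple implied by the thread's figures.
valuation = 380e9   # $380B valuation (from the thread)
arr = 10e9          # ~$10B ARR run rate (from the thread)

multiple = valuation / arr
print(f"trailing multiple: {multiple:.0f}x")  # 38x

# Hypothetical: if ARR "only" doubled each of the next two years
# (slower than the claimed 3x), the multiple compresses quickly.
forward_arr = arr * 2 * 2
forward_multiple = valuation / forward_arr
print(f"multiple on that forward ARR: {forward_multiple:.1f}x")  # 9.5x
```

Just a sketch of why high growth makes a headline multiple look less extreme; it says nothing about whether the growth continues.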

It's a 13-day-old account with an IHateAI handle.

If rich techies had too much influence in California, the state government would not look like what it does. I mean I just don't see how you get to this opinion after any real review of the evidence.

You cherry-picked California, which is very much an outlier compared to the rest of the country. Are you denying the effect of money on political outcomes? The rich wouldn't spend their money on media and PACs if it didn't work, would they?

> Y Combinator CEO Garry Tan launches group to influence CA politics

I'm talking about the actual issue being discussed! Garry Tan isn't launching a group to influence Wyoming politics.


What about my comment? :)

I think this is completely missing the point… are you really saying California would be improved by more rich people being able to game the system? I think CA would benefit from more visionary politicians (i.e. not paid for), more people at the bottom end being able to have homes in the big cities, and less wealth accumulation; maybe reducing the gap between power and poverty means we could have better societies. I'm not talking about crazy change, btw: reducing billionaires' wealth to that of the nineties would allow us to rebuild a lot of great things and employ a lot of people. Putting money into stocks, real estate, and crypto does not create wealth.

> I mean I just don't see how you get to this opinion after any real review of the evidence.

Graybeard here: took me a while to get it, but usually these are chances to elucidate what is obvious to you :)* e.g. I don't really know what you mean. What does the California state government look like if rich techies had even more influence? I can construct a facile version (lower taxes**), but assuredly you mean more than that to be taken so aback.

* Good Atlas Shrugged quote on this: "Contradictions do not exist. Whenever you think that you are facing a contradiction, check [ED: or share, if you've moseyed yourself into a discussion] your premises."

** It's not 100% clear politicians steered by California techies would lower taxes ad infinitum.


There's simply no way to look at the governing going on in California and think this is what the tech industry or movie industry or (formerly) oil industry wants for one of its traditional homes.

The government there has suffered since it went to basically one-party rule. There's no counterbalance for any bad policy ideas.


Tbh I think it's awesome here, arrived 6 weeks ago. (Both of these comments suffer from... I think begging the question? Basically, like, what's so clearly _not_ what tech/film/almond growers/whatever want in California?)

[flagged]


Less competent might be a disservice. But I've seen nothing to suggest that execs/founders are any more competent than the average employee. Execs and founders just had a few more dice rolls go their way.

Yes that might be the high-level logic, but if you give a MANPAD to a 19 year old sicario on meth, accidents do happen.

I’d be surprised if cartels would tolerate hard drug use by their soldiers, it seems like the kind of thing they’d kill you for, lack of discipline.

I think you misunderstood that movie.

2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.

This is a vacuous statement because in much of the world (i.e. most of the developing world), there's no such thing as "prescription only" medicine; people can buy whatever they want over the counter.

No… the rest of the first-world countries are a counterexample.


In Germany, I cannot buy ibuprofen, paracetamol (acetaminophen), or ASS (Aspirin - TM Bayer) at a grocery or "Drogerie" (place to buy cosmetics and other health & beauty items). I have to go to a pharmacy and ask for it at the counter - truly "OTC", and they're expensive compared to their US retail equivalents. That said, most common prescription drugs are significantly cheaper in Germany than in the US, even without insurance.

Antibiotics are definitely prescription-only, as are birth control and morning after ("Plan B") pills. I was once able to talk an airport pharmacy into selling me an albuterol inhaler without a script in hand, but only when I promised that I'd had it before and explained how to use it, and that I was about to get on a flight.


I know it feels comforting to say this, but deep down you have to realize that saying things confidently does not cause them to become true.

Was that comforting? At least the commenter came with a source.

There's a real problem with people who are too immersed in tech: you're disconnected from how the average person interacts with stuff.

Lol this is why you aren't a VC. Even if every single Musk venture failed other than SpaceX, the investments would have paid off wildly well. You aim for the tails not the median.

