You guys can hate him, but Alex Karp of Palantir had the most honest take on this recently, which was basically:
"Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage in unilaterally disarming" (not exact, but basically this)
You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.
In China, I wonder if the same narrative is happening: no new junior devs, threats of obsolescence, etc. Or do they collectively see the future differently?
They absolutely see the future differently because their society is already set up for success in an AI world. If these predictions come true, free market capitalism will collapse. What would be left?
You should really step back and think, maybe you've spun a very complicated web of beliefs on top of your own eyes.
Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think.
"This late night infomercial guy doesn't genuinely believe the product is amazing, he's just saying shit to sell it"
> Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think.
I mean, dude, a corporate head having a conflict of interest in saying insincere shit to promote the stuff his company makes is not some conspiracy thinking about everybody playing "8 dimensional chess".
If he was making nuclear weapons, would you question the statement, "I would be happy if everyone stopped making nuclear weapons, but if China is making them, we can't unilaterally disarm"? I'm very willing to believe a CEO who says that! They are human beings!
Whether or not you believe it, a lot of people view advanced AI the same way.
OK, but the other view, equally compatible with the evidence, is that he is scared of getting rolled by an AI-dominant China, and that's why he's building tools for the Department of Defense.
Like I said, you can believe whatever you want about good-faith motives, but he didn't have to say he wanted to pause AI. He could have been bright-and-cheery bullish; there was no real advantage to laying his cards out on his qualms.
"I would love to stop getting your money, but consider the children/China/some disaster/various scenarios. That's why you should continue to shower me with billions".
oh please. people said that about the moon, and nuclear weapons too. and yet it's the one side who has a track record of using new technology to intimidate.
Oh no! What a pity. In hindsight, mind you, countries that treat their population like disposable labor units to be wrung dry and then discarded, tend to see the population develop opinions about that eventually.
I don't want to be rude but like, maybe you should pre-register some statement like "LLMs will not be able to do X" in some concrete domain, because I suspect your goalposts are shifting without you noticing.
We're talking about significant contributions to theoretical physics. You can nitpick but honestly go back to your expectations 4 years ago and think — would I be pretty surprised and impressed if an AI could do this? The answer is obviously yes, I don't really care whether you have a selective memory of that time.
It's a nontrivial calculation valid for a class of forces (e.g. QCD) and apparently a serious simplification to a specific calculation that hadn't been completed before. But for what it's worth, I spent a good part of my physics career working in nucleon structure and have not run across the term "single minus amplitudes" in my memory. That doesn't necessarily mean much as there's a very broad space work like this takes place in and some of it gets extremely arcane and technical.
One way I gauge the significance of a theory paper is the measured quantities and physical processes it would contribute to. I see none discussed here, which should tell you how deep into math it is. I personally would not have stopped to read it on my arXiv catch-up.
I never said LLMs will not be able to do X. I gave my summary of the article and my anecdotal experiences with LLMs. I have no LLM ideology. We will see what tomorrow brings.
> We're talking about significant contributions to theoretical physics.
Whoever wrote the prompts and guided ChatGPT made significant contributions to theoretical physics. ChatGPT is just a tool they used to get there. I'm sure AI-bloviators and pelican bike-enjoyers are all quite impressed, but the humans should be getting the research credit for using their tools correctly. Let's not pretend the calculator doing its job as a calculator at the behest of the researcher is actually a researcher as well.
If this worked for 12 hours to derive the simplified formula along with its proof, then it guided itself and made significant contributions by any useful definition of the word, hence OpenAI having an author credit.
How much precedent is there for machines or tools getting an author credit in research? Genuine question, I don't actually know. Would we give an author credit to e.g. a chimpanzee if it happened to circle the right page of a textbook while working with researchers, leading them to a eureka moment?
Would it? I think there's a difference between "the researchers used ChatGPT" and "one of the researchers literally is ChatGPT." The former is the truth, and the latter is the misrepresentation in my eyes.
I have no problem with the former and agree that authors/researchers must note when they use AI in their research.
> now you are debating exactly how GPT should be credited. idk, I'm sure the field will make up some guidance
In your eyes maybe there's no difference. In my eyes, big difference. Tools are not people, let's not further the myth of AGI or the silly marketing trend of anthropomorphizing LLMs.
> How much precedent is there for machines or tools getting an author credit in research?
Well, what do you think? Do the authors (or a single symbolic one) of pytorch or numpy or insert <very useful software> typically get credits on papers that utilize them heavily?
Well, clearly these prominent institutions thought GPT's contribution significant enough to warrant an OpenAI credit.
>Would we give an author credit to e.g. a chimpanzee if it happened to circle the right page of a text book while working with researchers, leading them to a eureka moment?
Cool story. Good thing that's not what happened, so maybe we can do away with all these pointless non sequiturs, yeah? If you want to have a good faith argument, you're welcome to it, but if you're going to go on these nonsensical tangents, it's best we end this here.
> Well, what do you think? Do the authors (or a single symbolic one) of pytorch or numpy or insert <very useful software> typically get credits on papers that utilize them heavily?
I don't know! That's why I asked.
> Well Clearly these prominent institutions thought GPT's contribution significant enough to warrant an Open AI credit.
Contribution is a fitting word, I think, and well chosen. I'm sure OpenAI's contribution was quite large, quite green and quite full of Benjamins.
> Cool story. Good thing that's not what happened, so maybe we can do away with all these pointless non sequiturs, yeah? If you want to have a good faith argument, you're welcome to it, but if you're going to go on these nonsensical tangents, it's best we end this here.
It was a genuine question. What's the difference between a chimpanzee and a computer? Neither are humans and neither should be credited as authors on a research paper, unless the institution receives a fat stack of cash I guess. But alas Jane Goodall wasn't exactly flush with money and sycophants in the way OpenAI currently is.
If you don't read enough papers to immediately realize it is an extremely rare occurrence, then what are you even doing? Why are you making comments like you have the slightest clue of what you're talking about, including insinuating the credit was what... the result of bribery?
You clearly have no idea what you're talking about. You've decided to accuse prominent researchers of essentially academic fraud with no proof because you got butthurt about a credit. You think your opinion on what should and shouldn't get credited matters? Okay.
Do I need to be credentialed to ask questions or point out the troubling trend of AI grift maxxers like yourself helping Sam Altman and his cronies further the myth of AGI by pretending a machine is a researcher deserving of a research credit? This is marketing, pure and simple. Close the simonw substack for a second and take an objective view of the situation.
If a helicopter drops someone off on the top of Mount Everest, it's reasonable to say that the helicopter did the work and is not just a tool they used to hike up the mountain.
Who piloted the helicopter in this scenario, a human or chatgpt? You'd say the pilot dropped them off in a helicopter. The helicopter didn't fly itself there.
“They have chosen cunning instead of belief. Their prison is only in their minds, yet they are in that prison; and so afraid of being taken in that they cannot be taken out.”
If rich techies had too much influence in California, the state government would not look like what it does. I mean I just don't see how you get to this opinion after any real review of the evidence.
You cherry-picked California, which is very much an outlier compared to the rest of the country. Are you denying the effect of money on political outcomes? The rich wouldn't spend their money on media and PACs if it didn't work, would they?
I think this is completely missing the point. Are you really saying California would be improved by more rich people being able to game the system? I think CA would benefit from more visionary politicians (i.e. not paid for), more people at the bottom end being able to have homes in the big cities, and less wealth accumulation; maybe reducing the gap between power and poverty means we could have better societies. I'm not talking about crazy change, btw: reducing billionaires' wealth to that of the nineties would allow us to rebuild a lot of great things and employ a lot of people. Putting money into stocks, real estate, and crypto does not create wealth.
> I mean I just don't see how you get to this opinion after any real review of the evidence.
Graybeard here: took me a while to get it, but usually these are chances to elucidate what is obvious to you. :)* E.g., I don't really know what you mean. What does the California state government look like if rich techies had even more influence? I can construct a facile version (lower taxes**) but assuredly you mean more than that to be taken so aback.
* Good Atlas Shrugged quote on this: "Contradictions do not exist. Whenever you think that you are facing a contradiction, check [ED: or share, if you've moseyed yourself into a discussion] your premises."
** It's not 100% clear politicians steered by California techies would lower taxes ad infinitum.
There's simply no way to look at the governing going on in California and think this is what the tech industry or movie industry or (formerly) oil industry wants for one of its traditional homes.
The government there has suffered since it went to basically one-party rule. There's no counterbalance for any bad policy ideas.
Tbh I think it's awesome here; arrived 6 weeks ago. (Both of these comments suffer from... I think begging the question? Basically: what's so clearly _not_ what tech/film/almond growers/whatever want in California?)
Less competent might be a disservice. But I've seen nothing to suggest that execs/founders are any more competent than the average employee. Execs and founders just had a few more dice rolls go their way.
This is a vacuous statement because in much of the world (i.e. most of the developing world), there's no such thing as "prescription only" medicine; people can buy whatever they want over the counter.
In Germany, I cannot buy ibuprofen, paracetamol (acetaminophen), or ASS (acetylsalicylic acid; Aspirin is a Bayer trademark) at a grocery or "Drogerie" (place to buy cosmetics and other health & beauty items). I have to go to a pharmacy and ask for it at the counter, truly "OTC", and they're expensive compared to their US retail equivalents. That said, most common prescription drugs are significantly cheaper in Germany than in the US, even without insurance.
Antibiotics are definitely prescription-only, as are birth control and morning after ("Plan B") pills. I was once able to talk an airport pharmacy into selling me an albuterol inhaler without a script in hand, but only when I promised that I'd had it before and explained how to use it, and that I was about to get on a flight.
Lol this is why you aren't a VC. Even if every single Musk venture failed other than SpaceX, the investments would have paid off wildly well. You aim for the tails not the median.
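The tails-not-median point can be made concrete with a toy portfolio. A minimal sketch, with entirely illustrative numbers (not anyone's actual returns): 100 equal bets where 99 go to zero and one returns 200x.

```python
# Toy power-law portfolio: 100 equal $1 bets.
# 99 are total losses; a single tail outcome returns 200x.
returns = [0.0] * 99 + [200.0]

# Outcome of the typical (median) bet.
median_bet = sorted(returns)[len(returns) // 2]

# Fund-level return multiple across all bets.
portfolio_multiple = sum(returns) / len(returns)

print(median_bet)          # 0.0  -> the median investment is a wipeout
print(portfolio_multiple)  # 2.0  -> the fund still doubles its money
```

The median bet loses everything, yet the portfolio doubles; that is why a single SpaceX-sized hit can justify a string of failures, and why VC judgment targets the tail rather than the typical outcome.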