I believe with regulatory capture the companies that pushed for the regulation in the first place at least comply with it (and hopefully the regulation is not worthless). This behaviour by ClosedAI is even worse: push for the regulation, then push for the exemption.
Regulatory capture is usually the company pushing for regulations that align with the business practices it already implements and that would be hard for a competitor to implement. For example, a car company that wants to require all other manufacturers to build and operate wind tunnels for aerodynamics testing. Or, more realistically, regulations requiring third-party dealerships for vehicle sales.
I haven't heard that definition of "Regulatory Capture" before. I mostly thought it was just when the regulators are working for industry instead of the people. That is, the regulators have been "Captured." The politicians who nominate the regulatory bodies are paid off by industry to keep it that way.
Regulatory capture has different flavours, but it basically comes down to the regulated taking control of or significantly influencing the regulator. It can be by the complete sector, but in my experience it is most often by the leading incumbents in a domain.
It can take the form of keeping regulation mild or getting regulators to look the other way, but just as often it means putting high-cost, high-compliance burdens in place to pull up the drawbridge for new entrants.
I’ve seen this happen many times during the RFI/RFP process for large projects: the largest players put boots on the ground early, try to get into the ears of the decision makers and their consultants, and “helpfully” educate them. On multiple occasions I’ve seen requests use a specific vendor’s product name as a generic term without realizing it, even though the competitors’ products worked in a completely different way and had no corresponding component in their offerings.
I agree. I wasn't trying to strictly define it, just to describe the form it usually takes.
In the case of OpenAI, were I to guess, they'll likely do things like push for stronger copyright laws or laws against web scraping. Things that look harmless but ultimately will squash new competitors in the AI market. Now that they already have a bunch of the data to train their models, they'll be happy to make it a little harder for them to get new data if it means they don't have to compete.
Regulators can require all manufacturers to build and operate wind tunnels for aerodynamics testing, or alternatively allow someone from South Africa to be president.
That's the first time I've ever heard someone make this unusual and very specific definition. It's almost always much simpler - you get favorable regulatory findings and exemptions by promising jobs or other benefits to the people doing the regulating. It's not complicated, it's just bribery with a different name.
We all predicted this would happen but somehow the highly intelligent employees at OpenAI getting paid north of $1M could not foresee this obvious eventuality.
Trump should have a Most Favored Corporate status, each corporation in a vertical can compete for favor and the one that does gets to be "teacher's pet" when it comes to exemptions, contracts, trade deals, priority in resource right access, etc.
Can you explain why this is associated with fascism specifically, and not any other form of government with high levels of oligarchical corruption (like North Korea, Soviet Russia, etc.)?
I am not saying you’re wrong, but please educate me: why is this form of corruption/cronyism unique to fascism?
It might be basic, but I found the Wikipedia article to be a good place to start:
> An important aspect of fascist economies was economic dirigism,[35] meaning an economy where the government often subsidizes favorable companies and exerts strong directive influence over investment, as opposed to having a merely regulatory role. In general, fascist economies were based on private property and private initiative, but these were contingent upon service to the state.
It's rather amusing reading the link on dirigisme given the context of its alleged implication. [1] A word which I, and I suspect most people, had never heard before.
---
The term emerged in the post-World War II era to describe the economic policies of France which included substantial state-directed investment, the use of indicative economic planning to supplement the market mechanism and the establishment of state enterprises in strategic domestic sectors. It coincided with both the period of substantial economic and demographic growth, known as the Trente Glorieuses which followed the war, and the slowdown beginning with the 1973 oil crisis.
The term has subsequently been used to classify other economies that pursued similar policies, such as Canada, Japan, the East Asian tiger economies of Hong Kong, Singapore, South Korea and Taiwan; and more recently the economy of the People's Republic of China (PRC) after its economic reforms,[2] Malaysia, Indonesia[3][4] and India before the opening of its economy in 1991.[5][6][7]
It’s a poor definition. The same “subsidization and directive influence” applies to all of Krugman’s Nobel-winning national champions and emerging-market development leaders, in virtually all ‘successful’ economies. It also applies in the context of badly run, failed, and failing economies. Safe to say this factor is only somewhat correlated. Broad assertions are going to be factually wrong.
The key element here is that the power exchange in this case goes both ways. The corporations do favors for the administration (sometimes outright corrupt payments, and sometimes useful favors like promoting certain kinds of content in the media or firing employees who speak up), and in exchange the companies get regulatory favors. While all economic distortions can be problematic (national champion companies probably have tradeoffs), this is a form of distortion that hurts citizens both by distorting the market and by distorting the democratic environment through which citizens might correct the problems.
All snakes have scales, so there is a 100% correlation between being a snake and having scales.
That does not imply that fish are snakes. Nor does the presence of scaled fish invalidate the observation that having scales is a defining attribute of snakes (it's just not a sufficient attribute to define snakes).
For correlation to be 1, it's not enough that all snakes have scales. You also need all scaly animals to be snakes.
Here's a toy example. Imagine three equally sized groups of animals: scaly snakes, scaly fish, and scaleless fish. (So all snakes have scales, but not all scaly animals are snakes.) That's three data points (1,1) (0,1) (0,0) with probability 1/3 each. The correlation between snake and scaly comes out as 1/2.
You can also see it geometrically. The only way correlation can be 1 is if all points lie on a straight line. But in this case it's a triangle.
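The arithmetic in the toy example can be checked directly. A minimal sketch in Python (using NumPy; the encoding as 0/1 indicator variables is my own assumption about how to represent the groups):

```python
import numpy as np

# Three equally likely animals, encoded as (is_snake, is_scaly) indicators:
# scaly snake -> (1, 1), scaly fish -> (0, 1), scaleless fish -> (0, 0)
snake = np.array([1, 0, 0])
scaly = np.array([1, 1, 0])

# Pearson correlation between "is a snake" and "has scales"
r = np.corrcoef(snake, scaly)[0, 1]
print(r)  # 0.5 -- all snakes are scaly, yet the correlation is far from 1
```

The result matches the 1/2 claimed above: perfect implication in one direction (snake implies scaly) is not enough for correlation 1, because the scaly fish break the reverse implication.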
You’re looking for the logical argument here, not the statistical one. You sampled from snakes and said there is a 100% correlation with being a snake (notwithstanding the counterargument in an adjacent comment about scaleless snakes).
I am noting that the logical argument does not hold in the provided definition. If “some” attributes hold in a definition, you are expanding the definitional set, not reducing it, and thus creating a low-res definition. That is why I said: ‘this is a poor definition.’
So then you agree that the original post that called this "text book fascism" was wrong, as this is just one very vague and only loosely correlated property.
Yeah, fascism, communism, etc. aren’t abstract ideals in the real world. Instead they are self-reinforcing directions along a multidimensional political spectrum.
The scary thing with fascism is just how quickly it can snowball because people at the top of so many powerful structures in society benefit. US Presidents get a positive spin by giving more access to organizations that support them. Those kinds of quiet back room deals benefit the people making them, but not everyone outside the room.
That's not fascism, that is the dysfunctional status quo in literally every single country in the world. Why do you think companies and billionaires dump what amounts to billions of dollars on candidates? Often it's not even this candidate or that, but both!
They then get access, get special treatment, and come out singing the praises of [errr.. what's his name again?]
It’s not Fascism on its own, but it’s representative of the forces that push society to Fascism.
Start looking and you’ll find powerful forces shaping history. Sacking a city was extremely profitable throughout antiquity, which pushed cities to develop defensive capabilities, which then…
In the Bronze Age trade was critical, as having copper ore alone wasn’t nearly as useful as having copper and access to tin. Iron, however, is found basically everywhere, as were trees.
Such forces don’t guarantee outcomes, but they have massive influence.
Socialism and communism are state ownership. Fascism tends toward private ownership and state control. This is actually easier and better for the state. It gets all the benefit and none of the responsibility and can throw business leaders under the bus.
All real world countries have some of this, but in fascism it’s really overt and dialed up and for the private sector participation is not optional. If you don’t toe the line you are ruined or worse. If you do play along you can get very very rich, but only if you remember who is in charge.
“Public private partnership” style ventures are kind of fascism lite, and they always worried me for that reason. It’s not an open bid but a more explicit relationship. If you look back at Musk’s career in particular there are ominous signs of where this was going.
The private-industry side of fascist corporatism is very similar to all kinds of systematic state-industry cronyism, particularly in other authoritarian systems that aren't precisely fascist (and named systems of government are just idealized points on the multidimensional continuum on which actual governments are distributed, anyway). What distinguishes fascism in particular is the combination of its form of corporatism with xenophobia, militaristic nationalism, etc., not the form of corporatism alone.
I think it is associated with fascism, just from the other party.
This is a pretty common fascist practice, used all over Europe and in many left-leaning countries: governments use regulation to make doing business at large scale impossible, then give the largest players exemptions, subsidies, and so on. Governments gain enormous leverage to ensure corporate loyalty, silence dissenters, and combat opposition, while the biggest players secure their place at the top and gain protection from competitors.
So the plan was to push regulations and then dominate the competitors via exemptions from those regulations. But the fascists lose the election, regulations threaten to start working in a non-discriminatory manner, and that simply hinders business.
That's in progress. It's called the MAGA Parallel Economy.[1]
Donald Trump, Jr. is in charge. Vivek Ramaswamy and Peter Thiel are involved.
Azoria ETF and 1789 Capital are funds designed to fund MAGA-friendly companies.
But this may be a sideshow. The main show is US CEOs sucking up to Trump, as happened at the inauguration. That parallels something Putin did in 2020.
Putin called in the top two dozen oligarchs and told them "Stay out of politics and your wealth won’t be touched." "Loyalty is what Putin values above all else." Three of the oligarchs didn't do that. Berezovsky was forced out of Russia. Gusinsky was arrested, and later fled the country. Khodorkovsky, regarded as Russia’s richest man at the time (Yukos Oil), was arrested in 2003 and spent ten years in jail. He got out in 2013 and left for the UK. Interestingly, he was seen at Trump's inauguration.
Why are these idiots trying to ape Russia, a dumpster fire, to make America great again?
If there’s anyone to copy it’s China in industry and maybe elements of Western Europe and Japan in some civic areas.
Russia is worse on every metric, even the ones conservatives claim to care about: lower birth rate, high divorce rate, much higher abortion rate, higher domestic violence rate, more drug use, more alcoholism, and much less church attendance.
Because they aren’t interested in “making America great again”, that’s the marketing line used to sell it to American voters. They are solely interested in looting the nation for personal gain.
> That parallels something Putin did in 2020. Putin called in the top two dozen oligarchs, and told them "Stay out of politics and your wealth won’t be touched."
It does have an effect; it is just a slow and grinding process. And people have screwy senses of proportion - like old mate mentioning insider trading. Of all the corruption in the US Congress insider trading is just not an issue. They've wasted trillions of dollars on pointless wars and there has never been a public accounting of what the real reasoning was. That sort of process corruption is a much bigger problem.
A great example - people forget what life was like pre-Snowden. The authoritarians were out in locked ranks pretending that the US spies were tolerable - it made any sort of organisation to resist impossible. Then one day the parameters of the debate get changed and suddenly everyone is forced to agree that encryption everywhere is the only approach that makes sense.
How is it any more accessible now than it was before? Don't you have to fact-check everything it says anyway, effectively doing the research you'd do without it?
I'm not saying LLMs are useless, but I do not understand your use case.
I worry I'm just trying too hard to make it make sense, and this is a TimeCube [0] situation.
The most charitable paraphrase I can come up with is: "Bad people can't use LLMs to hide facts, because hiding facts means removing source materials. Math doesn't matter for politics, which is mainly propaganda."
However even that just creates contradictions:
1. If math and logic are not important for uncovering wrongdoing, why was "tabulation" cited as an important feature in the first post?
2. If propaganda dominates other factors, why would the (continued) existence of the Internet Archive be meaningful? People will simply be given an explanation (or veneer of settled agreement) so that they never bother looking for source-material. (Or in the case of IA, copies of source material.)
OMG Thank you - hilarious. TimeCube is a legend...
---
I am saying that AI can be used very beneficially to do a calculated dissection of the Truth of our Political structure as a Nation and how it truly impacts an Individual/Unit (person, family) -- and do so where we can get discernible metrics and utilize AIs understanding of the vast matrices of such inputs to provide meaningful outputs. Simple.
EDIT @MegaButts;
>>Why is this better than AI
People tend to think of AI in two disjointed categories; [AI THAT KNOWS EVERYTHING] v [AI THAT CAN EASILY SIFT THROUGH VAST EVERYTHING DATA GIVEN TO IT AND BE COMMANDED TO OUTPUT FINDINGS THAT A HUMAN COULDN'T DO ALONE]
---
Which do you think I refer to?
AI is transformative (pun intended) -- in that it allows very complex questions to be asked of our very complex civilization in a simple way, in the EveryMan's hands...
Why is AI better for this than a human? We already know AI is fundamentally biased by its training data in a way where it's actually impossible to know how/why it's biased. We also know AI makes things up all the time.
If you don't understand the benefit of an AI augmenting the speed and depth of ingestion of domain context into a human mind... then go play with chalk. I, as a smart human, operate on lots of data... and AI and such has allowed me to consume it.
The most important medicines in the world are MEMORY retention...
If you'd like a conspiracy: eat too much aluminum to give you Alzheimer's ASAP so your generation forgets... (based though. Hope you understand what I am saying.)
Can anyone say which of the LLM companies is the least "shady"?
If I want to use an LLM to augment my work, and don't have a massively powerful local machine to run local models, what are the best options?
Obviously I saw the news about OpenAI's head of research openly supporting war crimes, but I don't feel confident about what's up with the other companies.
E.g. I'm very outspoken about my preference for open LLM practices like those executed by Meta and DeepSeek. I'm very aware of the regulatory capture and ladder-pulling tactics of the "AI safety" lobby.
However. In my own operations I do still rely on OpenAI because it works better than what I tried so far for my use case.
That said, when I can find an open model based SaaS operator that serves my needs as well without major change investment, I will switch.
I'm not talking about me developing the applications, but about using LLM services inside the products in operation.
For my "vibe coding" I've been using OpenAI, Grok and Deepseek if using small method generation, documentation shortcuts, library discovery and debugging counts as such.
You need a big "/s" after this. Or maybe just not post it at all, because it's just value-less snark and not a substantial comment on how hypocritical and harmful OpenAI is (which they certainly are).
No moat means Joe Anybody can compete with them. You just need billions in capital, a zillion GPUs, and thousands of hyper-skilled employees. You need to somehow get the attention of tens of millions of consumers (and then pull them away from the competition, ha).
Sure.
The same premise was endlessly floated about eg Uber and Google having no moats (Google was going to be killed by open search, Baidu, magic, whatever). These things are said by people that don't understand the comically vast cost of big infrastructure, branding, consumer lock-in (or consumer behavior in general), market momentum, the difficulty of raising enormous sums of capital, and so on.
Oh wait, the skeptics say: what about DeepSeek? To scale and support that, you're going to need what I described. What's the plan for supporting 100 million subscribers globally with a beast of an LLM that wants all the resources you can muster? Yeah, that's what I thought. Oh, but wait, everyone is going to run a datacenter out of their home and operate their own local LLM? Uhh, nope. It's overwhelmingly staying in the cloud, and it's going to cost far over a trillion dollars to power it globally over the next 20 years.
OpenAI has the same kind of moat that Google has, although their brand/reach/size obviously isn't on par at this point.
Microsoft 365 Copilot is not taking off. Numbers are average at best. Most companies now pay $20/user/month extra, and while the sentiment is that it likely is somehow worth it, nobody claims it does better than break even. Many users are deeply disappointed by the overpromising in PowerPoint and Excel. Sure, it's quite useful in Outlook, and the assistant is great for finding files in scattered SharePoint sites, but that's the limit of its value for me.
OpenAI's copilot, not Microsoft Copilot, actually looks like the stronger product, and they're going full force after the enterprise market as we speak. We're setting a demo in motion with them next month to give it a go.
We'll have to wait for the first one to crack PowerPoint; that'll be the gamechanger.
LLM usage is still gaining traction. OpenAI may not be on top anymore, but they still have useful services, and they aren’t going under anytime soon.
And time spent dealing with laws and regulations may decrease efficiency, leading to increased power consumption, resulting in greater water usage in datacenters for cooling and more greenhouse gas emissions.
Controlling demand for services is something that could stop this, but it’s technological progress, which could enable solutions for global warming, hunger, and disease.
It’s an out-of-control locomotive. Prayer is the answer I’d think of.
If they’re not making money[1], and competitors are, or competitors are able to run at a negative for longer, then things could definitely wrap up for them quickly. To most consumers, LLMs will be a feature (of Google/Bing/X/Meta/OS), not a product itself.
Don't worry; they'll have plenty of time to regret that.
There's a reason they're sweating the data issue. As much as it sucks to say it, Google/Bing/Meta/etc. all have a shitton of proprietary human-generated data they can work with, train on, fine tune with, etc. OpenAI _needs_ more human generated data desperately to keep going.
I remember for years people on HN said Uber would never work as a profitable business because it spent a lot of VC money early on without having enough revenue to cover it. It's been around for 16 years now despite not running in the black until 2023.
Waymo has ~1,000 cars. Uber has 8 million drivers. Worst case, Uber will be acquired by, merge with, or make a deal with one of the many AI driving startups.
I predict Waymo will have its own struggles with profitability. Last I heard, the LIDAR kit they put on cars costs more than the car. So they'll have to mass produce and maintain some fancy electronics on a million-plus cars.
>And time spent dealing with laws and regulations may decrease efficiency, leading to increased power consumption, resulting in greater water usage in datacenters for cooling and more greenhouse gas emissions.
They don't care about that if they get a regulatory moat around them.
There’s only so much of that you can do without it becoming a problem you have to deal with. There is a limited supply of water in any area of the earth.
It's a common tactic in new fields. Fusion, AI, you name it are all actively lobbying to get new regulation because they are "different", and the individual companies want to ensure that it's them that sets the tone.
Exactly. I'm reminded of Gavin Belson saying something along the lines of "I don't want to live in a world where someone makes it a better place to live than we do" in Silicon Valley.
That's exactly what has been happening:
Ask HN: Why is OpenAI pushing for regulation so much - 2023
https://news.ycombinator.com/item?id=36045397