In fact, wind and solar often bid negative prices to sell power.
For those confused by this: a utility buys power by taking bids from power plants, then "filling its bucket" with the cheapest options, and the most expensive bid that still fits in the bucket sets the price everyone gets paid. So if you compete in a market where coal bids 8 cents/kWh and you have near-zero operational costs, you can bid a negative price to guarantee you're always in the bucket and still be compensated at 8 cents/kWh.
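The "fill the bucket" mechanism is just a uniform-price merit-order auction; a toy sketch (with made-up bids and demand figures):

```python
# Toy merit-order auction: cheapest bids fill demand; the marginal
# (most expensive accepted) bid sets the price everyone receives.
def clearing_price(bids, demand_mw):
    """bids: list of (price_cents_per_kwh, capacity_mw) tuples."""
    accepted = []
    for price, capacity in sorted(bids):
        if demand_mw <= 0:
            break
        accepted.append(price)
        demand_mw -= capacity
    return accepted[-1]  # marginal accepted bid sets the uniform price

# A solar farm bidding -1 c/kWh is always accepted, yet gets paid
# the 8 c/kWh set by the marginal coal plant.
bids = [(-1, 50), (3, 100), (8, 200)]
print(clearing_price(bids, demand_mw=300))  # 8
```

Every accepted plant, including the negative bidder, is paid that single clearing price, which is why bidding below cost can still be profitable.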
Mend is not a competitor; Renovate is the software and Mend is the company behind it.
It's a tool that automatically checks your repo for outdated dependencies and creates PRs when there are updates. It supports a wide range of package managers and other places dependencies may be declared.
Dependabot is another solution, which is maybe more "GitHub-native".
In short, Renovate (by Mend) is a dependency manager for software projects. It watches your repository for outdated libraries, packages, and frameworks and opens Pull Requests to update them.
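For a sense of how little setup it takes: a minimal Renovate config is a single `renovate.json` at the repo root extending one of Renovate's shipped presets (`config:recommended` is the current default preset name):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}
```

From there it scans manifests (package.json, go.mod, requirements.txt, etc.) and opens update PRs on its own schedule.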
Let your Support/QA/Sales engineers use AI to create features / fix bugs. They're closer to the customer than your developers are.
Even if what they deliver is complete AI slop and has to be rewritten, that pull request is a far better specification of what customers actually want than the ticket you used to get.
Too bad you fired all your QA/support engineers a few years ago.
It’s actually worse than slop, because they aren’t even capable of producing slop yet: they don’t understand the basics of version control or how to talk about code concepts, so they require additional training and onboarding before they can even start producing Claude slop…
10% capture seems highly unlikely. That level of capture is only possible for b2b high touch sales, aka "call-me" pricing.
For call-me pricing to work, you have to ensure that no public sticker price is a suitable alternative: either have no sticker price at all, make the sticker price so high that essentially nobody will buy at it, or gate a feature like OAuth that makes the public version infeasible for businesses.
And then you also have to maintain enough of a monopoly/oligopoly to sustain that level of pricing.
I don't think either of those two conditions will apply in the future.
AI providers now have a sticker price that provides basically all functionality, almost completely eliminating the opportunity for extremely high-margin b2b. They've decided a small slice of a large pie is bigger than a large slice of a smaller pie. I suspect that's true and will continue to be true in the future.
An oligopoly is difficult to sustain with more than 3 global players. Right now we seem to have 3 frontier models for coding whose makers can and will charge more than commodity prices. However, there are open-source non-frontier models that you can run for inference costs only, and even if those don't keep up, it seems likely there will be enough non-frontier models available that their pricing will also sit at the commodity level. Those cheaper models will put significant downward pressure on frontier pricing.
I don't think we have seen "all functionality" yet.
We have not seen iterative AI use for example.
The use case where you tell the model: "Solve this task. Then solve it again. Keep the better solution, then solve it again. On and on. Tomorrow, show me the best solution."
And also not the "Run a company on your own" use case.
Those might make people and companies use models full-time. The price of that will be way different from current subscription prices. The TCO of a single instance of a SOTA model is on the order of $100k per year.
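The "solve it again, keep the better solution" loop described above is essentially a best-of-N search; a minimal sketch, where `solve` and `score` are hypothetical stand-ins for a model call and an evaluation step (tests, a rubric, a judge model):

```python
import random

def solve(task):
    """Hypothetical stand-in for one model attempt at the task."""
    return f"solution-{random.randint(0, 9)}"

def score(solution):
    """Hypothetical stand-in for judging a solution's quality."""
    return int(solution.rsplit("-", 1)[1])

def best_of_n(task, n=100):
    best = solve(task)
    for _ in range(n - 1):  # "solve it again, keep the better solution"
        candidate = solve(task)
        if score(candidate) > score(best):
            best = candidate
    return best

print(best_of_n("fix the bug", n=50))
```

The economics point stands either way: this burns N times the inference of a single answer, which is part of why full-time iterative use would cost far more than today's subscriptions.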
I think the more realistic napkin math is a 10% GDP bump and 1% capture. You'll still find a lot of people who think we're going to get more than a 10% GDP bump from AI, but it'll definitely be fewer.
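For concreteness, here is what that napkin math works out to, assuming a roughly $100T world GDP (an illustrative figure, not from the thread) and reading "1% capture" as 1% of the bump:

```python
world_gdp = 100e12            # assumed ~$100T world GDP (illustrative)
gdp_bump = 0.10 * world_gdp   # 10% GDP bump from AI
capture = 0.01 * gdp_bump     # AI providers capture 1% of that bump
print(f"bump: ${gdp_bump / 1e12:.0f}T, captured: ${capture / 1e9:.0f}B")
```

That lands around $100B/year of captured revenue, which is large but nowhere near the figures a 10% capture rate would imply.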
Will AI increase the rate of GDP growth by 0.5% or so over 20 years?
I've started calling it "revenge of the QA/Support engineers", personally.
Our QA & Support engineers have now started creating MRs to fix customer issues, satisfy customer requests, and fix bugs.
They're AI sloppy and a bunch of work to fix up, but they're a way better description of the problem than the tickets they used to send.
So now instead of me creating a whole bunch of work for QA/Support engineers when I ship sub-optimal code to them, they're creating a bunch of work for me by shipping sub-optimal code to me.
It does quite well and definitely catches/fixes things I miss. But I still catch significant things it misses. And I am using AI to fix the things I catch.
Which is then more slop I have to review.
Our product is not SaaS, it's software installed on customer computers. Any bug that slips through is really expensive to fix. Careful review and code construction is worth the effort.
Why? Statins are among the most well-studied drugs in existence. Most people have no side effects, and the long-term benefits are incredibly straightforward, on par with blood pressure medication.
High blood pressure is often a side effect of being overweight. But it's only one side effect of many. Losing weight gets rid of all of them, not just one.
I don't imagine the difference is very significant on long drives. If the car is cold-soaked at -30°C, it uses about 10 kW for the first 3 km. Then everything is warmed up, and the ~25% difference is increased consumption, not decreased battery capacity.
As long as you have a heat pump harvesting the waste heat to keep the battery up to temp.
But it might be significant on short drives; 10 kW for the first 3 km is massive.
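A rough sense of scale, reading "10 kW for the first 3 km" as an extra 10 kW of heating load while cold, and assuming an urban average speed and mild-weather consumption figure (both assumptions, not from the thread):

```python
# Extra warm-up energy: 10 kW of heating over the first 3 km.
heat_power_kw = 10
warmup_km = 3
avg_speed_kmh = 20                     # assumed around-town average speed
warmup_kwh = heat_power_kw * warmup_km / avg_speed_kmh   # 1.5 kWh extra

# Compare against baseline driving energy for a short trip.
driving_kwh_per_km = 0.20              # assumed mild-weather consumption
trip_km = 8                            # a ~5-mile school run
baseline_kwh = driving_kwh_per_km * trip_km              # 1.6 kWh
print(f"warm-up adds {warmup_kwh / baseline_kwh:.0%} on a {trip_km} km trip")
```

Under those assumptions the warm-up nearly doubles the energy used on a short trip, which matches the around-town range collapse described below.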
Yeah, this heat up effect is massive for around-town use. We have had below freezing weather for two weeks, which is very unusual here in Annapolis. That’s had a huge impact on my wife’s use case, which involves a bunch of 5-10 mile trips to drop the kids off at school, go on a grocery run, pick the kids up, take the kids to math tutoring, etc. She ran out of charge the other day during drop-off b/c the “37 miles left” we had the night before was actually a lot less than that accounting for warming the battery up the next day.