Hacker News: sanswork's comments

Lightning isn't even a good solution for most diehard bitcoin users. It's a failed project.

It would take 27 years to onboard every internet user to the Lightning Network unless you start adding layer-3 aggregators, and at that point you lose all the benefit of being on chain at all.

It would take almost 2 years just to onboard every American, assuming zero other bitcoin transactions during that time. Then you need to add the fees for the on- and off-ramps to the individual transaction fees to get the real cost per transaction, noting that these would rise quite a bit as competition between Lightning and non-Lightning uses of block space drove prices higher.
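The claim above can be sketched as a back-of-envelope calculation. The figures here are my own rough assumptions, not the commenter's: a frequently cited ~7 transactions per second on-chain ceiling, one channel-open transaction per user, and no other network activity. They land in the same ballpark as the comment's numbers:

```python
# Rough sketch of the onboarding arithmetic (assumed figures, not exact).
TX_PER_SEC = 7                     # oft-cited rough ceiling for on-chain throughput
TX_PER_DAY = TX_PER_SEC * 86_400   # ~604,800 transactions per day

def years_to_onboard(users: int) -> float:
    """Years of completely full blocks needed for one channel-open tx per user."""
    return users / TX_PER_DAY / 365

print(f"Every American (~340M): {years_to_onboard(340_000_000):.1f} years")
print(f"Every internet user (~5.5B): {years_to_onboard(5_500_000_000):.1f} years")
```

With these assumptions the US alone takes on the order of one and a half years and the internet as a whole roughly a quarter century, even before any non-Lightning transactions compete for block space.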


The throughput is arbitrarily limited by bitcoin's current block size, which hasn't been increased since Satoshi's era.

Most cryptocurrencies have an adaptive block-size mechanism that allows blocks to grow to a reasonable size, which could facilitate such an onboarding of users. So it isn't a technical problem; it's a question of bitcoin's current leadership, which is controlled by companies like Blockstream.


People have been debating the block size for a very long time now, and there doesn't seem to be any widespread desire to change it. So while the ability to increase it exists, changing anything that fundamental about bitcoin seems to be a non-starter, and as long as that's true, Lightning is pointless as a solution for the masses.

Even if you increase the block size 100x, though, you're still not improving the numbers much, since my very generous numbers ignore activity outside of Lightning and assume a single on-chain transaction for every user and a perfect network.


Micropayments work for games because there is some specific outcome I know I want and know paying this money will move me closer to that goal in the immediate future.

That isn't the case for news content. In news it's "reading this might be interesting" or being generous "knowing this might improve my life at some point".

That delay in outcome will kill micropayments because it again goes from a very easy calculation in your mind to "too hard" like Clay talked about.


Thank you for responding to the actual article rather than (like many others here) going straight to pre-cooked talking points on micropayments.

I also don't have any proof that the article will be any good. When buying a whole newspaper for the day, if some of the articles are suboptimal, I still get my money's worth from the reliably good stuff. But if I go look at a single article, am I getting something good, or is it regurgitated Reuters I've read before, plain AI, or completely wrong? The barrier is too high if I don't have a lot of faith in the source, and if I do, I should just subscribe.

Sure, but if a source routinely clickbaits you/has a worse than expected article, you learn to avoid it (or even add a "don't show me this source" rule).

As long as the sources last long enough for reputation to build naturally (so, not the Amazon LLC model), it should all come out in the wash pretty reasonably.


But if you're only paying a penny the risk is tolerable.

I spoke to a German news outlet a while back, and that was my contention too: I don't know if the article will be any good.

My suggestion was as follows:

Start the article by providing the dry facts - the meat of the article - in a super-condensed format, so people get it as quickly as possible. Then ask for money for the rest - the analysis, the "whodunit", the "how we got there", the background, the bios, and everything else. And then tell people: "If this interests you, you can pay $0.xx to read the rest of our article, including: (insert the bullet points I just mentioned)"

The first section acts as proof that the person writing the article did their research; the rest is for those who are genuinely interested in the topic. It prevents disappointment and tells you clearly and transparently what you're getting for your cents.

I don't think the company did it in the end. They're struggling.


I think the site is right about the "coins" method. If I had an automatic $10/month subscription to refill my news wallet, and I could pay $0.05 out of it to read an article, I'd do it, especially if it was a use-it-or-lose-it system.

In fact, if they charged $0.20 per story if you pay directly, or $0.05 per story if you pay out of your auto-reload wallet, I think that could incentivize users to subscribe.

Of course, it would have to be shared across every newspaper, and publishers hate that. Apple News is the closest it's gotten - the app sucks, but you can share articles into it to remove the paywall and that works great.


Handle it this way: a user pays in each month on a Silver, Gold, or Platinum coin subscription - I'll set hypothetical prices at $15, $30, and $60. Over the course of a month, you look at articles without making buy decisions one way or another - you just have your "tab" and the article loads as-is.

Then, at the end of the month, mycrowpaymint.biz tallies how many articles you read, weighted by each article's relative cost multiplier, across the different news sites (say 15% Forbes, 30% NYT, 10% Utne Reader, 45% random YouTube videos), and remits the subscription revenue to each publisher based on the percentage used.

For flexibility's sake, maybe a publisher was hoping to get $17 of coin-based, pay-as-you-go revenue off a $15 subscription at 80% utilization, but them's the breaks - in other months they'll get more revenue than they would expect because a customer engaged with less content overall.

Obviously, tier limits would exist for cases where someone tries to look at a thousand different articles on a Silver plan, and perhaps the Financial Times would only allow Platinum subscribers into the scheme. But the reduction in friction, the ease of subscription management for the customer, and the equitable financial allocation would (I believe) make such a scheme viable.
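The tally-and-remit step described above can be sketched in a few lines. The publisher names and read counts are the hypothetical ones from the comment, and for simplicity this ignores per-article cost multipliers (treating every read as weight 1):

```python
# Sketch: split one user's monthly fee across publishers pro rata by reads.
def remit(subscription_fee: float, reads: dict[str, int]) -> dict[str, float]:
    """Each publisher gets the fee times its share of the user's total reads."""
    total = sum(reads.values())
    if total == 0:
        return {pub: 0.0 for pub in reads}
    return {pub: subscription_fee * n / total for pub, n in reads.items()}

# A Silver-tier ($15) user's month, mirroring the example percentages:
month = {"forbes": 15, "nyt": 30, "utne": 10, "youtube": 45}
print(remit(15.00, month))
```

A real version would multiply each read by the article's cost multiplier before summing, but the pro-rata allocation is the same shape.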

People already can't be bothered clicking paywall-busting links to view articles because the friction is too high. Having to decide whether something is potentially worth 20 cents seems easy when you imagine making that decision once, for something you're obviously interested in. In reality it becomes multiple decisions a day for things you're maybe only slightly curious about; the fatigue adds up very quickly, and I doubt anyone would do a second reload (if they do the first load at all).

What’s a paywall busting link?

archive.whatever links

"Community" might be the hook, not the content itself. That's the way it works right now even in the pure editorial garbage piles. They might not always pay for the content directly, but they get revenue through high-margin merchandise, advertising, and scams. But you might imagine positioning as "I'm a XYZ reader." Still feels weak, but that's all we've got. The internet killed content scarcity. The product is not the content. The product is the way reading / watching / paying for it makes you feel. It is church. It is a tithing. A community subscription service.

Maybe initially you wouldn’t know if an article would be good. But over time you could probably make reasonable guesses from the author/headline/title combination.

Great now I need to pay attention to the authors and make a mental mapping of who the good ones are to decide if the friction is worth it. That in itself adds more friction which in turn makes the barrier higher.

I mean that's just how reading works.

It isn't with news though. I am a bit of a news junkie and have actually subscribed to multiple news sources over the past year and I can't name a single journalist from any of them and I am almost certainly average in that way.

What about movie rentals on various platforms like YouTube? They are more in the domain of "milli"-payments, but they share the feature that you don't know whether you'll like the movie until after you've watched part of it.

With a movie rental I'm paying $5-30 for a 1-2 hour experience where I have some idea going in of what I'm getting thanks to trailers and I'm making that decision maybe once a fortnight if that.

The scale of the decisions doesn't align.


That is a correct evaluation. I've worked in marketing for a long while and your instincts are spot on.

In media - music, streaming, articles, etc. - the only thing that gets people to fork over money regularly is being a fan of some sort: the patronage system. That means they have to like you and come back to you so often that they feel a connection - and then they'll want to support you out of the goodness of their heart. This is the strategy used by streamers, by buskers on the street, and by content creators of all sorts.

The main issue with applying this to articles is that most news is discovered by way of google news, or a similar hub site, which sometimes will present news from you - but it won't happen often enough to create such a connection. One may ask if the frequency of this happening is deliberately that low, compared to social algorithms on other products, where return visits are encouraged - if you like a tweet, you get more tweets from that same person; if you like a short, you get more youtube shorts from that channel; and so on.

Ultimately, for news you have to be so large that people come to you on their own, without being funneled through Google News. This works for huge news sites - The Register, NYT, Golem, etc. There is no way for a small site to break through like that. I think the last time I saw this pulled off successfully - a website starting from zero and building a cult following - was the Drudge Report.


> "reading this might be interesting"

I find it hard to take this objection seriously, since almost everything that isn't a physical commodity has some degree of "I don't know if I'm satisfied with this yet". Books and movies clearly do. But we expect to take a risk and occasionally pay for them, and it feels ordinary to do so -- so why not here?

I don't object at all to people not liking micropayments -- I don't like them either. But the reason I don't like them is because I'm accustomed to getting good quality content for free, and no other reason.


With books, movies, tv shows, music almost everyone is discovering based on recommendations or curation. Very few people are consuming much of that type of content with no outside input on its quality or interest. News is almost always a blind link with just a headline to work from.

I tracked intake, calories burned (from an Apple Watch, with activity tracking turned on for any specific exercise), and weight for 12 weeks as part of 75 Hard, and found my daily weight decreases were exactly in line with what you'd expect given the estimated deficit on 95% of days, and 100% of the time at the weekly level.

I don't track consistently anymore, only when I'm working towards a goal, but whenever I have more than two weeks of data these days it seems pretty spot on - to the point that I can calculate the tracked calories needed to hit a target rate of weight change pretty consistently.
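The deficit-to-weight-change arithmetic implied above can be sketched with the common ~3,500 kcal-per-pound rule of thumb. That conversion factor is my assumption; the commenter doesn't say which figure their tracking used:

```python
# Sketch: expected weight change from a steady daily energy balance,
# using the rough ~3,500 kcal per pound of fat heuristic.
KCAL_PER_LB = 3500

def expected_weekly_change_lb(intake_kcal: float, burned_kcal: float) -> float:
    """Expected weekly weight change in pounds; negative means loss."""
    daily_balance = intake_kcal - burned_kcal   # negative = deficit
    return daily_balance * 7 / KCAL_PER_LB

# e.g. eating 2,000 kcal/day while burning 2,500 kcal/day:
print(expected_weekly_change_lb(2000, 2500))  # -1.0 (about a pound a week)
```

Checking tracked weight against this expectation weekly, rather than daily, smooths out water-weight noise, which matches the commenter's observation that the weekly numbers lined up best.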


I'd pay up to $1,000 pretty easily just based on the time it saves me personally on a lot of grindy work, which frees me up for higher-value stuff.

It's not 10x by any means, but it doesn't need to be at most dev salaries to pay for itself. A 1.5x improvement alone is probably enough, for most developers above junior level, for a company to justify $1,000/month.

I suppose if your area of responsibility wasn't very broad the value would decrease pretty quickly so maybe less value for people at very large companies?


I can see $200 but $1,000 per month seems crazy to me.

Using Claude Code for one year is worth the same as a used sedan (i.e., ~$12,000) to you?

You could be investing that money!


Yes, easily. Paying for Claude would be investing that money. Assuming a 10% return, which would be great, I'd make an extra $1,200 a year by investing it instead. I'm pretty sure that over the course of a year of not having to spend time doing low-value or repetitive work, I can increase productivity enough to more than cover the ~$13k difference. Developer work scales really well, so removing a bunch of the low end and freeing up time for the more difficult problems is going to return a lot of value.
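The arithmetic in the reply above is simple enough to spell out. This just restates the commenter's own numbers ($1,000/month subscription, 10% return as a generous assumption):

```python
# Sketch of the subscription-vs-investing comparison.
subscription = 1_000 * 12                # $12,000/year on the tool
investment_return = subscription * 0.10  # ~$1,200 if invested at 10% instead
break_even = subscription + investment_return
print(break_even)  # 13200.0
```

So the tool has to unlock a bit over $13k/year of extra productivity to beat investing the money, which is the "difference" the commenter is referring to.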

For most of its history the main locked feature was just a premium web interface (there were a few others, but that was the main draw), which is included in the free tier now, and I think the locked features are now primarily the more specialised job-ordering engines - things that, if you need the free tier, you almost certainly don't need. Oban has been very good about deciding which features to lock away.

(I've paid for it for years despite not needing any of the pro features)


As someone who uses vim full time, all that happened is that people ported the best features of IDEs over to vim/emacs as plugins. So those people were right - it's just that the features flowed the other way.

Pretty sure you can count the number of professional programmers using vanilla vim/neovim on one hand.


People also started using vi edit mode inside IDEs. I've personally encountered that much more often.


Back in the early 2000s I worked for Cap Gemini in Birmingham, England, which had a part of the office that was some sort of partnership with IBM GS (I think IBM did the hardware and Cap got the services contracts). They also had a big blinking-lights server setup in the middle of the office for clients to see. As a teenage geek in his first tech job, I used to love going to peek at it, even though I did tape rotation on the real servers in the basement most days.


Yes, plenty of users here compulsively posting and compulsively checking for responses/upvotes/etc.


> I will not use Jr developers for engineering work and never will, because doing the work of a Jr.....

You don't have to outsource your thinking to find value in AI tools; you just have to find the right tasks for them - the same as you would with any developer junior to you.

I'm not going to use AI to engineer some new complex feature of my system but you can bet I'm going to use it to help with refactoring or test writing or a second opinion on possible problems with a module.

> unlikely to have a future in this industry as they are so easily replaceable.

The reality is that you will be unlikely to compete with people who use these tools effectively. Same as the productivity difference between a developer with a good LSP and one without or a good IDE or a good search engine.

When I was a kid I had a text editor and a book and it worked. But now that better tools are around I'm certainly going to make use of them.


> The reality is that you will be unlikely to compete with people who use these tools effectively.

If you looked me or my work up, I think you would likely feel embarrassed by this statement. I have a number of world firsts under my belt that AI would have been unable to meaningfully help with.

It is also unlikely I would have ever developed the skill to do any of that other than by doing everything the hard way.


I just looked, and I'm not sure what I'm meant to be seeing that would cause me to feel embarrassed, but congrats on whatever it is. How much more could you have developed or achieved if you didn't limit yourself?

Do you do all your coding in ed or are you already using technology to offload brain power and memory requirements in your coding?


AI would have been near useless when I was creating https://stagex.tools https://codeberg.org/stagex/stagex, for instance.

Also I use VIM. Any FOSS tools with predictable deterministic behavior I can fully control are fine.


I don't know - just a quick glance at that repo and I feel like AI could have written your shell scripts, which took several tries from multiple people to get right, about as well as the humans did.

So you're OK with using tools to offload thinking and memory as long as they are FOSS?


Take this one for example https://codeberg.org/stagex/stagex/src/branch/main/src/compa...

It took some iteration and hands on testing to get that right across multiple operating systems. Also to pass shellcheck, etc.

Even if an LLM -could- do that sort of thing as well as my team and I can, we would lose a lot of the arcane knowledge required to debug things, and spot sneaky bugs, and do code review, if we did not always do this stuff by hand.

It is kind of like how writing things down helps commit them to memory. Typing to a lesser extent does the same.

Regardless, those scripts are like <1% of the repo and took a few hours to write by hand. The rest of the repo requires extensive knowledge of Linux internals, compiler internals, full-source bootstrapping, brand-new features in Docker and the OCI specs, etc.

Absolutely zero chance an LLM could have helped with bootstrapping a primitive C toolchain from 180 bytes of x86 machine code like this: https://codeberg.org/stagex/stagex/src/branch/main/packages/...

That took a lot of reasoning from humans to get right, in spite of the actual code being just a bunch of shell commands.

There are just no significant shortcuts for that stuff, and again if there were, taking them is likely to rob me of building enough cache in my brain to solve the edge cases.

Also yes, I only use FOSS tools with deterministic behavior I can modify, improve, and rely on to be there year after year, and thus any time spent mastering them is never wasted.


That x86 machine code link reminded me of an LLM project I did just last week - https://tools.simonwillison.net/sloccount

I decided to see if I could get an old Perl and C codebase running via WebAssembly in the browser, having Claude brute-force figuring out how to compile the various components to WASM. Details here: https://simonwillison.net/2025/Oct/22/sloccount-in-webassemb...

Here are notes it wrote for me on the compilation process it figured out: https://github.com/simonw/tools/blob/473e89edfebc27781b43443...

I'm not saying it could have created your exact example (I doubt that it could) but you may be under-estimating how promising it's getting for problems of that shape.


I do not doubt that LLMs might some day be able to generate something like my work in stagex, but it would only be because someone trained one on my work and that of other people that insist on solving new problems by hand.

Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for either with my privacy, my money, or both... and they could take it away at any time. I do not believe in or use centralized corpotech. Centralized power is always abused eventually. Also is that regurgitated code under an incompatible license? Who knows.

Also, again, I would rob myself of the experience and neural pathway growth and rote memory that come from doing things myself. I need to lift my own weights to build physical strength just as I need to solve my own puzzles to build patience and memory for obscure details that make me better at auditing the code of others and spotting security bugs other humans and machines miss.

I know when I can get away with LTO, and when I cannot, without causing issues with determinism, and how to track down over linking and under linking. Experience like that you only get by experimenting and compiling shit hundreds of times, and that is why stagex is the first Linux distro to ever hit 100% determinism.

Circling back, no, I am not worried about being unemployable because I do not use LLMs.

And hey, if I am totally wrong and LLMs can create perfectly secure projects better than I can in the future, and spot security bugs better than I can, and I am unemployable, then I will go be an artist or something, because there are always people out there that appreciate hard work done by humans by hand, because that is how I am wired.


> Even then, I would never use it, because it would be some proprietary model I have to pay some corpo for either with my privacy, my money, or both... and they could take it away at any time.

Have you been following the developments in open source / open weight models you can run on your own hardware?

They're getting pretty good now, especially the ones coming out of China. The GLM, Qwen and DeepSeek models out of China are all excellent. Mistral's open weight models (from France) are good too, as are the OpenAI gpt-oss models.

No privacy or money cost involved in running those.

I get your concern about learning more if you do everything yourself. All I can say there is that the rate and depth of technical topics I'm learning has been expanded by my LLM usage because I'm able to take on a much wider range of technical projects, all of which teach me new things.

You're not alone in this - there are many experienced developers who are choosing not to engage with this new family of technology. I've been thinking of it similar to veganism - there are plenty of rational reasons to embrace a vegan lifestyle and I respect people who do it but I've made different choices myself.


Not only have I been following a lot of the open models - you may find it surprising that I have extensively tested some of them. I've coerced them into generating deterministic responses across different machines as a way to prove responses are not tampered with, and I've developed ways to run them in remotely attestable secure enclaves so that people who use them for sensitive applications can have provable privacy with end-to-end encryption.

I will admit that I find deploying and hacking on the tech itself super interesting. Hell, I founded a machine learning company and got a paper published with AAAI for my cheap bulk training-data acquisition techniques back in 2012, before most people cared about this stuff.

I even think there are a ton of great and exciting use cases for this tech. Like identifying cancer in large photographic datasets, etc. I have a lot of hope about medical applications in particular.

All that said, I just don't think LLMs are remotely competitive or useful at the type of threat modeling, security engineering, and auditing work I do on average. They are the wrong tool for my job, which requires a level of actual reasoning that LLMs are nowhere near capable of right now, or likely to be any time soon. Maybe they could help with a script here and there, which might save me a few hours a month, but for 95%+ of it they would just waste my time regurgitating the same industry-standard bad advice and approaches I am trying to change, while making me duller at writing code by hand when I need to.

As contrast though, I would not fire someone for using LLMs for learning or inspiration as long as they consistently prove they fully understand and can explain every line of every PR they submit, can pair program or usefully contribute to engineering discussions without LLMs, and maintain a competitive level of quality with the rest of the team. Not everyone has to make the same tool choices I do, as long as they can hold their own in a team with me and are not dumb enough to regurgitate AI slop they don't understand.

It is amusing you use vegans as an example. I am not a vegan, but I often describe myself as something of a digital vegan who is very, very selective about what tools I use and what I expect from them - which is also why I don't use a smartphone or GPS.


I respect this, you've clearly done your due diligence here.


"This tool uses the WebAssembly build of Perl running actual SLOCCount algorithms from licquia/sloccount."

The best form of (AI) plagiarism is to simply wrap the original tool in your own facade and pretend like you built anything of value.

Is this intended to be a bad joke?


You're criticizing me for directly crediting the original here. That's the correct and ethical thing to do!

Honestly, I've seen the occasional bad faith argument from people with a passionate dislike of AI tooling but this one is pretty extreme even by those standards.

I hope you don't ever use open source libraries in your own work.


Actually, my criticism was the result of my own misunderstanding of what you were claiming. My apologies for that, although I'm still unlikely to use these tools based upon the example when my own personal counterexamples have shown me that it's often as much or more work to get there via prompting than it is to simply do the thinking myself. Have a good day.


Thanks - this was a misunderstanding, apology accepted!


For whatever it's worth, this is exactly the kind of awful that I never want in any code base that I'm working on:

<https://github.com/simonw/tools/blob/473e89edfebc27781b43443...>

At least run a pretty-printer on the code so that it can be reviewed by anything but a robot.

Part of developing a good software system is about exercising taste in vendored-in libraries and, especially, the structure around them.

P.S. I've gone to look at other chunks of Javascript and see that I was unlucky to grab this steaming pile first.


That was vendored in from this project: https://webperl.zero-g.net/ - it's one of the files distributed in the zip file listed here: https://webperl.zero-g.net/using.html#basic-usage

Originally I tried to get it working loading code directly but as far as I can tell there's no stable CDN build of that, so I had to vendor it instead.


FFS stop it with the “it’s just the same as a human” BS. It’s not just like working with a junior engineer! Please spend 60 seconds genuinely reflecting on that argument before letting it escape like drool from the lips of your writing fingers.

We work with junior engineers because we are investing in them. We will get a return on that investment. We also work with other humans because they are accountable for their actions. AI does not learn and grow anything like the satisfying way that our fellow humans do, and it cannot be held responsible for its actions.

As the OP said, AI is not on the team.

You have ignored the OP’s point, which is not that AI is a useless tool, but that merely being an AI jockey has no future. Of course we must learn to use tools effectively. No one is arguing with that.

You fanboys drive me nuts.


I'm not saying it's the same as working with a jr developer. I'm saying that not using something less skilled than yourself for less-skilled tasks is stupid and self-defeating.

Yes, when someone builds a straw man, you ignore it. There is a huge canyon between "never use AI in engineering" (the OP's proposal) and "only use AI for all your engineering" (the OP's complaint).


There's a very good argument for not using tools vended by folks who habitually lie as much as the AI vendors (and their tools). I don't want their fingers anywhere in my engineering org, quite honestly. Given their ethics around intellectual property in general, I must assume that my company's IP is being stolen every time a junior engineer lazily uses one of these tools.


I'm sure you never use any Google or Microsoft products at all, such as Google Search, Maps or Android, and none of the companies and engineering teams you've ever worked with have used such products, given how habitually they lie (and the fact that they're two major AI vendors).

If so, congratulations for being old or belonging to the 0.01%. Good luck finding a first job where that holds in 2025.


Not at all true, though. You see, I expect the Jr will grow and learn from those off-loaded tasks in such a way that they will eventually become another Sr in the atelier. That development of the society of engineers is precisely what I do not wish to ever outsource to some oligarch's rental fleet of bullshit machines.


But you'll happily serve the collective oligarchs by training their next generation of knights...


I'm far more fond of knights than kings. So, yes, I would much more happily train another wave of humans at my craft than salt the earth behind me.


You don't have to be ignorant of FOSS to disagree with the statement that closed source software is unethical.

