This is a fundamental problem and one reason we're mired in this culture war. Social friction is caused by jostling based on group membership, and there's no common values-based scaffolding we can use to collaborate in building a better way.
I used to scoff when told to say the Pledge of Allegiance as a young person. Now, its closing words, "liberty and justice for all," sound quite aspirational.
I used mine to buy MacBook Airs with 0% interest just fine. For the iPhone, the fine print says you (now) have to sign up with one of their pre-approved carriers. If you use another - Mint or US Mobile or whatever - you're out of luck.
The argument for not using electric sharpeners is that they (1) substantially cut down the lifetime of your knife and (2) do a mediocre job of sharpening.
Mechanically, it's just highly abrasive motorized discs spinning at preset angles. So rather than getting a good edge by manually taking a few microns of material off, you get an OK edge by taking 0.2mm off at a time. (If 0.2mm doesn't sound like a lot, think about how many mm wide your knife is.)
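To put those removal rates in perspective, here is a rough back-of-the-envelope sketch. The 0.2mm and "a few microns" figures come from the comment above; the blade width and the total loss budget are my own illustrative assumptions, not measurements.

```python
# Compare how many sharpening sessions a knife survives under
# electric vs. manual sharpening, given the per-pass material
# removal figures from the comment.

blade_width_mm = 45.0       # assumed width of a typical chef's knife
usable_loss_mm = 10.0       # assumed loss before the blade geometry is ruined

electric_per_pass_mm = 0.2    # per the comment
manual_per_pass_mm = 0.005    # "a few microns" taken as ~5 um

sessions_electric = usable_loss_mm / electric_per_pass_mm
sessions_manual = usable_loss_mm / manual_per_pass_mm

print(f"Electric: ~{sessions_electric:.0f} sessions")  # 50
print(f"Manual:   ~{sessions_manual:.0f} sessions")    # 2000
```

Under these (assumed) numbers, manual sharpening buys roughly 40x more sessions from the same blade.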
---
I'm personally 50-50 on this advice: most people don't sharpen their knives at all, and I think people are better off getting 10 OK years out of a knife than 50 terrible years out of it.
I still sharpen my knives on a whetstone, but given the general cost trajectory of most manufactured items, I've decided that I'm okay if I wear out my knives. Buying a new chef's knife in 10 years is basically free on a per-day-of-use basis.
(I say that, but I'm still using knives that mostly range from 25 to 50 years old, though some weren't sharpened enough when they belonged to our parents and grandparents.)
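The "basically free on a per-day-of-use basis" claim above works out to pennies. A minimal sketch, assuming a $100 knife used daily for 10 years (both figures are my own, not from the comment):

```python
# Back-of-the-envelope per-day cost of wearing out a knife.
price_usd = 100.0        # assumed price of a decent chef's knife
years_of_use = 10
days = years_of_use * 365

cost_per_day = price_usd / days
print(f"~${cost_per_day:.3f} per day of use")  # roughly 3 cents/day
```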
I landed on using a diamond stone with 300 grit and 1000 grit. Unlike whetstones, they never need to be flattened. I just use one of those cheap plastic angle guides; after a bit of practice you learn to hold the angle well enough. Finishing with a leather strop and some polishing compound, I can keep my knives shaving-sharp with only a few minutes' effort before I cook.
OK, but Gmail, Google Maps, Google Docs, Google Search, etc. are ubiquitous. "Google" has even become a verb. Google might take a shotgun approach, but it certainly does create widely used products.
Antitrust doesn’t have to involve force; it’s about monopolistic behavior.
Google has spent over a decade advertising Chrome on all their properties and has an unlimited budget and active desire to keep Chrome competitive. Mozilla famously needs Google’s sponsorship to stay solvent. Apple maintains Safari so there are no holes in its ecosystem.
Stop being silly defending trillion-dollar companies that are actively making the internet worse; it’s not productive or funny.
Yesterday I had a semi-coherent idea for an essay. I told it to an LLM and asked for a list of authors and writings where similar thoughts have been expressed - and it provided a fantastic bibliography. To me, this is extremely fun. And reading similar works to help articulate an idea is absolutely part of writing.
"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.
As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
Sadly all the AI is owned by companies that want to do all your art and writing so that they can keep you as a slave doing their laundry and dishes. Maybe we'll eventually see powerful LLMs running locally so that you don't have to beg some cloud service for permission to use it in the ways you want, but at this point most people will be priced out of the hardware they'd need to run it anyway.
However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you have who are primarily interested in further enriching and empowering themselves and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.
As a former artist, I can tell you that you will never have good or sufficient ideas for your art or writing if you don’t do your laundry and dishes.
A good proxy for understanding this reality is that wealthy people who pay others to do all of these things for them have almost uniformly terrible ideas. This is even true for artists themselves. Have you ever noticed that albums tend to get worse the more successful the musicians become?
It’s mundanity and tedium that force your mind to reach for more creative things, and when you subtract them completely from your life, you’re generally left with self-indulgence instead of hunger.
Only if you are already wealthy or fine with finding a new job.
If I were still employed, I would also not want my employer to tolerate peers of mine rejecting the use of agents in their work out of personal preference. If colleagues were allowed to produce less work for equal compensation, I would want to be allowed to take compensated time off by getting my own work done in faster ways - but that never flies in salaried positions, where getting work done faster is greeted with more work to do sooner. So it would be demoralizing to work alongside, and be required to collaborate with, folks who are allowed to take the slow and scenic route if it pleases them.
In other words, expect your peers to lobby against your right to deny agent use, as much as your employer.
If what you really want is more autonomy and ownership over your work, rejecting tool modernity won't get you that. It requires organizing. We learned this lesson already from how the Luddite movement and Jacobin reaction played out.
You’re assuming implicitly that the tool use in question always results in greater productivity. That’s not true across the board for coding agents. Let me put this another way: 99% of the time, the bottleneck is not writing code.
Why limit this to AI? There have been lots of programming tools which have not been universally adopted, despite offering productivity gains.
For example, it seems reasonable that using a good programming editor like Emacs or vi would offer a 2x (or more) productivity boost over using Notepad or Nano. Why hasn't Nano been banned, forbidden from professional use?
Maybe, but probably not. For me, an early goal of writing is to get my thoughts in order. A later goal is to discuss the writing with people, which can only happen in a high-quality way if my thoughts are in order. Achieving goals is fun.
Whether the LLM could do a better job than me at writing the essay is a separate question...I suspect it probably could. But it wouldn't be as fun.
I heard someone say that LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert. A lot of people don't have access to mental health care, and will ask their chatbot to act like a psychologist.
>[...] LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert.
This mostly makes sense.
The problem is that people will take what you've said to mean "If I have no access to a therapist, at least I can access an LLM", with a default assumption that something is better than nothing. But this quickly breaks down when the sycophantic LLM encourages you to commit suicide, or reinforces your emerging psychosis, etc. Speaking to nobody is better than speaking to something that is actively harmful.
All very true. This is why I think the concern about harm reduction and alignment is very important, despite people on HN commonly scoffing about LLM "safety".
Is that not the goal of the project we are commenting under? To create an evaluation framework for LLMs so they aren't encouraging suicide, reinforcing psychosis, or being actively harmful.
Sure, yeah. I'm responding to the comment that I directly replied to, though.
I've heard people say the same thing ("LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert"), and I also know that some people assume that LLMs are, by default, better than nothing. Hence my comment.
"We present Dreamer 4, a scalable agent that learns to solve control tasks by imagination training inside of a fast and accurate world model. ... By training inside of its world model, Dreamer 4 is the first agent to obtain diamonds in Minecraft purely from offline data, aligning it with applications such as robotics where online interaction is often impractical."
In other words, it learns by watching, e.g. by having more data of a certain type.