tim-kt's comments (Hacker News)

Do you have any practical tips to reduce exposure to unwanted thoughts (besides the obvious ones, such as making sure people and things don't interrupt you while you work by turning off notifications and closing the door)?

It does for me too, especially the short sections with headings, the bold sentences in their own paragraphs, and formulations like "X isn't just... it's Y".


In other words, this website uses headings for sections, doesn't ramble, and has a single line of emphasis where you'd expect it. I wonder what style we'll have to adopt soon to avoid the LLM witch hunt: a live stream-of-consciousness rant with transcript and typos?


To me this kind of use of AI (generating the whole article) is equivalent to a low-effort post. I also personally don't like this kind of writing, regardless of whether or not an AI generated it.


"In other words" means paraphrasing, not simply changing the words to something completely different.


Imagine being a person like me who has always expressed himself like that. Using em dashes, too.

LLMs didn’t randomly invent their own unique style; they learned it from books. This is just how people write when they get slightly more literate than today's texting-era kids.

And these suspicions are in vain even if they happen to be right this one time. LLMs are champions of copying styles; there is no problem asking one to slap Gen Z slang all over a post and finish with the phrase “I literally can’t! <sad-smiley>”. “Detecting LLMs” doesn’t get you ahead of LLMs, it only gets you ahead of the person using them. Why not appreciate an example of concise and on-point self-expression and focus on the usefulness of the content?


My comment was not really meant as criticism (of AI) but more as agreement: I am also confident that the post is AI-generated (while the parent comment does not seem so confident).

But to add a personal comment or criticism: I don't like this style of writing. If you can prompt your AI to write in a better style that is easier on the eyes (and it works), then please, go ahead.


The most jarring point they mentioned, having sudden one-off boldfaced sentences in their own paragraphs, is not something I had ever seen before LLMs. It's possible that humans have picked up this habit from them and started adding it in the middle of other text that similarly evokes all of the other LLM tropes, but it doesn't seem particularly likely.

Your point about being able to prompt LLMs to sound different is valid, but I'd argue that it somewhat misses the point (although largely because the point isn't being made precisely). If an LLM-generated blog post were actually crafted with care and intent, it would certainly be possible to make it less obvious, but what people are likely actually criticizing is content produced in what I'll call the "default ChatGPT" style, which overuses the stylistic elements that get brought up. The extreme density of certain patterns is a signal that the content might have been generated and published without much attention to detail. There was already a huge amount of content out there even before generating it with LLMs became mainstream, so people necessarily use heuristics to figure out whether something is worth their time. The heuristic "heavy use of default ChatGPT style" is useful if it correlates with the more fundamental issues that the top-level comment of this thread points out, and it's clear that a sizable contingent of people have experienced that this is the case.


> although largely because the point isn't being made precisely

I agree. I wasn't really trying to make a point. But yes, what I am implying is that posts that you can immediately recognize as AI are low effort posts, which are not worth my time.


While I agree that it's not important whether someone uses AI to improve a blog post or create code examples, this blog post reads like the output of the prompt "Write an interesting blog post about a goroutine leak". I don't have the expertise to verify whether what is written is actually correct or makes sense, but based on the other comments there seems to be some confusion about whether what is written is actual content or also AI-generated output.


I do have expertise in Go. The bug was real, and the fix makes sense (though I couldn't verify it, of course).

I just hope HN gets over the "but it might be AI!!" crap sooner rather than later and focuses on the actual content because these types of posts are never going away.


Personally, I just don't like the way this is written. As I said, though, I am not an expert, so I may be outside the target group. I think the original "this is AI" comment is an automatic response that also carries the meaning "this is low-effort", and in that sense I still think it is valid criticism.


Fair enough - I appreciate your thoughts. I'll keep the "this is low-effort" == "this is AI" equivalence in mind moving forward.


I've done a similar fix, even a slightly more interesting one; however, I wouldn't consider it worth writing a blog post about, let alone submitting it to HN.


Even the part where they deploy new code to production without restarting processes?


This seems like an interesting problem and an interesting fix, but there is so much code and so little explanation that I am lost after "The Code That Looked Perfectly Fine". It also reads very much like AI. And FYI, the "output" code blocks are (at least for me on Firefox) dark gray on a darker gray background, so they're very hard to read.


You can still optimize for the expected value, which is also essentially poker strategy.


Anybody who plays poker “optimally” is bound to lose money when they come up against anyone with skill. Once you know the strategy your opponent is employing, you can play like you have anything. I believe I’ve won with 7-2 offsuit more than any other hand, because I played like I had the nuts.


This is completely wrong: the entire point of the Nash equilibrium solution (in the context of poker, at least) is that it is, at worst, EV-neutral even when your opponent has perfect knowledge of your strategy.

Your 72o comment indicates you are either playing with very weak players, or have gotten lucky, as in reasonably competitive games playing (and then full bluffing) 72o will be significantly negative EV. Try grinding that strategy at a public 10/20 table and you will be quickly butchered and sent back to the ATM.


There are numerous videos of high-level professional poker players winning large hands with incredible bluffs; this whole "Nash equilibrium solution" is nothing more than a conjecture with some symbols thrown in. I will reiterate: there is no such thing as perfect knowledge when you have imperfect information. If you play "optimally," you will get bluffed out of all your money the moment everyone else at the table figures out what you're doing.


I saw the mirrored interactive Human simulator and decided to just post this exact Show HN:

https://news.ysimulator.run/item/2297


I don't know how to use WordPress, so I accidentally published this as a standalone site. I'm killing the submitted link and the actual blog post now lives here: https://timktitarev.wordpress.com/2025/10/11/continuous-petr...


Maybe it would be better to repost that link.


The last time I tried Immich (a year or so ago), my impression was that it tries to imitate Google Photos as much as possible. This includes features such as searching by a person or by "cat", which requires some machine-learning sophistication and is done locally (you can also disable these features). This would be my guess, but I'm not entirely sure.


> vimtutor is to Babbel what this is to duolingo

It took me half a minute to realize that you probably meant "vimtutor is to VIM master what Babbel is to duolingo".


Isn't it exactly the same thing? If a:b=c:d then a:c=b:d.
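For what it's worth, the algebra behind that identity checks out (assuming the relevant terms are nonzero):

```latex
\frac{a}{b} = \frac{c}{d}
\;\Longrightarrow\; ad = bc
\;\Longrightarrow\; \frac{a}{c} = \frac{b}{d}
\qquad (b, c, d \neq 0)
```

So the two proportions are mathematically equivalent, even if one ordering reads more naturally as an analogy.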


The problem is that it's not the relationship between vimtutor and Babbel that you're comparing to the relationship between vimmaster and Duolingo.


That's exactly what I meant to say. Thanks!


Meaning the first best language depends on the job?


Of course :)

