Hacker News: gubicle's comments

Consider the possibility that the people who make these decisions aren't actually all that smart and are easily manipulated by marketing and the sycophants/impostors they surround themselves with.


You're telling me the folks who brought us the metaverse that revolutionized our lives are making dumb investments? That's a bold claim.


Who are you in this scenario though? Are you ManusAI getting bought for a giant pile of money? Are you a vendor supplying Meta's VR hardware and getting paid in money? Are you an employee at Meta getting paid in money and Meta shares to build the Metaverse? Are you a shareholder of Meta whose stock is up? Like, sure, we can sit back and laugh at no legs, but Meta spent money they had on a thing they wanted to do. Sure, it didn't pan out, like that time I tried to pick up scuba diving, but when you have that much money, you can afford to try things that don't work. What's better: to try and fail, or to never try because someone might make fun of you? If I had just sold a company for half a billion, you could call me all the names you wanted; I wouldn't be able to hear you over the engines of my private fighter jet.


I understand what they are arguing, but they are just lobbing insinuations at the crowd. I (perhaps wrongly) assumed they had specific insight into the people and relationships inside the transaction that could be shared.


There is a lot of dumb money chasing anything AI-related at this time. And there are people who know how to play the game.


> You could only save somebody time if they skipped the content and started doing comments on HN anyhow

as is customary


My calculations tell me that would be a yellow flag.


My knowledge of colors tells me red and green make brown.


#ffff00 is a pretty bright yellow color.


What does a brown flag tell us?


proceed with caution


The (relatively big and successful) tech company I work at has gradually seen ~all high-level decision-making positions filled with PMs, while senior engineers who have been at the company for years are being pushed out or leaving. Most of these PMs have very little understanding of the tech, the market, or how software engineering works, yet they now make ~all of the product decisions at the company. I haven't worked on anything remotely useful or bottom-line impactful in 2 years. I was originally very optimistic about the company and elected to get paid in as much stock (vs cash) as possible... which I now realize was a big mistake.


I thank the stars every day that my direct manager is an actual engineer.


they'd get it before it gets to China


The whole point is the prompt (plus a static set of system prompts). If your whole function as a human is clicking one of a set of buttons to trigger an AI action, then you are automatable in a few lines of code (and the AI is supposedly better than you at deciding which button to click anyway).

There are thousands of wrappers around LLMs masquerading as AI apps for specialized use cases, but the real performance of these apps is bottlenecked by the LLM's performance, and their UIs generally only get in the way of the direct LLM access/feedback loop.

To work with LLMs effectively you need to understand how to craft good prompts, and how to read/debug the responses.
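The "few lines of code" claim is easy to make concrete. A minimal sketch, assuming a hypothetical `call_llm` stand-in for whatever LLM API the wrapper uses: the entire "button UI" app collapses into a lookup table of static prompt templates.

```python
# Hypothetical sketch: a "button UI" over an LLM is just a lookup
# table of static prompt templates. call_llm() is a placeholder
# for any real LLM API call.

PROMPTS = {
    "summarize": "Summarize the following text:\n\n{text}",
    "translate": "Translate the following text to French:\n\n{text}",
    "explain": "Explain the following text to a beginner:\n\n{text}",
}

def call_llm(prompt: str) -> str:
    # Placeholder for a real API call.
    return f"[LLM response to {len(prompt)} chars of prompt]"

def handle_button(button: str, text: str) -> str:
    # The entire "app": pick the template for the clicked button,
    # fill it in, and forward the result to the model.
    template = PROMPTS[button]
    return call_llm(template.format(text=text))

print(handle_button("summarize", "A long article..."))
```

Everything such a wrapper adds over raw LLM access is the `PROMPTS` dict; the buttons just select which template gets filled in.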


I mean, if you’re building for consumers and you know what most of them are likely to prompt, you can put a UI in front of it so the experience isn’t a game of hoping the user is good at prompting; if they’re not, their experience won’t be good. You could still offer a text panel as a fallback.


What does 'interface it with the UI' mean though? How does adding buttons make it easier for the user to work with the AI? The whole point is that users can control it in the most natural and ubiquitous way possible: through natural language.

Yeah, it often makes sense to adjust the user's prompt, add system/wrapper prompts, etc. But that's not really related to UI.


A lot of people don't know how to ask for what they want or ask it in different ways. If you can normalize this, you can normalize results. When consistent results are more important, introducing guardrails via UI or a guided flow is more relevant
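One way to read "normalize this": instead of accepting free-form prompts, a guided flow collects structured fields through the UI and renders them into a single vetted prompt template, validating each field along the way. A hypothetical sketch (field names and limits are invented for illustration):

```python
# Hypothetical sketch: a guided flow gathers structured fields and
# renders them into one vetted prompt template, so results stay
# consistent regardless of how the user would have phrased it.

ALLOWED_TONES = {"formal", "casual", "neutral"}

def build_prompt(task: str, tone: str, length_words: int) -> str:
    # Guardrails: validate every field before it reaches the model.
    if tone not in ALLOWED_TONES:
        raise ValueError(f"unsupported tone: {tone!r}")
    if not 10 <= length_words <= 500:
        raise ValueError("length must be between 10 and 500 words")
    return (
        f"Write a {tone} response of about {length_words} words "
        f"for the following task:\n{task}"
    )

print(build_prompt("Announce the Q3 release", "formal", 120))
```

Two users who would have typed very different free-form prompts end up sending the model the same template, which is what makes the results comparable.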


> I still like the Canadian approach that to have a title with the word Engineer in it you have to be licensed by the engineering regulator for the province you work in.

That's just not true.

(Despite what Engineers Canada and related parasites tell you.)


Steam on (Arch) Linux works so well these days that I haven't needed Windows for gaming in a while.


same thing


How many of these 'this new LLM version is super amazing' stories are paid for?


Do you count personal stakes? Financial or reputational.


Certainly both.

Rarely can you get the recipients to admit to the latter...

> I have not accepted payments from LLM vendors, but I am frequently invited to preview new LLM products and features from organizations that include OpenAI, Anthropic, Gemini and Mistral, often under NDA or subject to an embargo. This often also includes free API credits.

... but even HN's favorite shill "discloses" the former.

> One exception: OpenAI paid me for my time when I attended a GPT-5 preview at their office which was used in a video. They did not ask for any editorial insight or control over what I wrote after that event, aside from keeping to their embargo.

https://simonwillison.net/about/#disclosures


Yes.

