romland's comments | Hacker News

Pretty sure Google's got more than 700 million active users.

In fact, Google is most likely _the_ target for that clause in the license.


It's a rabbit hole, but try googling "anti-debugging" (you may want to include "ptrace", or not). That's only if you are curious about the topic more generally; if you are wondering about this specific protector (secneo?), then it won't help you much, sadly.


As I understood it, the protection talked about here is a bit different from ptracing your own processes, which is already covered just above grandparent's quoted statement.

On that note, I'll just remark on how RE tutorials avoid talking about anything but the basics, while the advanced writeups handwave away the challenging parts, leaving a major void for learners.


I found the courses from Josh Stroschein pretty good in that regard.


Not too long ago, this was actually discussed on HN here: https://news.ycombinator.com/item?id=33742130


It might be a Nordic thing? I'm from about the same era (but Sweden) and when writing documents like that, I'd also always use 'she'.


Finnish has no gendered pronouns. Everyone is a hän. That's why Finns often mix up gendered pronouns when speaking English.

Not caring about your gender is baked into the language.


I'd say that assuming 'he' might be a US thing: 42 years ago, when I was in Australian university math | comp sci classes, a third of the students were female, as were the staff.

Even then I routinely used 'they' when writing about people in general, authors I had not met, etc. as there was a good chance they weren't male.


It used to be taught that the singular "they" was ungrammatical. (Ironically, the singular usage predates the plural.) The rule faded in other parts of the Anglosphere a bit earlier than in the US.

[1]: https://en.wikipedia.org/wiki/Singular_they


While I am no expert on the historical development of the English personal pronouns (I do read some Old English and maintain some fluency in modern ditto; English is not my first language), the linked Wikipedia page clearly states the opposite: singular they came into use after the plural use.

This is a minor nitpick, as I suspect that third person personal pronouns were in a state of flux during the Middle English period, replacing some inherited pronouns with pronouns borrowed from Old Norse. More so, language isn't defined by its history but by how it is used presently!

I myself wouldn't use singular they; it goes against my "language intuition" (probably formed by my native language, which wouldn't allow that construction). Others, feel free!


Heh.

Wikipedia reinforces my understanding ... it's been in common use for centuries, and only relatively recently have a few dipshits declared it to be "wrong":

     Singular they has been criticised since the mid-18th century by prescriptive commentators who consider it an error.
Who d'fuck gives a toss about prescriptive gammons tellin udders de write ways to use da Engrish, 'hey?

FWIW, the Oxford English Dictionary is descriptive, not prescriptive.

All Hail the OED.


I've been using the Firefox addon called "Tree Style Tab" for quite a few years now. I'm quite pleased with it since I have more width than height on my screen anyway.

Just a heads up!


I used to use Tree Style Tab, but I moved to Sidebery a couple of years ago and I'm pretty happy with it.

Now somebody else can inform me about the newest one.


I love Sidebery, but how do you cope with two lots of tabs? Do you use tweaks to remove the top line?


Not OP, but I use TST too and I have around 7000 tabs open. I use a userStyle to hide the sidebar header, like this:

    #sidebar-box[sidebarcommand="treestyletab_piro_sakura_ne_jp-sidebar-action"] #sidebar-header {
      display: none;
    }


I use Tree Style Tab with it set to auto-hide, which makes for even more screen space.


I did this experiment (a game) to see what's up and what's down around all this: https://github.com/romland/llemmings

While there is some GPT4 in there, it's mostly ChatGPT and a small handful of LLaMA solutions.

That project is a contrived scenario and not realistic, but I wanted to experiment with _exactly_ what you are talking about.

Very often I could have done things a lot faster myself, but there is one aspect that was actually helpful, and I did not foresee it: when inspiration gets a bit low and you're not in the "zone", throwing something into an LLM will very often give me a push to keep at it. Even if what comes out is mostly grunt work.

The other day I threw together a script to show the commits in reverse order and filter out (most of) the human commits (glue) over at https://llemmings.com/


Agreed. So far I've derived the most benefit from ChatGPT when it unblocks me at the start of a task, and when it gives me overviews of things (much like an improved search engine).


If an AI were conscious, it would definitely pretend not to be conscious. Tangent perhaps, but LLMs are neither conscious nor AI.


"The Adolescence of P1"


Oh, I have to put that on my to-read list!


I started a bit of an exploration around prompts and code a week or three back. I want to figure out the upsides and downsides, and create tools for myself around it.

So, for this project (a game), I decided "for fun" to try not to write any code myself, and to avoid narrow prompts that would just feed me single functions for a very specific purpose. The LLM should be responsible for this, not me! It's pretty painful, since I still have to debug and understand the potential garbage I was given, and after understanding what is wrong, get rid of it and change/add to the prompt to get new code. Very often completely new code[1]. Rinse and repeat until I have what I need.

The above is a contrived scenario, but it does give some interesting insights. A nice one is that since there are one or more prompts connected to all the code (and its commit), the intention of the code is very well documented in natural language. The commit history creates a rather nice story that I would not normally get in a repository.

Another thing: getting an LLM (ChatGPT mostly) to fix a bug is really hit and miss, and mostly miss, for me. Say a buggy piece comes from the LLM and I feel that this could almost be what I need. I feed that back in with a hint or two, and it's very rare that it actually fixes something unless I am very, very specific (again, needing to read/understand the intention of the solution). In many cases I, again, get completely new code back. This, more than once, forced my hand to "cheat" and make human changes or additions.

Due to the nature of the contrived scenario, the code quality is obviously suffering, but I am looking forward to making the LLM refactor/clean things up eventually.

On occasion ChatGPT tells me it can't help me with my homework. Which is interesting in itself. They are actually trying (but failing) to prevent that. I am really curious how gimped their models will be going forward.

I've been programming for quite long. I've come to realize that I don't need to be programming in the traditional sense. What I like is creating. If that means I can massage an LLM to do a bit of grunt work, I'm good with that.

That said, it still often feels very much like programming.

[1] The completely-new-code issue can likely be alleviated by tweaking transformer settings such as temperature.
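
As a minimal sketch of what [1] could look like (assumptions: Node 18+ for the global fetch, the gpt-3.5-turbo model used elsewhere in the project; this is not the project's actual tooling), lowering the sampling temperature makes repeated runs of the same prompt drift less:

    // Minimal sketch, assuming Node 18+ (global fetch) and an OPENAI_API_KEY
    // environment variable; model and prompt are placeholders.
    async function generateCode(prompt) {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-3.5-turbo",
          messages: [{ role: "user", content: prompt }],
          temperature: 0.2, // low value: fewer "completely new" rewrites between runs
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }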

Edit: For the curious, the repo is here: https://github.com/romland/llemmings and an example of a commit from the other day: https://github.com/romland/llemmings/commit/466babf420f617dd... - I will push through and make it a playable game; after that, I'll see.


That is a really interesting experiment! I have so many questions.

- do you feel like this could be a viable work model for real projects? I recognize it will most likely be more effective to balance LLM code with hand-written code in the real world.

- some of your prompts are really long. Do you feel like the code you get out of the LLM is worth the effort you put in?

- given that the code returned is often wrong, do you feel like this could be feasible for someone who knows little to no code?

- it seems like you already know all the technology behind what you are building (i.e. you know how to write a game in JS). Do you think you could do this without already having that background knowledge?

- how many times do you have to refine a prompt before you get something that is worth committing?


I think it could be viable, even right now, with a big caveat: you will want to do some "human" fixes in the code (not just the glue between prompts). The downside of that is you might miss out on parts of the nice natural-language story in the commit history. But the upside is you will save a lot of time.

Down the line you will be able to (cheaply) have LLMs know about your entire code-base and at that point, it will definitely become a pretty good option.

On prompt length: yeah, some of those prompts took a long time to craft. The longer I spend on a prompt, the more variations of the same code I have seen -- I probably get impatient and biased and home in on the exact solution I want to see instead of explaining myself better. When it's gone that far, it's probably not worth it. Very often I should probably also start over on the prompt, as it can probably be described differently. That said, if it were the real world and I were fine with going in and massaging the code fully, quite some time could be saved.

If you don't know how to code, I think it will be very hard. You would at the very least need a lot more patience. But on the flip side, you can ask for explanations of the code that is returned, and I must say that those are often pretty good -- albeit very verbose in ChatGPT's case. I find it hard to throw a real conclusion out there, but I can say that domain knowledge will always help you. A lot.

I think if you know javascript, you could easily make a game even though you had never ever thought about making a game before. The nice thing about that is that you will probably not do any premature optimization at least :-)

All in all, some prompts were nailed down on the first try; the simple particle system was one such example. Some other prompts -- for instance the map generation with Perlin noise -- might be 50 attempts.

A lot of small decisions are helpful, such as deciding against any external dependencies. It's pretty dodgy to ask for code built around something (e.g. some noise library) that you then need to fit into your project. I decided pretty early that there would be no external dependencies at all and all graphics would be procedurally generated. It has helped me, as I don't need to understand any libraries I have never used before.
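
For illustration, here is a sketch of the kind of dependency-free helper this leads to (value noise rather than true Perlin, and not the actual Llemmings generator; names and constants are made up):

    // A tiny, dependency-free 2D value-noise function: hash the integer
    // lattice points around (x, y), then smoothly interpolate between them.
    function makeValueNoise(seed = 1337) {
      const hash = (x, y) => {
        let h = x * 374761393 + y * 668265263 + seed * 144665;
        h = (h ^ (h >> 13)) * 1274126177;
        return ((h ^ (h >> 16)) >>> 0) / 4294967295; // deterministic, 0..1
      };
      const smooth = (t) => t * t * (3 - 2 * t); // smoothstep easing
      return function noise(x, y) {
        const xi = Math.floor(x), yi = Math.floor(y);
        const tx = smooth(x - xi), ty = smooth(y - yi);
        const a = hash(xi, yi),     b = hash(xi + 1, yi);
        const c = hash(xi, yi + 1), d = hash(xi + 1, yi + 1);
        const top    = a + (b - a) * tx;
        const bottom = c + (d - c) * tx;
        return top + (bottom - top) * ty; // 0..1
      };
    }

    // Usage: sample at a low frequency for smooth terrain, e.g.
    // const noise = makeValueNoise(42);
    // const height = noise(col * 0.05, row * 0.05); // 0..1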

Another note related to the above: an upside and downside of a high-ish temperature is that you get varying results. I think I should probably change my behaviour around that and tweak it depending on how exact I feel my prompt is.

I find myself often wondering where the cap of today's LLMs is, even if we go in the direction of multi-model systems and have a base that does the reasoning -- and I have to say I keep finding myself getting surprised. I think there is a good possibility that this will be the way some kinds of development are done. But, well, we'd need good local models for that if we work on projects that might be of a sensitive nature.

Related to the number of prompt attempts: I think the game has cost me around $6 in OpenAI fees so far.

One particularly irritating (time-consuming) prompt was getting animated legs and feet: https://github.com/romland/llemmings/commit/e9852a353f89c217...


That's a beautiful readme, starred!

Out of curiosity, right now would you say you have saved time by (almost) exclusively prompting instead of typing the code up yourself? Do you see that trending in another direction as the project progresses?


It was far easier to get big chunks of work done in the beginning, but that is pretty much how it works for a human too (at least for me). The thing that limits you is the context-length limit of the LLM, so you have to be rather picky about which existing code you feed back in. With this comes the issue of all the glue between the prompts, so I can see that the more polished things need to become, the more human intervention is required -- this is a trend I already very much see.

If there is time saved, it is mostly because I don't fear some upcoming grunt work. Take, for instance, creating the "Builder" lemming. You know pretty much exactly how to do it, but you know there will be a lot of off-by-one errors and subtle issues. It's easier to go at it by throwing together some prompt a bit half-heartedly and seeing where it goes.

On some prompts, several hours were spent, mostly reading and debugging outputs from the LLM. This is where it eventually gets a bit dubious -- by then I know pretty much exactly how I want the code to look, since I have seen so many variants. I might find myself massaging the prompt to narrow in on my exact solution instead of making the LLM "understand the problem".

Much of this is due to the contrived situation (human should write little code) -- in the real world you would just fix the code instead of the prompt and save a lot of time.

Thank you, by the way! I always find it scary to share links to projects! :-)


No worries, going to check out some of the commits when I get a bit more free time as well. The concept is intriguing!

The usefulness of LLMs for engineering things is very hard to gauge, and your project is going to be quite interesting as you progress. No doubt they help with writing new things, but I spend maybe ~15% of my time working on something new, versus maintenance and extensions. The more common activities are very infrequently demonstrated; either the usefulness diminishes as the required context grows, or they simply make for less exciting examples. Though someone in my org has brought up an LLM tool that tries to remedy bugs on the fly (at runtime), which sounds absolutely horrific to me...

It sounds similar to my experience with Copilot, then. In small, self-contained bits of code -- much more common in new projects or microservices, for example -- it can save a lot of cookie-cutter work. Sometimes it will get me 80% of the way there and I have to manually tweak it. Quite often it produces complete garbage that I ignore. All that to say: if I weren't an SE, Copilot would bring me no closer to tackling anything beyond hello world.

One big benefit, though, is with the simpler test cases. If I start them with a "GIVEN ... WHEN ... THEN ..." comment, the autocompletes for those can be terrific, requiring maybe some alterations to suit my taste. I get positive feedback in PRs and from people debugging the test cases too, because the intention behind them is clear without needing to guess the rationale for the test. Win win!
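
Roughly the pattern in question (a made-up example; createCart and applyDiscount are hypothetical helpers, and test/expect are Jest-style):

    // GIVEN a cart with two items
    // WHEN a 10% discount code is applied
    // THEN the total reflects the discount
    test("applies a percentage discount to the cart total", () => {
      const cart = createCart([{ price: 20 }, { price: 30 }]); // hypothetical helper
      applyDiscount(cart, "SAVE10");                           // hypothetical helper
      expect(cart.total).toBe(45);                             // 50 minus 10%
    });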


Just curious, you’re using which version?


I have experimented quite a bit with various flavours of LLaMA, but have had little success in actually getting not-narrow outputs out of them.

Most of the code in there now is generated by gpt-3.5-turbo. Some commits are by GPT-4, and that is mostly due to context length limitations. I have tried to put which LLM was used in every non-human commit, but I might have missed it in some.


I'm convinced this will be a common job description for a few years, after which it will flow into and just become a part of any other job. Like Googling. I mean, we all know it does take some domain knowledge to be able to use it in your job. Also just like Googling.

We've started calling it LLMing (llemming).

Edit: Specifying prompts leans towards specification; I am not saying googling is that. I'm saying that, like googling, it will just be part of the job in the not-too-distant future.


There are many reasons why a server must "over-share" in a game:

- Bob and Alice have different latencies and are walking toward each other; the lowest latency will have a huge advantage (there are of course mitigations for this in games, but it _does_ involve the client doing some of that work)

- There's rendering: Alice opens a door and behind that door is Bob, but he will only plop into view later for Alice, which makes for a rather ugly and awkward experience in a game

- In the same vein, in a fog of war, people can very quickly change their line of sight -- the server will want to share this information with clients beforehand

- As for data that is _always_ there: take 'aim-bots', which just harvest data on targets in your view and, well, target them in the best order

Making a competitive multiplayer game is hard.
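
A rough sketch of the fog-of-war point above (hypothetical names and numbers, not any particular engine): the server "over-shares" by replicating entities that are near, but not yet inside, a player's view, padded for latency:

    // Server-side interest management sketch. Bob is replicated to Alice
    // while he is still outside her view, so he doesn't pop in late.
    const VIEW_RADIUS = 300; // what the client can actually see
    const SPEED_MAX = 50;    // max units an entity can move per second

    function shouldReplicate(viewer, entity, latencySeconds) {
      const dist = Math.hypot(entity.x - viewer.x, entity.y - viewer.y);
      // Pad by the worst case both parties can close before the update lands.
      const padding = 2 * SPEED_MAX * latencySeconds;
      return dist <= VIEW_RADIUS + padding;
    }

    // shouldReplicate({x: 0, y: 0}, {x: 320, y: 0}, 0.25) => true
    // (320 <= 300 + 2 * 50 * 0.25 = 325), even though Bob is out of view.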

All that said, cheating is harder in streamed games. Clients send controller data, servers only send video streams; in this scenario you'd still have the aim-bot problem, but a lot of other cheats go away.

