Hacker News | leopoldhaller's comments

> "You see, when you make something, you put it together, you arrange parts, or you work from the outside in, as a sculpture works on stone, or as a potter works on clay. But when you watch something growing, it works in exactly the opposite direction. It works from the inside to the outside. It expands. It burgeons. It blossoms."

Reminds me of this great quote by Terrence Deacon on a podcast (I believe it was one of his Mind & Matter episodes). From memory:

> Engineering is in some sense the opposite of life: Engineering involves assembling components. Life involves differentiating wholes.

I highly recommend Deacon's 'Incomplete Nature', and I'm very psyched about his upcoming book, 'Falling Up: Inverse Darwinism and Life's Complexity Ratchet'.


You may be interested in the open source framework we're developing at https://github.com/agentic-ai/enact

It's still early, but the core insight is that a lot of these generative AI flows (whether text, image, single models, model chains, etc.) will need to be fit using some form of feedback signal, so it makes sense to build fundamental infrastructure to support that. One of the early demos (not currently live, but I plan on bringing it back soon) was precisely the type of flow you're talking about, although we used 'prompt refinement' as a cheap proxy for tuning the actual model weights.
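To make the 'prompt refinement as a proxy for weight tuning' idea concrete, here's a from-scratch sketch of the loop. None of this is enact's actual API; the generate/rate/refine functions are stand-ins for a model, a (possibly human) rater, and a refinement step:

```python
from typing import Callable

def refine_prompt(
    generate: Callable[[str], str],
    rate: Callable[[str], float],
    refine: Callable[[str, str, float], str],
    prompt: str,
    steps: int = 3,
) -> str:
    """Greedy refinement loop: keep a refined prompt only if it scores better."""
    best_prompt, best_score = prompt, rate(generate(prompt))
    for _ in range(steps):
        candidate = refine(best_prompt, generate(best_prompt), best_score)
        score = rate(generate(candidate))
        if score > best_score:
            best_prompt, best_score = candidate, score
    return best_prompt

# Toy stubs (hypothetical) standing in for the real components:
generate = lambda p: p.upper()               # 'model': shouts the prompt back
rate = lambda out: out.count("PLEASE") / 5   # 'rater': rewards politeness
refine = lambda p, out, s: p + " please"     # 'refiner': appends a politeness marker

print(refine_prompt(generate, rate, refine, "summarize this", steps=5))
```

The point is only that the feedback signal (rate) and the thing being fit (the prompt) are pluggable; the same loop shape applies whether you're tuning prompts, weights, or subcomponent choices.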

Roughly, we aim to build out core python-level infra that makes it easy to write flows in mostly native python and then allows you to track executions of your generative flows, including executions of 'human components' such as raters. We also support time travel / rewind / replay, automatic gradio UIs, and FastAPI deployment (the latter two very experimental atm).
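For intuition about the execution-tracking and replay part, here's a toy version in plain Python with no relation to enact's real internals: record each call's inputs and outputs into a journal, then replay results without re-invoking the (possibly expensive, or human) component:

```python
import functools

def journaled(journal: list):
    """Decorator: record (fn_name, args, result) so runs can be replayed later."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args):
            result = fn(*args)
            journal.append((fn.__name__, args, result))
            return result
        return inner
    return wrap

def replay(journal: list) -> list:
    """Re-emit recorded results in order, without calling the originals."""
    return [result for _, _, result in journal]

journal = []

@journaled(journal)
def generate(prompt: str) -> str:
    return f"draft for: {prompt}"

@journaled(journal)
def human_rate(text: str) -> int:  # stands in for a human-in-the-loop rater
    return len(text) % 5

generate("haiku about rust")
human_rate("draft for: haiku about rust")
print(replay(journal))
```

Once human raters are just journaled components like any other, rewind/replay and offline fitting against their recorded judgments fall out naturally.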

Medium term, we want to make it easy to take any generative flow, wrap it in a 'human rating' flow, auto-deploy it as an API or gradio UI, and then fit it using techniques like RLHF, finetuning, or A/B testing of generative subcomponents. So stay tuned.

At the moment, we're focused on getting the 'bones' right, but between the quickstart (https://github.com/agentic-ai/enact/blob/main/examples/quick...) and our readme (https://github.com/agentic-ai/enact/tree/main#why-enact) you can get a decent idea of where we're headed.

We're looking for people to kick the tires / contribute, so if this sounds interesting, please check it out.

