Honestly yeah, the Google Doc has all of the relevant info in it and is about 1/4 the length.
The LLM doesn’t know anything you didn’t tell it about this scenario, so all it does is add more words to say the same thing, while losing your authorial voice in the process.
I guess to put it a bit too bluntly: if you can’t be bothered writing it, what makes you think people should bother reading it?
I'm actually not so sure that LLMs are good at knowledge regurgitation. They're good at generating text that semi-plausibly looks like knowledge regurgitation, which may well be incomplete or outright wrong.
See the recent Google AI Summary mishaps for some good examples of this.
I’m thinking of knowledge regurgitation in the context of a very structured environment: a company knowledge base and internal policies, as opposed to the entire internet.
A better way to convey this might be: LLMs are good at being conversational, and given the appropriate context and guardrails, they can regurgitate knowledge from that context with reasonable accuracy.
Google’s mishaps (eating rocks, etc.) demonstrate there’s still quite a bit of work to do for this to work at scale, but the tech is still pretty good.
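The grounding-with-guardrails idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of the prompt-assembly step only (the `build_grounded_prompt` function and the sample snippets are made up for the example; the actual LLM call is out of scope):

```python
def build_grounded_prompt(question: str, kb_snippets: list[str]) -> str:
    """Assemble a prompt that restricts the model to the supplied
    knowledge-base context, with an explicit guardrail instruction."""
    context = "\n".join(f"- {snippet}" for snippet in kb_snippets)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical internal-policy snippets retrieved from a company KB.
prompt = build_grounded_prompt(
    "How many PTO days do new hires get?",
    [
        "New hires accrue 15 PTO days per year.",
        "PTO requests require manager approval.",
    ],
)
print(prompt)
```

The guardrail line ("say you don't know") is what distinguishes this from open-ended generation: the model is told to refuse rather than improvise when the context doesn't cover the question.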
Yes, role-based access is something I want to work on as soon as possible; there are just a few things I need to figure out implementation-wise. I've also started looking at the idea of forms and polls.