Hacker News | bel423's comments

I mean maybe he can be replaced but he’s below my line for what I would consider a VC.


You still have to worry about errors. You will probably have to add an error-handler function that it can call out to. Otherwise the LLM will hallucinate a valid output regardless of the input. You want it to be able to throw an error and say it could not produce the output in the given format.


I feel like I’m taking crazy pills with the amount of people saying this is game changing.

Did they not even try asking gpt to format the output as json?


> I feel like I’m taking crazy pills....try asking gpt to format the output as json

You are taking crazy pills. Stop.

gpt-? is unreliable! That is not a bug in it, it is the nature of the beast.

It is not an expert at anything except natural language, and even then it is an idiot savant


It literally does it perfectly every time. I put together an entire system that would validate the JSON against a zod schema and use reflection to fix it, and it literally never gets triggered because GPT-3.5-turbo always does it right the first time.
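That pipeline can be sketched roughly like this (a hand-rolled synchronous stand-in: the real system used zod and async API calls, and `ask` here is just a placeholder for the chat-completion request):

```typescript
type User = { name: string; age: number };

// Stand-in for the zod schema check: throw if the raw string is not a User.
function validateUser(raw: string): User {
  const v = JSON.parse(raw);
  if (typeof v?.name !== "string" || typeof v?.age !== "number") {
    throw new Error("does not match the User schema");
  }
  return v;
}

// Validate-then-repair ("reflection") loop: on failure, feed the model its
// own output plus the validation error and ask for corrected JSON.
function getUser(ask: (prompt: string) => string): User {
  const first = ask("Return the user as JSON matching {name: string, age: number}.");
  try {
    return validateUser(first);
  } catch (e) {
    const retry = ask(`Previous output ${first} failed validation: ${(e as Error).message}. Reply with corrected JSON only.`);
    return validateUser(retry);
  }
}
```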


> It literally does it perfectly every time. I put together an entire system that would validate the JSON against a zod schema and use reflection to fix it, and it literally never gets triggered because GPT-3.5-turbo always does it right the first time.

Danger! There be assumptions!!

gpt-? is a moving target and in rapid development. What it does Tuesday, which it did not do on Monday, it may well not do on Wednesday

If there is a documented method to guarantee it, it will work that way (modulo OpenAI bugs - and now Microsoft is involved....)

What we had before, what you are talking of, was observed behaviour. An assumption that what we observed in the past will continue in the future is not something to build a business on


ChatGPT moves fast. The API version doesn’t seem to change except with the model and documented API changes.


No it doesn't lol. I've seen it just randomly not use a comma after one array element, for example.


Yep. Incorrect trailing commas ad nauseam for me.


Are you saying that it returned only JSON before? I'm with the other commenters: it was wildly variable and always at least said "Here is your response", which doesn't parse well.


If you want a parsable response, have it wrap that with ```. Include an example request/response in your history. Treat any message you can’t parse as an error message.

This works well because it has a place to put any “keep in mind” noise. You can actually include that in your example.
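A rough sketch of that convention (the regex and function names are my own, not from any library): pull the fenced block out of the reply, and treat a fence-less reply as an error message.

```typescript
// Extract the body of the first ```-fenced block, or null if there is none.
function extractFenced(reply: string): string | null {
  const m = reply.match(/```(?:json)?\s*([\s\S]*?)```/);
  return m ? m[1].trim() : null;
}

// Everything outside the fence is "keep in mind" noise we ignore; a reply
// with no fence at all is treated as an error message from the model.
function parseOrError(reply: string): unknown {
  const body = extractFenced(reply);
  if (body === null) {
    throw new Error(`model error message: ${reply}`);
  }
  return JSON.parse(body);
}
```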


Yeah no


Did people really struggle with getting JSON outputs from GPT-4? You can literally do it zero-shot by just saying "match this TypeScript type".

GPT-3.5 would output perfect JSON with a single example.

I have no idea why people are talking about this like it’s a new development.
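For what it's worth, the zero-shot trick amounts to something like this (the `User` type and the exact wording are just an illustration):

```typescript
// Paste the TypeScript type verbatim into the instruction so the model has
// an exact target shape to match.
const userType = `type User = { name: string; age: number }`;

function buildPrompt(question: string): string {
  return [
    "Answer the question as JSON matching this TypeScript type, with no other text:",
    userType,
    `Question: ${question}`,
  ].join("\n");
}
```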


Unfortunately, in practice that works only most of the time. At least in our experience (and the article says something similar), sometimes ChatGPT would return something completely different when a JSON-formatted response was expected.


I've been using the same prompts for months and have never seen this happen on 3.5-turbo let alone 4.

https://gist.github.com/BLamy/244eec016beb9ad8ed48cf61fd2054...


In my experience if you set the temperature to zero it works 99.9% of the time, and then you can just add retry logic for the remaining 0.1%
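The retry logic can be as small as this (a sketch; `call` stands in for the completion request made at temperature 0, and the names are hypothetical):

```typescript
// Retry wrapper: re-issue the call when the reply fails to parse, up to a
// fixed number of attempts, then rethrow the last parse error.
function withRetries<T>(call: () => string, parse: (s: string) => T, maxTries = 3): T {
  let lastErr: unknown;
  for (let i = 0; i < maxTries; i++) {
    try {
      return parse(call());
    } catch (e) {
      lastErr = e; // parse failure: try again (rare at temperature 0)
    }
  }
  throw lastErr;
}
```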

