Optimizing JSON mode for Pydantic models with OpenAI

Hey guys, is there a way to force JSON mode instead of function calling when extracting JSON with Pydantic models (OpenAI)? We're noticing 4o-mini is way worse at function calling than 3.5-turbo (which we want to deprecate).

Also, sources talking about this: https://news.ycombinator.com/item?id=41173223
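
For context, a minimal sketch of what forcing JSON mode (rather than function calling) can look like with the plain OpenAI Python SDK (v1+) and Pydantic v2; the Invoice model, the prompt, and the gpt-4o-mini model name are illustrative assumptions, not from the thread:

```python
# Minimal sketch: JSON mode + Pydantic validation, no function calling.
# Assumes the openai v1+ SDK and Pydantic v2; Invoice is a placeholder schema.
import json

from openai import OpenAI
from pydantic import BaseModel


class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str


client = OpenAI()

# JSON mode guarantees syntactically valid JSON but knows nothing about the
# Pydantic schema, so the schema is pasted into the prompt and validated after.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # JSON mode instead of function calling
    messages=[
        {
            "role": "system",
            "content": (
                "Extract the invoice as JSON matching this JSON schema:\n"
                f"{json.dumps(Invoice.model_json_schema())}"
            ),
        },
        {"role": "user", "content": "ACME Corp billed us 1,200.50 EUR."},
    ],
)

invoice = Invoice.model_validate_json(resp.choices[0].message.content)
print(invoice)
```

Since JSON mode only guarantees valid JSON, not schema conformance, a retry on Pydantic's ValidationError is a common addition to this pattern.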
9 comments
Aaahh didn't know it was that easy
I've also noticed 4o-mini sucks at function calling compared to 3.5. A shame that was a step back 😦
Man, let's hope they release an improved model soon
I've started using Anthropic a lot more. It requires completely different prompting (you need to include XML in the prompt and ask for XML output), but it works very well once you adjust how you prompt
Would be a game changer if LlamaIndex could map Pydantic models to the XML needed for Anthropic.
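
A hedged sketch of what that Pydantic-to-XML mapping could look like, assuming the anthropic Python SDK and Pydantic v2; the Report model, the XML layout, and both helper functions are made up for illustration:

```python
# Sketch: render a Pydantic model as an XML skeleton for the prompt,
# then parse the model's XML answer back into the Pydantic model.
import xml.etree.ElementTree as ET

from anthropic import Anthropic
from pydantic import BaseModel


class Report(BaseModel):
    title: str
    summary: str


def xml_template(model_cls: type[BaseModel]) -> str:
    """Render the model's fields as an empty XML skeleton to show the model."""
    tag = model_cls.__name__.lower()
    fields = "\n".join(f"  <{name}>...</{name}>" for name in model_cls.model_fields)
    return f"<{tag}>\n{fields}\n</{tag}>"


def parse_xml(model_cls: type[BaseModel], text: str) -> BaseModel:
    """Pull the XML block out of the response and load it into the model."""
    tag = model_cls.__name__.lower()
    start = text.index(f"<{tag}>")
    end = text.index(f"</{tag}>") + len(f"</{tag}>")
    root = ET.fromstring(text[start:end])
    return model_cls(**{child.tag: (child.text or "").strip() for child in root})


client = Anthropic()
resp = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize the notes. Respond only with XML in exactly this format:\n"
                f"{xml_template(Report)}\n\nNotes: Q3 revenue grew 12%."
            ),
        }
    ],
)
report = parse_xml(Report, resp.content[0].text)
print(report)
```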
Sounds interesting, we'll definitely consider it if the price per token is comparable to OpenAI's
Not a bad idea, but since Anthropic has a native tools/functions API, you could just use that (rough sketch after this comment)

So far though I've used a mix of those two approaches (raw prompting and tool calling)
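
A rough sketch of that native-tools route, again assuming the anthropic Python SDK and Pydantic v2; the Invoice model and the record_invoice tool name are hypothetical:

```python
# Sketch: structured extraction via Anthropic tool use, with the tool's
# input_schema generated directly from a Pydantic model.
from anthropic import Anthropic
from pydantic import BaseModel


class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str


client = Anthropic()
resp = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    tools=[
        {
            "name": "record_invoice",
            "description": "Record the extracted invoice fields.",
            "input_schema": Invoice.model_json_schema(),  # Pydantic -> JSON Schema
        }
    ],
    tool_choice={"type": "tool", "name": "record_invoice"},  # force this tool
    messages=[{"role": "user", "content": "ACME Corp billed us 1,200.50 EUR."}],
)

# The forced tool call comes back as a tool_use content block holding the arguments.
tool_use = next(block for block in resp.content if block.type == "tool_use")
invoice = Invoice.model_validate(tool_use.input)
print(invoice)
```

Forcing tool_choice onto the single tool makes the call behave like structured extraction rather than open-ended tool selection.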
Anthropic's prompt optimizer is super helpful