Is Strict = True Default For Structured Outputs For Function Calling With Openai (Ex. FunctionCallingProgram)?

Hey all, is strict = True default for structured outputs for function calling with openai (ex. FunctionCallingProgram)?
Not at the moment, mostly because it adds latency and doesn't work with every Pydantic class.

You can set it though:

OpenAI(..., strict=True)

It will still break for some Pydantic classes, since OpenAI restricts which schemas strict mode accepts.
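To make the "OpenAI restricts it" point concrete: strict structured outputs only accept a narrow subset of JSON Schema, roughly, every property must be listed in `required` and `additionalProperties` must be `false`. The stdlib-only sketch below (the helper `to_strict_schema` is hypothetical, not a llama-index or openai function) shows why a Pydantic class with optional fields can't round-trip cleanly through strict mode:

```python
import json

def to_strict_schema(schema: dict) -> dict:
    """Hypothetical helper: approximate the shape OpenAI's strict mode
    demands -- every property required, no additional properties."""
    out = dict(schema)
    props = out.get("properties", {})
    out["required"] = list(props)          # strict mode: all fields required
    out["additionalProperties"] = False    # strict mode: closed schemas only
    return out

# A schema with an optional field ("nickname" is not required). Forcing it
# into strict form makes every field required, which is one reason some
# Pydantic classes break when strict=True.
loose = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "nickname": {"type": "string"}},
    "required": ["name"],
}
strict = to_strict_schema(loose)
print(json.dumps(strict, indent=2))
```

This is a sketch of the restriction, not the actual transform the libraries apply; the exact rules are defined by OpenAI's structured-outputs schema support.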
I noticed that for some data models and prompts, extraction via FunctionCallingProgram ran into an error (0 tool calls), but I was able to get proper output using the response format (i.e., not function calling) through OpenAI directly. I'm assuming structured output through function calling is what llamaindex supports exclusively, correct? Maybe if I set strict=True with the FunctionCallingProgram I'll get the same output as calling OpenAI directly?
Also, is there a simple way to set the system prompt for the FunctionCallingProgram? Via chat_history or something else?
Pass in a ChatMessagePromptTemplate with your chat messages to FunctionCallingProgram

program(OutputCLS, prompt, arg1="val1", ...)
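Whatever template class you use, the system prompt ultimately just becomes the first message in the chat payload. A stdlib sketch of that shape (the `str.format` substitution here stands in for the template's variable filling; this is illustrative, not the llama-index API):

```python
# Illustrative: the role/content message list a chat prompt template with a
# system message ultimately produces for the OpenAI chat API. Template
# variables are filled with plain str.format here as a stand-in.
system_prompt = "You are an extraction assistant. Respond only via the tool call."
user_template = "Extract the fields from: {text}"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_template.format(text="Jane, 34, Oslo")},
]
for m in messages:
    print(m["role"], "->", m["content"])
```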

You can also set tool choice to force the function call

program(...., llm_kwargs={"tool_choice": "<class name>"})
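For context on what that `llm_kwargs` does: in the raw OpenAI chat-completions API, forcing a specific function call is expressed as a `tool_choice` object naming the tool. A stdlib sketch of that payload fragment (my assumption is that the string you pass in `llm_kwargs` gets resolved into something like this; `force_tool` and `MyOutputClass` are placeholders, not real APIs):

```python
# Illustrative: the tool_choice shape the OpenAI chat API accepts to require
# that one specific tool be called.
def force_tool(name: str) -> dict:
    # Hypothetical helper; "name" would be your output class / tool name.
    return {"type": "function", "function": {"name": name}}

payload_fragment = {"tool_choice": force_tool("MyOutputClass")}
print(payload_fragment["tool_choice"]["function"]["name"])
```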