I noticed that for some data models and prompts, extraction via `FunctionCallingProgram` fails with an error (0 tool calls), but I can get correct output from OpenAI directly using `response_format` (i.e., structured outputs rather than function calling). Am I right that LlamaIndex supports structured output exclusively through function calling? And if I set `strict=True` with the `FunctionCallingProgram`, would I get the same output as calling OpenAI directly?
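
For reference, here's a minimal sketch of the kind of setup that fails for me. The model name, the prompt, and the placement of `strict=True` (on the `OpenAI` LLM object, per my reading of the docs) are my guesses, not confirmed:

```python
from pydantic import BaseModel


# Example data model -- the real one that fails for me is more nested.
class Album(BaseModel):
    title: str
    artist: str
    songs: list[str]


def extract_album(topic: str) -> Album:
    # llama-index imports kept local so the model definition above
    # stays importable without the extra dependencies installed.
    from llama_index.core.program import FunctionCallingProgram
    from llama_index.llms.openai import OpenAI

    # strict=True here is the thing I'm asking about -- does this make
    # the tool-call schema behave like OpenAI's response_format path?
    llm = OpenAI(model="gpt-4o-mini", strict=True)

    program = FunctionCallingProgram.from_defaults(
        output_cls=Album,
        prompt_template_str="Generate an example album about {topic}.",
        llm=llm,
    )
    # This is where I get the "0 tool calls" error for some schemas.
    return program(topic=topic)
```

Calling the same schema through the raw OpenAI client with `response_format` set to the Pydantic model works fine, which is what makes me suspect the function-calling path specifically.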