
Timeout

At a glance

The community member is building a RAG system using various large language models (LLMs), some from OpenAI and some via the llama_index.llms.azure_inference module. They noticed that the default LlamaIndex LLM class does not expose a built-in timeout option like the one in OpenAI's LLMs, and they are asking whether there is an easy way to implement a timeout or whether they have missed something.

In the comments, another community member suggests that the timeout functionality may be available through the client arguments passed to the underlying client, and provides a link to the relevant code in the llama_index repository.

Hi, I am building RAG with various LLMs, some OpenAI ones and some using llama_index.llms.azure_inference, and I just realized that the default LlamaIndex LLM class doesn't come with a timeout like the one built into OpenAI's. I wonder if there is an easy way to implement the timeout, or have I missed something?
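Even when a particular LLM class does not expose a timeout parameter, you can enforce one yourself by wrapping the async completion call in `asyncio.wait_for`. The sketch below uses a stub coroutine in place of a real LlamaIndex LLM (real LLM classes expose a similar `acomplete` coroutine, but that interface is an assumption here, so the stub keeps the example self-contained):

```python
import asyncio

# Stub standing in for an LLM's async completion call.
# A real llama_index LLM would be called via something like
# `await llm.acomplete(prompt)` (assumed interface, not shown here).
async def slow_complete(prompt: str, delay: float) -> str:
    await asyncio.sleep(delay)  # simulates network/model latency
    return f"response to: {prompt}"

async def complete_with_timeout(prompt: str, delay: float, timeout: float) -> str:
    # asyncio.wait_for cancels the awaited call if it runs longer
    # than `timeout` seconds and raises asyncio.TimeoutError.
    try:
        return await asyncio.wait_for(slow_complete(prompt, delay), timeout=timeout)
    except asyncio.TimeoutError:
        return "TIMED OUT"

fast = asyncio.run(complete_with_timeout("hi", delay=0.01, timeout=1.0))
slow = asyncio.run(complete_with_timeout("hi", delay=1.0, timeout=0.05))
print(fast)  # response to: hi
print(slow)  # TIMED OUT
```

This works for any LLM class regardless of whether its underlying client supports a native timeout, though a client-level timeout (where available, e.g. via client arguments) is usually preferable because it closes the connection rather than just abandoning the task.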