Agent

Hi guys, https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/ based on this doc, the multi-agent handoff is more explicit and sequential. My question is: does LlamaIndex have a more implicit, parallelized approach for implementing these agents, or is there a design restriction? Any suggestions?
15 comments
The agent workflow itself is inherently sequential -- there is only one "active agent" at a given time controlling the system

If you want parallelization, just put agents as tools to other agents imo
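A minimal sketch of the "agents as tools" pattern described above, using plain-Python stand-ins (the `EchoAgent` class and its `run` method here are hypothetical placeholders, not LlamaIndex APIs): a parent agent exposes sub-agents as callables and fans them out concurrently with `asyncio.gather`.

```python
import asyncio

# Hypothetical stand-in for an agent; in practice this would wrap
# an LLM-backed agent's async run() call.
class EchoAgent:
    def __init__(self, name: str):
        self.name = name

    async def run(self, task: str) -> str:
        await asyncio.sleep(0.01)  # simulate model latency
        return f"{self.name} handled: {task}"

async def fan_out(agents, task: str):
    # Each sub-agent acts as a "tool" the parent invokes; gather()
    # runs them concurrently and preserves input order in the results.
    return await asyncio.gather(*(a.run(task) for a in agents))

results = asyncio.run(
    fan_out([EchoAgent("research"), EchoAgent("summarize")], "topic X")
)
print(results)
```

The same shape applies when the sub-agents are real LLM agents: wrap each one's entry point as an async tool and let the parent dispatch to several at once.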
@Logan M thank you for your response. Based on the docs, if I want to implement parallelization, do you have example code I can follow to understand how it works?

Also, currently my agent can't react and execute tasks; it just works as a simple chat Q&A. If I want to do some automation with agents, like having it create calendar events for me and write emails automatically, how can I do that? Any suggestions on these two things?
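The automation asked about here is the standard tool-calling pattern: give the agent callable tools and let it choose one per request. A minimal sketch, where the tools are stubs and `pick_tool` is a hypothetical router standing in for the LLM's tool selection (no real calendar or email API is called):

```python
# Stub tools -- a real deployment would call a calendar/email API here.
def create_calendar_event(title: str, when: str) -> str:
    return f"event '{title}' scheduled for {when}"

def draft_email(to: str, subject: str) -> str:
    return f"draft to {to}: '{subject}'"

TOOLS = {
    "create_calendar_event": create_calendar_event,
    "draft_email": draft_email,
}

def pick_tool(request: str):
    # Hypothetical router: a real agent lets the LLM pick the tool
    # and its arguments; here we key off the request text.
    if "calendar" in request:
        return "create_calendar_event", {"title": "standup", "when": "Mon 9am"}
    return "draft_email", {"to": "team@example.com", "subject": "weekly update"}

def run_agent(request: str) -> str:
    name, kwargs = pick_tool(request)
    return TOOLS[name](**kwargs)

print(run_agent("add a calendar entry"))
```

Swapping the stub router for an LLM with function calling turns this into the chat-plus-actions behavior the question describes.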
@kapa.ai Why is the OpenAI function calling API not working? It says a symbols error even though I use LlamaIndex.
Let me check the knowledge base for you again. One moment! :books:
----------------------
I apologize, but I don't have enough specific information in the provided knowledge sources to directly answer why you're experiencing a "symbols etc error" with the OpenAI function calling API when using LlamaIndex. However, I can provide some general information about using OpenAI function calling with LlamaIndex that might be helpful:

  1. Model Support: Make sure you're using an OpenAI model that supports the function calling API. As mentioned in the LlamaIndex documentation, you need to use a model like "gpt-3.5-turbo-0613" or newer that supports this feature.
  2. LLM Configuration: When setting up your LLM, ensure you're using the OpenAI class from LlamaIndex correctly. For example:

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-0613")
```

  3. Function Definition: Make sure your functions are properly defined using the correct format. LlamaIndex provides utilities to help with this, such as to_openai_function.
  4. Error Checking: The LlamaIndex code includes checks to ensure you're using a compatible model. For instance:

```python
if not llm.metadata.is_function_calling_model:
    raise ValueError(
        f"Model name {llm.model} does not support function calling API."
    )
```

If you're still encountering errors, it would be helpful to see the exact error message and the code you're using. The error might be related to how you're defining or calling the functions, or there could be an issue with your OpenAI API key or permissions.
For more detailed troubleshooting, you might want to check the OpenAI documentation or seek help in the LlamaIndex community forums with your specific error message and code snippet.
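To make the `to_openai_function`-style conversion mentioned above concrete, here is an illustrative plain-Python sketch of the idea (the `to_function_schema` helper and its minimal type mapping are hypothetical, not the actual LlamaIndex implementation): it derives an OpenAI function-calling schema from a Python function's signature and docstring.

```python
import inspect

def to_function_schema(fn):
    # Sketch of what a to_openai_function-style utility does: map a
    # Python signature to an OpenAI function-calling JSON schema.
    # The annotation-to-type mapping here is deliberately minimal.
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": type_map.get(param.annotation, "string")}
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

schema = to_function_schema(multiply)
print(schema["name"], sorted(schema["parameters"]["properties"]))
```

If a model rejects your function definitions, comparing them against a schema shaped like this is a quick way to spot a malformed field.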
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai Which GPT models does LlamaIndex support and accept for function calling tools?
Let me check the knowledge base for you again. One moment! :books:
----------------------
Based on the knowledge sources provided, I can give you information about the GPT models that LlamaIndex supports for function calling tools. Here's what I found:

  1. OpenAI Models:
    • GPT-3.5-turbo: The documentation mentions "gpt-3.5-turbo-0613" as a model that supports function calling (LlamaIndex documentation).
    • GPT-4: While not explicitly mentioned for function calling, GPT-4 models generally support advanced features.
    • A recent update added support for "gpt-3.5-turbo-0125" (LlamaIndex Changelog).
  2. Anthropic Models: The documentation mentions that Claude models support function calling.
  3. Mistral AI:
  4. NVIDIA: NVIDIA supports tool calling with certain models, but the specific model names are not provided in the given sources. The documentation mentions that you can get a list of models supporting tool calling using:

```python
tool_models = [model for model in NVIDIA().available_models if model.is_function_calling_model]
```

(LlamaIndex documentation)
It's important to note that the availability of function calling or tool support can change with model updates. Always refer to the most recent documentation or check with the is_function_calling_model attribute when using LlamaIndex to ensure you're using a compatible model for your specific use case.
----------------------
----------------------
@Logan M is this the bot connected to Runllm? Tyia
nah this is kapa. We only use runllm in our docs site
https://docs.llamaindex.ai/en/stable/

See the icon on the bottom right
#ask-runllm -- oh we do have it on discord
I have it muted lol forgot it was there