In `llama-index-core`, there was a fix implemented for "agentworkflow handoffs for non-openai llms" (issue #17631). This suggests that there was a known issue with agent handoffs in the multi-agent workflow, particularly when using LLMs other than OpenAI's, so make sure you are running an up-to-date version of `llama-index-core`.

Check the `can_handoff_to` parameter for each agent. For example:

```python
research_agent = FunctionAgent(
    name="ResearchAgent",
    description="Useful for searching the web for information on a given topic and recording notes on the topic.",
    system_prompt=(
        "You are the ResearchAgent that can search the web for information on a given topic and record notes on the topic. "
        "Once notes are recorded and you are satisfied, you should hand off control to the WriteAgent to write a report on the topic."
    ),
    llm=llm,
    tools=[search_web, record_notes],
    can_handoff_to=["WriteAgent"],
)
```
Ensure that the `can_handoff_to` lists for each agent are correctly set up to allow the desired handoffs between agents. Here is the corresponding `FunctionAgent` setup for the writer:

```python
write_agent = FunctionAgent(
    name="WriteAgent",
    description="Useful for writing a report on a given topic.",
    system_prompt=(
        "You are the WriteAgent that can write a report on a given topic. "
        "Your report should be in a markdown format. The content should be grounded in the research notes. "
        "Once the report is written, you should get feedback at least once from the ReviewAgent."
    ),
    llm=llm,
    tools=[write_report],
    can_handoff_to=["ReviewAgent", "ResearchAgent"],
)
```
Check how your tools (`record_notes`, `write_report`, and `review_report`) interact with the context and state of the workflow. For example:

```python
async def record_notes(ctx: Context, notes: str, notes_title: str) -> str:
    """Useful for recording notes on a given topic. Your input should be notes with a title to save the notes under."""
    current_state = await ctx.get("state")
    if "research_notes" not in current_state:
        current_state["research_notes"] = {}
    current_state["research_notes"][notes_title] = notes
    await ctx.set("state", current_state)
    return "Notes recorded."
```
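The tool above follows a plain read-modify-write pattern: fetch the whole state dict from the context, mutate one nested key, and write the dict back. This can be illustrated without LlamaIndex at all; `FakeContext` below is a hypothetical dict-backed stand-in for the real `Context`, used only to make the flow self-contained:

```python
import asyncio


class FakeContext:
    """Hypothetical stand-in for LlamaIndex's Context: an async key/value store."""

    def __init__(self, initial_state):
        self._store = {"state": initial_state}

    async def get(self, key):
        return self._store[key]

    async def set(self, key, value):
        self._store[key] = value


async def record_notes(ctx, notes: str, notes_title: str) -> str:
    # Same read-modify-write pattern as the real tool above.
    current_state = await ctx.get("state")
    if "research_notes" not in current_state:
        current_state["research_notes"] = {}
    current_state["research_notes"][notes_title] = notes
    await ctx.set("state", current_state)
    return "Notes recorded."


ctx = FakeContext({"report_content": "Not written yet."})
result = asyncio.run(record_notes(ctx, "Solar output rose in 2023.", "solar"))
state = asyncio.run(ctx.get("state"))
print(result)                   # Notes recorded.
print(state["research_notes"])  # {'solar': 'Solar output rose in 2023.'}
```

Because the tool writes the whole dict back with `ctx.set`, any other tool that later reads `"state"` sees the recorded notes.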
You can stream events from the `AgentWorkflow`. This allows you to see what's happening under the hood as the agents work and hand off control. Here's an example of how you can set this up:

```python
from llama_index.core.agent.workflow import AgentOutput, ToolCall, ToolCallResult

handler = agent_workflow.run(user_msg="Your task description here")

async for event in handler.stream_events():
    if isinstance(event, AgentOutput):
        if event.response.content:
            print("📤 Output:", event.response.content)
        if event.tool_calls:
            print(
                "🛠️ Planning to use tools:",
                [call.tool_name for call in event.tool_calls],
            )
    elif isinstance(event, ToolCallResult):
        print(f"🔧 Tool Result ({event.tool_name}):")
        print(f"  Arguments: {event.tool_kwargs}")
        print(f"  Output: {event.tool_output}")
    elif isinstance(event, ToolCall):
        print(f"🔨 Calling Tool: {event.tool_name}")
        print(f"  With arguments: {event.tool_kwargs}")
```
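The loop above is ordinary `isinstance` dispatch over an async stream. A self-contained sketch of that pattern (the event classes here are hypothetical stand-ins, not the real LlamaIndex types) shows how each event type maps to a different log line:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class ToolCall:  # stand-in for LlamaIndex's ToolCall event
    tool_name: str
    tool_kwargs: dict


@dataclass
class ToolCallResult:  # stand-in for LlamaIndex's ToolCallResult event
    tool_name: str
    tool_output: str


async def stream_events(events):
    # Mimics handler.stream_events(): yields events one at a time.
    for event in events:
        yield event


async def main():
    lines = []
    events = [
        ToolCall("search_web", {"query": "solar output 2023"}),
        ToolCallResult("search_web", "3 results found"),
    ]
    async for event in stream_events(events):
        # Check the more specific type first, as in the loop above.
        if isinstance(event, ToolCallResult):
            lines.append(f"Tool Result ({event.tool_name}): {event.tool_output}")
        elif isinstance(event, ToolCall):
            lines.append(f"Calling Tool: {event.tool_name}")
    return lines


lines = asyncio.run(main())
print(lines)
```

The same structure extends to any number of event types; unrecognized events simply fall through the `elif` chain.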
First, make sure your `AgentWorkflow` is set up correctly, especially the `can_handoff_to` parameters for each agent, as mentioned in the previous answer. To add debugging output to the `AgentWorkflow`, the simplest option is the global "simple" handler, which prints prompts and responses; the `LlamaDebugHandler` gives more detailed tracing:

```python
import llama_index.core

llama_index.core.set_global_handler("simple")
```
While streaming events, you can also inspect the current workflow state and the active agent from the handler's context:

```python
if isinstance(event, AgentOutput):
    state = await handler.ctx.get("state")
    print("Current state:", state)
    current_agent_name = await handler.ctx.get("current_agent_name")
    print("Current agent:", current_agent_name)
```
Alternatively, use the `LlamaDebugHandler` for more detailed tracing:

```python
from llama_index.core.callbacks import LlamaDebugHandler, CallbackManager

debug_handler = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([debug_handler])

agent_workflow = AgentWorkflow(
    agents=[research_agent, write_agent, review_agent],
    root_agent=research_agent.name,
    initial_state={
        "research_notes": {},
        "report_content": "Not written yet.",
        "review": "Review required.",
    },
    callback_manager=callback_manager,
)
```
Double-check the `can_handoff_to` configurations: verify that the `can_handoff_to` lists for each agent are correctly set up. For example:

```python
print("ResearchAgent can handoff to:", research_agent.can_handoff_to)
print("WriteAgent can handoff to:", write_agent.can_handoff_to)
print("ReviewAgent can handoff to:", review_agent.can_handoff_to)
```
Remember that a handoff will only succeed if the target agent's name appears in the `can_handoff_to` list of the current agent.