Passing Custom Prompt Template to the FunctionAgent

At a glance

The community members are discussing how to pass custom prompt templates to the FunctionAgent so that they can prompt engineer the system prompt, tool output, and other aspects. The post includes code for setting up a FunctionAgent with various prompts and tools. The comments indicate that the tool description, tool output, and tool inputs are controlled by the user, and that there are a few default prompts that can be used or customized. The community members also mention that the exact LLM inputs can be inspected by iterating over the events. There is no explicitly marked answer in the comments.

How can I pass a custom prompt template to the FunctionAgent, where I can prompt engineer the system prompt, tool output, etc.?
Plain Text
from llama_index.core.agent.workflow import FunctionAgent, AgentWorkflow
from llama_index.llms.vllm import Vllm

from prompts import (
    ORCHESTRATOR_SYSTEM_PROMPT,
    NOTION_SYSTEM_PROMPT,
    IMPLEMENTATION_SYSTEM_PROMPT,
)
from tools import notion_retrieval_tool, implementation_tool

llm = Vllm(
    model="model_name",
    tensor_parallel_size=4,
    max_new_tokens=100,
    vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
)


orchestrator_agent = FunctionAgent(
    name="OrchestratorAgent",
    description=(
        "You are the OrchestratorAgent responsible for coordinating tasks between multiple agents."
    ),
    system_prompt=ORCHESTRATOR_SYSTEM_PROMPT,
    llm=llm,
    tools=[],
    can_handoff_to=["NotionAgent", "ImplementationAgent"],
)
9 comments
Yeah I see, but can I get more info on how it is parsed? Can I inject parts like the description, tool descriptions, and tool output?
I don't even know the structure of the baseline system prompt you guys are passing
There is no baseline prompt
It's a FunctionAgent --- it uses the LLM's built-in tool calling API. It just parses your functions into schemas
You can actually inspect the exact LLM inputs being used as it runs by iterating over the events
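For example, a minimal sketch assuming an AgentWorkflow named agent_workflow built from these agents, and the AgentInput / ToolCallResult events from llama_index.core.agent.workflow:
Plain Text
from llama_index.core.agent.workflow import AgentInput, ToolCallResult

# Run the workflow and watch the events it emits (inside an async context)
handler = agent_workflow.run(user_msg="Where do I configure the retriever?")

async for event in handler.stream_events():
    if isinstance(event, AgentInput):
        # The exact chat messages (system prompt, tool outputs, handoff messages, ...) sent to the LLM
        print(event.current_agent_name, event.input)
    elif isinstance(event, ToolCallResult):
        print(event.tool_name, event.tool_output)

response = await handler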
Tool description is controlled by you (it's the docstring)

There are a few prompts that are specific to:
  • the state (you aren't using any initial state, so this is unused)
  • the description for the handoff tool
  • the output of the handoff tool
https://github.com/run-llama/llama_index/blob/0ff60af4cfea146a02c63075b967e97956aec9b0/llama-index-core/llama_index/core/agent/workflow/multi_agent_workflow.py#L40

Those use our standard prompt system for getting/setting them
Tool output is controlled by you (it's the type annotation)
Tool inputs are controlled by you (it's the type annotation)
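As an illustration, here is a sketch of a hypothetical tool function (the name and body are made up) showing where the docstring and annotations end up; fn_schema_str is, as far as I recall, the schema string llama_index derives from the annotations:
Plain Text
from llama_index.core.tools import FunctionTool

def search_notion(query: str, top_k: int = 3) -> str:
    """Search the Notion workspace and return the most relevant passages."""
    # ... retrieval logic goes here; whatever you return is what the LLM sees as the tool output
    return f"Top {top_k} results for {query!r}: ..."

notion_tool = FunctionTool.from_defaults(fn=search_notion)
print(notion_tool.metadata.description)    # the docstring above becomes the tool description
print(notion_tool.metadata.fn_schema_str)  # JSON schema built from the type annotations

The wrapped FunctionTool (or, I believe, the raw function itself) can then go in the agent's tools=[...] list.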

You can use the defaults above, or use agent_workflow.get_prompts() and agent_workflow.update_prompts() (or just set them in the constructor)
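For example (a sketch; check the exact prompt keys against get_prompts() -- "handoff_prompt" below is only an assumed key):
Plain Text
from llama_index.core import PromptTemplate

# See which prompts the workflow exposes (the handoff/state prompts linked above)
print(agent_workflow.get_prompts().keys())

# Override one of them -- replace "handoff_prompt" with a real key from the output above
agent_workflow.update_prompts(
    {"handoff_prompt": PromptTemplate("Useful for handing off to another agent: {agent_info}")}
)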