Find answers from the community

Lvka
Joined November 4, 2024
How can I pass a custom prompt template to the FunctionAgent, so I can prompt-engineer the system prompt, tool outputs, etc.?
Plain Text
from llama_index.core.agent.workflow import FunctionAgent, AgentWorkflow
from llama_index.llms.vllm import Vllm

from prompts import (
    ORCHESTRATOR_SYSTEM_PROMPT,
    NOTION_SYSTEM_PROMPT,
    IMPLEMENTATION_SYSTEM_PROMPT,
)
from tools import notion_retrieval_tool, implementation_tool

llm = Vllm(
    model="model_name",
    tensor_parallel_size=4,
    max_new_tokens=100,
    vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
)


orchestrator_agent = FunctionAgent(
    name="OrchestratorAgent",
    description=(
        "You are the OrchestratorAgent responsible for coordinating tasks "
        "between multiple agents."
    ),
    system_prompt=ORCHESTRATOR_SYSTEM_PROMPT,
    llm=llm,
    tools=[],
    can_handoff_to=["NotionAgent", "ImplementationAgent"],
)
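One way to prompt-engineer tool outputs, assuming FunctionAgent exposes no dedicated hook for this in your version, is to wrap each tool function so its return value is reformatted before the LLM sees the result. The sketch below is plain Python; `lookup_docs` and the template string are hypothetical placeholders, not part of any llama_index API:

```python
import functools


def with_output_template(tool_fn, template):
    """Wrap a tool so its raw return value is re-rendered through a
    prompt template before the agent's LLM sees it."""
    @functools.wraps(tool_fn)
    def wrapped(*args, **kwargs):
        raw = tool_fn(*args, **kwargs)
        return template.format(output=raw)
    return wrapped


# Hypothetical tool used for illustration:
def lookup_docs(query: str) -> str:
    return "matched 3 pages"


formatted_lookup = with_output_template(
    lookup_docs,
    "### Retrieval result\n{output}\n(Answer using only the text above.)",
)
```

You would then register the wrapped function (e.g. `formatted_lookup`) in the agent's `tools` list instead of the bare one, keeping `system_prompt` as the place for the system-level instructions.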
9 comments
What should I set my LlamaParse object arguments to, so that my parsing_instruction makes any difference:
Plain Text
from llama_parse import LlamaParse
from llama_index.core import SimpleDirectoryReader

parser = LlamaParse(
    api_key=LLAMAINDEX_API_KEY,
    parsing_instruction=parsing_instruction,
    premium_mode=True,
    split_by_page=False,
    verbose=False,
)
file_extractor = {".pdf": parser, ".docx": parser, ".doc": parser}
documents = SimpleDirectoryReader(
    input_files=[file_path], file_extractor=file_extractor
).load_data()

document_text = "\n".join(
    [doc.text for doc in documents if hasattr(doc, "text")]
)
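A quick way to check whether `parsing_instruction` is actually being applied is to parse the same file twice, once with and once without the instruction, and diff the joined text. The comparison helper below is plain Python; the two document lists are assumed to come from two separate LlamaParse runs:

```python
def join_docs(documents):
    """Flatten a list of parsed Document objects into one string,
    mirroring the join in the snippet above."""
    return "\n".join(doc.text for doc in documents if hasattr(doc, "text"))


def instruction_changed_output(docs_with, docs_without):
    """True if the parsing instruction produced any textual difference
    between the two runs."""
    return join_docs(docs_with) != join_docs(docs_without)
```

If this returns False for a document where the instruction should clearly matter, that supports the suspicion that the instruction is being ignored for this mode.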


Doesn't make a difference here. I think LlamaIndex hasn't implemented this feature yet, but they shipped it 😄
35 comments