Hey all, has anyone built a react agent with Claude via Bedrock and enabled streaming properly via Streamlit?
streaming in streamlit should work the same as with any other LLM

I think the pattern was

Plain Text
with st.empty():
    response = agent.stream_chat("chat")
    response_str = ""
    for token in response.response_gen:
        response_str += token
        st.write(response_str)  # rewrite the placeholder with the accumulated text
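For context, a fuller self-contained version of that pattern might look like this (a sketch; it assumes an already-built llama-index agent named agent, with the Bedrock/Claude setup elided):

Plain Text
import streamlit as st

# `agent` is assumed to be an already-constructed llama-index agent,
# e.g. ReActAgent.from_tools(...); its construction is elided here
prompt = st.chat_input("Say something")
if prompt:
    response = agent.stream_chat(prompt)
    placeholder = st.empty()
    response_str = ""
    for token in response.response_gen:
        response_str += token
        placeholder.write(response_str)  # overwrite the placeholder with the text so far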
it seems to fail immediately at the first tool call (ReAct), so I'm not sure if my custom tools are causing the problem or not
When you say fail, is there some traceback?
ah found it

ERROR:root:An exception occured: The provided input (type: <class 'llama_index.core.chat_engine.types.StreamingAgentChatResponse'>) cannot be iterated. Please make sure that it is a generator, generator function or iterable.
but this fails before it is about to execute its first function call, so we wouldn't expect anything to be streamed back as the final response at this point, I would think
it sounds like you are trying to iterate response instead of response.response_gen

Hard to say though without a traceback
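To illustrate the distinction (grounded in the error above; the tokens live on the response_gen attribute of the StreamingAgentChatResponse):

Plain Text
response = agent.stream_chat(prompt)

# Fails with the "cannot be iterated" error above: the response
# object itself is not a generator or iterable
# for token in response:
#     ...

# Works: iterate the generator exposed as response.response_gen
for token in response.response_gen:
    print(token, end="")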
when using this with the ReAct agent, is there something to configure so that response.response_gen only streams the final result? I'm having trouble with it streaming prior to the first function call, which results in the agent loop ending
I don't actually see a variable signifying the presence of streaming in the parse_action_reasoning_step function of the ReAct output parser. Could this be the cause?
It's supposed to detect when the final answer is being streamed, but maybe that's not working quite right for some LLMs (it works for OpenAI in my testing, though)
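Conceptually, that detection has to buffer the streamed tokens and check for the final-answer marker before handing the stream to the caller; a rough sketch of the idea (an illustration only, not the library's actual code):

Plain Text
def looks_like_final_answer(buffer: str) -> bool:
    """Guess whether buffered ReAct output is a final answer rather than a tool call."""
    # "Action:" means the agent wants a tool call, so keep looping
    if "Action:" in buffer:
        return False
    # Otherwise an "Answer:" marker signals the final streamed response
    return "Answer:" in buffer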
for ref this is the call:

Plain Text
assistant_response = st.session_state.agent.stream_chat(prompt)
assistant_response_s = st.write_stream(assistant_response.response_gen)

where the agent has a few tools
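For reference, a typical construction of such an agent might look like this (a sketch; the tool body and the Bedrock model id are placeholder assumptions):

Plain Text
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.bedrock import Bedrock

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

llm = Bedrock(model="anthropic.claude-v2")  # placeholder model id
tools = [FunctionTool.from_defaults(fn=multiply)]
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)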
okay, got it. I see the detection of streaming and the fetching of the final answer in the extract_final_response function, but in the parse_action_reasoning_step function there doesn't seem to be any indicator of streaming vs. non-streaming
for the OpenAI one that is working, is it using function calling?
no, a ReAct agent will always use the ReAct reasoning loop, no matter the LLM
just for reference here:

Plain Text
        if "Thought:" not in output:
            # NOTE: handle the case where the agent directly outputs the answer
            # instead of following the thought-answer format
            return ResponseReasoningStep(
                thought="(Implicit) I can answer without any more tools!",
                response=output,
                is_streaming=is_streaming,
            )

        # An "Action" should take priority over an "Answer"
        if "Action:" in output:
            return parse_action_reasoning_step(output)

        if "Answer:" in output:
            thought, answer = extract_final_response(output)
            return ResponseReasoningStep(
                thought=thought, response=answer, is_streaming=is_streaming
            )



Plain Text
def parse_action_reasoning_step(output: str) -> ActionReasoningStep:
    """
    Parse an action reasoning step from the LLM output.
    """
    # Weaker LLMs may generate ReActAgent steps whose Action Input are horrible JSON strings.
    # `dirtyjson` is more lenient than `json` in parsing JSON strings.
    import dirtyjson as json

    thought, action, action_input = extract_tool_use(output)
    json_str = extract_json_str(action_input)
    # First we try json, if this fails we use ast
    try:
        action_input_dict = json.loads(json_str)
    except Exception:
        action_input_dict = action_input_parser(json_str)
    return ActionReasoningStep(
        thought=thought, action=action, action_input=action_input_dict
    )

I don't see anything around streaming when extracting the action step (which is where my agent is failing), although I'm still not sure why that would affect OpenAI vs. non-OpenAI calls
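One defensive workaround while this is being debugged (a sketch; it gives up streaming when the streaming path breaks so the tool calls still run):

Plain Text
try:
    assistant_response = st.session_state.agent.stream_chat(prompt)
    assistant_response_s = st.write_stream(assistant_response.response_gen)
except Exception:
    # Fall back to the non-streaming path so the ReAct loop and
    # tool calls still execute end to end
    assistant_response_s = str(st.session_state.agent.chat(prompt))
    st.write(assistant_response_s)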