With the example here:
https://github.com/run-llama/llama_index/blob/main/docs/examples/agent/openai_assistant_agent.ipynb
We load an OpenAI assistant agent using a file so that we use the built-in retriever, but how do we add files after the agent is created?
I think you can just do something like uber_index.insert(document)
Since it's all in memory in pass-by-reference, it should work
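To illustrate the pattern being suggested: because the index lives in memory and is held by reference, inserting a document updates it in place for anything already using it. This is a toy stand-in, not the real llama_index API (which exposes a similar `insert(document)` method on its index classes):

```python
# Toy illustration of incremental insertion: documents can be added
# to an in-memory index after it is built, with no full rebuild.
# This stand-in Index class is NOT the real llama_index API; it just
# mirrors the index.insert(document) pattern discussed above.
class Index:
    def __init__(self, documents):
        self.documents = list(documents)

    def insert(self, document):
        # Append in place -- existing references to the index see the new doc.
        self.documents.append(document)

    def retrieve(self, keyword):
        # Naive keyword "retrieval" over the stored documents.
        return [d for d in self.documents if keyword in d]

uber_index = Index(["uber 10-K 2021"])
uber_index.insert("uber 10-K 2022")  # add a file after creation
results = uber_index.retrieve("10-K")  # both documents are now searchable
```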
@Logan M wdym? We don't pass in an index when we're doing it natively through the openai assistant agent
[Attachment: CleanShot_2023-11-07_at_16.15.21.png]
or at least, that's what I see here?
ohhh that's the built-in retrieval assistant (I was looking at the first section lol)

I have no idea in that case
I think you'd just have to re-build the agent with the added data
tbh from what I've seen though, OpenAI's retrieval doesn't work great when you add more files 😅
oh wow really? So you mean that like, openai retrieval with an initial large set of documents works better than an initial smaller set that grows into a larger set?
hmmm rebuilding the agent would lose us the context/memory tho right?
like, I mean it works fine for a small number of documents, much worse with larger ones. We are working on comparisons to llama-index 🙂
hmm, true true, unless it's using that threads thing to keep track of chat history? I haven't dove too deep into this specific API myself yet lol
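On the threads point: the idea is that chat history lives on a thread object separate from the assistant itself, so rebuilding the assistant with new files need not lose context as long as the same thread is reused. A toy sketch of that separation (all names here are hypothetical, not the actual OpenAI Assistants SDK):

```python
# Toy sketch: chat history lives on a Thread, separate from the Assistant.
# If the assistant is rebuilt (e.g. with added files), attaching the same
# thread preserves the conversation. These class names are hypothetical,
# not the real OpenAI Assistants API.
class Thread:
    def __init__(self):
        self.messages = []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

class Assistant:
    def __init__(self, files):
        self.files = list(files)

    def run(self, thread, user_msg):
        # Record the user turn, then an (obviously fake) assistant reply
        # that can see both the files and the full thread history.
        thread.add("user", user_msg)
        reply = (f"(answer using {len(self.files)} files, "
                 f"{len(thread.messages)} msgs of history)")
        thread.add("assistant", reply)
        return reply

thread = Thread()
agent = Assistant(files=["a.pdf"])
agent.run(thread, "summarize a.pdf")

# Rebuild the agent with an extra file; reuse the same thread.
agent = Assistant(files=["a.pdf", "b.pdf"])
agent.run(thread, "and b.pdf?")
# thread.messages still holds all four turns: history survived the rebuild
```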
how are the initial comparisons looking like?
worse somewhat?
Ah yeah same here, not sure how upstream memory works, will chime back in if I figure it out
yea worse somewhat, and no way to customize or improve it really πŸ˜…