How would I use a simple LangChain LLM with a simple chat engine in LlamaIndex?

How would I use a simple LangChain LLM with a simple chat engine and handle memory and token limits in LlamaIndex? All I'd like to do is make a chat interface using LlamaIndex and a custom LangChain LLM. The docs seem to suggest ChatMemoryBuffer, but I can't really understand how it works (does it summarize when the token limit is reached, or does it just remove the least recent messages?)
8 comments
The chat memory buffer just includes the most recent messages that fit under a token limit, so it drops the least recent messages rather than summarizing anything

Every chat agent is instantiated with one, but you might have to adjust the token limit
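
A minimal sketch of that truncation behavior, assuming the pre-0.10 llama_index import paths (newer releases move these modules under llama_index.core); the token_limit value here is arbitrary:

```python
from llama_index.llms import ChatMessage
from llama_index.memory import ChatMemoryBuffer

# The buffer keeps whole recent messages whose combined size stays
# under token_limit; older messages are dropped, never summarized.
memory = ChatMemoryBuffer.from_defaults(token_limit=50)

for i in range(20):
    memory.put(ChatMessage(role="user", content=f"message {i}"))

# Returns only the most recent messages that fit under the 50-token limit.
print(memory.get())
```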
Any way to include different types of memory?
Every chat agent is instantiated with one, but what about a chat engine?
Simple chat engines?
it's the only memory module we have, definitely welcome a PR on this
I see, gotcha
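
Putting the pieces together for the original question, here is a rough sketch of a simple chat engine driven by a custom LangChain LLM, assuming the legacy ServiceContext-based API (the ChatOpenAI model is just a stand-in for whichever LangChain LLM you actually use):

```python
from langchain.chat_models import ChatOpenAI  # any LangChain LLM works here
from llama_index import ServiceContext
from llama_index.chat_engine import SimpleChatEngine
from llama_index.llms import LangChainLLM
from llama_index.memory import ChatMemoryBuffer

# Wrap the LangChain LLM so LlamaIndex components can call it.
llm = LangChainLLM(llm=ChatOpenAI())
service_context = ServiceContext.from_defaults(llm=llm)

# Truncation-only memory: the oldest messages fall off once the limit is hit.
memory = ChatMemoryBuffer.from_defaults(token_limit=1500)

chat_engine = SimpleChatEngine.from_defaults(
    service_context=service_context,
    memory=memory,
)

print(chat_engine.chat("Hi, what can you do?"))
```

For an interactive terminal interface, chat engines also expose a chat_repl() helper that loops over chat() until you exit.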