Hello, I'm just starting to build a knowledge graph / long-term memory for my LLM agents,
and I'm evaluating LlamaIndex for it.
A couple of quick questions:
- Is there an S3 bucket loader in LlamaIndex? (I've sketched the kind of usage I'm hoping for after this list.)
- I don't understand this pattern:
  vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
  storage_context = StorageContext.from_defaults(vector_store=vector_store)
  Why do I need to pass the vector store both to the storage context and again when loading the index, when the storage context itself already knows which vector store is used? (The fuller snippet I'm referring to is after this list.)
- Do I need to store the embeddings in some blob storage as well? If my vector store goes down, I might otherwise need to recreate all of them.
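
For the first question, this is roughly the kind of loader I'm hoping exists. The reader name, import path, and parameters below are my guesses, not something I've verified against the docs:

```python
# Hypothetical usage sketch: I'm assuming a reader roughly like this exists;
# the class name and parameters are guesses, please correct me if it differs.
from llama_index.readers.s3 import S3Reader

reader = S3Reader(
    bucket="my-agent-memory",       # assumed: bucket holding the raw documents
    prefix="knowledge-base/",       # assumed: only load objects under this prefix
    aws_access_id="...",            # assumed: credentials passed in directly
    aws_access_secret="...",
)
documents = reader.load_data()      # assumed: returns LlamaIndex Document objects
```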
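
For the second question, here is the fuller pattern I'm asking about, reconstructed from the Chroma example I was following (the collection setup at the top is my own filler so the snippet is self-contained, and the import paths assume a recent llama-index release, so some details may be off):

```python
import chromadb
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Filler setup: open a local Chroma collection to hang the example on
chroma_client = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = chroma_client.get_or_create_collection("agent_memory")

# Wrap the collection in a LlamaIndex vector store
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# The storage context is built from the vector store...
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# ...yet the example I followed passes the vector store again when loading the
# index, alongside that same storage context. This duplication is what I don't
# understand.
index = VectorStoreIndex.from_vector_store(
    vector_store,
    storage_context=storage_context,
)
```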