Llama Index Setup for Knowledge Graph and LLM Agents

Hello, I'm just starting to build a knowledge graph / long-term memory for my LLM agents, and I'm evaluating LlamaIndex for it.

Couple of quick questions -

  1. Is there an S3 bucket loader in LlamaIndex?
  2. I don't understand this snippet:

Python
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

Why do I need to pass the vector store both when loading the index and when creating the storage context, when the storage context itself already knows which vector store is used?
  3. Do I need to store the embeddings in some blob storage as well? If my vector store goes down, I might need to recreate all of them.
Hey!

  1. Yes, there is: https://llamahub.ai/l/readers/llama-index-readers-s3?from= (rough usage sketch after this list)
  2. How are you loading the index back?
Python
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

If you are creating a storage_context yourself, then you have to pass in your vector_store reference; otherwise it will create a default one on its own. (There's a sketch of both paths after this list.)

  3. You can. If the service (e.g. Qdrant or Chroma, which has never happened to me) goes down and does not come back up correctly, then yeah, you'll have to redo the embedding process again.
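
On point 1, here's a rough sketch of how the S3 reader is typically used (not tested here; the bucket name and prefix are placeholders, and the reader ships in the llama-index-readers-s3 package from the link above):

Python
# Rough sketch: bucket name and prefix are placeholders.
# pip install llama-index-readers-s3
from llama_index.readers.s3 import S3Reader

reader = S3Reader(bucket="my-agent-memory-bucket", prefix="docs/")
documents = reader.load_data()  # list of Document objects you can feed into an index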
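
And on point 2, a minimal sketch of the two paths, assuming a persistent Chroma collection (paths and collection name are placeholders, and documents is the list loaded in the S3 sketch above): you pass the storage_context only when building the index the first time; once the embeddings are in the vector store, you reconnect with from_vector_store and nothing gets re-embedded.

Python
# Sketch, assuming a persistent Chroma collection; names and paths are placeholders.
import chromadb
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

chroma_client = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = chroma_client.get_or_create_collection("agent_memory")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# First build: the storage_context tells the index where to write the embeddings.
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Later loads: the embeddings already live in Chroma, so just reconnect to them.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)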
Thanks @WhiteFang_Jr for answering.

If you have more context on the storage context, can you tell me more about when to use it? Why would I need to store documents separately, the index store separately, etc.?
Yeah sure, you can read more about the storage context here: https://docs.llamaindex.ai/en/stable/module_guides/storing/
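
In short, and this is just a sketch of the flow those docs describe (paths are placeholders): the storage context also tracks a docstore and an index store alongside the vector store, and you can persist them to disk (or copy that directory to blob storage) and reload everything later without re-embedding.

Python
# Sketch: persist the docstore / index store that live alongside the vector store.
from llama_index.core import StorageContext, load_index_from_storage

# After building an index with a storage_context (as in the Chroma sketch above):
storage_context.persist(persist_dir="./storage")

# Later: rebuild the storage context from disk (pass the vector store again if you
# use an external one like Chroma) and reload the index without re-embedding.
storage_context = StorageContext.from_defaults(
    persist_dir="./storage", vector_store=vector_store
)
index = load_index_from_storage(storage_context)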