Creating a Query Engine on Top of an Existing Vector Store

Hi everyone! I have a LlamaIndex pipeline that tokenizes a list of documents, extracts entities, and so on, and stores the results in a vector store. After that, how do I create a query engine on top of that vector store so I can ask questions about the indexed documents? The vector store already contains embeddings from previous pipeline runs, so I don't want to use VectorStoreIndex.from_documents(...), which would re-embed everything. What should I do instead?

pipeline = IngestionPipeline(
    transformations=[
        SummaryExtractor(summaries=["self"], llm=qa_llm),
        QuestionsAnsweredExtractor(llm=qa_llm, questions=1),
        EntityExtractor(label_entities=True, llm=qa_llm),
        embed_model,
    ],
    vector_store=vector_store,
)