di5corder4701
Joined December 13, 2024
hi everyone! i have a LlamaIndex pipeline that tokenizes, extracts entities, etc. from a list of documents and stores the results in the vector_store. After that, how do i create a query engine on top of that vector store so i can ask questions about the documents indexed in it? The vector store already has embeddings from previous pipeline runs, so i don't want to use VectorStoreIndex.from_documents(...) because the embeddings are already in the data store - what do i do?

pipeline = IngestionPipeline(
    transformations=[
        SummaryExtractor(summaries=["self"], llm=qa_llm),
        QuestionsAnsweredExtractor(llm=qa_llm, questions=1),
        EntityExtractor(label_entities=True, llm=qa_llm),
        embed_model,
    ],
    vector_store=vector_store,
)
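For the case described above (embeddings already persisted by earlier pipeline runs), LlamaIndex provides `VectorStoreIndex.from_vector_store`, which builds an index directly on an existing vector store without re-ingesting or re-embedding documents. A minimal sketch is below; the `vector_store`, `embed_model`, and `qa_llm` objects are assumed to be the same ones used in the ingestion pipeline, and you would substitute your own.

```python
from llama_index.core import VectorStoreIndex

# Attach an index to the already-populated vector store.
# No documents are passed in and nothing is re-embedded;
# the index reads what previous pipeline runs stored.
# embed_model is still needed so queries are embedded the
# same way the stored nodes were.
index = VectorStoreIndex.from_vector_store(
    vector_store,
    embed_model=embed_model,
)

# Build a query engine on top of the index and ask questions.
query_engine = index.as_query_engine(llm=qa_llm)
response = query_engine.query("What entities appear in the documents?")
print(response)
```

The key point is that `from_vector_store` only wraps the existing store, unlike `from_documents`, which would run the full embedding step again.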