hi everyone! i have a llamaindex pipeline that tokenizes, extracts entities, etc. from a list of documents and stores the results in a vector store. after that, how do i create a query engine on top of that vector store so i can ask questions about the indexed documents? the vector store already contains embeddings from previous pipeline runs, so i don't want to use VectorStoreIndex.from_documents(...), which would re-embed everything - what should i do instead?