okay.
answer_engine = index.as_query_engine(doc_ids=doc_ids, output_cls=ev.output, use_async=True, llm=self.llm)
This is how I create the query engine.
When I send text to OpenAI through the query engine, LlamaIndex automatically injects additional context into the prompt, similar to this:
Context information is below.\n---------------------\nfile_name: 695658e2-6966-48b5-be32-61aa29a19257
followed by the rest of the context data.
Since these file names/doc IDs are different on every run, the prompt text is never byte-identical, so I believe the seed mechanism can't work properly, and that's why the results differ.
Everything in the prompt except the file_name/doc IDs is the same.
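One workaround I'm considering is deriving each doc ID from a hash of the document's content instead of letting it default to a random UUID, so the injected file_name/doc IDs (and therefore the whole prompt) are identical across runs. A minimal sketch of the hashing part (the `stable_doc_id` helper is my own; I'm assuming the resulting id can be passed to LlamaIndex's `Document` via its `id_` argument):

```python
import hashlib

def stable_doc_id(text: str) -> str:
    """Deterministic id derived from the content, replacing a random UUID.

    Same input text -> same id on every run, so the context block
    LlamaIndex builds into the prompt stays byte-identical.
    """
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Assumed usage (not verified against my exact LlamaIndex version):
# doc = Document(text=text, id_=stable_doc_id(text),
#                metadata={"file_name": stable_doc_id(text)})
```

With stable IDs, the only remaining source of nondeterminism should be the model itself, which the seed parameter is meant to address.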