
Hello I have been exploring llamaindex

Hello, I have been exploring llamaindex, managed to do some indexing and basic querying.
I have a bit of a blindspot with my understanding, do queries only query my indexes?
14 comments
yes - though just so I understand, @FairlyAverage: if they weren't querying your indices, what would they be querying against?
I think I sometimes have a similar problem understanding what "querying your index" means, @jerryjliu0. I am assuming that the index provides context for the LLM to find answers within its "large knowledge base", rather than just from the documents we are indexing? This is the part I am not 100% sure about, if that makes sense.
In the prompt you can specify not to use prior knowledge and to base the answer on your index… from what I've experienced, you use llama index to prevent the LLM from hallucinating on specific answers and ofc to "add" knowledge while taking advantage of the LLM's skills
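A minimal sketch of what such a restrictive prompt could look like. The template wording, variable names, and helper function here are illustrative, not LlamaIndex's actual built-in defaults:

```python
# Illustrative QA prompt template that tells the model to ignore prior knowledge.
# The wording and names are hypothetical, not LlamaIndex's real default template.
RESTRICTIVE_QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context}\n"
    "---------------------\n"
    "Using ONLY the context above and no prior knowledge, "
    "answer the question: {query}\n"
    "If the context does not contain the answer, say \"I don't know.\"\n"
)

def build_prompt(context: str, query: str) -> str:
    """Fill the template with retrieved context and the user's question."""
    return RESTRICTIVE_QA_TEMPLATE.format(context=context, query=query)
```

The key point is that the "no prior knowledge" instruction lives in the prompt text itself, so it can be swapped for a looser one when you do want the model's general knowledge to fill gaps.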
Just using the basic examples provided in the tutorials, I indexed a folder of technical documents. I was able to ask questions about things not covered by that corpus.
So I'm a bit unsure about my understanding, specifically about how to constrain the query to the corpus index.
Interestingly, it did answer my queries in the style of the corpus.
ahh i see. yeah, the way to understand a query is as an initial "input prompt" that llamaindex will augment with additional context from your data, and that will give you the final result
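A toy sketch of that flow. Everything here is a stand-in: real systems use learned embeddings and an LLM call, while this uses a crude bag-of-words similarity and just returns the augmented prompt string:

```python
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (crude stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def augment_query(query: str, nodes: list[str]) -> str:
    """Prepend the most similar node to the query - this is the final 'input prompt'."""
    best = max(nodes, key=lambda n: similarity(query, n))
    return f"Context: {best}\n\nQuestion: {query}"

nodes = [
    "The warp drive manual covers coil alignment.",
    "Quarterly sales rose 4% in the third quarter.",
]
prompt = augment_query("How do I align the warp coils?", nodes)
```

The retrieved context steers the model toward your data, but nothing physically stops the model from also drawing on prior knowledge, which is why the prompt-level instruction above matters.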
Thanks for explaining that. As I delve deeper, I am also finding a significant amount of overlap between llama index and langchain. Any advice on how you would combine the two? Currently my plan is to index with llama, store the results with pinecone, then query with langchain?
I personally switched almost everything to llama, and I'm losing track of all the news on the langchain side. For now, querying with llama index gives me better results (maybe because of the prompts and refining)
But I would be interested in a comparison!
ok, so a query is an LLM prompt, and it is prefixed with the most similar node in the index?
Certainly when I indexed my Obsidian vault and asked questions, it did generally return my content.
sometimes prior knowledge creeps in if your top hits contain irrelevant context (high embedding similarity, but not actually answering the question). it also happens if you have multiple hits: after multiple rounds of trying to refine the answer, the final answer gets screwed up. it's a combination of using prior knowledge and hallucination/confusion that you are seeing.
chunking the source texts is a big problem that needs a better solution.
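For illustration, one naive fixed-size chunking approach with overlap (the function name and default parameters are made up for this example; cutting on raw character counts ignores sentence boundaries, which is part of why chunking is hard):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with some overlap.

    Naive: cuts mid-sentence and mid-word, which can strand an answer
    across chunk boundaries - one reason chunking needs better solutions.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

The overlap is a hedge against boundary losses: a sentence cut at the end of one chunk is usually repeated whole at the start of the next, at the cost of indexing some text twice.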