Find answers from the community

Brent
Hi @All

I have two documents, two guide files on different topics, in one index. When I query about a guide, the returned sources include nodes from both documents. How can I select the node that best matches the answer?
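
One way to do this is to inspect the retrieved nodes' similarity scores and source metadata and keep the best match. A minimal sketch, assuming `index` is the existing index and the query string is a placeholder:

# Retrieve candidates, then keep the highest-scoring node and note its source file.
candidates = index.as_retriever(similarity_top_k=4).retrieve("your question")
best = max(candidates, key=lambda n: n.score or 0.0)
print(best.node.metadata.get("file_name"), best.score)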
4 comments
Hi @all.

I cannot delete a doc in the index using the doc_id.

doc_id = f"{id}"
index.delete(doc_id)

# Save the index after the update
index.storage_context.persist(persist_dir=f"{Constants.index_path}/{folder}")

I got the error:
NotImplementedError: Delete not yet implemented for Faiss index.
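
For context, the LlamaIndex Faiss integration does not implement per-document deletion (which is exactly what the error says), so this fails regardless of the doc_id. A sketch of the usual alternatives:

# With a vector store that does support deletion (e.g. the default
# SimpleVectorStore), the ref_doc_id route removes the document and its nodes:
index.delete_ref_doc(doc_id, delete_from_docstore=True)

# With Faiss, the common workaround is to rebuild the index from the
# remaining documents and persist that.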
2 comments
Hi @all.
I am using CondensePlusContextChatEngine for chatting, but I am facing an issue where the response time is too long, averaging over 12 seconds to return an answer. How can I optimize this response time?
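
Without seeing the full setup it is hard to be specific, but two common levers are streaming the response and retrieving fewer chunks. A sketch, assuming `chat_engine` is the existing engine:

# Streaming makes the first tokens appear almost immediately instead of
# blocking until the full answer is generated.
response = chat_engine.stream_chat(input_text)
for token in response.response_gen:
    print(token, end="", flush=True)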
50 comments
Hi all.

I am using CondensePlusContextChatEngine. How can I generate related questions from my original question?

Thanks!
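
One lightweight option is to ask the LLM for rephrasings directly; the prompt wording below is only illustrative:

from llama_index.core import Settings

# Ask the configured LLM to expand the incoming question into variants.
variants = Settings.llm.complete(
    f"Rewrite the following question in 3 different ways:\n{input_text}"
)
print(variants.text)

(QueryFusionRetriever's num_queries argument performs the same kind of query expansion internally.)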
9 comments
Hi all

I have an index for querying, and after a query completes I want to suggest a few queries related to the previous one. How can I do that?

I am using RetrieverQueryEngine.
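
A sketch of one approach: feed the question and answer back to the LLM and ask it to propose related queries (the prompt wording is illustrative):

from llama_index.core import Settings

response = query_engine.query(user_query)
followups = Settings.llm.complete(
    f"Question: {user_query}\nAnswer: {response}\n"
    "Suggest 3 related queries the user might ask next."
)
print(followups.text)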
12 comments
Hi
I'm using PrevNextNodePostprocessor with RetrieverQueryEngine. When querying, I receive the error ValueError: doc_id not found. What causes this error, and how can I solve it?

Settings.chunk_size = 512
docstore = SimpleDocumentStore()
documents = SimpleDirectoryReader(f"{Constants.docs_path}").load_data()
nodes = Settings.node_parser.get_nodes_from_documents(documents)
docstore.add_documents(nodes)

node_postprocessor = PrevNextNodePostprocessor(docstore=docstore, num_nodes=4)
query_engine = RetrieverQueryEngine.from_args(
    retriever,
    streaming=True,
    text_qa_template=prompt_tmpl,
    service_context=service_context,
    node_postprocessors=[node_postprocessor],
)
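
For later readers: PrevNextNodePostprocessor resolves each retrieved node's prev/next relationships by looking the neighbouring doc_ids up in the docstore it was given, so this error usually means `retriever` was built from nodes parsed separately from the ones added to `docstore`. A sketch of building both from the same nodes (the StorageContext wiring is an assumption about the rest of the setup):

from llama_index.core import StorageContext, VectorStoreIndex

# Index the very same node objects that were added to the docstore.
storage_context = StorageContext.from_defaults(docstore=docstore)
index = VectorStoreIndex(nodes, storage_context=storage_context)
retriever = index.as_retriever()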
1 comment
Brent

Retrieve

Hi all.
I am currently using QueryFusionRetriever for querying, but I am facing an issue where it cannot fetch additional related nodes to answer a question; it only relies on the information from one node that it finds. For example, if I have a question whose answer lies across three consecutive nodes with relationships, the results only provide data from the first node. How can I retrieve the information from all three nodes? Here is my code:

retriever = QueryFusionRetriever(
    indexes,
    similarity_top_k=6,
    num_queries=3,
    mode="simple",  # reciprocal_rerank
    use_async=True,
    verbose=True,
)

Thank you!
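
Since the three nodes are consecutive and linked by relationships, one option is to widen each hit with its stored neighbours rather than relying on similarity alone. A sketch, assuming the nodes were added to a docstore:

from llama_index.core.postprocessor import PrevNextNodePostprocessor
from llama_index.core.query_engine import RetrieverQueryEngine

# Pull in up to 2 stored neighbours on each side of every retrieved node.
node_postprocessor = PrevNextNodePostprocessor(docstore=docstore, num_nodes=2, mode="both")
query_engine = RetrieverQueryEngine.from_args(
    retriever, node_postprocessors=[node_postprocessor]
)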
5 comments
Hello. I am using OpenAIAgent along with QueryEngineTool and FunctionTool, but despite indexing specific data, it still returns results from outside that dataset. Are there any options to limit responses to only the data I have indexed?
Thanks all.
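
There is no hard guarantee, but a system prompt that pins the agent to its tool results usually helps. A sketch (the wording is illustrative, `tools` stands for the QueryEngineTool/FunctionTool list, and the import path assumes the current split-package layout):

from llama_index.agent.openai import OpenAIAgent

agent = OpenAIAgent.from_tools(
    tools,
    system_prompt=(
        "Answer ONLY from the tool results. If the tools return "
        "nothing relevant, say you don't know."
    ),
)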
3 comments
Brent

Hi all.
I have a question: why doesn't using a service_context in CondensePlusContextChatEngine work for token counting? When I run it, I always get a value of 0.
response = chat_engine.chat(input_text)
print(str(token_counter.total_llm_token_count))
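
A zero count usually means the handler was not registered before the engine was built. A sketch of the wiring that normally works (the tokenizer choice is an assumption):

import tiktoken
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
Settings.callback_manager = CallbackManager([token_counter])
# Build the chat engine AFTER this line so it picks up the callback manager.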
3 comments
Hi all. Can we currently use the gpt-4o-mini model?
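
Provided the installed llama-index-llms-openai package is recent enough to know the model name, it should be a one-line switch; a sketch:

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

Settings.llm = OpenAI(model="gpt-4o-mini")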
12 comments
Brent

Hi everyone.

I'm encountering an issue when querying a docx file I indexed that contains a description of the content I need to search for. When I query, I can't retrieve the desired content; instead, the result comes from a different part of the file.

Please let me know how to handle this.
Thank you all very much.
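
A debugging sketch that often narrows this down: print what the retriever actually matches, with scores, to see whether the target passage is retrieved at all (the query text is a placeholder):

for hit in index.as_retriever(similarity_top_k=5).retrieve("your query"):
    print(round(hit.score or 0.0, 3), hit.node.get_content()[:120])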
9 comments
Brent

Hello everybody.
How do I know whether my docx file is up to standard? What criteria should the data inside it meet so that it indexes well? I'm asking because my data contains the content, but when I query, I cannot retrieve it; the response only says the content is not provided. I also want to ask how to query for more detailed data rather than just a summary of the main points.
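
On the last point, a sketch of two knobs that typically produce more detailed answers rather than a summary (the values are illustrative):

query_engine = index.as_query_engine(
    similarity_top_k=8,      # hand the LLM more chunks to draw on
    response_mode="refine",  # walk through each chunk instead of compacting
)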
1 comment