
kevingoed
Offline, last seen 3 months ago
Joined September 25, 2024
Hey everyone. Is there a way to call the LLM without an index? I want it to be a basic call to OpenAI, without the knowledge in the index. Or would you recommend doing that with the OpenAI API/SDK?
1 comment
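A hedged sketch of the no-index path: LlamaIndex's LLM wrappers can be called directly, so the OpenAI SDK isn't strictly needed just for this. This assumes the legacy `llama_index.llms.OpenAI` wrapper; check the import path against your installed version.

```python
def direct_completion(prompt: str) -> str:
    """Call the LLM directly, with no index and no retrieved context.
    Assumes the legacy `llama_index.llms.OpenAI` wrapper is installed."""
    from llama_index.llms import OpenAI  # deferred so the sketch imports cleanly
    llm = OpenAI(model="gpt-3.5-turbo")
    return llm.complete(prompt).text

# usage sketch (needs OPENAI_API_KEY set):
# print(direct_completion("Tell me a joke."))
```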
Vectors

I'm using a chat engine to query a vector store. If no vectors exist in the vector store, I don't get an error, which is a problem: it simply sends the query to the LLM with an empty context. How do I avoid this?
18 comments
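One pragmatic workaround (a sketch, not a built-in LlamaIndex feature): retrieve first, check the node count yourself, and only then let the engine synthesize an answer.

```python
def guard_nodes(nodes, min_nodes=1):
    """Fail loudly instead of letting the chat engine answer from an empty context."""
    if len(nodes) < min_nodes:
        raise ValueError(
            f"retrieved {len(nodes)} nodes; the vector store may be empty"
        )
    return nodes

# usage sketch: nodes = guard_nodes(retriever.retrieve(query))
```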
Docstore

What exactly is the doc store for btw?
1 comment
Hey everyone 🙂 I was wondering which features of Llama Index would be best for building a chat bot that ultimately generates a JSON object after multiple back-and-forth questions (both with and without knowledge from a vector store). Would Agents make sense for that, or would you just chain a few prompts together? Is Llama Index even necessary in that scenario?
1 comment
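Chaining a few prompts is usually enough when the question flow is fixed; agents mainly help when the model must decide what to ask next. A minimal prompt-chaining sketch, where `ask` is a placeholder for whatever LLM call you use (with or without LlamaIndex):

```python
import json

def collect_answers(questions, ask):
    """Ask each question in turn and emit the collected answers as one JSON object."""
    return json.dumps({q: ask(q) for q in questions})

# usage sketch: collect_answers(["Name?", "Goal?"], ask=my_llm_call)
```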
Hi there, we're now using PGVectorStore and we're seeing that there are a lot of open connections to the Postgres DB. Do we need to close the connections somehow? Has anyone ever run into this issue?
8 comments
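One common cause (an assumption here, not a diagnosis) is constructing a new PGVectorStore per request, each with its own connection pool. A sketch that caches one store per process; the `from_params` argument names are assumptions and may differ in your version:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_vector_store(connection_string: str, table_name: str):
    """Reuse one PGVectorStore (and its connection pool) per process
    instead of opening fresh connections on every request.
    Legacy llama_index API assumed; check the constructor arguments."""
    from llama_index.vector_stores import PGVectorStore  # deferred import
    return PGVectorStore.from_params(
        connection_string=connection_string, table_name=table_name
    )
```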
Namespace

What's the recommended way to separate different "namespaces" with PGVectorStore? In Pinecone we used different indexes for different documents to limit the replies to a particular document/scope. I was wondering what the equivalent would be in PGVectorStore. Do you recommend setting up a new table for each "context"/"namespace"?
3 comments
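One table per namespace is a workable equivalent of Pinecone indexes, since PGVectorStore takes a `table_name` parameter. A small helper (a hypothetical convention, not a LlamaIndex feature) keeps the table names valid and consistent:

```python
import re

def table_for_namespace(namespace: str, prefix: str = "vectors") -> str:
    """Map a namespace to a valid Postgres identifier, suitable for
    PGVectorStore's table_name parameter."""
    slug = re.sub(r"[^a-z0-9_]", "_", namespace.lower())
    return f"{prefix}_{slug}"
```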
Keywords

Quick question: I just started playing around with KeywordExtractor and specified keywords=5 as a param. Yet in my Postgres DB I find up to 50 keywords per document chunk. Is this normal?
3 comments
Quick question: What's the difference between loading the index with DocumentSummaryIndex.from_documents and using load_index_from_storage?
2 comments
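The short version: `from_documents` builds the index from scratch (including the LLM calls that generate the summaries), while `load_index_from_storage` only rehydrates an index that was persisted earlier. A build-or-load sketch, assuming the legacy top-level `llama_index` imports:

```python
import os

def build_or_load(documents, persist_dir="./storage"):
    """Load a persisted DocumentSummaryIndex if one exists; otherwise
    build it (which costs LLM calls) and persist it for next time."""
    from llama_index import (  # deferred: requires llama-index installed
        DocumentSummaryIndex,
        StorageContext,
        load_index_from_storage,
    )
    if os.path.isdir(persist_dir):
        storage_context = StorageContext.from_defaults(persist_dir=persist_dir)
        return load_index_from_storage(storage_context)
    index = DocumentSummaryIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=persist_dir)
    return index
```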
Also, has anyone else experienced that gpt-3.5-turbo-1106 produces significantly worse results than the old gpt-3.5-turbo-16k-0613?
7 comments
Hey there 🙂 got another question for y'all:

Python
    from pathlib import Path

    from llama_index import DocumentSummaryIndex, StorageContext, download_loader

    # download_loader("PDFReader") returns the PDFReader class
    PDFReader = download_loader("PDFReader")
    loader = PDFReader()

    documents = loader.load_data(file=Path("./test-doc2.pdf"))
    # Create and store summary index
    storage_context = StorageContext.from_defaults()

    index = DocumentSummaryIndex.from_documents(
        documents,
        service_context=service_context,
        storage_context=storage_context,
        show_progress=True,
    )
    query_engine = index.as_query_engine()
    result = query_engine.query("Write an extensive summary of this context for me.")
    print(result)


How can I make sure that the summary it writes is longer than 20 sentences? Or how do I make sure it uses the full 4096 tokens for the response?
4 comments
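Two levers, both hedged: ask for the length explicitly in the query, and raise the LLM's `max_tokens` (e.g. `OpenAI(max_tokens=4096)` on the LLM passed into the ServiceContext) so the response isn't truncated. A prompt helper:

```python
def long_summary_prompt(min_sentences: int = 20) -> str:
    """Bake the length requirement into the query string. Model compliance
    is best-effort, so pair this with a larger max_tokens setting."""
    return (
        "Write an extensive summary of this context in at least "
        f"{min_sentences} sentences."
    )

# usage sketch: result = query_engine.query(long_summary_prompt())
```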
Question: What is the best way to summarize single documents? Would that be with a VectorIndex and then just prompting it "summarize this document for me"?
21 comments
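A vector index works, but for whole-document summaries a `tree_summarize` response mode over a SummaryIndex folds every chunk into the answer instead of relying on top-k similarity retrieval. A sketch, legacy `llama_index` API assumed:

```python
def summarize_file(path: str) -> str:
    """Summarize one document via tree_summarize, which merges partial
    summaries of all chunks rather than retrieving only the top-k."""
    from llama_index import SimpleDirectoryReader, SummaryIndex  # deferred
    documents = SimpleDirectoryReader(input_files=[path]).load_data()
    index = SummaryIndex.from_documents(documents)
    query_engine = index.as_query_engine(response_mode="tree_summarize")
    return str(query_engine.query("Summarize this document."))

# usage sketch: print(summarize_file("./test-doc2.pdf"))
```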
I noticed that my service_context does NOT update unless I restart the entire application. Has anyone run into this issue before?

Here's my code:

Python
    from llama_index import StorageContext, VectorStoreIndex, set_global_service_context
    from llama_index.vector_stores import PineconeVectorStore

    # service_context, PINECONE_INDEX, DOCUMENT_STORE, INDEX_STORE and
    # namespace are defined elsewhere in the application
    set_global_service_context(service_context)
    vector_store = PineconeVectorStore(
        pinecone_index=PINECONE_INDEX, namespace=namespace
    )
    storage_context = StorageContext.from_defaults(
        docstore=DOCUMENT_STORE,
        index_store=INDEX_STORE,
        vector_store=vector_store,
    )
    print(service_context)
    return VectorStoreIndex.from_vector_store(
        vector_store=vector_store,
        storage_context=storage_context,
        service_context=service_context,
    )
45 comments
I believe Llama Index depends on tree-sitter-languages, and tree-sitter-languages doesn't have a build for Mac: https://github.com/grantjenks/py-tree-sitter-languages/issues/20
13 comments