smokeoX
Offline, last seen 3 months ago
Joined September 25, 2024
I am seeing this error in production
Plain Text
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

Is this related to llama-index? Everything works fine locally.
22 comments
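That warning is emitted by the transformers package (a llama-index dependency) when none of its supported ML backends can be imported at startup; installing any one of PyTorch, TensorFlow, or Flax in the production image should silence it. A quick, illustrative way to probe which backends a given environment actually provides (this is a diagnostic sketch, not llama-index code):

```python
import importlib.util

# transformers falls back to tokenizer/config-only mode when none of its
# supported ML backends can be imported. Probe the current environment the
# same way to see what the production image is missing:
backends = {
    name: importlib.util.find_spec(name) is not None
    for name in ("torch", "tensorflow", "flax")
}
for name, found in backends.items():
    print(f"{name}: {'installed' if found else 'missing'}")
```

Running this in the production environment versus locally should show which backend the production image lacks.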
Thanks, I think I'm close... I was able to store a document in Pinecone and retrieve the index, but I am confused by the gpt_index syntax around retrieval, for example if I want to do this in two separate API calls. For now I have something like this, which looks like it still needs to load the original documents? I am not a Python dev, so I may be missing something obvious here
Plain Text
index = pinecone.Index("<pinecone-index-name>")
index2 = GPTPineconeIndex(documents, pinecone_index=index)
response = index2.query("<my query string>?")
13 comments
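For the two-call setup described above, the gpt_index-era API could plausibly reconnect to an already-populated Pinecone index by passing an empty document list, so nothing is re-read or re-embedded. A sketch under that assumption (the index name, API key, and environment are placeholders, and this needs live Pinecone credentials to run):

```python
import pinecone
from gpt_index import GPTPineconeIndex

# Assumption: the vectors were already upserted in a previous run, so no
# source documents need to be loaded here; an empty list reconnects to the
# existing index instead of re-embedding anything.
pinecone.init(api_key="<api-key>", environment="<environment>")
index = pinecone.Index("<pinecone-index-name>")
index2 = GPTPineconeIndex([], pinecone_index=index)
response = index2.query("<my query string>?")
```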
loader.load_data() is no longer working for me with S3Reader, could this be an environment thing?
12 comments
s
does anyone have general tools/tips for speeding up indexing and responses?
3 comments
i struggled a lot with the dependencies on this :/
32 comments
PDFReader should be able to parse S3 URLs right? I am 100% sure I had this working a few days back
6 comments
i wanted to improve my foundational understanding of the technology here so I asked chatGPT to go deep with examples 😄
2 comments
Based on what i've seen here today i'm gonna stick with davinci-003 for now
4 comments
there was an open source vector DB that was added as a loader to LlamaHub recently, gonna find that one and try it out too
1 comment
I am also experimenting with WhatsApp embeddings... is there a recommended approach better than GPTSimpleVectorIndex for that data structure? I tried GPTTreeIndex but the instance just hangs... not sure if I was barking up the wrong tree with my approach
9 comments