tilleul
Offline, last seen 3 months ago
Joined September 25, 2024
Well, it looks like it simply does not work at all, in fact ... I don't know why that "haiku" query was streamed ...

Here's a simpler example:
Plain Text
from llama_index.llms import OpenAI
from llama_index import ServiceContext
from llama_index.chat_engine import SimpleChatEngine

import os, openai

os.environ['OPENAI_API_KEY'] = "sk-..."
openai.api_key = os.environ['OPENAI_API_KEY']


service_context = ServiceContext.from_defaults(
    llm=OpenAI(temperature=0.7, model="gpt-3.5-turbo")
)
chat_engine = SimpleChatEngine.from_defaults(service_context=service_context)
response = chat_engine.stream_chat("Why is the sun yellow ?")
# for token in response.response_gen:
#     print(token, end="")

response.print_response_stream()
61 comments
I'm trying to teach OpenAI's text-davinci-003 a new programming language based on BASIC. I'm feeding llama_index a dozen text files explaining variable types, IF/THEN/ELSE blocks, PRINT/INPUT, how to comment code, and a few other things like numeric functions (min, max, abs, sgn, etc.). The files are divided by topic.

I'm using a simple vector index reading the documents stored in a folder, as explained in the very first basic examples of llama_index. There's also a service context with the temperature set to 0.7 and a prompt helper to slightly tweak the initial settings.
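
Concretely, this is roughly my setup (a sketch with the legacy llama_index API; the folder name is an assumption, the rest is as described):
Plain Text
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms import OpenAI

# temperature 0.7 as described; the folder name is an assumption
service_context = ServiceContext.from_defaults(
    llm=OpenAI(temperature=0.7, model="text-davinci-003")
)
documents = SimpleDirectoryReader("./basic_docs").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = index.as_query_engine()
print(query_engine.query("Explain the PRINT instruction"))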

When I ask davinci to explain a particular instruction or function, the result is usually good.
When I ask the AI to write code that asks the user their age and then writes a funny comment about their age category, the code returned is actually rather good:
Plain Text
INPUT "What is your age?": Age

IF Age < 18 THEN
   PRINT "You are a kid!"
ELSE IF Age >= 18 AND Age < 30 THEN
   PRINT "You are young and wild!"
ELSE IF Age >= 30 AND Age < 50 THEN
   PRINT "You are a grown-up!"
ELSE
   PRINT "You are wise!"
END IF

But if I ask the AI to do the same AND comment the code (I just add "and comment your code" at the end of the query), it returns something completely wrong, either in JS or in C.

Any idea why a single additional simple instruction is enough to break a process that seemed to work?
11 comments
Can someone explain how a Tree Index is actually built, given a list of nodes extracted from a text file (or another document)? What makes a node a parent node? Querying the index is somewhat explained in the docs, but not what the index tree looks like ...
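
For reference, I'm building it like this (legacy-API sketch; the folder name is an assumption):
Plain Text
from llama_index import GPTTreeIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data").load_data()
# my understanding so far: every group of num_children sibling nodes gets an
# LLM-generated summary node as its parent, recursively, up to a single root
index = GPTTreeIndex.from_documents(documents, num_children=10)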
5 comments
If I'm not mistaken, metadata extraction occurs AFTER node creation, so it does not take the token limit into account. Shouldn't metadata extraction be integrated within the text splitters instead?
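
To illustrate the concern with plain tiktoken arithmetic (nothing llama_index-specific; the metadata string and the 1024-token budget are made up):
Plain Text
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
metadata_str = "document_title: BASIC manual\nsection: variables"  # made up
# reserve room for the injected metadata so chunk + metadata stays <= 1024 tokens
chunk_budget = 1024 - len(enc.encode(metadata_str))
print(f"effective chunk_size: {chunk_budget}")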
8 comments
@Logan M Trying to update to 0.7.0, but "NO_TEXT" as response mode seems broken.
ValueError: Unknown mode: ResponseMode.NO_TEXT

Raised here
llama_index\response_synthesizers\factory.py", line 96, in get_response_synthesizer
raise ValueError(f"Unknown mode: {response_mode}")

Looks like that factory.py file does not take NO_TEXT into account ... it's not in the if/elif list ...

The quick fix I have so far is to create a NoText class, based on BaseSynthesizer, that returns nothing:
Plain Text
from typing import Any, Sequence
from llama_index.response_synthesizers.base import BaseSynthesizer
from llama_index.types import RESPONSE_TEXT_TYPE

class NoText(BaseSynthesizer):
    def get_response(
        self,
        query_str: str,
        text_chunks: Sequence[str],
        **response_kwargs: Any,
    ) -> RESPONSE_TEXT_TYPE:
        return ""
        

    
    async def aget_response(
        self,
        query_str: str,
        text_chunks: Sequence[str],
        **response_kwargs: Any,
    ) -> RESPONSE_TEXT_TYPE:
        return ""

and of course modify factory.py so it handles NO_TEXT:
Plain Text
...
    elif response_mode == ResponseMode.NO_TEXT:
        return NoText(
            service_context=service_context,
            streaming=streaming,
        )
    else:
...      
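
With both changes in place, a retrieval-only query seems to work again; here's the quick sanity check I used (node accessors per the 0.7.x API, as far as I can tell):
Plain Text
query_engine = index.as_query_engine(response_mode="no_text")
response = query_engine.query("PRINT instruction")
# no synthesized answer, but the retrieved source nodes are still there
for node_with_score in response.source_nodes:
    print(node_with_score.node.get_text()[:100])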


I can do a PR if this "fix" is OK with you.
8 comments
I think training a model on your data will in the end be more effective ... take ChatGPT, for instance ... it's not an expert in Python, but it has "digested" enough documents about Python to know its way around ... the "algorithm" that selects nodes is basically cosine similarity between your query and your nodes ... so yes, it sounds simplistic ... @Logan M what do you think ?
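
To make the cosine-similarity point concrete, a toy example with made-up 3-dimensional vectors (real OpenAI embeddings have 1536 dimensions):
Plain Text
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.2, 0.9, 0.1])
nodes = {
    "variables.txt": np.array([0.1, 0.8, 0.3]),
    "print_input.txt": np.array([0.9, 0.1, 0.2]),
}
# nodes are ranked by similarity to the query; the top ones go into the prompt
print(sorted(nodes, key=lambda k: cosine(query, nodes[k]), reverse=True))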
1 comment
Does anybody know how to use Streamlit with the latest "load_index_from_storage" function? The index returned is not "pickleable", and thus using st.cache_data does not work ...

Other question: any recommendation for a lib or a module that would split a large JSON file like vector_store.json? If the file is >100 MB, it cannot be uploaded to GitHub (unless we use GitHub's large file storage, Git LFS -- whatever that is) ... I'd prefer to split the file, store it in parts on GitHub, then when it's time to read it back, simply "join" the JSON again before feeding it to a storage_context ... unless Git LFS is really cool and the way to go ... 😉
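
In the meantime, this is the kind of split/join helper I have in mind (byte-level, so the rejoined file is bit-identical; the part size and naming scheme are arbitrary):
Plain Text
from pathlib import Path

PART_SIZE = 90 * 1024 * 1024  # stay under GitHub's 100 MB file limit

def split_file(path: str) -> None:
    data = Path(path).read_bytes()
    for i in range(0, len(data), PART_SIZE):
        Path(f"{path}.part{i // PART_SIZE}").write_bytes(data[i:i + PART_SIZE])

def join_file(path: str) -> None:
    parts = sorted(Path(path).parent.glob(f"{Path(path).name}.part*"),
                   key=lambda p: int(p.suffix.removeprefix(".part")))
    Path(path).write_bytes(b"".join(p.read_bytes() for p in parts))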
18 comments
We desperately need a WHAT'S NEW release log or something, either here, on GitHub, or in the docs ... if you leave this channel for only 3 days, there are 5 new updates but not much in the "releases" channel ... 🥴

So, what's new from 0.6.13 to 0.6.19? 😉
6 comments
Is there a way to retrieve the text of the chunks that were created, so I can see where the text was cut?

I'm using the "classic" index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context) from the examples ...
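
The closest thing I've found so far is reading the nodes back from the index's docstore, if that's the intended way:
Plain Text
# every chunk is stored as a node in the index's docstore
for node_id, node in index.docstore.docs.items():
    print(node_id, repr(node.get_text()[:80]))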
1 comment
Don't get too excited too fast ... it may take minutes before you get an answer from privateGPT ... I tested it myself with the provided source doc and was not impressed by the results ... but of course, this is only a "first step" ... it's evolving every day ...
https://github.com/imartinez/privateGPT/issues/43
2 comments
Word2vec

Is there a way to test the embeddings/vectors, similarly to what can be done with Word2Vec?
https://tedboy.github.io/nlps/generated/generated/gensim.models.Word2Vec.most_similar_cosmul.html
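
Something like this is what I'm after, emulating most_similar on top of the embedding model (import path per the 0.7.x layout, if I'm not mistaken):
Plain Text
import numpy as np
from llama_index.embeddings import OpenAIEmbedding

embed = OpenAIEmbedding()
words = ["king", "queen", "apple", "throne"]
vecs = {w: np.array(embed.get_text_embedding(w)) for w in words}

def most_similar(target: str):
    t = vecs[target]
    scored = [(w, float(t @ v / (np.linalg.norm(t) * np.linalg.norm(v))))
              for w, v in vecs.items() if w != target]
    return sorted(scored, key=lambda x: x[1], reverse=True)

print(most_similar("king"))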
2 comments
Is there a limit on the number of documents and the amount of text we can feed llama_index? If I give llama_index the whole text of Victor Hugo's Les Misérables (513,000 words), will it be able to digest it properly?
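
Back-of-envelope, assuming roughly 1.3 tokens per English word:
Plain Text
words = 513_000
tokens = int(words * 1.3)   # ~667k tokens for the whole novel
chunks = tokens // 1024     # ~651 chunks at a 1024-token chunk size
print(tokens, chunks)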
1 comment
Sorry if this has been asked before, and I suppose the answer depends on each use case, but ...

Is it important to organize/split the documents fed to llama_index into chapters, sections, sub-sections, etc.? Is it important to use titles? Sub-titles? Are these features automatically detected based on the document format? I suppose MS Word and PDF files could benefit from their specific paragraph styles ... what about plain TXT files? What about Markdown files? Does the structure of the document matter at all?
1 comment
Is llama_index adequate to "teach" OpenAI a new programming language?
6 comments