PromptHelper max input

Plain Text
def build_index(file_path):
    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs))

    sc = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)

    download_loader('SimpleDirectoryReader')
    documents = SimpleDirectoryReader(input_files=[file_path]).load_data()
    index = GPTVectorStoreIndex.from_documents(documents, service_context=sc)

    # save
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    # load
    index = load_index_from_storage(storage_context)

    return index

index = build_index(file_path=file_path)

query_engine = index.as_query_engine(similarity_top_k=5)
14 comments
to save, you actually should do index.storage_context.persist(persist_dir="./storage")
and when you load, make sure you pass in the service context again too πŸ™‚
Plain Text
download_loader('SimpleDirectoryReader')
documents = SimpleDirectoryReader(input_files=[file_path]).load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=sc)
# save
index.storage_context.persist(persist_dir="./storage")

# load
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

return index
Right? πŸ™‚
Almost!

Plain Text
# load
index = load_index_from_storage(storage_context, service_context=sc)
Thank you. Good. Do you have any idea why the bot answers in a different language from the one the question was asked in? The document it refers to is in that same language, but it answers in English. What could be the reason?
If you move return index up so it comes right after index = GPTVectorStoreIndex.from_documents(documents, service_context=sc), before index.storage_context.persist(persist_dir="./storage"), the answers come back in the same language as the document and the question, BUT they get cut off :(
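One side effect of that reordering is worth noting: nothing after a return statement ever runs, so putting return index above the persist call silently skips persistence. A minimal standalone sketch of the pitfall (names are illustrative, no LlamaIndex needed):

```python
events = []

def build_and_persist():
    events.append("build")      # runs
    return "index"
    events.append("persist")    # unreachable: the function already returned

result = build_and_persist()
print(events)  # only "build" is recorded; the persist step was skipped
```

So the language fix and the persistence fix have to coexist: persist first, then return.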
Yea I'm not sure what's going on there 🫠
Figured it out 🙂 you can give the bot a role and set the right prompt. Also keep in mind that 1000 tokens cover more English words than words in most other languages. You can also apply a translator in the code.
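That token-budget point is why answers in other languages get cut off sooner: with a fixed max_tokens, a language that needs more tokens per word fits fewer words in the same budget. A rough back-of-the-envelope sketch (the per-word token averages below are made up for illustration, not measured values):

```python
# Illustrative only: tokens-per-word averages are assumptions,
# not measurements from a real tokenizer.
MAX_TOKENS = 256  # e.g. the max_tokens / num_outputs setting above

tokens_per_word = {"English": 1.3, "Russian": 3.2}

for lang, tpw in tokens_per_word.items():
    words = int(MAX_TOKENS / tpw)
    print(f"{lang}: roughly {words} words fit in {MAX_TOKENS} tokens")
```

Under these assumed averages, the non-English answer runs out of budget after far fewer words, which reads as a truncated reply.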
If I may ask, when you mentioned that you can add a role to the bot, do you mean the equivalent of:
Plain Text
openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
if so, how did you manage it?
Hello. I have slightly different code, but the idea is the same. Instead of a "role" field I gave it the role of who it is or who it portrays. I'll post a sample when I get home.
Plain Text
def chatbot(prompt):
    role = "superhero"
    language = "en"
    full_prompt = f"{role}:{language}:{prompt}"
    return query_engine.query(full_prompt)
oh, so the colon punctuation lets you specify the role directly in the prompt. Interesting. I'll try it, thank you very much
An interesting approach, for all intents and purposes. It worked. Thanks
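For reference, the role trick above is plain string formatting: the colon-separated tags are ordinary text that the model sees at the start of the prompt. A standalone sketch with the prompt-building split out so it can be tested without a query engine (function name is illustrative):

```python
def make_full_prompt(role: str, language: str, prompt: str) -> str:
    # The tags carry no special meaning to the API; the LLM simply
    # reads them as leading context and tends to follow them.
    return f"{role}:{language}:{prompt}"

full_prompt = make_full_prompt("superhero", "en", "Who are you?")
# full_prompt would then be passed to query_engine.query(full_prompt)
```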