Updated 2 weeks ago

python-agents-tutorial/2_local_agent.py ...

So maybe I've got this all wrong, but I've been banging my head against a simple example for days now...

I'm working on an M1 Mac with 32 GB of RAM.

I'm trying to get this example to run:

https://github.com/run-llama/python-agents-tutorial/blob/main/2_local_agent.py

But all I get is "Process finished with exit code 0"

I tried using a smaller model, but that also doesn't generate a response..

When I chat with the LLM directly, it does produce output.

Are these examples even meant to run on local machines?
12 comments
This works on my M2 + 32GB

Maybe try a different model? llama3.x is probably a better choice these days
I'm guessing you ran the script with python ./2_local_agent.py ?
Yeah, I'm using

Settings.llm = Ollama(model="llama3.2:1b")

and running it from PyCharm, not sure if that matters..
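For reference, the full local LLM setup is only a few lines. This sketch assumes the Ollama integration package is installed (pip install llama-index-llms-ollama) and an Ollama server is running locally:

```python
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# Point LlamaIndex at the locally running Ollama server.
# A generous request_timeout helps on laptops: loading the model can
# take a while, and a timeout can look like the script produced nothing.
Settings.llm = Ollama(model="llama3.2:1b", request_timeout=120.0)
```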
Even this example doesn't work, so I wonder if there's something on the machine that needs to be configured?

https://docs.llamaindex.ai/en/stable/getting_started/starter_example_local/
Usually an empty response is returned when no documents are retrieved 🤔 Does data/paul_graham point to a real folder with real text in it?
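If it helps, here's a quick sanity check (plain Python, no LlamaIndex needed) that the folder exists and has something readable in it:

```python
from pathlib import Path

def has_text_files(folder: str) -> bool:
    """True if `folder` exists and contains at least one non-empty file
    for SimpleDirectoryReader to pick up."""
    p = Path(folder)
    return p.is_dir() and any(
        f.is_file() and f.stat().st_size > 0 for f in p.rglob("*")
    )

# Path from the starter example -- adjust if yours differs.
print(has_text_files("data/paul_graham"))
```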
Yeah... I created a clean project, and that runs the example now...
It's like learning to program for the first time... 😦
An empty response, in my experience, means the documents were not ingested. Try ollama pull mxbai-embed-large
and set up an embedding model with Ollama.
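Concretely, something like this, assuming llama-index-embeddings-ollama is installed and the model has been pulled:

```python
from llama_index.core import Settings
from llama_index.embeddings.ollama import OllamaEmbedding

# LlamaIndex defaults to OpenAI embeddings; pointing embed_model at a
# local Ollama model keeps the whole pipeline on your machine.
Settings.embed_model = OllamaEmbedding(model_name="mxbai-embed-large")
```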
Hey, if you check the following doc: https://docs.llamaindex.ai/en/stable/examples/agent/react_agent/#define-function-tools

The defined functions have docstrings. Could you try adding them and running again?

Also, a 1b-parameter model is small for function calling. I would suggest trying an 8b-parameter one.
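For example, a tool function with a docstring looks like this (plain Python; the FunctionTool wrapper is the usual LlamaIndex pattern, shown commented out so the snippet runs standalone):

```python
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b

# The docstring becomes the tool description the agent's LLM sees,
# so without it a small model often never picks the tool at all.
# from llama_index.core.tools import FunctionTool
# multiply_tool = FunctionTool.from_defaults(fn=multiply)
```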
Thanks, will try after dinner!