Updated 3 months ago

How do I load a model that I've already downloaded?

Plain Text
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local:WhereIsAI/UAE-Large-V1")


In the above example, I must include the local prefix, right?

But if I only need an embedding model, what should I write?

Plain Text
embed_model = HuggingFaceEmbedding(model_name="local:WhereIsAI/UAE-Large-V1")


The above does not work, and if I remove local, it downloads the model again.
8 comments
embed_model = HuggingFaceEmbedding(model_name="WhereIsAI/UAE-Large-V1") should work. What's the issue?

I see those two code paths use a slightly different cache dir though :PSadge:
not a huge deal to download it twice though? πŸ™
I can change that to be consistent though
Sure, I can download no problem.

Just wondering if I can re-use it somehow.
I think it would be better to require the local prefix whenever we use local models:

e.g.: embed_model = HuggingFaceEmbedding(model_name="local:WhereIsAI/UAE-Large-V1")

That way it follows the same pattern as ServiceContext; otherwise, we could add another argument to ServiceContext and the embedding classes to distinguish local models from APIs.
The local prefix is really only there for the service context, to make it clear it's downloading and using that model.

For huggingface embeddings, this is implied. I don't think it needs the local prefix (although I guess there's no reason it couldn't handle it for consistency).
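A helper along these lines could make both spellings resolve to the same model id. This is a hypothetical sketch, not part of llama_index's actual API:

Plain Text
def resolve_model_name(name: str) -> str:
    """Strip an optional 'local:' prefix so 'local:WhereIsAI/UAE-Large-V1'
    and 'WhereIsAI/UAE-Large-V1' resolve to the same Hugging Face model id.
    Hypothetical helper for illustration only."""
    prefix = "local:"
    return name[len(prefix):] if name.startswith(prefix) else name

print(resolve_model_name("local:WhereIsAI/UAE-Large-V1"))  # WhereIsAI/UAE-Large-V1
print(resolve_model_name("WhereIsAI/UAE-Large-V1"))        # WhereIsAI/UAE-Large-V1

With that in place, HuggingFaceEmbedding could accept either form without changing behavior for names that lack the prefix.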

Ideally, both of these would use the same cache dir. ServiceContext currently uses the same base cache dir but with a /models suffix for some reason.
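To illustrate the mismatch: if both code paths start from the same base cache directory but one appends a /models suffix, they resolve to different locations and the model gets downloaded twice. The base path below is just a placeholder, not the actual directory either code path uses:

Plain Text
import os

# Placeholder base cache dir for illustration only.
base_cache = os.path.join("~", ".cache", "huggingface")

# One code path caches directly under the base dir...
embedding_cache = base_cache
# ...while the other (per the discussion above) appends a /models suffix.
service_context_cache = os.path.join(base_cache, "models")

# Different directories, so the same model is fetched twice.
print(embedding_cache == service_context_cache)  # False

Pointing both paths at one directory (dropping the suffix, or making it configurable) would let a single download be re-used.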