@Logan M Is there any way of triggering the use of the GPU during embedding finetuning? I'm running the finetuning example from the LlamaIndex embedding documentation and it seems to use the CPU only. I can't seem to find a way to make it use the GPU.
This is what I'm running:

from llama_index.finetuning import (
    generate_qa_embedding_pairs,
    EmbeddingQAFinetuneDataset,
)
from llama_index.finetuning import SentenceTransformersFinetuneEngine


train_dataset = EmbeddingQAFinetuneDataset.from_json("new_dataset.json")
val_dataset = EmbeddingQAFinetuneDataset.from_json("val_dataset.json")


finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="sentence-transformers/all-mpnet-base-v2",
    model_output_path="mpnet_finetuned_v2",
    batch_size=16,
    val_dataset=val_dataset,
    show_progress_bar=True,
)

finetune_engine.finetune()
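
(Side note for anyone reading this later: sentence-transformers picks the device at model load time, so a quick sanity check is to load the same base model directly and print where it landed. This is only a diagnostic sketch, not part of the finetune engine's API; the device property here comes from sentence-transformers itself.)

import torch
from sentence_transformers import SentenceTransformer

# sentence-transformers defaults to CUDA when torch can see a GPU,
# otherwise it silently falls back to CPU
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
print(torch.cuda.is_available())  # False means torch cannot see the GPU
print(model.device)               # e.g. cuda:0 or cpu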
Hmmm, it should be using the GPU automatically, at least from my understanding.
import torch
print(torch.cuda.is_available())
Does that print True?
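
(If it prints False on a machine that does have a GPU, the usual culprit is a CPU-only torch wheel; extending the check makes that visible. A diagnostic sketch, not from the original thread:)

import torch

print(torch.__version__)          # a CPU-only wheel often looks like "2.x.x+cpu"
print(torch.version.cuda)         # None on a CPU-only build, e.g. "12.1" otherwise
print(torch.cuda.device_count())  # 0 when no usable GPU is visible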
Hey, sorry for the late reply. It was printing False, and once I installed torch it started working. The weird thing is that in the same environment I'm using the GPU for embedding calculations. Anyway, it works, so I'm happy!

thank you again! πŸ™
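
(For anyone hitting the same thing: torch.cuda.is_available() returning False usually means a CPU-only torch wheel is installed, and installing the CUDA-enabled build from pytorch.org resolves it. Once the finetune finishes, the saved model can be loaded back and, if needed, pinned to a device explicitly; a small sketch under that assumption, since the device argument is optional and sentence-transformers picks CUDA on its own when it is available:)

from sentence_transformers import SentenceTransformer

# device is optional; sentence-transformers chooses cuda automatically
# when torch.cuda.is_available() is True
finetuned = SentenceTransformer("mpnet_finetuned_v2", device="cuda")
embeddings = finetuned.encode(["a quick smoke test"])
print(embeddings.shape)  # (1, 768) for an mpnet-base model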