Find answers from the community

Sniper
Is there any support coming for AWS Valkey, as Redis is no longer open source?
https://aws.amazon.com/elasticache/what-is-valkey/
CC: @Logan M
1 comment
Has anyone used HuggingFaceInferenceAPIEmbedding module from llamaindex?
I am running into an issue where the embedding returned by the model is not in the format llamaindex expects.
I am using the CODE-BERT model with the task set to sentence embedding.
Would love some guidance if possible.
CODE:
# import path for recent llama-index versions
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

embed_model = HuggingFaceInferenceAPIEmbedding(
    model_name='https://*******.us-east-1.aws.endpoints.huggingface.cloud',
    token='*******',
)
text_embedding = embed_model.get_text_embedding("Your text here")
print(text_embedding)

Embedding format the model returns:
response: b'{"embeddings":[0.02895120158791542,0.4713986814022064,0.41316112875938416,0.2680184543132782,0.4315677583217621 ,...........]
Error:

TypeError: float() argument must be a string or a real number, not 'dict'
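One possible workaround, assuming the endpoint really does wrap the vector in a JSON object ({"embeddings": [...]}) as shown above rather than returning the bare list llama_index expects: unwrap the response yourself before handing it on. A sketch; the helper name is mine, not part of llama_index:

```python
import json

def unwrap_embedding(raw: bytes) -> list[float]:
    """Unwrap an endpoint response like b'{"embeddings":[...]}' into the
    flat list of floats that llama_index expects."""
    data = json.loads(raw)
    if isinstance(data, dict):
        # Endpoint wraps the vector in an "embeddings" key.
        data = data.get("embeddings", data)
    return [float(x) for x in data]

print(unwrap_embedding(b'{"embeddings":[0.1, 0.2, 0.3]}'))  # [0.1, 0.2, 0.3]
```

Alternatively, reconfiguring the Inference Endpoint task so it returns a raw list instead of the wrapper object may avoid the mismatch entirely.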
3 comments
LLaMA Index Knowledge Graph Query

Hey fellow developers,
I'm building knowledge graphs using LLaMA Index and storing them in both the graph store and storage context. However, I'm having trouble querying the graph database directly. The only option I've found is load_index_from_storage, which stores some graph context locally. How can I query the graph database and return Knowledge Graph nodes?
Here's my code so far:

def build_knowledge_graph():
    # ... (setup code)
    graph_store = NebulaGraphStore(
        space_name=SPACE_NAME,
        edge_types=EDGE_TYPES,
        rel_prop_names=REL_PROP_NAMES,
        tags=TAGS,
    )
    storage_context = StorageContext.from_defaults(graph_store=graph_store)
    kg_index = KnowledgeGraphIndex.from_documents(
        documents=github_document_loader(),
        storage_context=storage_context,
        max_triplets_per_chunk=10,
        space_name=SPACE_NAME,
        edge_types=EDGE_TYPES,
        rel_prop_names=REL_PROP_NAMES,
        tags=TAGS,
        include_embeddings=True,
    )
    kg_index.storage_context.persist(persist_dir=PERSIST_PATH)
    return storage_context

def load_graph():
    # ... (setup code)
    graph_store = NebulaGraphStore(
        space_name=space_name,
        edge_types=edge_types,
        rel_prop_names=rel_prop_names,
        tags=tags,
    )
    storage_context = StorageContext.from_defaults(
        persist_dir=persist_path,
        graph_store=graph_store,
    )
    kg_index = load_index_from_storage(
        storage_context=storage_context,
        max_triplets_per_chunk=10,
        space_name=space_name,
        edge_types=edge_types,
        rel_prop_names=rel_prop_names,
        tags=tags,
        verbose=True,
    )
    return kg_index

Any help would be greatly appreciated!
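Not a definitive answer, but the graph store object itself can usually be queried without going through load_index_from_storage: the llama_index graph store interface exposes get() and get_rel_map() (and NebulaGraphStore also accepts raw nGQL via query()), so triplets can be pulled straight from the database. A sketch under that assumption; the flattening helper is mine, and the exact rel-map string format may differ between versions:

```python
def triplets_from_rel_map(rel_map: dict) -> list[tuple[str, str, str]]:
    """Flatten a get_rel_map()-style result ({subject: ["relation, object", ...]})
    into (subject, relation, object) triplets.

    NOTE: the string format returned by the graph store can vary between
    llama_index versions; adjust the split accordingly.
    """
    triplets = []
    for subj, rels in rel_map.items():
        for rel in rels:
            pred, _, obj = rel.partition(", ")
            triplets.append((subj, pred, obj))
    return triplets

# Against a live store (not run here), something like:
# graph_store = NebulaGraphStore(space_name=SPACE_NAME, edge_types=EDGE_TYPES,
#                                rel_prop_names=REL_PROP_NAMES, tags=TAGS)
# rel_map = graph_store.get_rel_map(subjs=["llama_index"], depth=1)
# print(triplets_from_rel_map(rel_map))
```

This sidesteps the local storage context entirely, since the triplets live in NebulaGraph rather than in the persisted index files.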
1 comment