Find answers from the community

Hey all, is NVIDIA NV-Embed-v2 available via NVIDIAEmbedding?
5 comments
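A minimal sketch of what the call might look like, assuming the llama-index-embeddings-nvidia package is installed and that NV-Embed-v2 is exposed in NVIDIA's catalog under the id nvidia/nv-embed-v2 (both worth verifying):

```python
# Assumes: pip install llama-index-embeddings-nvidia
# The model id below is an assumption -- check NVIDIA's model catalog.
from llama_index.embeddings.nvidia import NVIDIAEmbedding

embed_model = NVIDIAEmbedding(model="nvidia/nv-embed-v2")
vector = embed_model.get_text_embedding("hello world")
print(len(vector))
```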
Hey all, is strict = True the default for structured outputs with function calling via OpenAI (e.g. FunctionCallingProgram)?
8 comments
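Rather than relying on a default, you can opt in explicitly; a sketch, assuming a llama-index-llms-openai version that exposes the strict kwarg:

```python
from pydantic import BaseModel
from llama_index.core.program import FunctionCallingProgram
from llama_index.llms.openai import OpenAI

class Song(BaseModel):
    title: str
    length_seconds: int

# Pass strict=True on the LLM itself so you don't depend on the default.
llm = OpenAI(model="gpt-4o", strict=True)
program = FunctionCallingProgram.from_defaults(
    output_cls=Song,
    prompt_template_str="Generate a song about {topic}.",
    llm=llm,
)
song = program(topic="rain")
```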
Hey all, is Cohere Rerank 3.5 through Bedrock available as a postprocessor? I can't seem to locate documentation for that.
1 comment
Hey all, how do we enable Context to persist over multiple runs of the same workflow?
2 comments
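The documented pattern is to create one Context bound to the workflow and pass it into every run; a self-contained sketch:

```python
import asyncio
from llama_index.core.workflow import (
    Context, JsonSerializer, StartEvent, StopEvent, Workflow, step,
)

class CounterFlow(Workflow):
    @step
    async def count(self, ctx: Context, ev: StartEvent) -> StopEvent:
        n = await ctx.get("runs", default=0)  # state stored on the Context
        await ctx.set("runs", n + 1)
        return StopEvent(result=n + 1)

async def main():
    w = CounterFlow(timeout=60)
    ctx = Context(w)                 # one Context, reused across runs
    print(await w.run(ctx=ctx))      # 1
    print(await w.run(ctx=ctx))      # 2 -- state survived the first run
    # The Context can also be serialized and restored between processes:
    data = ctx.to_dict(serializer=JsonSerializer())
    restored = Context.from_dict(w, data, serializer=JsonSerializer())

asyncio.run(main())
```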
Hey all, is there an example notebook where parallel function calling is being used in a workflow?
6 comments
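Not aware of a single canonical notebook, but the workflow docs describe a fan-out/collect pattern that covers this: one step emits an event per tool call, a worker step handles them concurrently via num_workers, and a final step gathers results with ctx.collect_events. A sketch (ToolCallEvent and ToolResultEvent are illustrative names, not library classes):

```python
import asyncio
from llama_index.core.workflow import (
    Context, Event, StartEvent, StopEvent, Workflow, step,
)

class ToolCallEvent(Event):
    arg: str

class ToolResultEvent(Event):
    result: str

class ParallelCalls(Workflow):
    @step
    async def dispatch(self, ctx: Context, ev: StartEvent) -> ToolCallEvent:
        args = ["a", "b", "c"]
        await ctx.set("num_calls", len(args))
        for a in args:
            ctx.send_event(ToolCallEvent(arg=a))  # fan out

    @step(num_workers=4)  # up to 4 tool calls run concurrently
    async def call_tool(self, ctx: Context, ev: ToolCallEvent) -> ToolResultEvent:
        await asyncio.sleep(0.1)  # stand-in for a real tool call
        return ToolResultEvent(result=ev.arg.upper())

    @step
    async def gather(self, ctx: Context, ev: ToolResultEvent) -> StopEvent | None:
        n = await ctx.get("num_calls")
        results = ctx.collect_events(ev, [ToolResultEvent] * n)
        if results is None:
            return None  # still waiting on other calls
        return StopEvent(result=[r.result for r in results])

async def main():
    print(await ParallelCalls(timeout=60).run())

asyncio.run(main())
```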
[Notebook]
Hey all, is there a notebook that references the visualization function for workflows?
7 comments
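There is a small utility package for this; a sketch assuming llama-index-utils-workflow is installed (pip install llama-index-utils-workflow):

```python
from llama_index.utils.workflow import (
    draw_all_possible_flows,
    draw_most_recent_execution,
)

# Static graph of every step/event transition in your workflow class:
draw_all_possible_flows(MyWorkflow, filename="flow_all.html")  # MyWorkflow = your class

# After a run completes, the path actually taken:
# draw_most_recent_execution(w, filename="flow_last_run.html")
```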
Hey all, any ideas why the results coming from a direct search_batch request to Qdrant would be different from the results coming from a LlamaIndex retriever pointed at the same Qdrant collection? There is some overlap in the results, but I would have assumed an exact match. No filters are being applied during the search.
8 comments
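A common cause is the query vector rather than Qdrant itself: the retriever embeds with get_query_embedding, which for some models adds a query prefix that a hand-rolled search may not apply. A diagnostic sketch, where client is assumed to be the same QdrantClient instance the retriever's vector store uses:

```python
from llama_index.core import Settings

query = "my test query"
# Embed exactly the way the retriever does, then search directly.
query_vec = Settings.embed_model.get_query_embedding(query)
hits = client.search(
    collection_name="my_collection",  # same collection the retriever points at
    query_vector=query_vec,
    limit=5,
)
# If these hits match the retriever's, the difference is in how the
# original search_batch request built its vectors, not in Qdrant.
```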
Seeing the following error when trying to use llm.chat with Claude 3 Sonnet through Bedrock:

ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead.
13 comments
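The commonly suggested fix: the legacy Bedrock LLM calls InvokeModel, which Anthropic's Claude 3 models reject in favor of the Messages-style API, and BedrockConverse uses the latter. A sketch, assuming llama-index-llms-bedrock-converse is installed:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",  # your region
)
resp = llm.chat([ChatMessage(role="user", content="Hello!")])
print(resp)
```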
After a fresh pip install of llama-index, I am running into this error using a script from a notebook in the docs. I seem to be running into these often; is there a certain way these different llama-index packages should be installed now?
10 comments
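Since the v0.10 package split, each integration lives in its own pip package and pip install llama-index only bundles a starter set, so docs notebooks often import packages you don't have yet. The rough pattern (adjust to whatever the notebook imports):

```python
# Install core plus each integration the notebook uses, e.g.:
#   pip install llama-index-core
#   pip install llama-index-llms-openai
#   pip install llama-index-embeddings-huggingface
# ...after which the namespaced imports resolve:
from llama_index.core import VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai import OpenAI
```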
Hey all, having issues importing llama-index-llms-openai in 0.10.8.post1, any ideas?
5 comments
Hey, another async question: does LlamaIndex support using async with a Hugging Face embedding model deployed via SageMaker?
2 comments
Hey all, is there async compatibility with Claude on Bedrock in LlamaIndex? I poked around but couldn't figure it out for sure.
1 comment
Is the VectorIndexAutoRetriever still the preferred module for vector search with metadata filters in v0.10+? I imagine an entity-extraction component in a pipeline would be optimal?
2 comments
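It still ships in v0.10+ under the core namespace, and it has the LLM infer metadata filters from the query itself, which overlaps with what a separate entity-extraction step would do. A sketch, where index is an existing VectorStoreIndex over nodes carrying the described metadata:

```python
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores import MetadataInfo, VectorStoreInfo

vector_store_info = VectorStoreInfo(
    content_info="news articles",
    metadata_info=[
        MetadataInfo(name="author", type="str", description="article author"),
        MetadataInfo(name="year", type="int", description="publication year"),
    ],
)
retriever = VectorIndexAutoRetriever(index, vector_store_info=vector_store_info)
nodes = retriever.retrieve("articles by Smith from 2023")
```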
Were some of the evaluation modules deprecated? Not sure if I'm looking at the most up-to-date example notebooks or not...
4 comments
[React]
Can anyone point to the parser/code that is currently being used to fetch function arguments when using models that don't have function calling (through a ReAct agent in particular)?
2 comments
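For non-function-calling models, the ReAct agent parses tool arguments out of the model's plain-text output. As of 0.10.x the parser lives in llama-index-core; a sketch:

```python
from llama_index.core.agent.react.output_parser import ReActOutputParser

parser = ReActOutputParser()
step = parser.parse(
    "Thought: I need the weather.\n"
    "Action: get_weather\n"
    'Action Input: {"city": "Berlin"}'
)
print(step.action, step.action_input)  # get_weather {'city': 'Berlin'}
```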
[Agent]
Hey all, does anyone have an example of building an agent with a function-calling LLM that streams the final output? There is the option of passing the full final message to a final step and streaming that, but you take a latency hit; I haven't found a nice solution yet, as the full message is required to determine whether a function call is needed.
9 comments
Hey all, I am getting the following error when using astream_chat_with_tools with BedrockConverse, even though the async and streaming functions work on their own.
9 comments
Hey all, is there a notebook or example somewhere of the implementation of a Workflow with streaming enabled?
4 comments
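The documented pattern: steps push events into the stream with ctx.write_event_to_stream, and the caller iterates handler.stream_events() while the run is in flight. A self-contained sketch:

```python
import asyncio
from llama_index.core.workflow import (
    Context, Event, StartEvent, StopEvent, Workflow, step,
)

class ProgressEvent(Event):
    msg: str

class StreamingFlow(Workflow):
    @step
    async def work(self, ctx: Context, ev: StartEvent) -> StopEvent:
        for i in range(3):
            ctx.write_event_to_stream(ProgressEvent(msg=f"step {i}"))
            await asyncio.sleep(0.1)
        return StopEvent(result="done")

async def main():
    handler = StreamingFlow(timeout=60).run()
    async for ev in handler.stream_events():  # consume while running
        if isinstance(ev, ProgressEvent):
            print(ev.msg)
    print(await handler)                      # final result

asyncio.run(main())
```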
Hey all, I recently updated llama-index-core and am now getting an error about character limits for a tool description via the to_openai_tool function, even though I am not using OpenAI and this tool has worked fine in the past. Any thoughts?
5 comments
Hey all, has anyone built a ReAct agent with Claude via Bedrock and enabled streaming properly via Streamlit?
14 comments
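A rough sketch of one way to wire it up, assuming the pre-workflow ReActAgent API, BedrockConverse for streaming-capable Claude access, and Streamlit's st.write_stream:

```python
import streamlit as st
from llama_index.core.agent import ReActAgent
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(model="anthropic.claude-3-sonnet-20240229-v1:0")
agent = ReActAgent.from_tools([], llm=llm)  # add your tools

if prompt := st.chat_input("Ask something"):
    response = agent.stream_chat(prompt)    # StreamingAgentChatResponse
    st.write_stream(response.response_gen)  # generator of response tokens
```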
Hey all, when running a ReAct agent we see output logged and printed to the terminal. Is that captured in a variable in the agent itself (i.e., can we easily access and print it from a Python script, for example)?
1 comment
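The verbose trace itself is just printed, not stored on the agent, but the underlying LLM calls (including the Thought/Action text) can be captured with a debug callback handler; a sketch, with tools and llm assumed to be your own:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

debug = LlamaDebugHandler(print_trace_on_end=False)
agent = ReActAgent.from_tools(
    tools, llm=llm, callback_manager=CallbackManager([debug])
)
agent.chat("What's 2 + 2?")
for start_event, end_event in debug.get_llm_inputs_outputs():
    print(end_event.payload)  # full LLM responses, incl. reasoning text
```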
You could just exclude all those fields from being passed to the LLM when indexing, then add them manually, however you like, to the prompt sent to the LLM at query time.
1 comment
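Concretely, that's the excluded_llm_metadata_keys field on documents/nodes; a sketch:

```python
from llama_index.core import Document

doc = Document(
    text="...",
    metadata={"internal_id": "42", "author": "Jane"},
    excluded_llm_metadata_keys=["internal_id"],  # hidden from the LLM
    # excluded_embed_metadata_keys does the same for the embedding side
)
```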
Hey all, when defining a FunctionTool is there a simple way to define a parameter that will not be treated as a parameter to be extracted by the LLM? For example, if you wanted to pass a user-defined filter to that function at runtime.
3 comments
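One simple pattern (a sketch, not an official API): close over the runtime value when building the tool, so it never appears in the schema the LLM sees:

```python
from llama_index.core.tools import FunctionTool

def make_search_tool(user_filter: dict) -> FunctionTool:
    def search(query: str) -> str:
        """Search documents matching the query."""
        return f"results for {query!r} with filter {user_filter}"
    # The schema is inferred from search's signature, so only `query` is
    # exposed to the LLM; user_filter is bound at tool-creation time.
    return FunctionTool.from_defaults(fn=search)

tool = make_search_tool({"tenant": "acme"})
```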
Are we able to visualize the graph for the ReActAgent?
2 comments
Hey all, trying to instantiate a pipeline using a local Qdrant vector store and a Hugging Face embedding model, but am running into this error now. Thoughts?
12 comments
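For comparison, a known-good-shaped sketch, assuming llama-index-vector-stores-qdrant and llama-index-embeddings-huggingface are installed:

```python
import qdrant_client
from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(path="./qdrant_data")  # local, on-disk
vector_store = QdrantVectorStore(client=client, collection_name="docs")

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512),
        HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ],
    vector_store=vector_store,
)
pipeline.run(documents=[Document(text="hello world")])
```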