Find answers from the community

Logan M
Joined September 24, 2024
Hmm that's rather odd. I'll try swapping the sentence window notebook to use weaviate and confirm, it shouuuuld be fine 🤔
19 comments
You'll need to start with a fresh index if you switch embeddings, since the dimensions of every embedding vector need to be the same 👍
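A minimal sketch of what that looks like (assuming a ServiceContext-era llama_index setup; new_embed_model and documents are placeholders for whatever you're switching to):

Plain Text
from llama_index import ServiceContext, VectorStoreIndex

# Placeholders: the new embedding model you're switching to, and your source docs
service_context = ServiceContext.from_defaults(embed_model=new_embed_model)

# Build a brand-new index so every vector comes from the same embedding model
index = VectorStoreIndex.from_documents(documents, service_context=service_context)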
21 comments
The default model that loads for the huggingface embeddings in the docs page that I sent usually works well

For LLMs, vicuna seems to be good (but it's also non-commercial). I like camel for commercial models so far
67 comments
Anyone tried the new alpha release yet? Definitely open to comments on any of the changes made.

My favourite new feature is the new IngestionPipeline + cache

Plain Text
# Imports below assume the llama_index v0.9-era module layout
import qdrant_client

from llama_index import Document, VectorStoreIndex
from llama_index.embeddings import OpenAIEmbedding
from llama_index.extractors import TitleExtractor
from llama_index.ingestion import IngestionCache, IngestionPipeline
from llama_index.ingestion.cache import RedisCache
from llama_index.node_parser import SentenceSplitter
from llama_index.vector_stores import QdrantVectorStore

# In-memory qdrant collection to ingest into
client = qdrant_client.QdrantClient(location=":memory:")
vector_store = QdrantVectorStore(client=client, collection_name="test_store")

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        TitleExtractor(),
        OpenAIEmbedding(),
    ],
    cache=IngestionCache(cache=RedisCache(), collection="test_cache"),
    vector_store=vector_store,
)

# Ingest directly into a vector db
pipeline.run(documents=[Document.example()])

# Create your index
index = VectorStoreIndex.from_vector_store(vector_store)
19 comments
Is this a llama index thing or langchain thing? If it's langchain, I got no idea 😅
6 comments
Hmmm, I don't think anything like that exists right now. Or at least nothing that isn't super hacky lol
24 comments
Unless you create a graph over your indexes, you'll need one tool per index.

Be careful though, you'll run out of prompt space around 30ish tools
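As a rough sketch (assuming LangChain-style Tools wrapping llama_index query engines; index_a and index_b are placeholders for your existing indexes):

Plain Text
from langchain.agents import Tool

# One tool per existing llama_index index
tools = [
    Tool(
        name="docs_a",
        func=lambda q: str(index_a.as_query_engine().query(q)),
        description="Useful for questions about dataset A.",
    ),
    Tool(
        name="docs_b",
        func=lambda q: str(index_b.as_query_engine().query(q)),
        description="Useful for questions about dataset B.",
    ),
]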
6 comments
index._service_context.llm_predictor.last_token_usage()

index._service_context.embed_model.last_token_usage()
69 comments
It makes 5 llm calls total, or 5 llm calls to llama index? What do your settings/indexes look like?
8 comments
In addition to alpaca, you'll also need an embed_model. By default it uses OpenAI's text-embedding-ada-002 (which is pretty cheap thankfully).

You can use any model from huggingface locally, using this guide: https://gpt-index.readthedocs.io/en/latest/how_to/customization/embeddings.html#custom-embeddings
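A minimal sketch of the local setup from that guide (assuming the LangchainEmbedding wrapper and a sentence-transformers model; swap in whichever model you like):

Plain Text
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding, ServiceContext

# Any sentence-transformers model from huggingface can go here
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)
service_context = ServiceContext.from_defaults(embed_model=embed_model)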
16 comments
Are you setting a system prompt somewhere? By default there isn't one in llama index
2 comments
You can actually do a pip install --upgrade -e . when you are inside the top-level directory after a clone; this will add the package to your env but still let you edit files and test the changes
5 comments
On its own, llama index is more of a search bar than a chatbot.

If you are wanting support for an actual chatbot, you can use llama index as a tool within langchain

https://gpt-index.readthedocs.io/en/latest/guides/building_a_chatbot.html
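Roughly, a sketch of that setup (assuming an existing llama_index index and the classic LangChain conversational agent; see the guide for the full walkthrough):

Plain Text
from langchain.agents import Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# index is an existing llama_index index
tool = Tool(
    name="docs",
    func=lambda q: str(index.as_query_engine().query(q)),
    description="Useful for answering questions about the indexed documents.",
)

memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
    [tool],
    ChatOpenAI(temperature=0),
    agent="conversational-react-description",
    memory=memory,
)
print(agent.run("What do the docs say about X?"))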
1 comment
Anyone know how to get the discord token for using the DiscordReader? I tried doing the thing where you copy from the network dev tools panel, but that doesn't seem to be working 🤔
14 comments
I think so! Having "complete" ideas in each chunk (whether a paragraph, a section, or a chapter) usually helps the embeddings better represent the text. Normally I would do this at the document level and let the actual nodes fall where they may lol
13 comments
I thiiiink you can load the index from storage (as you are doing), call insert_nodes(), and then call persist again to write to disk
index.storage_context.persist(persist_dir=...)
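Something like this (a sketch, assuming the index was persisted to ./storage and new_nodes is the list of nodes you want to add):

Plain Text
from llama_index import StorageContext, load_index_from_storage

# Load the previously persisted index
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# Add the new nodes, then write everything back to disk
index.insert_nodes(new_nodes)
index.storage_context.persist(persist_dir="./storage")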
33 comments
That's a good point. @disiok maybe you know if it's possible to include the cell outputs in the new embedded notebooks?
3 comments
Pretty sure the document objects from the llama hub loaders will work in llama index actually 🤔
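For example, a sketch using the WikipediaReader from llama hub (any loader that returns Document objects should behave the same):

Plain Text
from llama_index import VectorStoreIndex, download_loader

WikipediaReader = download_loader("WikipediaReader")
documents = WikipediaReader().load_data(pages=["LlamaIndex"])

# The loader's Document objects plug straight into an index
index = VectorStoreIndex.from_documents(documents)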
5 comments
Probably the LLM stopped following instructions and printed some output that langchain couldn't parse

Pretty common error with langchain tbh. The parsing code for that specific agent is here https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py

Langchain at some point probably needs less-brittle parsing. Not much to do about it besides making a PR or maybe improving the tool instructions
https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/prompt.py
13 comments
Plain Text
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.CRITICAL)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
1 comment
If front end stuff isn't interesting, you can make a quick and dirty frontend using streamlit or gradio

Personally I like streamlit, as it's a bit more customizable. But gradio can work too.

You basically build a frontend with a few lines of python, and it generally looks good.

Extremely valuable library (or libraries) to master, lets you quickly make POCs to show people and demo stuff
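A minimal streamlit sketch (assuming you already have a llama_index index built elsewhere; run it with streamlit run app.py):

Plain Text
import streamlit as st

st.title("Doc Q&A demo")

query = st.text_input("Ask a question")
if query:
    # index is whatever llama_index index you've already built
    response = index.as_query_engine().query(query)
    st.write(str(response))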
4 comments
In my experience, you just need to be super verbose in the description. Or if you wanted, you could even do something like "If user mentions the keyword [TOOL], use this tool"
7 comments