batch : rework llama_batch_allocr #14153
    for (uint32_t i = 0; i < n_tokens; i++) {
        const llama_seq_id seq_id = ubatch.seq_id[i][0];
@compilade Regarding this comment from earlier, how does this sequence traversal work correctly when the ubatch is created with `split_simple()`?

AFAIU, the original meaning of `ubatch.seq_id[i][j]` was "the j-th sequence of the i-th token". With `split_equal()`, this changes to "the i-th sequence, with `j == 0`". What is not clear to me is: if I used `split_simple()`, how could this sequence traversal be correct?
llama.cpp/src/llama-context.cpp
Lines 1146 to 1152 in f164ba9
    for (uint32_t s = 0; s < ubatch.n_seqs; ++s) {
        const llama_seq_id seq_id = ubatch.seq_id[s][0];

        if (embd_seq_out.find(seq_id) != embd_seq_out.end()) {
            continue;
        }

        embd_seq_out[seq_id].resize(n_embd);
I am planning to rework this in some way, so any suggestions how to improve this logic are welcome.
@ggerganov With `split_simple()`, an invariant is that `ubatch.n_seqs == n_tokens` and `ubatch.n_seq_tokens == 1`, because the sequences are not aggregated.

Line 141 in a592c13

    ubatch.n_seqs += ubatch.equal_seqs ? 1 : length; // virtual sequences for simple splits

This makes a traversal which is correct with `split_equal` also correct with `split_simple`, even though the `seq_ids` are definitely repeated (when `ubatch.equal_seqs == false`, `ubatch.n_seqs` doesn't really map to distinct sequences).
I'm not sure how to make it more obvious while still sharing the same traversal code.
Thanks, I understand that this traversal over the tokens is correct for both split strategies:
llama.cpp/src/llama-kv-cache-unified.cpp
Lines 816 to 822 in 4c07964
    for (uint32_t s = 0; s < n_seqs; ++s) {
        const llama_seq_id seq_id = ubatch->seq_id[s][0];

        for (uint32_t j = 0; j < n_seq_tokens; ++j) {
            const uint32_t idx = s*n_seq_tokens + j;

            const llama_pos p1 = ubatch->pos[idx];
However, if I want to traverse over the unique sequence ids in the ubatch, or over all sequence ids to which a token in the ubatch is assigned, there is no way to do it correctly for both splits. Is this correct?

For example, in the snippet above, if I wanted to get the list of all sequence ids of token `idx`, there is no way to do it without checking `ubatch.equal_seqs`. Correct?
> if I want to traverse over the unique sequence ids in the ubatch

Yes, traversing unique `seq_ids` with simple splits (when `ubatch.equal_seqs == false`) is a bit more complicated, because they are not aggregated (simple splits are plain slices of the user-provided batch).

> traverse over all sequence ids to which a token in the ubatch is assigned

This is easier, though, and possible by traversing `ubatch.seq_id[s][_]` with `ubatch.n_seq_id[s]`. For example:
llama.cpp/src/llama-kv-cache-recurrent.cpp
Lines 446 to 449 in 26ff368
    for (uint32_t s = 0; s < n_seqs; ++s) {
        const uint32_t n_seq_id = ubatch.n_seq_id[s];

        for (uint32_t j = 0; j < n_seq_id; ++j) {
            const llama_seq_id seq_id = ubatch.seq_id[s][j];
llama.cpp/src/llama-kv-cache-unified.cpp
Lines 816 to 822 in 4c07964
    for (uint32_t s = 0; s < n_seqs; ++s) {
        const llama_seq_id seq_id = ubatch->seq_id[s][0];

        for (uint32_t j = 0; j < n_seq_tokens; ++j) {
            const uint32_t idx = s*n_seq_tokens + j;

            const llama_pos p1 = ubatch->pos[idx];
    [...]
> For example, in the snippet above, if I wanted to get the list of all sequence ids of token `idx`

In that snippet, `seq_id` would need to be defined later:
    for (uint32_t s = 0; s < n_seqs; ++s) {
        for (uint32_t j = 0; j < n_seq_tokens; ++j) {
            const uint32_t idx = s*n_seq_tokens + j;
            const llama_pos p1 = ubatch->pos[idx];

            for (uint32_t k = 0; k < ubatch->n_seq_id[s]; ++k) {
                const llama_seq_id seq_id = ubatch->seq_id[s][k];
Although depending on what you need it's also possible to swap the two inner loops:
    for (uint32_t s = 0; s < n_seqs; ++s) {
        for (uint32_t k = 0; k < ubatch->n_seq_id[s]; ++k) {
            const llama_seq_id seq_id = ubatch->seq_id[s][k];

            for (uint32_t j = 0; j < n_seq_tokens; ++j) {
                const uint32_t idx = s*n_seq_tokens + j;
                const llama_pos p1 = ubatch->pos[idx];
In this situation, you would not need to check `ubatch.equal_seqs` unless unique sequences are required.
Ok, thank you. I think I understand now.
    int32_t      *  n_seq_id; // [n_seqs] // TODO: remove, should belong to only 1 sequence
    llama_seq_id ** seq_id;   // [n_seqs] // TODO: become llama_seq_id * seq_id;
Decided against these TODOs, because multiple sequences per input token actually has some useful properties that cannot be achieved otherwise (for example, see the hellaswag usage). Instead, will add logic to guarantee that the provided ids are valid, utilizing the memory's `seq_pos_min()` and `seq_pos_max()` methods.
- keep `llama_batch_allocr` in `llama_context` to avoid allocating memory for each batch
- (mixing `int32_t` and `uint32_t`, need to do some refactoring and fix this)
- `llama_batch_allocr` / `llama_ubatch` indexing refactor

Next PRs:

- `LLAMA_BATCH_DEBUG` env
- `llama_ubatch` indexing