@deboer-tim

Typical HLS indexes for streaming contain a rolling set of video files.

The HLS support prior to this was focused on 'cached streaming': the CDS would connect to an endpoint and cache both the HLS index and all files contained in it before responding with the index to the client. This was simple and ensured a full cache when dealing with multiple clients, but it caused a long initial delay/latency, gave us little control over how the cache was used, and didn't work for samples (where the index contains the full video, potentially GBs of cache).

This change drops the pre-caching down to only the first file and adds support for caching while streaming: if a requested file isn't in the cache, we stream it directly to the client while filling the cache for the next client.
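To illustrate the cache-while-streaming idea, here's a minimal sketch (in Python, not the actual CDS Java code; all names are illustrative): each chunk read from the upstream source is written to the requesting client immediately and simultaneously appended to a cache buffer for the next client.

```python
import io

def stream_and_cache(source, client, chunk_size=8192):
    """Stream a segment to the current client while filling the cache.

    Illustrative sketch only: 'source' is the upstream HLS segment,
    'client' is the response stream; neither name comes from the CDS.
    """
    cache = io.BytesIO()
    while chunk := source.read(chunk_size):
        client.write(chunk)   # serve the requesting client immediately
        cache.write(chunk)    # keep a copy for the following client
    return cache.getvalue()

segment = b"fake-ts-segment" * 100
client = io.BytesIO()
cached = stream_and_cache(io.BytesIO(segment), client)
assert cached == segment == client.getvalue()
```

The first client pays no extra latency for cache filling, since the same read loop feeds both destinations.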

Now that we have byterange support, I also optimized caching for cases where we know the file size in advance or the server tells us the content length.
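The size-known optimization can be sketched like this (again an illustrative Python sketch, not the CDS implementation): when the Content-Length is known, the cache buffer is allocated once at its final size and filled in place, instead of growing a dynamic buffer and copying it as it expands.

```python
import io

def cache_with_known_size(source, length):
    """Fill a preallocated buffer when the content length is known.

    Illustrative sketch: avoids the repeated grow-and-copy of a dynamic
    buffer by allocating the full cache entry up front.
    """
    buf = bytearray(length)        # single allocation at the final size
    view = memoryview(buf)
    read = 0
    while read < length:
        n = source.readinto(view[read:])
        if not n:                  # upstream ended early
            break
        read += n
    return bytes(buf[:read])

data = b"x" * 10_000
cached = cache_with_known_size(io.BytesIO(data), len(data))
assert cached == data
```

This is one way the "avoids 1+ large array copies" gain below can come about: a dynamic buffer typically reallocates and copies its contents several times while growing to multi-MB segment sizes.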

Net effect:

  • latency for the first client drops from waiting for ~5 files (depending on the HLS source) to 1.
  • samples are fully supported, since we fill the cache as we go.
  • caching is slightly faster (avoids 1+ large array copies).
  • now that the code handles all cases, we have more options for how we manage the cache in the future.
  • clients accessing the same file concurrently is almost certainly still broken, but again this is a better codebase to fix that with.

@deboer-tim deboer-tim merged commit e20353b into icpctools:main Aug 29, 2025
4 checks passed
@deboer-tim deboer-tim deleted the hls-streaming branch August 31, 2025 07:33