
Conversation

@LimLLL
Contributor

@LimLLL LimLLL commented Mar 17, 2025

Description

Add the necessary files to build a Docker image that supports GPU utilization.

Who Can Review?

@timerring

Checklist

  • Code has been reviewed
  • Code complies with the project's code standards and best practices
  • Code has passed all tests
  • Code does not affect the normal use of existing features
  • Code has been commented properly
  • Documentation has been updated (if applicable)
  • Demo/checkpoint has been attached (if applicable)

@LimLLL
Contributor Author

LimLLL commented Mar 17, 2025

At the moment, after the configuration is changed in the web UI, the updated settings.toml is not reflected in /app/config/settings.toml, so when the container is recreated it still uses the previously persisted settings.toml. A watcher script could be added to the entrypoint, but that feels rather inelegant, so I did not implement it. Could the read/write location of settings.toml be made configurable? It also feels like the variables in config.py could all be passed in via environment variables, which would be more flexible.
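
For illustration only, a rough sketch of that suggestion: let the entrypoint resolve the settings path from an environment variable and bind-mount a host copy at run time. The BILIVE_SETTINGS variable, the /data path, and the bilive:gpu image tag below are hypothetical and not part of this PR.

# In the entrypoint (hypothetical): read the settings path from an env
# variable, falling back to the current hard-coded location.
SETTINGS_PATH="${BILIVE_SETTINGS:-/app/config/settings.toml}"

# At run time: mount a host copy of settings.toml and point the container at
# it, so edits made through the web UI survive a container recreate.
docker run -d --gpus all \
  -e BILIVE_SETTINGS=/data/settings.toml \
  -v "$(pwd)/settings.toml:/data/settings.toml" \
  bilive:gpu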

@LimLLL LimLLL closed this Mar 17, 2025
@timerring
Owner

> At the moment, after the configuration is changed in the web UI, the updated settings.toml is not reflected in /app/config/settings.toml, so when the container is recreated it still uses the previously persisted settings.toml. A watcher script could be added to the entrypoint, but that feels rather inelegant, so I did not implement it. Could the read/write location of settings.toml be made configurable? It also feels like the variables in config.py could all be passed in via environment variables, which would be more flexible.

Yes, this is something I have been thinking about as well. Strictly speaking, the way I set up Docker here is not best practice: a container is essentially stateless and should just run the service process, with logs conveniently available via docker logs. My approach essentially treats Docker as a virtual machine, where every adjustment requires opening another bash session inside the container to start and configure services. In the next version, v0.3.0, I will add a separate volume plus a mount for settings.toml to make the container more flexible; that is the main direction of the next release.
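
A rough sketch of that stateless direction, assuming the service runs as the container's main process; the container name, image tag, and mount paths here are assumptions, not the actual v0.3.0 design.

# Mount settings.toml and the output directory as volumes so the container
# itself stays disposable.
docker run -d --gpus all --name bilive \
  -v "$(pwd)/settings.toml:/app/config/settings.toml" \
  -v "$(pwd)/Videos:/app/Videos" \
  bilive:gpu

# No extra bash session inside the container; logs come straight from the runtime.
docker logs -f bilive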

@timerring
Owner

Thanks again for the contribution. I will test it after refactoring, and once the changes pass testing without issues I will merge.

@timerring timerring self-assigned this Mar 17, 2025
Owner

@timerring timerring left a comment

Thanks. I will modify some of the content after I finish testing.

The next version will rework the Docker and Compose workflow, and the container will use the released source code directly. The current version is indeed not best practice.

build-gpu.sh Outdated
fi

# detect CUDA version
CUDA_VERSION=$(nvidia-smi | grep "CUDA Version" | awk '{print $9}')
Owner

The CUDA version here only represents the maximum CUDA version that the driver supports, not the toolkit version that is actually installed.

To query the exact installed CUDA version, use one of these:

nvcc --version
# or
nvcc -V
# or
cat /usr/local/cuda/version.txt
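
For build-gpu.sh, a possible refinement along these lines (a sketch only; the sed parsing and the fallback logic are my assumptions, while CUDA_VERSION matches the script's existing variable):

# Prefer the installed toolkit version reported by nvcc; the nvidia-smi value
# is only the highest CUDA version the driver supports.
if command -v nvcc >/dev/null 2>&1; then
    CUDA_VERSION=$(nvcc --version | sed -n 's/.*release \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')
else
    CUDA_VERSION=$(nvidia-smi | grep "CUDA Version" | awk '{print $9}')
fi
echo "Detected CUDA version: ${CUDA_VERSION}"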

Dockerfile-GPU Outdated
Comment on lines 22 to 38
RUN ln -sf /usr/bin/python3.10 /usr/bin/python3 && \
ln -sf /usr/bin/python3 /usr/bin/python && \
python -m pip install --upgrade pip

RUN git clone https://github.com/timerring/bilive /app

RUN touch src/utils/cookies.json


RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir Videos


ENV BILIVE_PATH=/app

COPY entrypoint-gpu.sh /entrypoint-gpu.sh
RUN chmod +x /entrypoint-gpu.sh
Owner

Each RUN directive creates a new image layer, which can increase the image size and slow down builds; chaining these commands into a single RUN avoids the extra layers.
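
For example, the hunk above could chain the commands in one RUN so they share a single layer (a sketch of the reviewer's point only, not the merged change; absolute paths are used here to avoid assuming the working directory):

# One layer instead of several; the same commands as above, chained with &&.
RUN ln -sf /usr/bin/python3.10 /usr/bin/python3 && \
    ln -sf /usr/bin/python3 /usr/bin/python && \
    python -m pip install --upgrade pip && \
    git clone https://github.com/timerring/bilive /app && \
    touch /app/src/utils/cookies.json && \
    pip install --no-cache-dir -r /app/requirements.txt && \
    mkdir /app/Videos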

@timerring timerring reopened this Mar 18, 2025
Owner

@timerring timerring left a comment

LGTM, wait for test.

@timerring
Owner

For this commit I still built the Docker image the original way. It will be refactored in the next version.

@timerring timerring merged commit 0ec6b9b into timerring:main Mar 19, 2025