
Conversation

@lakinduakash

Reference: #1871

@lakinduakash lakinduakash marked this pull request as ready for review October 25, 2024 08:06
@EshamAaqib
Contributor

@achraf-mer Just wondering if we could remove the Stack mode; is it in use? Ideally on K8s, vLLM should run in its own pod rather than in the same pod as h2oGPT, I think. WDYT?

@achraf-mer
Collaborator

> @achraf-mer Just wondering if we could remove the Stack mode; is it in use? Ideally on K8s, vLLM should run in its own pod rather than in the same pod as h2oGPT, I think. WDYT?

Yes, we can run it separately and keep the Helm chart straightforward; let's do it. I think we might have used the same pod for latency considerations, but since vLLM can be resource intensive, it is best IMO to have it in a separate pod (more isolation, and we can scale it separately).
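
For illustration only, a minimal sketch of what the split could look like in the chart's values; the key names here (`vllm.enabled`, `openaiBaseUrl`, the replica and resource fields) are hypothetical placeholders, not the chart's actual schema:

```yaml
# Hypothetical values.yaml sketch: run vLLM as its own Deployment/Service
# rather than as a container in the h2oGPT pod. All keys are illustrative.
h2ogpt:
  replicaCount: 1
  # h2oGPT reaches vLLM over the cluster network instead of localhost
  openaiBaseUrl: http://vllm:5000/v1

vllm:
  enabled: true          # separate workload, rendered by its own template
  replicaCount: 1        # scaled independently of h2oGPT
  resources:
    limits:
      nvidia.com/gpu: 1  # GPU requests isolated to the vLLM pod
```

Separating the workloads also means vLLM can get its own autoscaling and node scheduling settings without touching the h2oGPT deployment.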

@EshamAaqib
Contributor

@lakinduakash Let's remove Stack mode from h2oGPT, and the checks as well, similar to what was done with Agents.

@lakinduakash
Author

> @lakinduakash Let's remove Stack mode from h2oGPT, and the checks as well, similar to what was done with Agents.

Stack mode is removed.

@EshamAaqib
Contributor

@EshamAaqib left a comment


Let's document the breaking changes that were made to the chart, e.g. changing the path of the model lock in values. We will need to communicate this to the other teams.
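
For illustration only, a hypothetical before/after of the kind of change worth calling out; the actual old and new key paths live in this PR's diff, and the paths and model name below are made-up placeholders:

```yaml
# Before (old chart) -- model lock nested under an override block:
# h2ogpt:
#   overrideConfig:
#     model_lock: '[{"base_model": "h2oai/h2ogpt-4096-llama2-7b-chat"}]'

# After (new chart) -- model lock moved to a dedicated key, so existing
# values files must be updated when upgrading:
h2ogpt:
  modelLock: '[{"base_model": "h2oai/h2ogpt-4096-llama2-7b-chat"}]'
```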
