feat: Add fallback logic to the small, medium, and large E2E tests to select a new AZ when AWS has insufficient capacity (backport #3161) #3331
Conversation
Signed-off-by: Jaideep Rao <[email protected]> (cherry picked from commit 8ea9979)
Missed adding logic to pick up the teacher model id from the config in an earlier PR. This PR fixes that.

**Checklist:**
- [ ] **Commit Message Formatting**: Commit titles and messages follow the guidelines in [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/#summary).
- [ ] [Changelog](https://github.com/instructlab/instructlab/blob/main/CHANGELOG.md) updated with breaking and/or notable changes for the next minor release.
- [ ] Documentation has been updated, if necessary.
- [ ] Unit tests have been added, if necessary.
- [ ] Functional tests have been added, if necessary.
- [ ] E2E Workflow tests have been added, if necessary.

---

This is an automatic backport of pull request #3309 done by [Mergify](https://mergify.com).

Approved-by: courtneypacheco
Approved-by: jaideepr97
For instructlab, `pip install .` does not install vllm, but it does install an uncapped torch (2.7.0 currently). When we install vllm later, we compile a binary flash_attn wheel against torch 2.7.0. vllm 0.8.4 requires torch==2.6.0, so we downgrade torch, and then we use that with the incompatible flash_attn binary wheel.

To resolve this, use constraints-dev.txt in the first pip install operation. This restricts torch to 2.6.0 immediately when we first install instructlab, so that we compile flash_attn against that torch version.

Signed-off-by: Ken Dreyer <[email protected]> (cherry picked from commit 8a11c90)

# Conflicts:
#	.github/workflows/e2e-nvidia-l40s-x4-py312.yml
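A minimal sketch of the install ordering this change implies, assuming a plain pip-based setup; the actual workflow steps may differ, but the constraints file and version pins come from the commit message above:

```bash
# Hypothetical before/after of the install ordering described above.

# Before: torch floats to the latest release (2.7.0 at the time), and the
# flash_attn wheel built afterwards is compiled against that torch.
pip install .

# After: apply constraints-dev.txt from the very first install so torch is
# pinned to 2.6.0 up front; the later vllm 0.8.4 install then finds a torch
# that matches the flash_attn build instead of downgrading underneath it.
pip install . -c constraints-dev.txt
```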
Cherry-pick of f98f92b has failed: To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally
…-3320 use `constraints-dev.txt` in e2e tests (backport #3320)
This pull request has merge conflicts that must be resolved before it can be merged. @mergify[bot] please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Signed-off-by: Courtney Pacheco <[email protected]> (cherry picked from commit f98f92b)
4955e7e to a011a32
Dropped the -python312 workflow change that is not present in this branch.
Issue resolved by this Pull Request:
Resolves #3160
Checklist:
- Commit Message Formatting: Commit titles and messages follow the guidelines in conventional commits.
Overview
The large E2E test has been failing due to "insufficient instance capacity" in AWS. In other words, whenever we kick off the large E2E job manually, or it runs at its regularly scheduled interval, AWS almost always returns an error telling us there aren't enough machines (VMs) available to run the job, so the large E2E job fails to run.
See the linked issue for more details.
Proposed Solution
This PR takes the fallback logic I added in #2975 and tweaks it so that it can also be used in our small, medium, and large E2E jobs.
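For context, here is a rough sketch of the kind of availability-zone fallback loop this describes, assuming the aws CLI; the AZ list, instance parameters, and function name are illustrative and not the actual workflow code:

```bash
#!/usr/bin/env bash
# Illustrative only: try each availability zone in turn and fall back to the
# next one when AWS reports InsufficientInstanceCapacity.
set -euo pipefail

launch_with_az_fallback() {
  # Assumed AZ list and env vars (AMI_ID, INSTANCE_TYPE); not the real workflow values.
  local azs=("us-east-1a" "us-east-1b" "us-east-1c")
  local az output
  for az in "${azs[@]}"; do
    echo "Attempting to launch in ${az}..."
    if output=$(aws ec2 run-instances \
        --image-id "${AMI_ID}" \
        --instance-type "${INSTANCE_TYPE}" \
        --placement "AvailabilityZone=${az}" 2>&1); then
      echo "Instance launched in ${az}"
      return 0
    elif grep -q "InsufficientInstanceCapacity" <<<"${output}"; then
      echo "Insufficient capacity in ${az}; trying the next AZ..."
    else
      # Some other error: surface it and stop rather than retrying.
      echo "${output}" >&2
      return 1
    fi
  done
  echo "All availability zones exhausted" >&2
  return 1
}
```

Per the PR title, the same fallback idea is wired into the small, medium, and large E2E jobs.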
This is an automatic backport of pull request #3161 done by Mergify.