
Conversation

Ibinarriaga8 (Contributor)

Description

This PR solves issue #3097 by providing a state-of-the-art (SOTA) implementation of discrete offline Conservative Q-Learning (CQL) within the repository.

  • Implemented a SOTA discrete offline CQL algorithm fully integrated into the codebase.
  • As with the SOTA implementation of discrete online CQL (already present in the repo), the CartPole environment was used for offline discrete CQL training.
  • Leveraged the changes from commit 0627e85c78756b4e4dbde726a9d5e85300e239a0 to use the updated MinariExperienceReplay class for loading experiences from a custom CartPole dataset into the offline replay buffer (a minimal loading sketch follows this list).
  • Updated the discrete loss function according to the latest torchrl documentation, explicitly handling the categorical action space as required for discrete environments.
  • Added a SOTA-check test for this new algorithm to ensure correctness and performance.
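
For illustration, here is a minimal sketch of how such an offline replay buffer might be assembled from a Minari CartPole dataset. The dataset id, batch size, and sampler choice are placeholders rather than the exact values used in this PR:

```python
# Minimal sketch, not the PR's exact code: build an offline replay buffer from a
# Minari dataset via torchrl's MinariExperienceReplay. The dataset id below is a
# placeholder for the custom CartPole dataset, assumed to be available locally.
from torchrl.data.datasets import MinariExperienceReplay
from torchrl.data.replay_buffers import SamplerWithoutReplacement

replay_buffer = MinariExperienceReplay(
    dataset_id="cartpole/custom-v0",      # placeholder id for the custom CartPole dataset
    batch_size=256,                       # illustrative batch size for offline updates
    sampler=SamplerWithoutReplacement(),  # visit each stored transition once per epoch
    download=False,                       # assume the dataset is already on disk
)

data = replay_buffer.sample()             # a TensorDict batch of offline transitions
```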

Motivation and Context

This change completes the set of SOTA CQL implementations in the repository by adding support for discrete offline CQL. Previously, only online discrete CQL and the online and offline continuous CQL variants were available. With discrete offline CQL in place, the repository covers all major CQL variants, enabling users to run and benchmark offline RL in discrete action spaces natively within torchrl.

The change also brings the repository in line with other top RL codebases and helps the community reproducibly benchmark and compare discrete offline RL algorithms.

This PR closes #3097.

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

jorge.ibinarriaga.robles.becas and others added 29 commits July 1, 2025 12:03
@pytorch-bot (bot) commented Jul 28, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/3098

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 35 Pending

As of commit ef6b66f with merge base 009f4ce:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jul 28, 2025
Ibinarriaga8 (Contributor Author)

Added sota-check


return data


Ibinarriaga8 (Contributor Author)

Replay buffer from a custom minari dataset

model,
loss_function=loss_cfg.loss_function,
action_space="categorical",
delay_value=True,
Ibinarriaga8 (Contributor Author)

Updated make_discrete_loss to follow torchrl documentation
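
As a point of reference, a rough sketch of a make_discrete_loss along these lines is shown below. It follows torchrl's DiscreteCQLLoss API as suggested by the diff context above; the loss_cfg fields (gamma, hard_update_freq) and the HardUpdate target-network updater are assumptions rather than the exact PR code:

```python
# Hedged sketch of the discrete CQL loss construction; config field names and the
# target updater are assumptions, not necessarily what this PR ships.
from torchrl.objectives import DiscreteCQLLoss, HardUpdate

def make_discrete_loss(loss_cfg, model):
    loss_module = DiscreteCQLLoss(
        model,                                 # QValueActor wrapping the CartPole Q-network
        loss_function=loss_cfg.loss_function,  # e.g. "l2"
        delay_value=True,                      # keep a delayed (target) value network
        action_space="categorical",            # required for discrete action spaces
    )
    loss_module.make_value_estimator(gamma=loss_cfg.gamma)  # assumed gamma field
    target_net_updater = HardUpdate(
        loss_module,
        value_network_update_interval=loss_cfg.hard_update_freq,  # assumed field
    )
    return loss_module, target_net_updater
```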


@vmoens added the new algo label Jul 28, 2025
@vmoens (Collaborator) left a comment

I think we're good to go when the linter is fixed!

  mode: online
  eval_iter: 1000
- video: False
+ video: True
vmoens (Collaborator)

Do we want this?

@Ibinarriaga8 (Contributor Author)

@vmoens I made some changes to make sure SOTA checks work:

I applied minor fixes to the make_discrete_loss function in utils.py, and added the necessary commands in test_sota.py, including the sota-check for my new algorithm.
Please let me know if anything else is needed for the sota-check.
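
For reference, such a sota-check entry might look roughly like the following, assuming test_sota.py maps algorithm names to shell commands the way its existing entries do; the key, script path, and config overrides below are hypothetical placeholders rather than the exact ones added in this PR:

```python
# Hypothetical sota-check entry; key name, script path, and overrides are placeholders.
commands = {
    "cql_offline_discrete": """python sota-implementations/cql/discrete_cql_offline.py \
  optim.gradient_steps=55 \
  logger.backend=
""",
}
```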

@Ibinarriaga8 (Contributor Author)

@vmoens I noticed that the minari dependency is missing from the linux_sota environment. Do you think it’s sufficient to simply add it to linux-sota/scripts/environment.yml?

@Ibinarriaga8 (Contributor Author)

SOTA-checks were successful.
Let me know if there is anything else needed.

@vmoens merged commit 1eccb49 into pytorch:main Aug 1, 2025
51 of 72 checks passed

Labels: CLA Signed, new algo

Linked issue: [Feature Request] Discrete offline CQL (#3097)