Hey, thanks for open-sourcing this great repo!
I have a use case that I believe is quite common, and I'm wondering if you have any thoughts on how chz can be used in this scenario. I was tempted to use `beta_blueprint_to_argv`, but the docstring mentioned it's better to ask you first. I think you have probably already thought about this scenario, and I'd be very interested in hearing how you'd approach it!
The scenario is this (tl;dr: launching many new Python processes that use a `chz` entrypoint from a Python script):
- I have a script (e.g. `train.py`) that uses a `chz` entrypoint with some relatively complex, recursively polymorphic parameters (e.g. different kinds of optimizers or learning rate schedulers) --- the nice `chz` stuff.
- I have some other orchestrator/launcher script (e.g. `run_many_experiments.py`) that is supposed to launch many processes that run `train.py` with many different parameters. Depending on the configuration, these processes can run either locally or remotely somewhere in the cloud.
Ideally, in the launcher script, I would construct the arguments to `train.py` as Python objects (e.g. `optimizer = FancyAdam(...)`, where `FancyAdam` could be a chz class). Then, to actually start the new process, those objects need to somehow be passed to it as arguments (potentially over the wire).
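For concreteness, here is roughly what I mean, with plain dataclasses standing in for chz classes (`FancyAdam`, `TrainConfig`, and all field names here are made up for illustration):

```python
from dataclasses import dataclass


# Hypothetical stand-ins for chz classes; names and fields are illustrative only.
@dataclass
class FancyAdam:
    lr: float = 1e-3
    beta1: float = 0.9


@dataclass
class TrainConfig:
    optimizer: FancyAdam
    steps: int = 1000


# In the launcher, I'd like to build the configuration as Python objects...
config = TrainConfig(optimizer=FancyAdam(lr=3e-4), steps=5000)

# ...and then somehow turn it into something a fresh train.py process accepts,
# e.g. command-line arguments along the lines of:
#   train.py optimizer.lr=3e-4 optimizer.beta1=0.9 steps=5000
```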
What do you think is the best way to solve that?
Ideas I had:
- Use something like `beta_blueprint_to_argv` to convert the classes back into command-line arguments, which can then easily be used to start the process with something like `docker_container.exec(cmd + args)`. (This feels nice and easy to me, except that I can imagine `beta_blueprint_to_argv` being very tricky for some kinds of blueprints?)
- Pickle the arguments and send them to the new process in some other way. (Seems annoying, and requires a different entrypoint to `train.py` that does the unpickling.)
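For the second idea, a minimal sketch of what I mean by "some other way": serialize the config with `pickle`, pass it base64-encoded as a single argv entry (or an env var), and unpickle it in a separate entrypoint. Everything here is hypothetical, with a dataclass again standing in for a chz class:

```python
import base64
import pickle
from dataclasses import dataclass


# Stand-in for a chz config class (illustrative only).
@dataclass
class FancyAdam:
    lr: float = 1e-3


# Launcher side: serialize the already-constructed arguments...
config = FancyAdam(lr=3e-4)
blob = base64.b64encode(pickle.dumps(config)).decode("ascii")
# ...and pass `blob` to the new process as a single argument, e.g.
#   docker_container.exec(["python", "train.py", "--pickled-config", blob])
# (docker_container.exec is the hypothetical launcher API from above).


# train.py side: the alternative entrypoint that does the unpickling.
def unpickle_entrypoint(blob: str) -> FancyAdam:
    return pickle.loads(base64.b64decode(blob))


restored = unpickle_entrypoint(blob)
```

The annoyance is real: `train.py` now needs two entry paths (the normal chz one and the unpickling one), and pickling ties the launcher and worker to the same code version.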