Add bulk pipeline interface #1531
Merged
Fix #1512
Changes (Desktop only)
Following the precedent of the Multi-Training tab, a new tab has been added to the Dive Desktop navigation bar, called "Pipeline."

This tab contains an interface similar to that of the multi-training interface, but with the ability to run a specified pipeline over multiple datasets.
Intended Workflow
Step 1. Choose a pipeline type
This is handled by a simple dropdown menu. A type must be chosen first before the "Pipeline" dropdown becomes enabled.

Limitation: The pipeline types "2-cam" and "3-cam" are filtered out for now to reduce the complexity of selecting datasets.
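For reference, a rough sketch of the type filtering described above, using illustrative names (`Pipeline`, `hiddenTypes`) rather than the actual component code:

```typescript
// Hypothetical shape for a pipeline entry; field names are assumptions.
interface Pipeline {
  name: string;
  type: string; // e.g. 'detector', 'tracker', 'measurement', '2-cam', '3-cam'
}

// Multi-camera types are hidden for now to keep dataset selection simple.
const hiddenTypes = new Set(['2-cam', '3-cam']);

function selectablePipelineTypes(pipelines: Pipeline[]): string[] {
  const types = new Set(
    pipelines.map((p) => p.type).filter((t) => !hiddenTypes.has(t)),
  );
  return Array.from(types).sort();
}
```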
Step 2. Choose a pipeline
Once a pipeline type is chosen, the "Pipeline" dropdown will allow users to select a specific pipeline of that type. Once this is done, the available datasets appear for selection.
Because the pipeline type and pipeline are chosen first, we have additional control over which datasets are presented for staging. For example, measurement pipelines only allow users to select from stereo datasets.
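A minimal sketch of that dataset filtering, assuming a hypothetical `isStereo` flag on the dataset summary (the real data model may differ):

```typescript
// Hypothetical dataset shape; `isStereo` is an illustrative field.
interface DatasetSummary {
  id: string;
  name: string;
  isStereo: boolean;
}

// Only datasets compatible with the chosen pipeline type are offered for staging.
function availableDatasets(
  datasets: DatasetSummary[],
  pipelineType: string,
): DatasetSummary[] {
  if (pipelineType === 'measurement') {
    // Measurement pipelines require stereo (two-camera) datasets.
    return datasets.filter((d) => d.isStereo);
  }
  return datasets;
}
```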
Step 3. Stage datasets
Much like how datasets are staged/unstaged for multi-training, users can add from the available datasets table, and remove from the staged datasets table as needed.
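The staging behavior is essentially moving ids between two lists; a simplified sketch (not the actual component state) looks like this:

```typescript
// Minimal staging state: move datasets between "available" and "staged" lists.
interface StagingState {
  available: string[]; // dataset ids not yet selected
  staged: string[];    // dataset ids queued for the pipeline run
}

function stageDataset(state: StagingState, id: string): StagingState {
  if (!state.available.includes(id)) return state;
  return {
    available: state.available.filter((d) => d !== id),
    staged: [...state.staged, id],
  };
}

function unstageDataset(state: StagingState, id: string): StagingState {
  if (!state.staged.includes(id)) return state;
  return {
    available: [...state.available, id],
    staged: state.staged.filter((d) => d !== id),
  };
}
```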
Step 4. Run pipelines
Clicking the run button creates a request for each staged dataset. If all requests come back to the frontend with no issues, the user is moved to the Jobs tab to monitor the jobs they just started. If any requests come back with issues, the user is shown a prompt informing them that some of the pipelines did not start.
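The run logic fans out one request per staged dataset and then branches on whether anything failed. A hedged sketch of that flow, with `runPipeline`, `navigateToJobs`, and `showErrorPrompt` as placeholder names rather than the real API:

```typescript
// Hypothetical request helper; the real app would call its pipeline API here.
async function runPipeline(datasetId: string, pipelineName: string): Promise<void> {
  // placeholder for the actual HTTP/IPC call
}

// Fire one run request per staged dataset, then either move to the Jobs tab
// or report which datasets failed to start.
async function runBulkPipeline(
  stagedIds: string[],
  pipelineName: string,
  navigateToJobs: () => void,
  showErrorPrompt: (failedIds: string[]) => void,
): Promise<void> {
  const results = await Promise.allSettled(
    stagedIds.map((id) => runPipeline(id, pipelineName)),
  );
  const failedIds = stagedIds.filter((_, i) => results[i].status === 'rejected');
  if (failedIds.length === 0) {
    navigateToJobs();
  } else {
    showErrorPrompt(failedIds);
  }
}
```

Using `Promise.allSettled` here (rather than `Promise.all`) lets every request finish before deciding, so a single failure does not hide the datasets that did start successfully.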