This is a minimal Nextflow workflow to test GPU support on Google Cloud Platform (GCP).
- Google Cloud Account with appropriate permissions
- Nextflow installed (version 22.10.0 or higher)
- Google Cloud SDK installed and configured
- Appropriate GCP permissions:
  - `compute.instances.create`
  - `compute.instances.delete`
  - `compute.disks.create`
  - `compute.networks.get`
  - `storage.buckets.create` (for intermediate files)
Authenticate with GCP, then launch the workflow:

```bash
gcloud auth application-default login
nextflow run main.nf -profile gcp_gpu
```

To run locally instead:

```bash
nextflow run main.nf -profile local
```

The `nextflow.config` file supports the following GPU types:
- `nvidia-tesla-t4` - Good for inference and light ML workloads (recommended for cost)
- `nvidia-tesla-v100` - Better for training workloads
- `nvidia-tesla-p100` - High-performance computing
- `nvidia-tesla-p4` - Cost-effective option
Edit `nextflow.config` and modify the `gpu_task` process settings:

```groovy
withLabel: gpu {
    gpuType     = 'nvidia-tesla-v100'  // Change to your preferred GPU
    machineType = 'n1-highmem-8'       // Adjust machine type if needed
}
```

Recommended machine types:

- **T4 GPU**: Use `n1-highmem-4` or `n1-standard-4`
- **V100 GPU**: Use `n1-highmem-8` or larger
- **P100/P4 GPU**: Use `n1-highmem-8` or larger
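Note that `gpuType` above is a setting interpreted by this workflow's own configuration. If you are instead wiring up GPUs in stock Nextflow with the Google Batch executor, the equivalent is the `accelerator` process directive. A minimal sketch (the project ID and location are placeholders, and the executor choice is an assumption):

```groovy
// Sketch only: GPU request via stock Nextflow + Google Batch.
// 'my-project' and 'us-central1' are placeholder values.
profiles {
    gcp_gpu {
        process {
            executor = 'google-batch'
            withLabel: gpu {
                machineType = 'n1-highmem-4'
                accelerator = [request: 1, type: 'nvidia-tesla-t4']
            }
        }
        google {
            project  = 'my-project'
            location = 'us-central1'
        }
    }
}
```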
After running the workflow, check the generated reports:
- `timeline.html` - Visual timeline of task execution
- `report.html` - Detailed execution report
- `trace.txt` - Raw execution trace
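For a quick pass/fail summary without opening the HTML reports, the trace file can be tallied with `awk`. A small sketch, assuming the default tab-separated trace format with a `status` column (the sample file generated here is purely illustrative; point `awk` at your real `trace.txt` instead):

```shell
# Generate a tiny illustrative trace file (replace with your real trace.txt).
printf 'task_id\tname\tstatus\n1\tgpu_task (1)\tCOMPLETED\n2\tgpu_task (2)\tFAILED\n' > sample_trace.txt

# Count tasks per status: find the "status" column from the header,
# then tally each data row by that column's value.
awk -F'\t' '
  NR == 1 { for (i = 1; i <= NF; i++) if ($i == "status") col = i; next }
  { count[$col]++ }
  END { for (s in count) print s, count[s] }
' sample_trace.txt
```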
- **GPU allocation errors**: Adjust the GPU count in `process { withLabel: gpu }` in `nextflow.config`
- **GPU type unavailable in your zone**: Change the zone in `nextflow.config` or try a different GPU type
- **Quota exceeded**: Check your GCP quota limits or request a quota increase
- Modify the `gpu_task` process in `main.nf` to run your actual GPU workload
- Install required software in the container (see example below)
- Test with your own data
Modify the `gpu_task` process:

```groovy
process gpu_task {
    label 'gpu'
    container 'nvidia/cuda:11.8.0-runtime-ubuntu22.04'
    // ... rest of process
}
```

T4 GPU pricing on GCP (varies by region):
- Compute instance: ~$0.35/hour
- T4 GPU: ~$0.35/hour
- Total: ~$0.70/hour for this minimal setup
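As a sanity check before launching longer runs, the estimate above can be turned into a quick back-of-envelope calculation (the hourly rate is the illustrative figure from this list; always check current GCP pricing for your region):

```shell
# Back-of-envelope cost estimate using the illustrative rate above.
HOURS=4            # expected runtime (example value)
RATE_PER_HOUR=0.70 # instance + T4 GPU, per the estimate above
awk -v h="$HOURS" -v r="$RATE_PER_HOUR" \
    'BEGIN { printf "~$%.2f for %s hour(s)\n", h * r, h }'
# → ~$2.80 for 4 hour(s)
```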
Always monitor your resources to avoid unexpected charges!