© Copyright 2025, Intel Corporation
This module deploys an Intel Optimized Databricks cluster. Instance selection and Intel optimizations are defaulted in the code.
Learn more about Intel optimizations:
- Faster Insights With Databricks Photon Using AWS i4i Instances With the Latest Intel Ice Lake Scalable Processors
- Reduce Time to Decision With the Databricks Lakehouse Platform and Latest Intel 3rd Gen Xeon Scalable Processors
All the examples in the examples folder show how to create an Intel Optimized Databricks cluster using this module along with the Intel Cloud Optimization Module for Databricks Workspace on AWS and Azure.
Usage Considerations
- If you don't have a pre-existing Databricks Workspace, use the Intel Cloud Optimization Module for Databricks Workspace.
- See the examples folder and the README for each example above to use this module; a minimal invocation sketch follows below.
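A minimal invocation might look like the sketch below. The module source path, workspace URL, and provider configuration are illustrative assumptions; refer to the examples folder for tested configurations.

```hcl
# Minimal sketch of calling this module; the source path, workspace URL,
# and provider setup are illustrative assumptions - see the examples folder.
provider "databricks" {
  host = "https://<your-workspace>.cloud.databricks.com" # assumed workspace URL
}

module "optimized_dbx_cluster" {
  source = "./modules/databricks-cluster" # adjust to where this module lives

  dbx_cloud = "aws"                                           # or "azure"
  dbx_host  = "https://<your-workspace>.cloud.databricks.com" # required workspace URL
}
```

With only the required inputs set, the Intel optimized node type, runtime engine, and Spark configuration defaults described below apply automatically.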
Run Terraform
terraform init
terraform plan
terraform apply
More information regarding deploying a Databricks Workspace is available in the Databricks documentation.
Requirements

Name | Version |
---|---|
aws | ~> 5.31 |
azurerm | ~> 3.48 |
databricks | ~> 1.14.2 |
Providers

Name | Version |
---|---|
databricks | ~> 1.14.2 |
Modules

No modules.
Resources

Name | Type |
---|---|
databricks_cluster.dbx_cluster | resource |
databricks_token.pat | resource |
databricks_spark_version.latest_lts | data source |
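The resources above follow a common pattern: the latest LTS runtime is looked up via a data source and fed to the cluster, and a personal access token is created in the workspace. The sketch below illustrates that pattern; it is not the module's exact implementation.

```hcl
# Illustrative sketch of the resource pattern listed above (not the module's
# exact code): look up the latest LTS runtime, create the cluster, create a PAT.
data "databricks_spark_version" "latest_lts" {
  long_term_support = true
}

resource "databricks_cluster" "dbx_cluster" {
  cluster_name            = var.dbx_cluster_name
  spark_version           = data.databricks_spark_version.latest_lts.id
  node_type_id            = var.dbx_cloud == "aws" ? var.aws_dbx_node_type_id : var.azure_dbx_node_type_id
  runtime_engine          = var.dbx_runtime_engine
  autotermination_minutes = var.dbx_auto_terminate_min
  spark_conf              = var.dbx_spark_config

  autoscale {
    min_workers = var.dbx_autoscale_min_workers
    max_workers = var.dbx_autoscale_max_workers
  }
}

resource "databricks_token" "pat" {
  comment = "Terraform-managed personal access token" # illustrative comment value
}
```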
Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
aws_dbx_node_type_id | The AWS compute machine type supported by Databricks. | string | "i4i.2xlarge" | no |
azure_dbx_node_type_id | The Azure compute machine type supported by Databricks. | string | "Standard_E8ds_v5" | no |
dbx_auto_terminate_min | Automatically terminate the cluster after it has been inactive for this many minutes. If specified, the threshold must be between 10 and 10000 minutes. Set this value to 0 to explicitly disable automatic termination. Defaults to 60. We highly recommend setting this for interactive/BI clusters. | number | 120 | no |
dbx_autoscale_max_workers | The maximum number of workers to which the cluster can scale up when overloaded. max_workers must be strictly greater than min_workers. | number | 50 | no |
dbx_autoscale_min_workers | The minimum number of workers to which the cluster can scale down when underutilized. It is also the initial number of workers the cluster has after creation. | number | 1 | no |
dbx_cloud | Flag that decides which cloud's instance type to use for the Databricks cluster. | string | n/a | yes |
dbx_cluster_name | Cluster name, which does not have to be unique. If not specified at creation, the cluster name will be an empty string. | string | "dbx_optimized_cluster" | no |
dbx_host | Required URL for the Databricks workspace. | string | n/a | yes |
dbx_runtime_engine | The type of runtime engine to use. If not specified, the runtime engine type is inferred from the spark_version value. Allowed values: PHOTON, STANDARD. | string | "PHOTON" | no |
dbx_spark_config | Key-value pairs of Intel optimizations for the Spark configuration. | map(string) | {...} | no |
enable_intel_tags | If true, adds additional Intel tags to resources. | bool | true | no |
intel_tags | Intel tags. | map(string) | {...} | no |
tags | Tags. | map(string) | {...} | no |
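If the defaults need tuning, the optional inputs above can be overridden in the module block. The values in the sketch below are illustrative assumptions, not recommendations.

```hcl
# Sketch of overriding optional inputs; values are illustrative assumptions.
module "optimized_dbx_cluster" {
  source = "./modules/databricks-cluster" # adjust to where this module lives

  dbx_cloud = "aws"
  dbx_host  = var.dbx_host

  dbx_cluster_name          = "analytics-cluster" # hypothetical cluster name
  aws_dbx_node_type_id      = "i4i.4xlarge"       # larger Intel-based instance, for illustration
  dbx_autoscale_min_workers = 2
  dbx_autoscale_max_workers = 20
  dbx_auto_terminate_min    = 60
}
```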
Outputs

Name | Description |
---|---|
dbx_cluster_autoterminate_min | Auto-termination minutes of the Databricks cluster |
dbx_cluster_custom_tags | Custom tags of the Databricks cluster |
dbx_cluster_name | Name of the Databricks cluster |
dbx_cluster_node_type_id | Instance type of the Databricks cluster |
dbx_cluster_runtime_engine | Runtime engine of the Databricks cluster |
dbx_cluster_spark_conf | Spark configuration of the Databricks cluster |
dbx_cluster_spark_version | Spark version of the Databricks cluster |
dbx_pat | Personal access token |
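The outputs listed above can be consumed from the calling configuration, for example to surface the cluster name or hand the personal access token to downstream tooling. A minimal sketch, assuming the module is instantiated as `optimized_dbx_cluster`:

```hcl
# Sketch of consuming the module outputs; the module name is an assumption.
output "cluster_name" {
  value = module.optimized_dbx_cluster.dbx_cluster_name
}

output "databricks_pat" {
  value     = module.optimized_dbx_cluster.dbx_pat
  sensitive = true # the personal access token should be treated as a secret
}
```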