A CLI tool that generates tf/json and tfstate files based on existing infrastructure
(reverse Terraform).
- Disclaimer: This is not an official Google product
- Created by: Waze SRE
- Capabilities
- Installation
- Supported Providers
- Major Cloud
- Cloud
- Infrastructure Software
- Network
- VCS
- Monitoring & System Management
- Community
- Contributing
- Developing
- Infrastructure
- Stargazers over time
- Generate tf/json + tfstate files from existing infrastructure for all supported objects by resource.
- Remote state can be uploaded to a GCS bucket.
- Connect between resources with terraform_remote_state (local and bucket); see the sketch after this list.
- Save tf/json files using a custom folder tree pattern.
- Import by resource name and type.
- Support terraform 0.13 (for terraform 0.11 use v0.7.9).
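For illustration, a cross-resource reference of this kind is normally expressed through a terraform_remote_state data source. The following is a minimal sketch for a local backend; the data source name and state path are hypothetical, not literal Terraformer output:

data "terraform_remote_state" "networks" {
  # Read outputs from another module's local state file
  backend = "local"
  config = {
    path = "../networks/terraform.tfstate"
  }
}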
Terraformer uses Terraform providers and is designed to easily support newly added resources. To upgrade resources with new fields, all you need to do is upgrade the relevant Terraform providers.
Import current state to Terraform configuration from a provider
Usage:
import [provider] [flags]
import [provider] [command]
Available Commands:
list List supported resources for a provider
Flags:
-b, --bucket string gs://terraform-state
-c, --connect (default true)
-C, --compact (default false)
-x, --excludes strings firewalls,networks
-f, --filter strings compute_firewall=id1:id2:id4
-h, --help help for google
-O, --output string output format hcl or json (default "hcl")
-o, --path-output string (default "generated")
-p, --path-pattern string {output}/{provider}/ (default "{output}/{provider}/{service}/")
--projects strings
-z, --regions strings europe-west1, (default [global])
-r, --resources strings firewall,networks or * for all services
-s, --state string local or bucket (default "local")
-v, --verbose verbose mode
-n, --retry-number number of retries to perform if refresh fails
-m, --retry-sleep-ms time in ms to sleep between retries
Use " import [provider] [command] --help" for more information about a command.
The tool requires read-only permissions to list service resources.
Use the --resources parameter to specify which services you want to import resources from.
To import resources from all services, use --resources="*". To exclude certain services, combine it with the --excludes parameter, e.g. --resources="*" --excludes="iam".
Filters are a way to choose which resources Terraformer imports. It's possible to filter resources by their identifiers or attributes. Multiple filtering values are separated by :. If an identifier contains this symbol, the value should be wrapped in ', e.g. --filter=resource=id1:'project:dataset_id'. Identifier-based filters are executed before Terraformer tries to refresh the remote state.
Use Type when you need to filter only one of several types of resources. Multiple filters can be combined when importing different resource types. An example would be importing all AWS security groups from a specific AWS VPC:
terraformer import aws -r sg,vpc --filter Type=sg;Name=vpc_id;Value=VPC_ID --filter Type=vpc;Name=id;Value=VPC_ID
Notice how the Name is different for sg than it is for vpc.
Filtering is based on Terraform resource ID patterns. To find valid ID patterns for your resource, check the import part of the Terraform documentation.
Example usage:
terraformer import aws --resources=vpc,subnet --filter=vpc=myvpcid --regions=eu-west-1
This will import only the VPC with ID myvpcid. This form of filter helps when you need to select resources by their identifiers.
It is also possible to filter by a specific field name only. This can be used, for example, when you want to retrieve only resources that have a specific tag key.
Example usage:
terraformer import aws --resources=s3 --filter="Name=tags.Abc" --regions=eu-west-1
This will import only the S3 buckets that have the tag key Abc. This form of filter helps when the field values are not important from a filtering perspective.
It is possible to filter by a field that contains a dot.
Example usage:
terraformer import aws --resources=s3 --filter="Name=tags.Abc.def" --regions=eu-west-1
This will import only the S3 buckets that have the tag key Abc.def.
The plan command generates a planfile that contains all the resources set to be imported. By modifying the planfile before running the import command, you can rename or filter the resources you'd like to import.
The rest of the subcommands and parameters are identical to the import command.
$ terraformer plan google --resources=networks,firewall --projects=my-project --regions=europe-west1-d
(snip)
Saving planfile to generated/google/my-project/terraformer/plan.json
After reviewing/customizing the planfile, begin the import by running import plan.
$ terraformer import plan generated/google/my-project/terraformer/plan.json
Terraformer by default separates each resource into a file, which is put into a given service directory.
The default path for resource files is {output}/{provider}/{service}/{resource}.tf and can vary for each provider.
It's possible to adjust the generated structure by:
- Using the --compact parameter to group resource files within a single service into one resources.tf file
- Adjusting the --path-pattern parameter, e.g. passing --path-pattern {output}/{provider}/ to generate resources for all services in one directory
It's possible to combine the --compact and --path-pattern parameters, as shown below.
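For example, the following should place every service's resources for the google provider in a single directory and merge each service's files into one resources.tf (the project, resources, and region are illustrative):

terraformer import google --resources=networks,firewall --projects=my-project --regions=europe-west1 --compact --path-pattern "{output}/{provider}/"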
From source:
- Run git clone <terraformer repo>
- Run go mod download
- Run go build -v for all providers, OR build a single provider with go run build/main.go {google,aws,azure,kubernetes, etc.}
- Run terraform init against a versions.tf file to install the plugins required for your platform. For example, if you need plugins for the google provider, versions.tf should contain:
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
}
required_version = ">= 0.13"
}
Or alternatively:
- Copy your Terraform provider's plugin(s) to the folder ~/.terraform.d/plugins/{darwin,linux}_amd64/, as appropriate (example below).
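As a sketch, on Linux this might look like the following; the provider binary name and version are placeholders for whatever plugin you actually downloaded:

mkdir -p ~/.terraform.d/plugins/linux_amd64
cp terraform-provider-google_v3.5.0_x5 ~/.terraform.d/plugins/linux_amd64/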
From Releases:
- Linux
export PROVIDER={all,google,aws,kubernetes}
curl -LO https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-linux-amd64
chmod +x terraformer-${PROVIDER}-linux-amd64
sudo mv terraformer-${PROVIDER}-linux-amd64 /usr/local/bin/terraformer
- MacOS
export PROVIDER={all,google,aws,kubernetes}
curl -LO https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-darwin-amd64
chmod +x terraformer-${PROVIDER}-darwin-amd64
sudo mv terraformer-${PROVIDER}-darwin-amd64 /usr/local/bin/terraformer
If you want to use a package manager:
- Homebrew users can use brew install terraformer.
- Chocolatey users can use choco install terraformer.
Links to download Terraform Providers:
- Major Cloud
- Cloud
- Infrastructure Software
- Network
- Cloudflare provider >1.16 - here
- VCS
- GitHub provider >=2.2.1 - here
- Monitoring & System Management
- Community
Information on provider plugins: https://www.terraform.io/docs/configuration/providers.html
Example:
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --connect=true --regions=europe-west1,europe-west4 --projects=aaa,fff
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --filter=compute_firewall=rule1:rule2:rule3 --regions=europe-west1 --projects=aaa,fff
For google-beta provider:
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --regions=europe-west4 --projects=aaa --provider-type beta
List of supported GCP services:
- addresses: google_compute_address
- autoscalers: google_compute_autoscaler
- backendBuckets: google_compute_backend_bucket
- backendServices: google_compute_backend_service
- bigQuery: google_bigquery_dataset, google_bigquery_table
- cloudFunctions: google_cloudfunctions_function
- cloudsql: google_sql_database_instance, google_sql_database
- dataProc: google_dataproc_cluster
- disks: google_compute_disk
- externalVpnGateways: google_compute_external_vpn_gateway
- dns: google_dns_managed_zone, google_dns_record_set
- firewall: google_compute_firewall
- forwardingRules: google_compute_forwarding_rule
- gcs: google_storage_bucket, google_storage_bucket_acl, google_storage_default_object_acl, google_storage_bucket_iam_binding, google_storage_bucket_iam_member, google_storage_bucket_iam_policy, google_storage_notification
- gke: google_container_cluster, google_container_node_pool
- globalAddresses: google_compute_global_address
- globalForwardingRules: google_compute_global_forwarding_rule
- healthChecks: google_compute_health_check
- httpHealthChecks: google_compute_http_health_check
- httpsHealthChecks: google_compute_https_health_check
- iam: google_project_iam_custom_role, google_project_iam_member, google_service_account
- images: google_compute_image
- instanceGroupManagers: google_compute_instance_group_manager
- instanceGroups: google_compute_instance_group
- instanceTemplates: google_compute_instance_template
- instances: google_compute_instance
- interconnectAttachments: google_compute_interconnect_attachment
- kms: google_kms_key_ring, google_kms_crypto_key
- logging: google_logging_metric
- memoryStore: google_redis_instance
- monitoring: google_monitoring_alert_policy, google_monitoring_group, google_monitoring_notification_channel, google_monitoring_uptime_check_config
- networks: google_compute_network
- packetMirrorings: google_compute_packet_mirroring
- nodeGroups: google_compute_node_group
- nodeTemplates: google_compute_node_template
- project: google_project
- pubsub: google_pubsub_subscription, google_pubsub_topic
- regionAutoscalers: google_compute_region_autoscaler
- regionBackendServices: google_compute_region_backend_service
- regionDisks: google_compute_region_disk
- regionHealthChecks: google_compute_region_health_check
- regionInstanceGroups: google_compute_region_instance_group
- regionSslCertificates: google_compute_region_ssl_certificate
- regionTargetHttpProxies: google_compute_region_target_http_proxy
- regionTargetHttpsProxies: google_compute_region_target_https_proxy
- regionUrlMaps: google_compute_region_url_map
- reservations: google_compute_reservation
- resourcePolicies: google_compute_resource_policy
- regionInstanceGroupManagers: google_compute_region_instance_group_manager
- routers: google_compute_router
- routes: google_compute_route
- schedulerJobs: google_cloud_scheduler_job
- securityPolicies: google_compute_security_policy
- sslCertificates: google_compute_managed_ssl_certificate
- sslPolicies: google_compute_ssl_policy
- subnetworks: google_compute_subnetwork
- targetHttpProxies: google_compute_target_http_proxy
- targetHttpsProxies: google_compute_target_https_proxy
- targetInstances: google_compute_target_instance
- targetPools: google_compute_target_pool
- targetSslProxies: google_compute_target_ssl_proxy
- targetTcpProxies: google_compute_target_tcp_proxy
- targetVpnGateways: google_compute_vpn_gateway
- urlMaps: google_compute_url_map
- vpnTunnels: google_compute_vpn_tunnel
Your tf and tfstate files are written by default to
generated/gcp/zone/service.
Example:
terraformer import aws --resources=vpc,subnet --connect=true --regions=eu-west-1 --profile=prod
terraformer import aws --resources=vpc,subnet --filter=vpc=vpc_id1:vpc_id2:vpc_id3 --regions=eu-west-1
AWS configuration including environmental variables, shared credentials file (~/.aws/credentials), and shared config file (~/.aws/config) will be loaded by the tool by default. To use a specific profile, you can use the following command:
terraformer import aws --resources=vpc,subnet --regions=eu-west-1 --profile=prod
You can also provide no regions when importing resources:
terraformer import aws --resources=cloudfront --profile=prod
In that case Terraformer will not know which region the resources are associated with and will not assume any region. This scenario is useful for global resources (e.g. CloudFront distributions or Route 53 records) and when the region is passed implicitly through environment variables or the metadata service.
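For example, assuming the AWS SDK's standard AWS_REGION environment variable is set, the region can be supplied implicitly rather than via --regions:

export AWS_REGION=eu-west-1
terraformer import aws --resources=vpc,subnet --profile=prod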
- accessanalyzer: aws_accessanalyzer_analyzer
- acm: aws_acm_certificate
- alb (supports ALB and NLB): aws_lb, aws_lb_listener, aws_lb_listener_rule, aws_lb_listener_certificate, aws_lb_target_group, aws_lb_target_group_attachment
- api_gateway: aws_api_gateway_authorizer, aws_api_gateway_documentation_part, aws_api_gateway_gateway_response, aws_api_gateway_integration, aws_api_gateway_integration_response, aws_api_gateway_method, aws_api_gateway_method_response, aws_api_gateway_model, aws_api_gateway_resource, aws_api_gateway_rest_api, aws_api_gateway_stage, aws_api_gateway_usage_plan, aws_api_gateway_vpc_link
- appsync: aws_appsync_graphql_api
- auto_scaling: aws_autoscaling_group, aws_launch_configuration, aws_launch_template
- budgets: aws_budgets_budget
- cloud9: aws_cloud9_environment_ec2
- cloudfront: aws_cloudfront_distribution
- cloudformation: aws_cloudformation_stack, aws_cloudformation_stack_set, aws_cloudformation_stack_set_instance
- cloudhsm: aws_cloudhsm_v2_cluster, aws_cloudhsm_v2_hsm
- cloudtrail: aws_cloudtrail
- cloudwatch: aws_cloudwatch_dashboard, aws_cloudwatch_event_rule, aws_cloudwatch_event_target, aws_cloudwatch_metric_alarm
- codebuild: aws_codebuild_project
- codecommit: aws_codecommit_repository
- codedeploy: aws_codedeploy_app
- codepipeline: aws_codepipeline, aws_codepipeline_webhook
- cognito: aws_cognito_identity_pool, aws_cognito_user_pool
- customer_gateway: aws_customer_gateway
- config: aws_config_config_rule, aws_config_configuration_recorder, aws_config_delivery_channel
- datapipeline: aws_datapipeline_pipeline
- devicefarm: aws_devicefarm_project
- dynamodb: aws_dynamodb_table
- ec2_instance: aws_instance
- eip: aws_eip
- elasticache: aws_elasticache_cluster, aws_elasticache_parameter_group, aws_elasticache_subnet_group, aws_elasticache_replication_group
- ebs: aws_ebs_volume, aws_volume_attachment
- elastic_beanstalk: aws_elastic_beanstalk_application, aws_elastic_beanstalk_environment
- ecs: aws_ecs_cluster, aws_ecs_service, aws_ecs_task_definition
- ecr: aws_ecr_lifecycle_policy, aws_ecr_repository, aws_ecr_repository_policy
- efs: aws_efs_access_point, aws_efs_file_system, aws_efs_file_system_policy, aws_efs_mount_target
- eks: aws_eks_cluster
- elb: aws_elb
- emr: aws_emr_cluster, aws_emr_security_configuration
- eni: aws_network_interface
- es: aws_elasticsearch_domain
- firehose: aws_kinesis_firehose_delivery_stream
- glue: glue_crawler, aws_glue_catalog_database, aws_glue_catalog_table
- iam: aws_iam_group, aws_iam_group_policy, aws_iam_group_policy_attachment, aws_iam_instance_profile, aws_iam_policy, aws_iam_role, aws_iam_role_policy, aws_iam_role_policy_attachment, aws_iam_user, aws_iam_user_group_membership, aws_iam_user_policy, aws_iam_user_policy_attachment
- igw: aws_internet_gateway
- iot: aws_iot_thing, aws_iot_thing_type, aws_iot_topic_rule, aws_iot_role_alias
- kinesis: aws_kinesis_stream
- kms: aws_kms_key, aws_kms_alias, aws_kms_grant
- lambda: aws_lambda_event_source_mapping, aws_lambda_function, aws_lambda_function_event_invoke_config, aws_lambda_layer_version
- logs: aws_cloudwatch_log_group
- media_package: aws_media_package_channel
- media_store: aws_media_store_container
- msk: aws_msk_cluster
- nat: aws_nat_gateway
- nacl: aws_network_acl
- organization: aws_organizations_account, aws_organizations_organization, aws_organizations_organizational_unit, aws_organizations_policy, aws_organizations_policy_attachment
- qldb: aws_qldb_ledger
- rds: aws_db_instance, aws_db_parameter_group, aws_db_subnet_group, aws_db_option_group, aws_db_event_subscription
- resourcegroups: aws_resourcegroups_group
- route53: aws_route53_zone, aws_route53_record
- route_table: aws_route_table, aws_main_route_table_association, aws_route_table_association
- s3: aws_s3_bucket
- secretsmanager: aws_secretsmanager_secret
- securityhub: aws_securityhub_account, aws_securityhub_member, aws_securityhub_standards_subscription
- servicecatalog: aws_servicecatalog_portfolio
- ses: aws_ses_configuration_set, aws_ses_domain_identity, aws_ses_email_identity, aws_ses_receipt_rule, aws_ses_receipt_rule_set, aws_ses_template
- sfn: aws_sfn_activity, aws_sfn_state_machine
- sg: aws_security_group, aws_security_group_rule (if a rule cannot be inlined)
- sns: aws_sns_topic, aws_sns_topic_subscription
- sqs: aws_sqs_queue
- subnet: aws_subnet
- swf: aws_swf_domain
- transit_gateway: aws_ec2_transit_gateway_route_table, aws_ec2_transit_gateway_vpc_attachment
- waf: aws_waf_byte_match_set, aws_waf_geo_match_set, aws_waf_ipset, aws_waf_rate_based_rule, aws_waf_regex_match_set, aws_waf_regex_pattern_set, aws_waf_rule, aws_waf_rule_group, aws_waf_size_constraint_set, aws_waf_sql_injection_match_set, aws_waf_web_acl, aws_waf_xss_match_set
- waf_regional: aws_wafregional_byte_match_set, aws_wafregional_geo_match_set, aws_wafregional_ipset, aws_wafregional_rate_based_rule, aws_wafregional_regex_match_set, aws_wafregional_regex_pattern_set, aws_wafregional_rule, aws_wafregional_rule_group, aws_wafregional_size_constraint_set, aws_wafregional_sql_injection_match_set, aws_wafregional_web_acl, aws_wafregional_xss_match_set
- vpc: aws_vpc
- vpc_peering: aws_vpc_peering_connection
- vpn_connection: aws_vpn_connection
- vpn_gateway: aws_vpn_gateway
- workspaces: aws_workspaces_directory, aws_workspaces_ip_group, aws_workspaces_workspace
- xray: aws_xray_sampling_rule
Global AWS services are imported without a specified region, even if several regions are passed. This ensures only one representation of each AWS resource is imported.
List of global AWS services:
budgets, cloudfront, iam, organization, route53, waf
Attribute filters allow filtering across different resource types by their attributes.
terraformer import aws --resources=ec2_instance,ebs --filter="Name=tags.costCenter;Value=20000:'20001:1'" --regions=eu-west-1
This will import only the AWS EC2 instances and EBS volumes annotated with a costCenter tag whose value is 20000 or 20001:1. Attribute filters apply to all resource types by default, but you can restrict a filter to a specific resource type by adding a Type=<type> parameter. For example:
terraformer import aws --resources=ec2_instance,ebs --filter=Type=ec2_instance;Name=tags.costCenter;Value=20000:'20001:1' --regions=eu-west-1
This works the same as the example above, except that the filter applies only to ec2_instance resources.
Because API Gateway generates a lot of resources, it's possible to issue a filtering query to retrieve resources related to a given REST API by tags. To fetch resources related to a REST API with a tag STAGE and value dev, add the parameter --filter="Type=api_gateway_rest_api;Name=tags.STAGE;Value=dev".
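A full command using this filter might look like the following (the region is illustrative):

terraformer import aws --resources=api_gateway --filter="Type=api_gateway_rest_api;Name=tags.STAGE;Value=dev" --regions=eu-west-1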
Terraformer uses the AWS ListQueues API call to fetch available queues. The API returns at most 1,000 queues, so an additional name prefix should be passed to filter the list results. You can pass the QueueNamePrefix parameter via the SQS_PREFIX environment variable.
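For example, assuming your queue names share a common prefix (the prefix here is illustrative):

export SQS_PREFIX=prod-
terraformer import aws --resources=sqs --regions=eu-west-1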
By default, Terraformer tries to keep rules inline in security groups as long as no circular dependencies are detected. This approach keeps the rules as tidy as possible, but there can be cases where this behaviour is not desirable (see GoogleCloudPlatform/terraformer#493). To make Terraformer split rules out of security groups, set the SPLIT_SG_RULES environment variable to any value.
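For example:

export SPLIT_SG_RULES=1
terraformer import aws --resources=sg --regions=eu-west-1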
Supports Azure CLI, Service Principal with Client Certificate, and Service Principal with Client Secret authentication.
Example:
# Using Azure CLI (az login)
export ARM_SUBSCRIPTION_ID=[SUBSCRIPTION_ID]
# Using Service Principal with Client Certificate
export ARM_SUBSCRIPTION_ID=[SUBSCRIPTION_ID]
export ARM_CLIENT_ID=[CLIENT_ID]
export ARM_CLIENT_CERTIFICATE_PATH="/path/to/my/client/certificate.pfx"
export ARM_CLIENT_CERTIFICATE_PASSWORD=[CLIENT_CERTIFICATE_PASSWORD]
export ARM_TENANT_ID=[TENANT_ID]
# Service Principal with Client Secret
export ARM_SUBSCRIPTION_ID=[SUBSCRIPTION_ID]
export ARM_CLIENT_ID=[CLIENT_ID]
export ARM_CLIENT_SECRET=[CLIENT_SECRET]
export ARM_TENANT_ID=[TENANT_ID]
./terraformer import azure -r resource_group
./terraformer import azure -R my_resource_group -r virtual_network,resource_group
List of supported Azure resources:
- analysis: azurerm_analysis_services_server
- app_service: azurerm_app_service
- container: azurerm_container_group, azurerm_container_registry, azurerm_container_registry_webhook
- cosmosdb: azurerm_cosmosdb_account, azurerm_cosmosdb_sql_container, azurerm_cosmosdb_sql_database, azurerm_cosmosdb_table
- database: azurerm_mariadb_configuration, azurerm_mariadb_database, azurerm_mariadb_firewall_rule, azurerm_mariadb_server, azurerm_mariadb_virtual_network_rule, azurerm_mysql_configuration, azurerm_mysql_database, azurerm_mysql_firewall_rule, azurerm_mysql_server, azurerm_mysql_virtual_network_rule, azurerm_postgresql_configuration, azurerm_postgresql_database, azurerm_postgresql_firewall_rule, azurerm_postgresql_server, azurerm_postgresql_virtual_network_rule, azurerm_sql_database, azurerm_sql_active_directory_administrator, azurerm_sql_elasticpool, azurerm_sql_failover_group, azurerm_sql_firewall_rule, azurerm_sql_server, azurerm_sql_virtual_network_rule
- disk: azurerm_managed_disk
- dns: azurerm_dns_a_record, azurerm_dns_aaaa_record, azurerm_dns_caa_record, azurerm_dns_cname_record, azurerm_dns_mx_record, azurerm_dns_ns_record, azurerm_dns_ptr_record, azurerm_dns_srv_record, azurerm_dns_txt_record, azurerm_dns_zone
- load_balancer: azurerm_lb, azurerm_lb_backend_address_pool, azurerm_lb_nat_rule, azurerm_lb_probe
- network_interface: azurerm_network_interface
- network_security_group: azurerm_network_security_group
- private_dns: azurerm_private_dns_a_record, azurerm_private_dns_aaaa_record, azurerm_private_dns_cname_record, azurerm_private_dns_mx_record, azurerm_private_dns_ptr_record, azurerm_private_dns_srv_record, azurerm_private_dns_txt_record, azurerm_private_dns_zone, azurerm_private_dns_zone_virtual_network_link
- public_ip: azurerm_public_ip, azurerm_public_ip_prefix
- redis: azurerm_redis_cache
- resource_group: azurerm_resource_group
- scaleset: azurerm_virtual_machine_scale_set
- security_center: azurerm_security_center_contact, azurerm_security_center_subscription_pricing
- storage_account: azurerm_storage_account, azurerm_storage_blob, azurerm_storage_container
- virtual_machine: azurerm_virtual_machine
- virtual_network: azurerm_virtual_network
You can either edit your alicloud config file directly (usually ~/.aliyun/config.json) or run aliyun configure and enter the credentials when prompted.
Terraformer will pick up the profile name specified in the --profile parameter. It defaults to the first config in the config array.
terraformer import alicloud --resources=ecs --regions=ap-southeast-3 --profile=default
List of supported AliCloud resources:
- dns: alicloud_dns, alicloud_dns_record
- ecs: alicloud_instance
- keypair: alicloud_key_pair
- nat: alicloud_nat_gateway
- pvtz: alicloud_pvtz_zone, alicloud_pvtz_zone_attachment, alicloud_pvtz_zone_record
- ram: alicloud_ram_role, alicloud_ram_role_policy_attachment
- rds: alicloud_db_instance
- sg: alicloud_security_group, alicloud_security_group_rule
- slb: alicloud_slb, alicloud_slb_server_group, alicloud_slb_listener
- vpc: alicloud_vpc
- vswitch: alicloud_vswitch
If you want to run Terraformer with the IBM Cloud provider plugin on your system, complete the following steps:
- Export the IBM Cloud API key as environment variables. Example:
export IC_API_KEY=<IBMCLOUD_API_KEY>
export IC_REGION=<IBMCLOUD_REGION>
terraformer import ibm -r ibm_cos,ibm_iam....
- Use the Resource Group flag to classify resources accordingly. Example:
export IC_API_KEY=<IBMCLOUD_API_KEY>
export IC_REGION=<IBMCLOUD_REGION>
terraformer import ibm --resources=ibm_is_vpc --resource_group=a0d5213d831a454ebace7ed38ca9c8ca
terraformer import ibm --resources=ibm_function --region=us-south
List of supported IBM Cloud resources:
- ibm_kp: ibm_resource_instance, ibm_kms_key
- ibm_cos: ibm_resource_instance, ibm_cos_bucket
- ibm_iam: ibm_iam_user_policy, ibm_iam_access_group, ibm_iam_access_group_members, ibm_iam_access_group_policy, ibm_iam_access_group_dynamic_rule
- ibm_container_vpc_cluster: ibm_container_vpc_cluster, ibm_container_vpc_worker_pool
- ibm_database_etcd: ibm_database
- ibm_database_mongo: ibm_database
- ibm_database_postgresql: ibm_database
- ibm_database_rabbitmq: ibm_database
- ibm_database_redis: ibm_database
- ibm_is_instance_group: ibm_is_instance_group, ibm_is_instance_group_manager, ibm_is_instance_group_manager_policy
- ibm_cis: ibm_cis, ibm_cis_dns_record, ibm_cis_firewall, ibm_cis_domain_settings, ibm_cis_global_load_balancer, ibm_cis_edge_functions_action, ibm_cis_edge_functions_trigger, ibm_cis_healthcheck, ibm_cis_rate_limit
- ibm_is_vpc: ibm_is_vpc, ibm_is_vpc_address_prefix, ibm_is_vpc_route, ibm_is_vpc_routing_table, ibm_is_vpc_routing_table_route, ibm_is_subnet, ibm_is_instance, ibm_is_security_group, ibm_is_security_group_rule, ibm_is_network_acl, ibm_is_public_gateway, ibm_is_volume, ibm_is_vpn_gateway, ibm_is_vpn_gateway_connections, ibm_is_lb, ibm_is_lb_pool, ibm_is_lb_pool_member, ibm_is_lb_listener, ibm_is_lb_listener_policy, ibm_is_lb_listener_policy_rule, ibm_is_floating_ip, ibm_is_flow_log, ibm_is_ike_policy, ibm_is_image, ibm_is_instance_template, ibm_is_ipsec_policy, ibm_is_ssh_key
- ibm_function: ibm_function_package, ibm_function_action, ibm_function_rule, ibm_function_trigger
- ibm_private_dns: ibm_resource_instance, ibm_dns_zone, ibm_dns_resource_record, ibm_dns_permitted_network, ibm_dns_glb_monitor, ibm_dns_glb_pool, ibm_dns_glb
Example:
export DIGITALOCEAN_TOKEN=[DIGITALOCEAN_TOKEN]
./terraformer import digitalocean -r project,droplet
List of supported DigitalOcean resources:
- cdn: digitalocean_cdn
- certificate: digitalocean_certificate
- database_cluster: digitalocean_database_cluster, digitalocean_database_connection_pool, digitalocean_database_db, digitalocean_database_replica, digitalocean_database_user
- domain: digitalocean_domain, digitalocean_record
- droplet: digitalocean_droplet
- droplet_snapshot: digitalocean_droplet_snapshot
- firewall: digitalocean_firewall
- floating_ip: digitalocean_floating_ip
- kubernetes_cluster: digitalocean_kubernetes_cluster, digitalocean_kubernetes_node_pool
- loadbalancer: digitalocean_loadbalancer
- project: digitalocean_project
- ssh_key: digitalocean_ssh_key
- tag: digitalocean_tag
- volume: digitalocean_volume
- volume_snapshot: digitalocean_volume_snapshot
Example:
export FASTLY_API_KEY=[FASTLY_API_KEY]
export FASTLY_CUSTOMER_ID=[FASTLY_CUSTOMER_ID]
./terraformer import fastly -r service_v1,user
List of supported Fastly resources:
- service_v1: fastly_service_acl_entries_v1, fastly_service_dictionary_items_v1, fastly_service_dynamic_snippet_content_v1, fastly_service_v1
- user: fastly_user_v1
Example:
export HEROKU_EMAIL=[HEROKU_EMAIL]
export HEROKU_API_KEY=[HEROKU_API_KEY]
./terraformer import heroku -r app,addon
List of supported Heroku resources:
- account_feature: heroku_account_feature
- addon: heroku_addon
- addon_attachment: heroku_addon_attachment
- app: heroku_app
- app_config_association: heroku_app_config_association
- app_feature: heroku_app_feature
- app_webhook: heroku_app_webhook
- build: heroku_build
- cert: heroku_cert
- domain: heroku_domain
- drain: heroku_drain
- formation: heroku_formation
- pipeline: heroku_pipeline
- pipeline_coupling: heroku_pipeline_coupling
- team_collaborator: heroku_team_collaborator
- team_member: heroku_team_member
Example:
export LINODE_TOKEN=[LINODE_TOKEN]
./terraformer import linode -r instance
List of supported Linode resources:
- domain: linode_domain, linode_domain_record
- image: linode_image
- instance: linode_instance
- nodebalancer: linode_nodebalancer, linode_nodebalancer_config, linode_nodebalancer_node
- rdns: linode_rdns
- sshkey: linode_sshkey
- stackscript: linode_stackscript
- token: linode_token
- volume: linode_volume
Example:
$ export NS1_APIKEY=[NS1_APIKEY]
$ terraformer import ns1 -r zone,monitoringjob,team
List of supported NS1 resources:
- zone: ns1_zone
- monitoringjob: ns1_monitoringjob
- team: ns1_team
Example:
terraformer import openstack --resources=compute,networking --regions=RegionOne
List of supported OpenStack services:
- blockstorage: openstack_blockstorage_volume_v1, openstack_blockstorage_volume_v2, openstack_blockstorage_volume_v3
- compute: openstack_compute_instance_v2
- networking: openstack_networking_secgroup_v2, openstack_networking_secgroup_rule_v2
Example:
$ export TENCENTCLOUD_SECRET_ID=<SECRET_ID>
$ export TENCENTCLOUD_SECRET_KEY=<SECRET_KEY>
$ terraformer import tencentcloud --resources=cvm,cbs --regions=ap-guangzhou
List of supported TencentCloud services:
- as: tencentcloud_as_scaling_group, tencentcloud_as_scaling_config
- cbs: tencentcloud_cbs_storage
- cdn: tencentcloud_cdn_domain
- cfs: tencentcloud_cfs_file_system
- clb: tencentcloud_clb_instance
- cos: tencentcloud_cos_bucket
- cvm: tencentcloud_instance
- elasticsearch: tencentcloud_elasticsearch_instance
- gaap: tencentcloud_gaap_proxy, tencentcloud_gaap_realserver
- key_pair: tencentcloud_key_pair
- mongodb: tencentcloud_mongodb_instance
- mysql: tencentcloud_mysql_instance, tencentcloud_mysql_readonly_instance
- redis: tencentcloud_redis_instance
- scf: tencentcloud_scf_function
- security_group: tencentcloud_security_group
- ssl: tencentcloud_ssl_certificate
- subnet: tencentcloud_subnet
- tcaplus: tencentcloud_tcaplus_cluster
- vpc: tencentcloud_vpc
- vpc: tencentcloud_vpn_gateway
Example:
export VULTR_API_KEY=[VULTR_API_KEY]
./terraformer import vultr -r server
List of supported Vultr resources:
- bare_metal_server: vultr_bare_metal_server
- block_storage: vultr_block_storage
- dns_domain: vultr_dns_domain, vultr_dns_record
- firewall_group: vultr_firewall_group, vultr_firewall_rule
- network: vultr_network
- reserved_ip: vultr_reserved_ip
- server: vultr_server
- snapshot: vultr_snapshot
- ssh_key: vultr_ssh_key
- startup_script: vultr_startup_script
- user: vultr_user
Example:
export YC_TOKEN=[YANDEX_CLOUD_OAUTH_TOKEN]
export YC_FOLDER_ID=[YANDEX_FOLDER_ID]
./terraformer import yandex -r subnet
List of supported Yandex resources:
- instance: yandex_compute_instance
- disk: yandex_compute_disk
- subnet: yandex_vpc_subnet
- network: yandex_vpc_network
Your tf and tfstate files are written by default to
generated/yandex/service.
Example:
terraformer import kubernetes --resources=deployments,services,storageclasses
terraformer import kubernetes --resources=deployments,services,storageclasses --filter=deployment=name1:name2:name3
All Kubernetes resources that are currently supported by the Kubernetes provider are also supported by this module. Here is the list of resources currently supported by Kubernetes provider v1.4:
- clusterrolebinding: kubernetes_cluster_role_binding
- configmaps: kubernetes_config_map
- deployments: kubernetes_deployment
- horizontalpodautoscalers: kubernetes_horizontal_pod_autoscaler
- limitranges: kubernetes_limit_range
- namespaces: kubernetes_namespace
- persistentvolumes: kubernetes_persistent_volume
- persistentvolumeclaims: kubernetes_persistent_volume_claim
- pods: kubernetes_pod
- replicationcontrollers: kubernetes_replication_controller
- resourcequotas: kubernetes_resource_quota
- secrets: kubernetes_secret
- services: kubernetes_service
- serviceaccounts: kubernetes_service_account
- statefulsets: kubernetes_stateful_set
- storageclasses: kubernetes_storage_class
- The Terraform Kubernetes provider rejects resources with ":" characters in their names (they don't meet DNS-1123), while such names are allowed for certain types in Kubernetes, e.g. ClusterRoleBinding.
- Because Terraform flatmap uses "." to detect keys when unflattening maps, some keys with "." in their names are mistakenly treated as maps.
- Since the library assumes empty strings to be empty values (not "0"), there are some issues with optional integer keys that are restricted to be positive.
Example:
export OCTOPUS_CLI_SERVER=http://localhost:8081/
export OCTOPUS_CLI_API_KEY=API-CK7DQ8BMJCUUBSHAJCDIATXUO
terraformer import octopusdeploy --resources=tagsets
List of supported Octopus Deploy resources:
- accounts: octopusdeploy_account
- certificates: octopusdeploy_certificate
- environments: octopusdeploy_environment
- feeds: octopusdeploy_feed
- libraryvariablesets: octopusdeploy_library_variable_set
- lifecycle: octopusdeploy_lifecycle
- project: octopusdeploy_project
- projectgroups: octopusdeploy_project_group
- projecttriggers: octopusdeploy_project_deployment_target_trigger
- tagsets: octopusdeploy_tag_set
Example:
export RABBITMQ_SERVER_URL=http://foo.bar.localdomain:15672
export RABBITMQ_USERNAME=[RABBITMQ_USERNAME]
export RABBITMQ_PASSWORD=[RABBITMQ_PASSWORD]
terraformer import rabbitmq --resources=vhosts,queues,exchanges
terraformer import rabbitmq --resources=vhosts,queues,exchanges --filter=vhost=name1:name2:name3
All RabbitMQ resources that are currently supported by the RabbitMQ provider are also supported by this module. Here is the list of resources currently supported by RabbitMQ provider v1.1.0:
- bindings: rabbitmq_binding
- exchanges: rabbitmq_exchange
- permissions: rabbitmq_permissions
- policies: rabbitmq_policy
- queues: rabbitmq_queue
- users: rabbitmq_user
- vhosts: rabbitmq_vhost
Example using a Cloudflare API Key and corresponding email:
export CLOUDFLARE_API_KEY=[CLOUDFLARE_API_KEY]
export CLOUDFLARE_EMAIL=[CLOUDFLARE_EMAIL]
export CLOUDFLARE_ACCOUNT_ID=[CLOUDFLARE_ACCOUNT_ID]
./terraformer import cloudflare --resources=firewall,dns
or using a Cloudflare API Token:
export CLOUDFLARE_API_TOKEN=[CLOUDFLARE_API_TOKEN]
export CLOUDFLARE_ACCOUNT_ID=[CLOUDFLARE_ACCOUNT_ID]
./terraformer import cloudflare --resources=firewall,dns
List of supported Cloudflare services:
- access: cloudflare_access_application
- dns: cloudflare_zone, cloudflare_record
- firewall: cloudflare_access_rule, cloudflare_filter, cloudflare_firewall_rule, cloudflare_zone_lockdown, cloudflare_rate_limit
- page_rule: cloudflare_page_rule
- account_member: cloudflare_account_member
Example:
./terraformer import github --organizations=YOUR_ORGANIZATION --resources=repositories --token=YOUR_TOKEN // or GITHUB_TOKEN in env
./terraformer import github --organizations=YOUR_ORGANIZATION --resources=repositories --filter=repository=id1:id2:id4 --token=YOUR_TOKEN // or GITHUB_TOKEN in env
Supports only organizational resources. List of supported resources:
- members: github_membership
- organization_blocks: github_organization_block
- organization_projects: github_organization_project
- organization_webhooks: github_organization_webhook
- repositories: github_repository, github_repository_webhook, github_branch_protection, github_repository_collaborator, github_repository_deploy_key
- teams: github_team, github_team_membership, github_team_repository
- user_ssh_keys: github_user_ssh_key
Notes:
- Terraformer can't get webhook secrets from the GitHub API. If you use a secret token in any of your webhooks, running terraform plan will result in a change being detected: configuration.#: "1" => "0" in tfstate only.
Example:
./terraformer import datadog --resources=monitor --api-key=YOUR_DATADOG_API_KEY // or DATADOG_API_KEY in env --app-key=YOUR_DATADOG_APP_KEY // or DATADOG_APP_KEY in env --api-url=DATADOG_API_URL // or DATADOG_HOST in env
./terraformer import datadog --resources=monitor --filter=monitor=id1:id2:id4 --api-key=YOUR_DATADOG_API_KEY // or DATADOG_API_KEY in env --app-key=YOUR_DATADOG_APP_KEY // or DATADOG_APP_KEY in env
List of supported Datadog services:
- dashboard: datadog_dashboard
- dashboard_list: datadog_dashboard_list
- downtime: datadog_downtime
- logs_archive: datadog_logs_archive
- logs_archive_order: datadog_logs_archive_order
- logs_custom_pipeline: datadog_logs_custom_pipeline
- logs_integration_pipeline: datadog_logs_integration_pipeline
- logs_pipeline_order: datadog_logs_pipeline_order
- logs_index: datadog_logs_index
- logs_index_order: datadog_logs_index_order
- integration_aws: datadog_integration_aws
- integration_aws_lambda_arn: datadog_integration_aws_lambda_arn
- integration_aws_log_collection: datadog_integration_aws_log_collection
- integration_azure: datadog_integration_azure (NOTE: the sensitive field client_secret is not generated and needs to be set manually)
- integration_gcp: datadog_integration_gcp (NOTE: the sensitive fields private_key, private_key_id, and client_id are not generated and need to be set manually)
- integration_pagerduty: datadog_integration_pagerduty
- integration_pagerduty_service_object: datadog_integration_pagerduty_service_object
- metric_metadata: datadog_metric_metadata (NOTE: importing this resource requires resource IDs to be passed via the filter option)
- monitor: datadog_monitor
- role: datadog_role
- screenboard: datadog_screenboard
- security_monitoring_default_rule: datadog_security_monitoring_default_rule
- security_monitoring_rule: datadog_security_monitoring_rule
- service_level_objective: datadog_service_level_objective (NOTE: importing this resource requires resource IDs to be passed via the filter option)
- synthetics: datadog_synthetics_test
- synthetics_global_variables: datadog_synthetics_global_variables (NOTE: importing this resource requires resource IDs to be passed via the filter option)
- synthetics_private_location: datadog_synthetics_private_location
- timeboard: datadog_timeboard
- user: datadog_user
Example:
./terraformer import newrelic -r alert,dashboard,infra,synthetics --api-key=NRAK-XXXXXXXX --account-id=XXXXX
List of supported New Relic resources:
- alert: newrelic_alert_channel, newrelic_alert_condition, newrelic_alert_policy
- dashboard: newrelic_dashboard
- infra: newrelic_infra_alert_condition
- synthetics: newrelic_synthetics_monitor, newrelic_synthetics_alert_condition
Example:
export KEYCLOAK_URL=https://foo.bar.localdomain
export KEYCLOAK_CLIENT_ID=[KEYCLOAK_CLIENT_ID]
export KEYCLOAK_CLIENT_SECRET=[KEYCLOAK_CLIENT_SECRET]
terraformer import keycloak --resources=realms
terraformer import keycloak --resources=realms --filter=realm=name1:name2:name3
terraformer import keycloak --resources=realms --targets realmA,realmB
Here is the list of resources which are currently supported by Keycloak provider v.1.19.0:
- realms: keycloak_default_groups, keycloak_group, keycloak_group_memberships, keycloak_group_roles, keycloak_ldap_full_name_mapper, keycloak_ldap_group_mapper, keycloak_ldap_hardcoded_group_mapper, keycloak_ldap_hardcoded_role_mapper, keycloak_ldap_msad_lds_user_account_control_mapper, keycloak_ldap_msad_user_account_control_mapper, keycloak_ldap_user_attribute_mapper, keycloak_ldap_user_federation, keycloak_openid_audience_protocol_mapper, keycloak_openid_client, keycloak_openid_client_default_scopes, keycloak_openid_client_optional_scopes, keycloak_openid_client_scope, keycloak_openid_client_service_account_role, keycloak_openid_full_name_protocol_mapper, keycloak_openid_group_membership_protocol_mapper, keycloak_openid_hardcoded_claim_protocol_mapper, keycloak_openid_hardcoded_group_protocol_mapper, keycloak_openid_hardcoded_role_protocol_mapper (only for client roles), keycloak_openid_user_attribute_protocol_mapper, keycloak_openid_user_property_protocol_mapper, keycloak_openid_user_realm_role_protocol_mapper, keycloak_openid_user_client_role_protocol_mapper, keycloak_openid_user_session_note_protocol_mapper, keycloak_realm, keycloak_required_action, keycloak_role, keycloak_user
Example:
LOGZIO_API_TOKEN=foobar LOGZIO_BASE_URL=https://api-eu.logz.io ./terraformer import logzio -r=alerts,alert_notification_endpoints // Import Logz.io alerts and alert notification endpoints
List of supported Logz.io resources:
- alerts: logzio_alert
- alert_notification_endpoints: logzio_endpoint
Use with Commercetools
This provider uses the terraform-provider-commercetools. The Terraformer provider was built by Dustin Deus.
Example:
CTP_CLIENT_ID=foo CTP_CLIENT_SCOPE=scope CTP_CLIENT_SECRET=bar CTP_PROJECT_KEY=key ./terraformer plan commercetools -r=types // Only planning
CTP_CLIENT_ID=foo CTP_CLIENT_SCOPE=scope CTP_CLIENT_SECRET=bar CTP_PROJECT_KEY=key ./terraformer import commercetools -r=types // Import commercetools types
List of supported commercetools resources:
- api_extension: commercetools_api_extension
- channel: commercetools_channel
- product_type: commercetools_product_type
- shipping_method: commercetools_shipping_method
- shipping_zone: commercetools_shipping_zone
- state: commercetools_state
- store: commercetools_store
- subscription: commercetools_subscription
- tax_category: commercetools_tax_category
- types: commercetools_type
Use with Mikrotik
This provider uses the terraform-provider-mikrotik. The terraformer provider was built by Dom Del Nano.
Example:
## Warning! You should not expose your mikrotik creds through your bash history. Export them to your shell in a safe way when doing this for real!
MIKROTIK_HOST=router-hostname:8728 MIKROTIK_USER=username MIKROTIK_PASSWORD=password terraformer import mikrotik -r=dhcp_lease
# Import only static IPs
MIKROTIK_HOST=router-hostname:8728 MIKROTIK_USER=username MIKROTIK_PASSWORD=password terraformer import mikrotik -r=dhcp_lease --filter='Name=dynamic;Value=false'
List of supported mikrotik resources:
mikrotik_dhcp_lease
Use with Xen Orchestra
This provider uses the terraform-provider-xenorchestra. The terraformer provider was built by Dom Del Nano on behalf of Vates SAS who is sponsoring Dom to work on the project.
Example:
## Warning! You should not expose your xenorchestra creds through your bash history. Export them to your shell in a safe way when doing this for real!
XOA_URL=ws://your-xenorchestra-domain XOA_USER=username XOA_PASSWORD=password terraformer import xenorchestra -r=acl
List of supported xenorchestra resources:
xenorchestra_acl, xenorchestra_resource_set
Supports using Service Accounts or Application Default Credentials.
Example:
# Using Service Accounts
export GOOGLE_CREDENTIALS=/path/to/client_secret.json
export IMPERSONATED_USER_EMAIL="[email protected]"
# Using Application Default Credentials
gcloud auth application-default login \
--client-id-file=client_secret.json \
--scopes \
https://www.googleapis.com/auth/gmail.labels,\
https://www.googleapis.com/auth/gmail.settings.basic
./terraformer import gmailfilter -r=filter,label
List of supported GmailFilter resources:
- label: gmailfilter_label
- filter: gmailfilter_filter
If you have improvements or fixes, we would love to have your contributions. Please read CONTRIBUTING.md for more information on the process we would like contributors to follow.
Terraformer was built so you can easily add new providers of any kind.
Process for generating tf/json + tfstate files:
- Call the GCP/AWS/other API and get a list of resources.
- Iterate over the resources and take only the ID (we don't need mapping fields!).
- Call the provider for read-only fields.
- Call the infrastructure and take tf + tfstate.
- Call the provider using the refresh method and get all of the data.
- Convert the refreshed data to a Go struct.
- Generate HCL files - tf/json files.
- Generate tfstate files.
All resource mapping is done by the providers and Terraform. Upgrades are needed only for the providers.
For GCP compute resources, use generated code from
providers/gcp/gcp_compute_code_generator.
To regenerate code:
go run providers/gcp/gcp_compute_code_generator/*.go
- Simpler to add new providers and resources - already supports AWS, GCP, GitHub, Kubernetes, and Openstack. Terraforming supports only AWS.
- Better support for HCL + tfstate, including updates for Terraform 0.12.
- If a provider adds new attributes to a resource, there is no need to change Terraformer code - just update the Terraform provider on your laptop.
- Automatically supports connections between resources in HCL files.
Terraforming gets all attributes from cloud APIs and creates HCL and tfstate files with templating. Each attribute in the API needs to map to an attribute in Terraform. Files generated from templates can contain broken, illegal syntax, and when a provider adds new attributes the Terraforming code needs to be updated.
Terraformer instead uses Terraform provider files for mapping attributes, HashiCorp's HCL library, and Terraform code.
Compare S3 support in Terraforming with the official S3 support in Terraform (links below): Terraforming lacks full coverage for resources - as an example, you can see that around 70% of S3 options are not supported:
- terraforming - https://github.com/dtan4/terraforming/blob/master/lib/terraforming/template/tf/s3.erb
- official S3 support - https://www.terraform.io/docs/providers/aws/r/s3_bucket.html