Terraform Notes

Nuke a module

Nuke a module and all of its bits:

terraform plan -destroy -target=module.compute.module.softnas
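
Note that plan -destroy only previews what would go away; to actually remove the module you follow up with a targeted destroy, roughly (same example module address as above):

# review the plan output, then destroy just that module subtree
terraform destroy -target=module.compute.module.softnas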


A little script to clean up graphs: Cleandot
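
For context, the graphs in question are presumably the DOT output of terraform graph, which Graphviz can render, roughly:

terraform graph > graph.dot
dot -Tpng graph.dot -o graph.png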

Dev pattern

From time to time, as you learn about new cloud things †, it will be easier to use the GUI to get to know the object, its peculiarities, and its dependencies than to write the Terraform "from scratch". But you obviously want to move to a Terraform way of life the moment that object is up.

Here is how I do it:

1. Make the thing in the cloud. Maybe there is a wizard and several things get created in the process. Stay focused on one object for now.
2. Create the block in your tf files; use the example in the documentation, lean or "wrong" as it may be.
3. Import that object (see the import sketch after this list).
4. Plan.
You will get a lot of "found this" -> "will set to that".
Resolve each of those by setting your tf file to the "found this" values.
Other errors you get might be related to other dependent objects that have not been described in your tf.
Stay focused on that one new object you are adding.
5. Eventually you will "plan clean".
Commit that shit.
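
A minimal sketch of steps 2 and 3, assuming a hypothetical storage bucket that was clicked together in the console (the resource label, bucket name, and project are made up; check the provider docs for the exact import ID format of your resource type):

# lean "from the docs" block; plan will tell us which attributes to correct
resource "google_storage_bucket" "scratch" {
  name     = "my-scratch-bucket"
  location = "US"
}

terraform import google_storage_bucket.scratch my-scratch-bucket
terraform plan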

Now go back and consider all the objects related to the object that you made.

Should they be "inputs" into this TF?
Should they be included in this TF?
Should they be part of other TFs that this TF can use as a data source?

Just bite off one object at a time. Iterate. If you can think of a more elegant way, write that down as a comment in your code.

I have found that Terraform saves me so much time that I can afford to go back and make things more elegant later. And those packets of work are nice and tight.

Elegance is a journey, not a destination.

† - this process is

Dynamic content / conditional blocks

Consider the BigQuery export option for a GKE cluster:

Do we set it every time? Do we put it into a module?

Let's make an "enable" flag and use a dynamic block:

We pass to the module:

  resource_usage_export_flag    = false
  resource_usage_destination_id = google_bigquery_dataset.gke_billing_dataset.dataset_id

And then in the module:

    # the static form that the dynamic block below replaces:
    #bigquery_destination {
    #  dataset_id = var.resource_usage_destination_id
    #}

    dynamic "bigquery_destination" {
      for_each = var.resource_usage_export_flag == true ? [var.resource_usage_destination_id] : []
      content {
        dataset_id = var.resource_usage_destination_id
      }   
    }

If resource_usage_export_flag is not true, then no bigquery_destination block is rendered in the cluster resource.
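
For completeness, a minimal sketch of the matching variable declarations inside the module (the types and defaults here are assumptions based on how the values are used above):

variable "resource_usage_export_flag" {
  description = "Whether to render the bigquery_destination block."
  type        = bool
  default     = false
}

variable "resource_usage_destination_id" {
  description = "BigQuery dataset id to export cluster resource usage to."
  type        = string
  default     = ""
}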

Object as variable

In your module, define this complicated variable with its type and defaults:

modules/myappdeployment/main.tf

variable "myredis_config" {
  type = object({
    size                                    = number
    alerting_notification_channels          = list(string)
    alerting_threshold_cpu                  = number
    alerting_threshold_memory               = number
    alerting_threshold_cache_hit_ratio_high = number
    alerting_threshold_cache_hit_ratio_low  = number
    alerting_threshold_clients_connected    = number
  })  
  default = { 
    size                                    = 1 
    alerting_notification_channels          = []
    alerting_threshold_cpu                  = 0.8 
    alerting_threshold_memory               = 0.8 
    alerting_threshold_cache_hit_ratio_high = 0.9 
    alerting_threshold_cache_hit_ratio_low  = 0.5 
    alerting_threshold_clients_connected    = 100 
  }
}

and use those vars in your module:

module "myredis" {
  source                                  = "../redis/"
  name_prefix                             = "${var.env_short}-${var.product}-myredis"
  name_short                              = "${var.product}-myredis"
  memory_gb                               = var.myredis_config.size
  authorized_network                      = var.network
  dns_zone                                = var.dns_zone
  dns_zone_name                           = var.dns_zone_name
  labels                                  = var.labels
  alerting_notification_channels          = var.myredis_config.alerting_notification_channels
  alerting_threshold_cpu                  = var.myredis_config.alerting_threshold_cpu
  alerting_threshold_memory               = var.myredis_config.alerting_threshold_memory
  alerting_threshold_cache_hit_ratio_high = var.myredis_config.alerting_threshold_cache_hit_ratio_high
  alerting_threshold_cache_hit_ratio_low  = var.myredis_config.alerting_threshold_cache_hit_ratio_low
}

Then use that module and pass your vars:

module "myappdeployment" {
  source = "../../modules/myappdeployment"

  env       = var.env

  myredis_config = { 
    size = 1 
    alerting_notification_channels = [ 
      "projects/myproject/notificationChannels/XX",
      "projects/myproject/notificationChannels/YY"
    ]   
    alerting_threshold_cpu                  = 0.8 
    alerting_threshold_memory               = 0.8 
    alerting_threshold_cache_hit_ratio_high = 0.85
    alerting_threshold_cache_hit_ratio_low  = 0 
    alerting_threshold_clients_connected    = 100 
  }
}
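
One caveat worth remembering: the default above only kicks in when myredis_config is not set at all. If you pass the object, Terraform expects every attribute listed in the type to be present, unless you mark attributes as optional() (available in newer Terraform versions), so partial overrides need either optional attributes or the full object.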