Terraform Notes


nuke a module

nuke a module and all of its bits:

terraform plan -destroy -target=module.compute.module.softnas
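
plan -destroy only previews what would be removed. When the plan looks right, the matching destroy (same -target syntax, so the rest of the state is left alone) actually does it:

terraform destroy -target=module.compute.module.softnas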


a little script to clean up graphs: Cleandot

Dev pattern

From time to time, as you learn about new cloud things, it will be easier to use the GUI to get to know the object, its peculiarities, and its dependencies than to write the Terraform "from scratch". But you obviously want to switch to a Terraform way of life the moment that object is up.

Here is how I do it:

1. Make the thing in the cloud. Maybe there is a wizard and several things get created in the process. Stay focused on one object for now.
2. Create the block in your tf files; use the example in the documentation, lean or "wrong" as it may be.
3. Import that object (see the sketch after this list).
4. Plan.
You will get a lot of "found this" -> "will set to that".
Resolve each of those by setting your tf file to the "found this" values.
Other errors you get might be related to other dependent objects that have not been described in your tf.
Stay focused on that one new object you are adding.
5. Eventually you will "plan clean".
Commit that shit.
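
A minimal sketch of steps 2 through 4, assuming a hypothetical GCS bucket that already exists in the cloud and is managed here as google_storage_bucket.assets (the resource name and bucket name are made up for illustration):

<pre>
# Step 2: a lean block copied from the provider docs, probably "wrong" in places.
resource "google_storage_bucket" "assets" {
  name     = "my-existing-assets-bucket"   # hypothetical bucket that already exists
  location = "US"
}

# Step 3: pull the real object into state.
#   terraform import google_storage_bucket.assets my-existing-assets-bucket

# Step 4: plan, then copy the "found this" values back into the block above.
#   terraform plan
</pre>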

Now go back and consider all the objects related to the object that you made.

Should they be "inputs" into this TF?
Should they be included in this TF?
Should they be part of other TFs that this TF can use as a data source? (See the sketch below.)
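
A rough sketch of the three options, with made-up names (a network object, a GCS state bucket), just to show the shape of each choice:

<pre>
# Option 1: an input into this TF.
variable "network_self_link" {
  type = string
}

# Option 2: included in this TF, owned directly.
resource "google_compute_network" "main" {
  name = "main-network"
}

# Option 3: read another TF's state as a data source,
# e.g. data.terraform_remote_state.network.outputs.network_self_link
data "terraform_remote_state" "network" {
  backend = "gcs"
  config = {
    bucket = "my-tf-state-bucket"   # hypothetical state bucket
    prefix = "network"
  }
}
</pre>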

Just bite off one object at a time. Iterate. If you can think of a more elegant way, write that down as a comment in your code.

I have found that Terraform saves me so much time that I can always afford to go back and make things more elegant. And those packets of work are nice and tight.

Elegance is a journey, not a destination.

Dynamic content / conditional blocks

Consider the BigQuery export option for a GKE cluster:

Do we set it every time? Do we put it into a module?

Let's make an "enable" flag and use a "dynamic" block:

we pass to the module:

<pre>
  resource_usage_export_flag    = false
  resource_usage_destination_id = google_bigquery_dataset.gke_billing_dataset.dataset_id
</pre>

and then in the module:

<pre>
    # the old, unconditional version:
    #bigquery_destination {
    #  dataset_id = var.resource_usage_destination_id
    #}

    dynamic "bigquery_destination" {
      for_each = var.resource_usage_export_flag == true ? [var.resource_usage_destination_id] : []
      content {
        dataset_id = var.resource_usage_destination_id
      }   
    }
</pre>

If resource_usage_export_flag is not true, no bigquery_destination block is put in the cluster.
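
For completeness, the module also needs the two variables declared. A minimal sketch (the types and defaults here are my assumptions, not taken from the original module):

<pre>
variable "resource_usage_export_flag" {
  type        = bool
  default     = false   # assumed default: export is off unless asked for
  description = "Whether to add the bigquery_destination block to the cluster."
}

variable "resource_usage_destination_id" {
  type        = string
  default     = ""
  description = "BigQuery dataset id for GKE resource usage export."
}
</pre>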