Less than two months have passed, and we already have a new version of the HashiCorp Terraform vCloud Director Provider: v2.2.0! You can now download it automatically via the `terraform init` command and find the corresponding documentation on the HashiCorp Terraform website:
https://www.terraform.io/docs/providers/vcd/index.html
The primary goal of this release was to merge all of the open community contributions (a friendly reminder that the Terraform vCD Provider is open source). As such, this release packs the following main features.
Note that if you’d like to read about Terraform vCloud Director Provider in general, please see the previous article as well:
vCloud Director Embraces Terraform
v2.2.0 Features
First of all, there are two completely new resources for provider-level operations (i.e. for the system administrator):
- New resource `vcd_external_network` – for automating creation of External Networks (see the sketch after this list)
- New resource `vcd_org_vdc` – for automating creation of Organization VDCs
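To give a quick idea of the first one, here is a minimal sketch of an External Network definition, loosely based on the resource documentation. The vCenter name, port group, and IP addresses below are placeholders, so treat this as an illustration rather than a copy-paste recipe:

```
# v2.2.0+ (requires system administrator credentials)
resource "vcd_external_network" "extnet" {
  name        = "TfExternalNetwork"
  description = "Terraform created external network"

  # The backing vSphere port group (placeholder names)
  vsphere_network {
    vcenter = "vc1"
    name    = "extnet-dvpg"
    type    = "DV_PORTGROUP"
  }

  # IP settings exposed to Organization VDCs
  ip_scope {
    gateway = "192.168.30.49"
    netmask = "255.255.255.240"
    dns1    = "192.168.30.1"

    static_ip_pool {
      start_address = "192.168.30.51"
      end_address   = "192.168.30.62"
    }
  }
}
```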
Then, the existing `vcd_vapp_vm` resource received a major improvement in the way it handles networks. As an outcome, it's now possible to add multiple NICs to Virtual Machines, and you also get access to the MAC addresses:
- New argument `vcd_vapp_vm.network` for multiple NIC support and more flexible configuration
- New argument `vcd_vapp_vm.network.X.mac` for storing the MAC address in the Terraform state file (see the snippet below)
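For example, once a VM is created, you could surface the MAC address of its first NIC via an output. The output name here is arbitrary, and `vm11` refers to the VM resource defined in the example later in this article:

```
# Expose the MAC address of the first "network" block (NIC 0)
# of the "vcd_vapp_vm" resource named "vm11"
output "vm11_first_nic_mac" {
  value = "${vcd_vapp_vm.vm11.network.0.mac}"
}
```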
Another feature in `vcd_vapp_vm` and the related `vcd_vapp` resource is the ability to set metadata for a vApp and its Virtual Machines separately:
- New argument `vcd_vapp_vm.metadata` for the ability to add metadata to a VM
- Improved `vcd_vapp.metadata` argument to add metadata to the vApp directly (see the sketch below)
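As a small, hypothetical illustration of the vApp side (the vApp name and metadata keys here are made up for this example):

```
resource "vcd_vapp" "web" {
  name = "TfVApp"

  # v2.2.0+ - this metadata lands on the vApp itself,
  # not on its member VMs
  metadata {
    owner = "ops-team"
    env   = "staging"
  }
}
```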
Moreover, as a little cherry on top, the same `vcd_vapp_vm` resource received a new flag for running hypervisors inside a VM (hypervisor nesting):
- New argument `vcd_vapp_vm.expose_hardware_virtualization` for the ability to enable hardware-assisted CPU virtualization
Last but not least, there's also a handful of additions to the test suite to help avoid unwanted bugs. Of course, it's the committers who will feel this first. For instance, we've added test grouping by tags to support selective and parallel runs for the growing suite. In this context, if you like the Go programming language and developing cloud automation tools, please consider joining our open-source community with a code contribution!
Please also see our changelog for details with links to related GitHub pull requests.
Now let's look at examples of how you can use some of these new features.
Example of configuring an Org VDC
To begin with, let’s say we’re a provider and want to automate creation of Organization VDCs. It’s fairly easy to define an Organization VDC, but there are three points to be aware of.
- First, use system administrator (as opposed to org administrator) credentials in the `provider` section of the Terraform template
- Then, there are three allocation models supported in the `allocation_model` field, but one of their names differs from the one in the vCD GUI:
  - `AllocationPool` – “Allocation pool”
  - `ReservationPool` – “Reservation pool”
  - `AllocationVApp` – “Pay as you go” (!) This name comes from the vCD API and reflects that with the “AllocationVApp” model, resources are committed to a VDC only when vApps are created
- Last, there are two choices for which argument to use in the `compute_capacity` block:
  - `limit` is used with the `AllocationVApp` model
  - `allocated` is used with the `AllocationPool` and `ReservationPool` models
So, let's take a look at an example for a VDC with the `AllocationVApp` (“Pay as you go”) model. Please note the in-line comments, which reflect the above notes.
```
provider "vcd" {
  url                  = "https://${var.vcd_host}/api"
  user                 = "administrator" # Need system administrator privileges
  org                  = "System"        # Connecting to System because we're administrator
  password             = "${var.admin_pass}"
  allow_unverified_ssl = "true"
}

# v2.2.0+
resource "vcd_org_vdc" "vdc-pay" {
  org         = "myorg" # Tell in which Org to create the VDC
  name        = "TfPayVDC"
  description = "Terraform created VDC"

  allocation_model  = "AllocationVApp" # The "Pay as you go" in vCD GUI
  network_pool_name = "vc1-TestbedCluster-VXLAN-NP"
  provider_vdc_name = "vc1-TestbedCluster"
  network_quota     = 10

  memory_guaranteed = 0.55 # Translates to percentage 55%
  cpu_guaranteed    = 0.50 # Translates to percentage 50%

  compute_capacity {
    cpu {
      # For "ReservationPool" and "AllocationPool" models, use "allocated" instead of "limit"
      limit = 3123
    }

    memory {
      # For "ReservationPool" and "AllocationPool" models, use "allocated" instead of "limit"
      limit = 4123
    }
  }

  # Note how "storage_profile" can be defined multiple times
  storage_profile {
    name    = "*"
    limit   = 0
    enabled = true
    default = true
  }

  storage_profile {
    name    = "Development"
    limit   = 0
    enabled = true
    default = false
  }

  enable_thin_provisioning = true
  enable_fast_provisioning = true
  delete_force             = true
  delete_recursive         = true
}
```
Please see the Org VDC documentation page for more details:
https://www.terraform.io/docs/providers/vcd/r/org_vdc.html
Example of a VM with multiple networks
To continue, let’s take a look at an example snippet which defines a VM with three NICs and some advanced parameters.
- Three NICs:
  - Connected to a vApp network
  - Connected to an Org VDC network
  - Not connected at all
- Hardware-assisted CPU virtualization enabled
- Custom metadata set
It’s important to note that:
- You define more than one NIC by creating more than one `network` block
- The order of the `network` blocks is reflected in the operating system
As in the previous example, please see the in-line comments for explanations as well.
```
provider "vcd" {
  url                  = "https://${var.vcd_host}/api"
  org                  = "myorg"
  vdc                  = "myorgvdc"
  user                 = "orgadmin" # In this case we *don't* need system administrator privileges
  password             = "${var.org_pass}"
  allow_unverified_ssl = "true"
}

resource "vcd_vapp_vm" "vm11" {
  vapp_name     = "TfVApp"
  name          = "TerraformVM11"
  catalog_name  = "OperatingSystems"
  template_name = "Linux"
  memory        = 384
  cpus          = 1

  # v2.2.0+
  # We can define as many "network" blocks as needed
  network {
    type               = "vapp"
    name               = "TfVAppNet"
    ip_allocation_mode = "POOL"
    is_primary         = false
  }

  # v2.2.0+
  network {
    type               = "org"
    name               = "TfNet"
    ip                 = "192.168.0.11"
    ip_allocation_mode = "MANUAL"
    is_primary         = true
  }

  # v2.2.0+
  network {
    type               = "none" # This NIC won't be connected to a network
    ip_allocation_mode = "NONE"
  }

  # v2.2.0+
  expose_hardware_virtualization = true

  # v2.2.0+
  # The attribute names below are user-defined (they land in vCD together with the values)
  metadata {
    role    = "test"
    env     = "staging"
    version = "v2.2.0"
  }

  accept_all_eulas = "true"
}
```
Please see the VM resource documentation page for more details:
https://www.terraform.io/docs/providers/vcd/r/vapp_vm.html
Next Steps
Most importantly, please give it a try! And if you have questions, don't hesitate to ask. You can do so by joining our Slack channel #vcd-terraform-dev through VMware {code} or by filing an issue in the Terraform vCloud Director Provider repository.
Hope to hear from you!