I learned about Tailscale from Scott Hanselman's excellent podcast,
Hanselminutes.
Since it supports macOS, iOS / iPadOS, Linux, and more, I quickly got a simple
network created between my devices at home. The setup was completely effortless,
and I was able to communicate securely between my devices whether I was at home or away.
Awesome! I also have a small Azure Subscription that I use to host this website
and to play around with Azure itself. How hard would it be to create a small
Linux VM in Azure and join it to my Tailscale network?
It turns out Tailscale does have documentation on accessing a Linux VM in
Azure, but the steps are all
manual. Instead, I wanted to see if I could get an Azure VM created and
automatically added to my Tailscale network using Terraform. I already use
Terraform to create this site, which is an Azure Static Webapp, so here’s what
it took to get things working.
I won’t be showing all the Terraform needed to create the VM, but you can follow
the azurerm_linux_virtual_machine
docs
to create a Linux VM. The first thing I did was create a VNet and Subnet with a
Network Security Group. Since I will be using Tailscale to connect to the VM, I
want to restrict access into my Subnet.
Here’s the VNet, Subnet, and Network Security Group:
resource "azurerm_virtual_network" "lab" {
name = "lab-vnet"
location = azurerm_resource_group.lab.location
resource_group_name = azurerm_resource_group.lab.name
address_space = ["10.0.0.0/24"]
}
resource "azurerm_subnet" "lab" {
name = "lab-subn"
resource_group_name = azurerm_resource_group.lab.name
virtual_network_name = azurerm_virtual_network.lab.name
address_prefixes = ["10.0.0.0/24"]
}
resource "azurerm_network_security_group" "lab_nsg" {
name = "lab-nsg"
location = azurerm_resource_group.lab.location
resource_group_name = azurerm_resource_group.lab.name
}
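The Network Security Group only takes effect once it's associated with something, and that association isn't shown in the snippets above. A subnet-level association is one way to wire it up; a minimal sketch (the resource name "lab" is my own choice here) would look like this:

resource "azurerm_subnet_network_security_group_association" "lab" {
  # Attach the NSG to the whole lab subnet so its rules apply to the VM's NIC.
  subnet_id                 = azurerm_subnet.lab.id
  network_security_group_id = azurerm_network_security_group.lab_nsg.id
}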
The Network Security Group is set up with a single inbound rule that allows UDP
traffic on the port Tailscale uses for direct connections, as described in the
docs.
resource "azurerm_network_security_rule" "lab_nsg" {
name = "Tailscale"
description = "Tailscale UDP port for direct connections. Reduces latency."
priority = 1010
direction = "Inbound"
access = "Allow"
protocol = "UDP"
source_port_range = "*"
destination_port_range = 41641
source_address_prefix = "*"
destination_address_prefix = "*"
resource_group_name = azurerm_resource_group.lab.name
network_security_group_name = azurerm_network_security_group.lab_nsg.name
}
With the VNet, Subnet, and Network Security Group in place, I'm ready to create my VM.
This is the part that took the most trial and error to work out. I decided to use
Cloudinit to configure the VM at creation time, so the next step was figuring out
exactly what that configuration needed to do.
resource "azurerm_linux_virtual_machine" "linux01" {
name = "linux01"
# additional properties elided for brevity
source_image_reference {
publisher = "Canonical"
offer = "0001-com-ubuntu-server-focal"
sku = "20_04-lts-gen2"
version = "latest"
}
custom_data = base64encode(templatefile("${path.module}/tailscale_cloudinit.tpl", {
tailscale_auth_key = var.tailscale_auth_key
}))
}
The important part to see here is the custom_data argument. Azure expects the value
to be base64 encoded, hence the base64encode() call. In addition, I've used a
template file so that I can pass in an Auth Key for Tailscale. I created a reusable
key and supplied it through an environment variable, since Terraform lets you set any
input variable via an environment variable named TF_VAR_<name>; mine was
TF_VAR_tailscale_auth_key. You could also use a one-off key if you only wanted to
create a single VM.
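The variable itself is just an ordinary Terraform input variable. A minimal declaration might look like this; the sensitive flag is my own addition and assumes Terraform 0.14 or later:

variable "tailscale_auth_key" {
  description = "Tailscale auth key passed to cloud-init when the VM is created"
  type        = string
  sensitive   = true # keep the key out of plan/apply output
}

Here's the tailscale_cloudinit.tpl template that the key gets passed into: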
#cloud-config
apt:
  sources:
    tailscale.list:
      source: deb https://pkgs.tailscale.com/stable/ubuntu focal main
      keyid: 2596A99EAAB33821893C0A79458CA832957F5868
packages:
  - tailscale
runcmd:
  - "tailscale up -authkey ${tailscale_auth_key} --advertise-tags=tag:server,tag:lab --advertise-routes=10.0.0.0/24,168.63.129.16/32 --accept-dns=false"
  - "echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf"
  - "echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf"
  - "sysctl -p /etc/sysctl.conf"
While there seemingly isn't much to this file, it did take quite a bit of
troubleshooting to figure it all out. Cloudinit is very nice, but I couldn't
find any way in Azure to see its logs. Instead, while troubleshooting, I had to
create a public IP, SSH into my VM, and review the Cloudinit logs directly.
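If you end up doing the same, these are the standard places cloud-init records what it did on an Ubuntu image; nothing here is specific to this setup:

$ cloud-init status --long                 # did cloud-init finish, and with what result?
$ sudo less /var/log/cloud-init.log        # cloud-init's own log
$ sudo less /var/log/cloud-init-output.log # stdout/stderr from runcmd and scripts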
With the Cloudinit file complete, I was able to run the following commands and create a new Linux VM that automatically installed Tailscale and added itself to my Tailscale network!
$ export TF_VAR_tailscale_auth_key=tskey-kqqJCQ1CNTRL-AAAAAAAAAAAAAAAAAAAAA
$ terraform apply
As you can see below, the new Linux VM is listed in my Tailscale network, and I
can reach it from my other devices on the network. All while keeping the Azure
Network locked down!
Let me know what you think on Twitter!
Update 2021.11.26
After posting this, @chrismarget was kind enough to show me the cloudinit_config
data source. Once I had some time, I was able to give it a try and it works
great! Here's what the new Terraform code looks like when using cloudinit_config:
First, I need to add the cloudinit
provider.
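That's just a required_providers entry for the hashicorp/cloudinit provider; something like this, with the version constraint left out since any recent release should do:

terraform {
  required_providers {
    cloudinit = {
      source = "hashicorp/cloudinit"
    }
  }
}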
Then I create the cloudinit_config data source with the YAML configuration and
a shell script in place of the runcmd lines I used earlier:
data "cloudinit_config" "cloudinit" {
base64_encode = true
gzip = true
part {
content_type = "text/cloud-config"
content = file("${path.module}/tailscale/cloudinit.yml")
}
part {
content_type = "text/x-shellscript"
content = templatefile("${path.module}/tailscale/cloudinit.sh", {
tailscale_auth_key = var.tailscale_auth_key
})
}
}
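I won't reproduce the split-out files in full here: cloudinit.yml carries the apt source and package list from the earlier template, and cloudinit.sh is essentially the old runcmd block as a script. A hypothetical sketch of cloudinit.sh, assuming the same flags as before (note that ${tailscale_auth_key} is filled in by Terraform's templatefile(), not by the shell):

#!/bin/bash
# Sketch only; mirrors the earlier runcmd block. Runs as root via cloud-init.
tailscale up -authkey ${tailscale_auth_key} --advertise-tags=tag:server,tag:lab --advertise-routes=10.0.0.0/24,168.63.129.16/32 --accept-dns=false
echo 'net.ipv4.ip_forward = 1' | tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | tee -a /etc/sysctl.conf
sysctl -p /etc/sysctl.conf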
Using this on the virtual machine now looks like:
resource "azurerm_linux_virtual_machine" "linux01" {
name = "linux01"
# additional properties elided for brevity
source_image_reference {
publisher = "Canonical"
offer = "0001-com-ubuntu-server-focal"
sku = "20_04-lts-gen2"
version = "latest"
}
custom_data = data.cloudinit_config.cloudinit.rendered
}
What’s great about this method is that it allows me to move the YAML part and
shell script portions of my cloudinit file into actual YAML files and shell
scripts. This means I can use my regular VS Code tooling to write these scripts
before they get packaged into the cloudinit format. So much easier to work with.
Thanks for showing me this, Chris!