Sam Merrell
Tinkerer. Parent. ADHD. Developer.

Posts

Feed Favorites January 2022

Articles I’ve liked in the past month (2 to be exact).

  • Service Locator is not an Anti-Pattern
    • I appreciate Jimmy’s take on Service Locator. When I was learning about the Service Locator pattern, I was told that it was always A Bad Thing™️. As I’ve grown my skills, I’ve learned to avoid blanket statements like that. For similar reasons, I am not a fan of the term “Best Practice.” There is a time and place for a Best Practice, but it all depends on your situation and the tradeoffs you’re willing to make.
  • How to Adopt a Producer-Consumer Model for HashiCorp Vault

Using Tailscale with an Azure Linux VM and Terraform

I learned about Tailscale from Scott Hanselman’s excellent podcast, Hanselminutes. Since it supports macOS, iOS / iPadOS, Linux, and more, I quickly got a simple network created between my devices at home. The setup was completely effortless, and I could securely communicate between my devices whether at home or away. Awesome! I also have a small Azure subscription that I use to host this website and to play around with Azure itself. How hard would it be to create a small Linux VM in Azure and join it to my Tailscale network?

It turns out Tailscale does have documentation on accessing a Linux VM in Azure, but the steps are all manual. Instead, I wanted to see if I could get an Azure VM created and automatically added to my Tailscale network using Terraform. I already use Terraform to create this site, which is an Azure Static Web App, so here’s what it took to get things working.

I won’t be showing all the Terraform needed to create the VM, but you can follow the azurerm_linux_virtual_machine docs to create a Linux VM. The first thing I did was create a VNet and Subnet with a Network Security Group. Since I will be using Tailscale to connect to the VM, I want to restrict access into my Subnet.

Here’s the VNet, Subnet, and Network Security Group:

resource "azurerm_virtual_network" "lab" {
  name                = "lab-vnet"
  location            = azurerm_resource_group.lab.location
  resource_group_name = azurerm_resource_group.lab.name
  address_space       = ["10.0.0.0/24"]
}

resource "azurerm_subnet" "lab" {
  name                 = "lab-subn"
  resource_group_name  = azurerm_resource_group.lab.name
  virtual_network_name = azurerm_virtual_network.lab.name
  address_prefixes     = ["10.0.0.0/24"]
}

resource "azurerm_network_security_group" "lab_nsg" {
  name                = "lab-nsg"
  location            = azurerm_resource_group.lab.location
  resource_group_name = azurerm_resource_group.lab.name
}

The Network Security Group is set up to allow one inbound UDP connection for Tailscale, as described in the docs.

resource "azurerm_network_security_rule" "lab_nsg" {
  name                        = "Tailscale"
  description                 = "Tailscale UDP port for direct connections. Reduces latency."
  priority                    = 1010
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Udp"
  source_port_range           = "*"
  destination_port_range      = "41641"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.lab.name
  network_security_group_name = azurerm_network_security_group.lab_nsg.name
}
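
One piece to call out: the Network Security Group doesn’t take effect until it is associated with the subnet. Assuming the resource names above, that association looks like this:

resource "azurerm_subnet_network_security_group_association" "lab" {
  subnet_id                 = azurerm_subnet.lab.id
  network_security_group_id = azurerm_network_security_group.lab_nsg.id
}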

With the VNet, Subnet, and Network Security Group in place, I was ready to create my VM. This is the part that took the most trial and error to work out. I decided I could use cloud-init to configure the VM upon creation, so now I needed to learn what it would take to do that.

resource "azurerm_linux_virtual_machine" "linux01" {
  name                = "linux01"

  # additional properties elided for brevity

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts-gen2"
    version   = "latest"
  }

  custom_data = base64encode(templatefile("${path.module}/tailscale_cloudinit.tpl", {
    tailscale_auth_key = var.tailscale_auth_key
  }))
}

The important part to see here is the custom_data property. The value must be base64 encoded. In addition, I’ve used a template file so that I can pass in an Auth Key for Tailscale. I created a reusable key and set it as an environment variable, since Terraform lets you supply variables through the environment. Mine was TF_VAR_tailscale_auth_key. You could also use a one-off key if you only wanted to create a single VM.
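
The variable declaration itself is minimal. Here’s a sketch; the sensitive flag (Terraform 0.14+) is a nice addition that keeps the key out of plan output:

variable "tailscale_auth_key" {
  type      = string
  sensitive = true
}

Here’s the cloud-init template itself: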

#cloud-config
apt:
  sources:
    tailscale.list:
      source: deb https://pkgs.tailscale.com/stable/ubuntu focal main
      keyid: 2596A99EAAB33821893C0A79458CA832957F5868
packages:
  - tailscale
runcmd:
  - "tailscale up -authkey ${tailscale_auth_key} --advertise-tags=tag:server,tag:lab --advertise-routes=10.0.0.0/24,168.63.129.16/32 --accept-dns=false"
  - "echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf"
  - "echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf"
  - "sysctl -p /etc/sysctl.conf"

While there seemingly isn’t much to this file, it took quite a bit of troubleshooting to figure it all out. cloud-init is very nice, but I couldn’t find any way in Azure to see its logs. Instead, while troubleshooting, I had to create a public IP, SSH into my VM, and review the cloud-init logs directly.

With the cloud-init file complete, I was able to run these commands and create a new Linux VM that automatically installed Tailscale and joined my Tailscale network!

$ export TF_VAR_tailscale_auth_key=tskey-kqqJCQ1CNTRL-AAAAAAAAAAAAAAAAAAAAA
$ terraform apply

As you can see below, the new Linux VM is listed in my Tailscale network, and I can reach it from my other devices on the network. All while keeping the Azure Network locked down!

The Tailscale admin interface listing 5 machines including the newly added Linux server from Azure

Let me know what you think on Twitter!

Update 2021.11.26

After posting this, @chrismarget was kind enough to show me the cloudinit_config data source. Once I had some time, I was able to give it a try and it works great! Here’s what the new Terraform code looks like when using cloudinit_config:

First, I need to add the cloudinit provider.

provider "cloudinit" {}
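
Depending on your Terraform version, you may also want the provider pinned in required_providers; a sketch:

terraform {
  required_providers {
    cloudinit = {
      source = "hashicorp/cloudinit"
    }
  }
}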

Then I create the cloudinit_config data source with the YAML configuration and a shell script in place of the runcmd lines from earlier:

data "cloudinit_config" "cloudinit" {
  base64_encode = true
  gzip          = true
  part {
    content_type = "text/cloud-config"
    content      = file("${path.module}/tailscale/cloudinit.yml")
  }

  part {
    content_type = "text/x-shellscript"
    content = templatefile("${path.module}/tailscale/cloudinit.sh", {
      tailscale_auth_key = var.tailscale_auth_key
    })
  }
}
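
The shell script is essentially the runcmd list from before, reworked as a plain script. Here’s a sketch of what tailscale/cloudinit.sh might contain; the real file may differ slightly, and Terraform’s templatefile fills in ${tailscale_auth_key} before cloud-init ever runs it:

#!/bin/bash
# Join the Tailscale network and advertise the lab subnet plus the Azure
# virtual public IP (168.63.129.16).
tailscale up --authkey=${tailscale_auth_key} --advertise-tags=tag:server,tag:lab --advertise-routes=10.0.0.0/24,168.63.129.16/32 --accept-dns=false

# Enable IP forwarding so the advertised routes actually pass traffic.
echo 'net.ipv4.ip_forward = 1' | tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | tee -a /etc/sysctl.conf
sysctl -p /etc/sysctl.conf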

Using this on the virtual machine now looks like:

resource "azurerm_linux_virtual_machine" "linux01" {
  name                = "linux01"

  # additional properties elided for brevity

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts-gen2"
    version   = "latest"
  }

  custom_data = data.cloudinit_config.cloudinit.rendered
}

What’s great about this method is that it lets me move the YAML and shell script portions of my cloud-init file into actual YAML files and shell scripts. This means I can use my regular VS Code tooling to write these scripts before they get packaged into the cloud-init format. So much easier to work with. Thanks for showing me this, Chris!


Using the Luxafor Flag and a Raspberry Pi Zero W as a Teams Status Light

When working in an open office, how do you avoid being interrupted so often? Pre-COVID I got a Luxafor Flag as a way to indicate to my coworkers when I was busy. The Luxafor did great for this when paired with my Elgato Stream Deck. With the tap of a button on the Stream Deck, I could show whether I was busy or open to interruptions.

But then COVID hit and I was working from home full time. Our company made the switch from Slack to Microsoft Teams, and now I had a new set of co-workers: the rest of my family. Like many people, for the first part of COVID I spent my time working at a desk out in the open. That was fine, but distracting. As I started to realize I wasn’t going to be back in the office any time soon, I moved into a different room of my house. But my co-workers (my kids) weren’t very familiar with how many meetings I was in during the day. I still had that Luxafor Flag, so I decided to put it to use.

Iteration 1

Just like at work, I placed the Luxafor Flag on the top of my monitor and controlled the color using the Stream Deck. This iteration didn’t last long because I kept forgetting to set my status when I was in meetings and my kids still had to open the door to see if I was busy or not.

Iteration 2

Since my company moved to Microsoft Teams full time, I researched whether I could get my Teams status through the Teams client. Unfortunately, there doesn’t appear to be any local way to get status out of the client. Luckily, Microsoft had recently added Teams presence to the Graph API. I now had my way to get my status in Teams.

I’ve been programming in .NET for over 10 years, so my first attempt was to write a .NET Core application to interact with the API. Aside from my level of comfort with C#, I assumed that Microsoft would likely have a good library for the Graph API. And while that was true, I ran into a snag.

The Luxafor has an API so you can write your own integrations for the device. The API is implemented by exposing the Luxafor Flag as a USB HID device. My primary work laptop is a Mac, so how could I talk to the Luxafor over USB HID? I did some research but didn’t find either a Luxafor library in C# that ran on macOS or an easy-to-use library for interacting with the Luxafor over USB HID. And while I’m still interested in learning how to communicate with the Luxafor over USB HID myself, that wasn’t what I was trying to do. So I began my search in other languages.

I found the busylight-for-humans package in Python and it fit the bill perfectly. I’m not a particularly great Python developer, but I’ve been using Python much more recently so I’m familiar enough to know I could write my app in Python without a major struggle.

It took a few days for me to get a good handle on both how to connect my Python app to the Graph and how to navigate the Graph API itself. But once I did, I had a script I could run in the background that polls my Teams status every 5 seconds and updates the light to match, as you can see in the sketch below.
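
Here is a trimmed-down sketch of the core loop, not the exact code from my repo. The client ID is a placeholder for an Azure AD app registration with the Presence.Read permission, and the busylight calls are approximate since the busylight-for-humans API has changed across versions:

import time

import msal
import requests
from busylight.lights import Light

# Placeholder: register your own app in Azure AD and grant it Presence.Read.
CLIENT_ID = "00000000-0000-0000-0000-000000000000"

# Sign in once with the device-code flow (token refresh and error
# handling are elided for brevity).
app = msal.PublicClientApplication(CLIENT_ID)
flow = app.initiate_device_flow(scopes=["Presence.Read"])
print(flow["message"])  # tells you where to sign in
token = app.acquire_token_by_device_flow(flow)

COLORS = {
    "Busy": (255, 0, 0),
    "DoNotDisturb": (255, 0, 0),
    "Available": (0, 255, 0),
}

light = Light.first_light()  # grabs the Luxafor Flag over USB HID
while True:
    presence = requests.get(
        "https://graph.microsoft.com/v1.0/me/presence",
        headers={"Authorization": f"Bearer {token['access_token']}"},
    ).json()
    # Fall back to yellow for Away, BeRightBack, or anything unexpected.
    light.on(COLORS.get(presence.get("availability"), (255, 255, 0)))
    time.sleep(5)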

Great! Now all I need is a very long USB cable to stretch from my computer to the outside of the door.

Iteration 3

That very long USB cable prompted me to look elsewhere, since I couldn’t find a cable long enough. Instead, I dug out one of the old Raspberry Pi Model Bs I have floating around. They are original Model Bs, so they are very slow. After re-flashing my SD card with a current Raspberry Pi OS, I pulled down my source code and tried to run pipenv install.

Pipenv immediately yelled at me saying I didn’t have Python 3.9, since that’s what I had on my Mac when I wrote the app. After checking, I realized that Raspberry Pi OS updates rather slowly and still hadn’t included Python 3.9, or even 3.8! Instead of trying to get Python 3.9 running, I modified my Pipfile and tried installing. Still no luck: pipenv struggled with all the dependencies it needed. Instead of working to resolve those issues, I decided to pip install every dependency I needed, and it worked! After a few more steps with busylight-for-humans, I was able to get my Python app running and pulling my Teams status. Now I could place the Raspberry Pi outside my room.
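
If you’re retracing this step, it amounted to something like the following; busylight-for-humans is the one I’m sure of, while msal and requests are stand-ins for whatever your Graph code actually imports:

$ pip3 install busylight-for-humans msal requests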

Iteration 3.1

After I got settled on the Raspberry Pi B, I got an itch to condense the package a bit and see if I could make it into something I could eventually stick to the wall. The Raspberry Pi B was just kind of sitting on a box and I wanted to improve how it looked a bit.

Of course, this gave me a chance to hit up Adafruit and buy a few things! I got a Raspberry Pi Zero W, a case, and an adapter. Once those arrived, I was ready to get the Pi Zero W set up!

Instead of hacking the script like I had originally, I wanted to get it working out of the box. After running into a few hurdles with the Raspberry Pi (I learned I have a micro-HDMI cable, not the mini-HDMI plug the Pi Zero uses), the Pi was set up and ready to go.

Now the finished package is considerably smaller, actually stored in source control, and able to run from outside my office door. Here’s a picture of it mounted next to my door:

The Raspberry Pi Zero W mounted on a wall next to a door, with the Luxafor flag mounted next to the Pi

I plan on cleaning up the cables a bit and mounting the Pi better, but I’m happy with the results and I’ll see how well it works out!

Update 2022.01.12

I’ve made my GitHub repo public in case people are interested in using the code. Since I wrote this post, I’ve changed jobs and I’m not actively using the code. Fork the code and make it your own! If you end up using this code or modifying it, let me know on Twitter! Thanks to @TheNoname for asking if I could publish the code, otherwise I would’ve likely kept it private since I’m a little self-conscious about the code quality.


Azure DevOps Exploration

Building software has always been a hassle. Over the years, the effort it takes to create a reliable build system has decreased drastically. Tools like Travis CI dramatically reduce the time and effort it takes to go from nothing to a functioning continuous integration pipeline. I’ve tried a few CI tools, like Travis, AppVeyor, and TeamCity. One CI application I had not tried was Visual Studio Team Services, better known as VSTS. Microsoft rebranded VSTS to Azure DevOps in September, so what better time to give it a try?

To test out Azure DevOps, I needed a project. Luckily, I had one: a Pomodoro application I have been writing for my Mac. Right now, the app tracks how many pomodoros I’ve completed in memory. Instead of storing the completed pomodoros locally, why not push those events to an Azure Function that could save them to Azure Table Storage? With a project in mind, I got started.

What is Azure DevOps?

Azure DevOps started its life as VSTS, which bundled several tools into one application. Azure DevOps still has those same tools, but now you can choose which of them you want to use. So what are the tools? First, there is Azure Pipelines. Pipelines seems to be the most publicized of the tools, and it happens to be what I’m most interested in learning. It provides two main things: build pipelines and release pipelines. Alongside Pipelines, there is Azure Boards, a Kanban board. Azure Artifacts hosts software artifacts such as npm or NuGet packages. Azure Repos hosts your source code. Finally, there is Azure Test Plans, which does what it says: create, manage, and execute test plans.

Creating an Organization

To get started with Azure DevOps, I needed to create an organization. Going through the setup process was simple: I gave my organization a unique name and picked the region I wanted to host my projects in. After that, my organization was created. That was quick and easy, but I would love to be able to script the creation of an organization. From the research I did, that does not seem possible yet. It isn’t a huge issue, but I much prefer to have all my infrastructure setup and configuration done through automation rather than by clicking through the portal.

Azure DevOps screen to create a project

My First Project

With my organization created, I was able to get started on my project.

Azure DevOps UI to create a new Project

Creating a project was as simple as picking a name and hitting create. From there, the project was created and all the services enabled. Since I only planned on using Pipelines, I went into the settings and unchecked the services I did not need. Easy.

The Build Pipeline

Next, I created my first build pipeline. After clicking on Pipelines and then the New pipeline button, I was faced with a problem.

Azure DevOps screen to create a build pipeline

I was planning on hosting the function code in GitLab, since that is where I host the code for the Pomodoro app. I want to keep that code private for now, and I didn’t want to spend time making GitLab work, so I gave Azure Repos a try. I realize I could have hooked the pipeline up to GitLab, but Azure Repos is working well for me. Well enough that I plan on keeping the source code there, and I might move the repository for my Mac application over as well.

I selected the newly created repository and was presented with a list of templates to start my build pipeline. Scrolling through the templates, I noticed Microsoft has built steps for many common types of applications: .NET, .NET Core, C++, Python, Ruby, Node, and Docker, through Xamarin and Xcode projects. It is clear that Azure DevOps will work with just about any project, and Microsoft wants you to know that.

Defining the Build

I started with the suggested Starter Pipeline template. Once I selected the template, I was given an editor showing the contents of the template. This is the template in its entirety:

# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

pool:
  vmImage: 'Ubuntu 16.04'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml    
  displayName: 'Run a multi-line script'

Pretty straightforward. This prompted me to look at each component of the YAML file to understand what it was describing. The pool section describes what sort of VM the build runs on. Since I am using an Azure Functions V2 app on .NET Core, an Ubuntu image works nicely. Microsoft provides other images, and you can also manage your own VMs to run the Pipelines service, but that isn’t something I wanted to do, so I stuck with the Ubuntu image.

The next section is steps, which was easy enough to understand: each item in the list is executed one after the other. The example has a single-line script and a multi-line script. Since I wanted to see what the build would look like, I clicked the Save and Run button. Pipelines committed the azure-pipelines.yml file to my repository and started a build. Nice. With that file committed to the repository, it should be easy to define, and version, my build pipeline.

Commit, Push, Build

With the pipeline YAML file in my repo, I could edit the pipeline from Visual Studio Code. The process was easy to understand: change the script step, commit the change, push to the repository. As soon as I pushed the code, Pipelines ran a build with those changes. As with other CI services I’ve used, I ran into the problem of not being able to run the build locally before committing and pushing. I didn’t spend much time trying to find a way to test locally, but it would be nice to have more confidence in my changes before I commit the code.

While working out how to build my Function App, I ran into trouble with the documentation. The starter template shows only script steps, but the documentation I read used tasks. The concept of tasks isn’t new to me, but I didn’t see any links to the tasks available in Pipelines. It took me half an hour before I stumbled on the documentation for tasks. The documentation for the tasks themselves is fairly clear, but the descriptions of the inputs confused me. For example, the .NET Core CLI task has a whole host of inputs. Each input is documented, but the table describing the inputs doesn’t exactly match the input names. I couldn’t tell what Zip Published Projects mapped to in the actual inputs. Maybe zipAfterPublish? Once I got my build where I thought it needed to be, I was ready to move on to deploying my code.
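
For what it’s worth, here is roughly where my publish step landed. DotNetCoreCLI@2, zipAfterPublish, and the other input names are the real ones from the task; the display name and arguments are illustrative:

- task: DotNetCoreCLI@2
  displayName: 'Publish the Function App'
  inputs:
    command: 'publish'
    publishWebProjects: false
    zipAfterPublish: true
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'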

On to the Release Pipeline

The release pipeline doesn’t seem to be versioned the same way the build pipeline is. That is somewhat understandable, but it also felt strange once I realized it. Since I couldn’t version the pipeline, I went through the site to create my release pipeline. You get a nice prompt suggesting starter release pipelines for many different types of applications, as you can see below:

Azure DevOps release pipeline template selection

Since I was deploying a Function App, I searched for “function,” found an available template, and selected it.

Deploying my Function App

Once I selected the template, I was shown the UI for managing the release pipeline. The Azure Function template is very simple, which made it a great place to start.

The default Azure Function release pipeline

After looking at the overview of the function, I realized I needed to click on the 1 job, 1 task section. From there, all I had to do was fill out the required information for my Function App. This consisted of the Azure Subscription I wanted to use and then the App Service for the Function App I was deploying. Pretty easy.

Now that I had that configured, the next step was to get the artifacts from my build pipeline into my release pipeline. I clicked the “Add an artifact” option under the Artifacts section of the release pipeline, picked my source pipeline, and selected the Latest build. There are several options for which build version the pipeline uses, but Latest fit what I was trying to do.

With my pipeline ready, I created a release and went to deploy my code. But I couldn’t: the release couldn’t find any artifacts to deploy. With that, I went back to the build pipeline to figure out how to publish artifacts so the release pipeline could use them.

It took several attempts to figure out how to promote my artifacts. At first, I thought I needed to call dotnet publish and push the output into Azure DevOps’ Build.ArtifactStagingDirectory, assuming that Azure DevOps would pick up the staged artifacts and publish them once the build passed. That was not correct, and my attempted deployment failed because there were no artifacts to deploy.

My second attempt was to publish the app and then zip the contents into the staging directory. Still no luck, but it felt like I was on the right track; I was just missing something. And indeed I was: I then found the Publish Build Artifacts task. I updated my build to publish the zip file I had placed in the staging directory, and then my release pipeline worked! I now had a working Azure Function and a simple build and release pipeline, all within the course of an evening. Not bad.
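
In YAML, the step that finally made artifacts visible to the release pipeline looks something like this; PublishBuildArtifacts@1 is the task behind it, and drop is just the conventional artifact name:

- task: PublishBuildArtifacts@1
  displayName: 'Publish artifacts for the release pipeline'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'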

Impressions of Azure DevOps

I am quite pleased by what I’ve used of Azure DevOps. Rebranding VSTS to Azure DevOps was a smart move; VSTS carried the baggage of being seen as only for Microsoft applications. With the new name, I was interested enough to give it a try. The pipeline YAML file is a great way to manage the build process, and I’m glad Microsoft recognized what Travis CI, AppVeyor, and others have been doing and followed suit. Defining my release process was extremely easy and, from what I can tell, extremely powerful. I do find it strange that my release process isn’t versioned the same way as the build process, though. I would be curious to see what that file would look like.

I did have some hurdles finding clear documentation on how to hook up the build and release pipelines, as well as on where the tasks were described. These were annoying, but I managed to figure everything out in a relatively short amount of time. The documentation was helpful, but like most documentation, it can always use more work and clarity. I’m confident Microsoft will keep improving this area of Azure DevOps.

Overall, it was a great experience, and I plan on continuing to use Azure DevOps. I also plan on bringing it to my coworkers and investigating whether it makes sense to start trying Azure DevOps at work as well.