Sam Merrell's Blog
Tinkerer. Parent. ADHD. Developer.

Using the Luxafor Flag and a Raspberry Pi Zero W as a Teams Status Light

When working in an open office, how do you avoid being interrupted so often? Pre-COVID I got a Luxafor Flag as a way to indicate to my coworkers when I was busy. The Luxafor did great for this when paired with my Elgato Stream Deck. With the tap of a button on the Stream Deck, I could show whether I was busy or open to interruptions.

But then COVID hit and I was working from home full time. Our company made the switch from Slack to Microsoft Teams, and now I had a new set of co-workers: the rest of my family. Like many people, for the first part of COVID I spent my time working at a desk out in the open. That was fine, but distracting. As I started to realize I wasn’t going to be back in the office any time soon, I made the move into a different room of my house. But my co-workers (my kids) weren’t very familiar with how many meetings I was in during the day. I still had that Luxafor Flag, so I decided to put it to use.

Iteration 1

Just like at work, I placed the Luxafor Flag on the top of my monitor and controlled the color using the Stream Deck. This iteration didn’t last long because I kept forgetting to set my status when I was in meetings and my kids still had to open the door to see if I was busy or not.

Iteration 2

Since my company moved to Microsoft Teams full time, I researched whether I could get my Teams status through the Teams client. Unfortunately, there doesn’t appear to be any local way to get status out of the client. Luckily, Microsoft had recently added Teams presence to the Graph API. I now had a way to get my status in Teams.

I’ve been programming in .NET for over 10 years, so my first attempt was to write a .NET Core application to interact with the API. Aside from my level of comfort with C#, I assumed that Microsoft would likely have a good library for the Graph API. And while that was true, I ran into a snag.

The Luxafor has an API so you can write your own integrations with the device. The API is implemented by exposing the Luxafor Flag as a USB HID device. My primary work laptop is a Mac, so how could I talk to the Luxafor over USB HID? I did some research but found neither a Luxafor library in C# that ran on macOS nor an easy-to-use library for interacting with the Luxafor over USB HID. And while I’m still interested in learning how to communicate with the Luxafor over USB HID myself, that wasn’t what I was trying to do. So I began my search in other languages.

I found the busylight-for-humans package in Python and it fit the bill perfectly. I’m not a particularly great Python developer, but I’ve been using Python much more recently so I’m familiar enough to know I could write my app in Python without a major struggle.

It took a few days for me to get a good handle on both how to connect my Python app to the Graph and how to navigate the Graph API itself. But once I did, I had a script that I could run in the background, polling my Teams status every 5 seconds and updating the light based on my status.
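Here is a sketch of what that script does. The structure and names are illustrative rather than the exact code; the real script drives the Luxafor with busylight-for-humans and gets its Graph token through MSAL, both of which are stubbed out here:

```python
# Sketch of the polling loop. Illustrative only: the real script uses
# busylight-for-humans for the light and MSAL for the Graph token.
import json
import time
import urllib.request

# Teams availability values mapped to flag colors.
STATUS_COLORS = {
    "Available": (0, 255, 0),     # green
    "Busy": (255, 0, 0),          # red
    "DoNotDisturb": (255, 0, 0),  # red
    "Away": (255, 255, 0),        # yellow
    "BeRightBack": (255, 255, 0), # yellow
}

def color_for(availability):
    """Pick the flag color for a Teams availability string."""
    # Anything unrecognized (Offline, PresenceUnknown, ...) shows blue.
    return STATUS_COLORS.get(availability, (0, 0, 255))

def fetch_availability(token):
    """Ask the Graph presence endpoint for the current availability."""
    request = urllib.request.Request(
        "https://graph.microsoft.com/v1.0/me/presence",
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["availability"]

if __name__ == "__main__":
    token = "..."  # acquired with MSAL's device-code flow in the real script
    while True:
        rgb = color_for(fetch_availability(token))
        # set the Luxafor to rgb here via busylight-for-humans
        time.sleep(5)
```

Everything interesting reduces to the color mapping; the presence response only needs its availability string read out of the JSON.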

Great! Now all I need is a very long USB cable to stretch from my computer to the outside of the door.

Iteration 3

That very long USB cable turned out not to exist: I couldn’t find a cable long enough, so I looked elsewhere. Instead, I dug out one of the old Raspberry Pi Model B’s I have floating around. They are original B’s, so they are very slow. After re-flashing my SD card with a current Raspberry Pi OS, I pulled down my source code and tried to run pipenv install.

Pipenv immediately yelled at me, saying I didn’t have Python 3.9, since that’s what I had on my Mac when I wrote the app. After checking, I realized that Raspberry Pi OS updates rather slowly and still hadn’t included Python 3.9, or even 3.8! Instead of trying to get Python 3.9 running, I modified my Pipfile and tried installing. Still no luck: pipenv was struggling with all the dependencies it needed. Instead of working to resolve the issues, I decided to pip install every dependency I needed, and it worked! After a few more steps with busylight-for-humans, I was able to get my Python app running and pulling my Teams status. Now I could place the Raspberry Pi outside my room.

Iteration 3.1

After I got settled on the Raspberry Pi B, I got an itch to condense the package a bit and see if I could make it into something I could eventually stick to the wall. The Raspberry Pi B was just kind of sitting on a box and I wanted to improve how it looked a bit.

Of course, this gave me a chance to hit up Adafruit and buy a few things! I got a Raspberry Pi Zero W, a case, and an adapter. Once those arrived, I was ready to get the Pi Zero W set up!

Instead of hacking the script like I had originally, I wanted to get it working out of the box. After running into a few hurdles with the Raspberry Pi (I learned I had a micro-HDMI cable, not the mini-HDMI plug the Pi Zero uses), the Pi was set up and ready to go.

Now the finished package is considerably smaller, actually stored in source control, and able to run from outside my office door. Here’s a picture of it mounted next to my door:

The Raspberry Pi Zero W mounted on a wall next to a door, with the Luxafor flag mounted next to the Pi

I plan on cleaning up the cables a bit and mounting the Pi better, but I’m happy with the results and I’ll see how well it works out!

Update 2022.01.12

I’ve made my GitHub repo public in case people are interested in using the code. Since I wrote this post, I’ve changed jobs and I’m not actively using the code. Fork the code and make it your own! If you end up using this code or modifying it, let me know on Twitter! Thanks to @TheNoname for asking if I could publish the code, otherwise I would’ve likely kept it private since I’m a little self-conscious of the code quality.


Azure DevOps Exploration

Building software has always been a hassle. Over the years, the effort it takes to create a reliable build system has decreased drastically. Tools like Travis CI dramatically reduce the time and effort it takes to go from nothing to a functioning continuous integration pipeline. I’ve tried a few CI tools like Travis, AppVeyor, and TeamCity. One CI application I had not tried was Visual Studio Team Services, better known as VSTS. Microsoft rebranded VSTS to Azure DevOps in September, so what better time to give Azure DevOps a try?

In order to test out Azure DevOps, I needed a project. Luckily, I had one — a Pomodoro application I have been writing for my Mac. Right now, the app tracks how many pomodoros I’ve completed in memory. Instead of trying to store the completed pomodoros locally, why not push those events into an Azure Function where it could save that information into Azure Table Storage? With a project in mind, I got started.

What is Azure DevOps?

Azure DevOps started its life as VSTS, which bundled several tools into one application. Azure DevOps still has those same tools, but now you can choose which of them you would like to use. So what are the tools available? First, there is Azure Pipelines. Pipelines seems to be the most publicized of the tools, and it happens to be what I’m most interested in learning. Pipelines provides two main things: build pipelines and release pipelines. Along with Pipelines, there is Azure Boards, which is a Kanban board. Azure Artifacts hosts software artifacts such as NPM packages or NuGet packages. Azure Repos lets you host your source code. Finally, there is Azure Test Plans, which does what it describes: create, manage, and execute test plans.

Creating an Organization

To get started with Azure DevOps, I needed to create an organization. The setup process was simple: I gave my organization a unique name and picked the region I wanted to host my projects in. After that, my organization was created. That was quick and easy, but I would love to be able to script the creation of an organization. From the research I did, that does not seem to be possible yet. That isn’t a huge issue, but I much prefer to have all my infrastructure setup and configuration done through automation rather than by clicking through the portal.

Azure DevOps screen to create a project

My First Project

With my organization created, I was able to get started on my project.

Azure DevOps UI to create a new Project

Creating a project was as simple as picking a name and hitting create. From there, the project was created and all the services enabled. Since I only planned on using Pipelines, I went into the settings and unchecked the services I did not need. Easy.

The Build Pipeline

Next, I created my first build pipeline. After clicking on Pipelines and then the New pipeline button, I was faced with a problem.

Azure DevOps screen to create a build pipeline

I was planning on hosting the function code in GitLab, since that is where I host the code for the Pomodoro app. I wanted to keep that code private for now and didn’t want to spend time making GitLab work, so I gave Azure Repos a try. Now, I realize I could have hooked up the pipeline to GitLab, but Azure Repos is working well for me. Enough so that I plan on keeping the source code there, and I might move the repository for my Mac application over as well.

I selected the newly created repository and was presented with a list of templates to start my build pipeline. Scrolling through the templates, I noticed Microsoft has built steps for many common types of applications: .NET, .NET Core, C++, Python, Ruby, Node, Docker, Xamarin, and Xcode projects. It is clear that Azure DevOps will work with almost any project, and that Microsoft wants you to know that.

Defining the Build

I started with the suggested Starter Pipeline template. Once I selected the template, I was given an editor showing the contents of the template. This is the template in its entirety:

# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

pool:
  vmImage: 'Ubuntu 16.04'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml    
  displayName: 'Run a multi-line script'

Pretty straightforward. This prompted me to look at each component of the YAML file to try to understand what it was describing. The pool section describes what sort of VM to run the build on. Since I am using an Azure Function V2, I’m on .NET Core, so an Ubuntu image works nicely. Microsoft provides other images to use, and you can also manage your own VMs to run the Pipelines service. That isn’t something I wanted to do, so I stuck with the Ubuntu image.

The next section is steps. It was easy enough to understand: each item in the list is executed one after the other. The example has a single-line script and a multi-line script. Since I wanted to see what the build would look like, I clicked the Save and Run button. Pipelines committed the azure-pipelines.yml file to my repository and started a build. Nice. With that file committed to the repository, it should be very easy to define, and version, my build pipeline.

Commit, Push, Build

With the pipeline YAML file in my repo, I could edit the pipeline from Visual Studio Code. The process was easy to understand: change the script step, commit the change, push to the repository. As soon as I pushed the code, Pipelines was running a build with those changes. As with other CI services I’ve used, I ran into the problem of not being able to run locally before I commit and push. I didn’t spend much time trying to find a way to test locally, but it would be nice to have more confidence in my changes before I commit the code.

While working out how to build my Function App, I ran into trouble with the documentation. The starter template shows only script steps, but the documentation I read used tasks. The concept of tasks isn’t new to me, but I didn’t see any links to what tasks are available in Pipelines. It took me half an hour before I stumbled on the documentation for tasks. The documentation for the tasks themselves is fairly clear, but the descriptions of the inputs confused me. For example, the dotnet core cli task has a whole host of inputs. Each input is documented, but the table describing the inputs doesn’t exactly match the actual input names. I couldn’t tell what Zip Published Projects mapped to in the actual inputs. Maybe zipAfterPublish? Once I got my build where I thought it needed to be, I was ready to move on to deploying my code.
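For what it’s worth, my publish step ended up looking roughly like the snippet below. This is a sketch using the DotNetCoreCLI@2 task; the project glob and arguments are placeholders for whatever your repository actually needs:

```yaml
steps:
- task: DotNetCoreCLI@2
  displayName: 'Publish the Function App'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true
```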

On to the Release Pipeline

The release pipeline doesn’t seem to be versioned the same way as the build pipeline. That is somewhat understandable, but it also felt strange once I realized it. Since I couldn’t version the pipeline, I went through the site to create my release pipeline. You get a nice prompt suggesting starter release pipelines for many different types of applications, as you can see below:

Azure DevOps release pipeline template selection

Since I was deploying a Function App, I searched for “function” and found a template available and selected it.

Deploying my Function App

Once I selected the template I was shown the UI for managing the release pipeline. The Azure Function template is very simple, which was a great place to start.

The default Azure Function release pipeline

After looking at the overview of the function, I realized I needed to click on the 1 job, 1 task section. From there, all I had to do was fill out the required information for my Function App. This consisted of the Azure Subscription I wanted to use and then the App Service for the Function App I was deploying. Pretty easy.

So now that I had that configured, the next step was to get the artifacts from my build pipeline into my release pipeline. I clicked on the “Add an artifact” option under the Artifacts section of the Release pipeline and I picked my source pipeline as well as selected using the Latest build. There are several options under what build version the pipeline can use but Latest fit what I was trying to do.

Now that my pipeline was ready, I created a release and went to deploy my code. But I couldn’t: the release couldn’t find any artifacts to deploy. With that, I went back to the build pipeline to figure out how to get artifacts published so that the release pipeline could use them.

It took several attempts to figure out what I needed to do to promote my artifacts. At first, I thought I needed to call dotnet publish and push the output into Azure DevOps’ Build.ArtifactStagingDirectory. At the time, I assumed Azure DevOps would pick up the staged artifacts and publish them after the build passed. That was not correct, and my attempted deployment failed because there were no artifacts to deploy.

My second attempt was to publish the app and then zip the contents into the staging directory. Still no luck, but it felt like I was on the right track; I was just missing something. And indeed I was: I then found the PublishBuildArtifacts task. I updated my build process to publish the zip file I had placed in the staging directory, and then my release pipeline worked! I now had a working Azure Function and a simple build and release pipeline, all within the course of an evening. Not bad.
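For anyone hitting the same wall, the step I added looks roughly like this (a sketch of the PublishBuildArtifacts@1 task; the artifact name is arbitrary):

```yaml
- task: PublishBuildArtifacts@1
  displayName: 'Hand the zipped app to the release pipeline'
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'drop'
```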

Impressions of Azure DevOps

I am quite pleased by what I’ve used of Azure DevOps. Rebranding VSTS to Azure DevOps was a smart move; VSTS carried the baggage of being seen as only for Microsoft applications. With the new name, I was interested enough to give it a try. The pipeline YAML file is a great way to manage the build process. I’m glad to see that Microsoft recognized what Travis CI, AppVeyor, and others have been doing and followed suit. Defining my release process was extremely easy and, from what I can tell, extremely powerful. I do find it strange that my release process isn’t versioned the same way as the build process, though. I would be curious to see what that file would look like.

I did have some hurdles finding documentation that was clear on how to hook up the build and release pipelines, as well as on where tasks were described. These were annoying, but I did manage to figure everything out in a relatively short amount of time. The documentation was helpful, but like most documentation, it can always use more work and clarity. I’m confident Microsoft will keep improving this area of Azure DevOps.

Overall it was a great experience and I plan on still using Azure DevOps. I also plan on bringing this to my coworkers and investigating if it makes sense for us to start trying out Azure DevOps at work as well.


Azure Web App for Containers Using Terraform

Recently at work I have been tasked with helping our organization transition from our traditional on-premises infrastructure to Azure. To do that, I’ve been learning how to automate our infrastructure by using HashiCorp’s Terraform. Terraform was introduced to me by a few members of our infrastructure team and I’ve found it quite fun to work with.

As I’ve been working on what direction we’d like to head, I’ve focused on new apps using Platform as a Service, specifically Azure Web Apps. A few months back I noticed that Web Apps had a new option for using Docker containers, so about a week ago I decided to see if I could create an Azure Web App for Containers using Terraform.

It turns out this is already possible, but it took some fiddling to figure out what I needed to set up in Terraform. I assume this isn’t very well documented yet because Azure Web App for Containers only recently went GA. So, to make sure I remember how to do this, and in case anyone else could use it, I’m writing it down here.

Creating the Container Registry

You have to create the container registry before you create the App Service. Creating the container registry is no different from what is described in the azurerm provider documentation. The main thing to note is that, from the tests I was running, I needed to have the container registry created well before I created the App Service Plan and App Service. Otherwise it seemed that the Azure Web App didn’t recognize the container registry.

The example documentation uses the Classic SKU; I went ahead and changed that to Basic. If you do that, you do not need to create a separate Storage Account. I chose to do this because the Azure Portal suggested upgrading from Classic to Basic.

Creating the App Service Plan

The App Service plan is close to the same as the documentation. The main thing to make sure you set is that it is a Linux plan.

resource "azurerm_app_service_plan" "containertest" {
  name                = "container-test-plan"
  location            = "eastus2"
  resource_group_name = "test-resource-group"
  kind                = "Linux"

  # The azurerm provider requires a sku block; adjust tier/size to taste
  sku {
    tier = "Basic"
    size = "B1"
  }

  properties {
    reserved = true
  }
}

I also put this as a reserved instance, but I’m not 100% sure that is needed.

Creating the App Service

This was the trickiest part. Not only do you have to set some site_config properties, you also have to set a few app settings.

Here is what the Terraform config looks like to get this set up right.

resource "azurerm_app_service" "containertest" {
  name                = "someuniquename01"
  location            = "eastus2"
  resource_group_name = "test-resource-group"
  app_service_plan_id = "${azurerm_app_service_plan.containertest.id}"

  site_config {
    always_on        = true
    linux_fx_version = "DOCKER|${data.azurerm_container_registry.containertest.login_server}/testdocker-alpine:v1"
  }

  app_settings {
    "WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
    "DOCKER_REGISTRY_SERVER_URL"          = "https://${data.azurerm_container_registry.containertest.login_server}"
    "DOCKER_REGISTRY_SERVER_USERNAME"     = "${data.azurerm_container_registry.containertest.admin_username}"
    "DOCKER_REGISTRY_SERVER_PASSWORD"     = "${data.azurerm_container_registry.containertest.admin_password}"
  }
}

The big thing to notice here is the linux_fx_version under site_config. The documentation in the azurerm provider isn’t super clear about how this works. The example uses DOCKER|(golang:latest). What it doesn’t show or explain is that that string uses Docker Hub. If you want to use a different registry, the format looks like this: DOCKER|<registryurl>/<container>:<tag>.

For app_settings, I had to create an app service from the Azure portal and poke around the settings to reverse engineer what I needed in my Terraform config. Not only do you need the linux_fx_version property with the registry URL, you also have to set the registry URL as an app setting. On top of that, you need a username and password to access the registry. Fortunately, Terraform exposes all of this information so you can reference the properties as data.
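For completeness, the data source those references point at looks like this. The registry name here is a placeholder for whatever you named your container registry:

```hcl
data "azurerm_container_registry" "containertest" {
  name                = "someregistryname"
  resource_group_name = "test-resource-group"
}
```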

This should get you up and running with a Web App for Containers. It took a few hours of my weekend to get working; hopefully it can help others get up and running more quickly. I’ll eventually come back to this and try getting it down to the smallest set of settings possible, as there are a few items I’m not 100% sure I actually need. If I find a smaller setup, I’ll update the post with that information.


Getting to Know Swift

For the last several years I’ve primarily been a backend developer using C# and occasionally doing work on the frontend with JavaScript. I’ve dabbled in learning Rust as well but I wanted to try my hand at writing native iOS and macOS applications for a change.

After doing some research on where to begin, I started reading the book App Development with Swift by Apple. This book has been a great guide for learning Swift through developing iOS applications. The book is broken up into distinct sections that teach you features of Swift and then show you how to use them while writing many small iOS applications. Apple wrote this book to be used in classrooms, but I’ve been following along on my own time during the evenings.

The book has you do bunches of small projects in Xcode, which has been extremely helpful in learning Xcode. Since I’m a Vim person, I’ve found it extremely difficult – but worth it – to learn the native keyboard shortcuts on the Mac as well as in Xcode itself. Having to do so many different projects has already gotten me much more comfortable using Xcode and Interface Builder.

What about Xamarin

Xamarin looks like a great option for C# developers who want to keep using C# or share code between Android and iOS applications. That is not the problem I am trying to solve. I enjoy learning new things, and since I use C# daily at work, I wanted to give native app development a chance. The concepts I learn writing iOS and macOS applications in Swift will help me if my company ever needs a native app written. Even though we’d likely use Xamarin (we are a C#-heavy office, after all), I will learn a great deal about the underlying platform. That knowledge will be useful whether I am writing code in Swift or C#.

What do I like about Swift right now

So far, Swift has been a lot of fun to learn. I’m getting to try out a new language as well as two different native platforms, iOS and macOS. I found Objective-C’s syntax hard to follow, and I was always a bit afraid of managing my own memory. Here is what I have liked about Swift so far, as well as what I’ve found confusing.

Optionals

Handling optionals in Swift is pretty easy. The syntax is pretty similar to what I learned from Rust, and it is something I’d love to see C# take a cue from.

var someOptional: OptionalType?
// ... someOptional may be given a value elsewhere ...
if let someOptional = someOptional {
    // use the now unwrapped someOptional
} else {
    // someOptional was nil
}

Guard statements

Guard statements have been awesome. They combine a lot of what I like about how Swift handles optionals and let me invert the check so my code doesn’t turn into an arrowhead of nested ifs. I also like that you can do additional checks and have your function return early.

func someFunction(some nullableParameter: SomeParameter) {
    guard let parameter = nullableParameter else { return }

    // use parameter here
}

One thing I’ll eventually want to figure out with guard statements is what the general suggestions are for formatting them. I’m writing them like this for now:

guard let symbol = aDecoder.decodeObject(forKey: PropertyKey.symbol) as? String,
      let name = aDecoder.decodeObject(forKey: PropertyKey.name) as? String,
      let detailDescription = aDecoder.decodeObject(forKey: PropertyKey.detailDescription) as? String,
      let usage = aDecoder.decodeObject(forKey: PropertyKey.usage) as? String
else { return nil }

Generics

I haven’t gotten too far into using Generics in Swift yet but they feel pretty similar to what I’ve gotten used to in C# and Rust.

What confuses me about Swift so far

I’ve run into a few things in Swift that I’ve found confusing, whether suggestions on how you should organize your code using extensions or more historical aspects of iOS and macOS.

Extensions

I’m used to C#, where extension methods are static methods that operate on a type. In Swift, that doesn’t seem to be the case: it looks like you can extend most types, including adding protocol conformances or mutating functions. It also seems like some style guides prefer you to break up a class or struct by putting each protocol conformance in its own extension. I’ll have to dig into extensions a little more to understand them.

iOS and macOS differences

This isn’t really about Swift specifically but more about the platforms I’m using Swift on. Either way, it is confusing that iOS and macOS have similar APIs, yet each platform has its own version of the same objects. On macOS everything is NSSomething, while on iOS there is always a UISomething. It hasn’t been extremely confusing, but I have had to do some translating when writing code for the Mac while the examples I’ve been looking at are for iOS.

What about Objective-C

As I’ve tried to learn about writing apps for macOS, I’ve noticed that a large number of examples are written in Objective-C. That isn’t a bad thing, but I avoided learning iOS and macOS development for a long time because of Objective-C. So what about Objective-C now? Ultimately, it isn’t that bad, and I’ve been able to understand most API examples even when they are written in Objective-C.

The release of Swift 4 also seems to have made large strides in the API surface area. Many of the APIs had a decidedly C or Objective-C feel to them, but there seems to be a big effort by Apple to make more APIs feel natural in Swift.

Ultimately, I’ll teach myself at least some Objective-C. If I want to do anything significant in Swift, I will likely have to deal with some Objective-C code for a long time to come. For now, though, I’ll try to stick to just Swift.

Goals learning Swift

I decided to learn Swift because I wanted to learn a new programming paradigm I’ve never used before. At my job right now, I do almost exclusively backend web development using C#. I don’t get much time to work on UIs or native app development. To that end, I thought I’d give macOS development a try and have been working on a simple Pomodoro timer for the Mac.

I want to eventually have enough skill with Swift (and probably a little Objective-C) that I could be a passable native iOS or macOS developer. I don’t know what the job opportunities look like there, but it doesn’t really matter that much. Perhaps I’ll try to release an app or two on the App Store; either way, I’ll have learned a new skill that I enjoy.