The new guy

19 Dec 2023

On December 4th, I started working for Akiles, a PropTech company selling products such as door locks integrated with a cloud service for access control. Over the past three years, I’ve spent a lot of time creating generic IoT infrastructure on both the embedded and backend side, and I’m very excited to experience IoT in a real-world use case.

Another great thing is that Akiles has been using Rust for their firmware for a while, and the Embassy project was started by one of the founders. Having already worked with them upstream on the same code base also gave me a good sense of what I was getting into.

Flowers from my wife on my first day at the new job.

Size matters

Having only worked at companies with a lot of employees, I was curious what it would be like at a small company.

The first thing I noticed is that there is no information overload. At big companies, there is always some kind of announcement somewhere, and it requires discipline to avoid distracting oneself.

The second thing I realized is that the entire company is in the same chat rooms, which keeps communication overhead minimal. There are also no forms you have to fill out to do your work, such as when contributing to an open source project (looking at you, Yahoo! 2010 edition).

A fresh start

During the first week I spent a lot of time learning about the products and the software.

One thing I’ve kept in mind is to give myself the time and space to listen and learn. Luckily there were some good starter tasks in the backlog that got me going and that I could use to learn different aspects of the system, and I’ve already been able to add some features to the firmware (Rust) and server backend (Go) this way.

A sample of the Akiles products I will be working on: a cylinder lock, an interior door lock and some example access cards.

Open source

So what about open source? The reality is that a lot of the technology stack is not open source, which I think is unfortunate, but it’s understandable that this has not been a priority, and it’s not my focus right now anyway. However, there are many opportunities for contributing to projects like Embassy and libraries that are part of the firmware. For instance, I recently did some work on the sequential-storage queue mechanism as a result of requirements in our stack.

Remote

I’ve been a remote worker for almost 8 years now, and one thing I’ve realized is that even though working together remotely day to day works well, in-person meetings remain important over time for maintaining a social connection, mutual understanding and the feeling of being one team. So next year I’m looking forward to visiting my new colleagues in Barcelona and saying hello.





Leaving Red Hat

27 Nov 2023

Today (November 27th) is my last day at Red Hat. I joined Red Hat in May 2016, so it’s been over 7 years, some of the most eventful years of my life so far. I moved from Trondheim to Hamar, became a father for the second time, and found a new home where I’ll likely be staying for a long time.

It’s been almost 3 years since I last posted on this blog, so this is a good time to reflect on my time there. Hopefully it will start a new trend of more regular updates.

Nostalgia

The first thing I remember after joining was that Red Hat felt like exactly the right place for me: an open source mindset throughout the organization, and a place where good ideas could come from anyone and anywhere. I got to work on a new project that we eventually named EnMasse, which I’ve written about previously on this blog.

At the yearly face-to-face gatherings I got to know others on the team and made many new friends. As the years passed, we grew the team and were able to meet customers using the associated products. I traveled to conferences to speak about the project and learned a lot about building communities. I think the most valuable lesson from this period was how to work with others remotely, and that even if you are remote, periodic face-to-face meetings are still needed to better understand each other.

Then one day in December 2020 I got the opportunity to join another team at Red Hat that was using Rust to build an Internet of Things ecosystem.

Re-discovering embedded

My previous blog post was written just after I decided to join my new team, working on the drogue.io project. I have always been interested in embedded and Rust, but hadn’t yet tried to combine the two. In my past experience, embedded C was painful, and I felt unproductive compared to working in higher-level languages.

With Rust, all of that changed for me. The programming language itself reminded me of the good parts of Haskell, and the toolchain turned out to be very close to regular application development. I finally re-discovered the world of embedded, and during the years on the Drogue IoT team, I learned more than I ever have. Thanks to my teammates and fantastic manager, I got to work on these things:

All of the above tied into an IoT ecosystem for both bare metal and Linux devices that integrated with Red Hat technologies on the backend side. Most blog posts from that period can be found on the Drogue IoT Blog.

Moving on

Eventually our team had to move on to different work, but by then I was convinced I wanted to pursue embedded further. Working in upstream communities, as one does at Red Hat, allows you to connect with lots of other people, and I was lucky to find my next place this way.

A big thank you to all my great coworkers over the past 7 and a half years; I hope we will bump into each other at some conference.





On digitizing an ice hockey table game

27 Dec 2020

Stiga Table Hockey is a classic table game here in Norway.

Stiga

One of my side projects while studying was to implement an automated goal detection mechanism and log results online. I did manage to scrape together a PHP(!) application to keep track of scores, and some buttons to register goals, but I never got the goal detection implemented. As much fun as it can be to get a 10-year-old web application to run again, I figured it was time for a rewrite.

Due to a renewed interest in Rust and embedded on my part, I decided to re-implement this project during the corona Christmas holidays of 2020. All of the software is available on GitHub.

Rethinking the design

The original plan was to support the following in the new design:

  • Detect when the puck is inside the goal area
  • Handle button events to start and stop games
  • Display game state for the players
  • Forward live game data to a web service

Goal detection

The most important piece of this work was to find a reliable way to detect goals. The initial approach I took was to drill a few holes in the goal and place a photoresistor and a laser emitter on either side of the goal.

Although this mechanism detected goals with high accuracy, it proved difficult to mount the sensors, as it required very precise positioning of the laser and photoresistor.

The second approach I tried was to use reed switches (magnetic field detection), drill a hole in the puck and embed a small magnet. After trying a few differently sized magnets, I found one that was detected reliably while not getting stuck on the players’ hockey sticks (which are made of metal!).

Puck

Mounting the reed switch in the goal area was also quite simple:

Reed Switch
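
To give an idea of how this can look in software, here is a minimal sketch (not the actual firmware from the repository) of a goal sensor built on the embedded-hal 0.2 InputPin trait, reporting a goal once each time the magnet in the puck closes the switch:

use embedded_hal::digital::v2::InputPin;

/// Hypothetical helper: wraps a reed-switch pin and reports a goal
/// exactly once per closing of the switch.
struct GoalSensor<P: InputPin> {
    pin: P,
    was_closed: bool,
}

impl<P: InputPin> GoalSensor<P> {
    fn new(pin: P) -> Self {
        Self { pin, was_closed: false }
    }

    /// Poll the pin; returns true on the edge where the switch closes
    /// (the pin is pulled low as the magnet passes over it).
    fn poll(&mut self) -> bool {
        let closed = self.pin.is_low().unwrap_or(false);
        let goal = closed && !self.was_closed;
        self.was_closed = closed;
        goal
    }
}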

Start and stop events

The micro:bit contains two buttons. The left button starts and stops the game, and the right button undoes the last operation, in the event of a wrongly detected goal.

Displaying the score

My initial plan was to install a couple of 7-segment displays to show the score. However, multiplexing these displays would require more circuitry, so I postponed that to future work. Luckily, the micro:bit has a 5x5 LED grid, and I decided to use that to display the score.
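
To illustrate one way the score could be rendered on such a small grid (not necessarily how the firmware does it), the sketch below lights one LED per goal, with the home team on the two leftmost columns and the visitors on the two rightmost:

/// Illustrative only: returns a 5x5 brightness grid with one lit LED
/// per goal, capped at 10 goals per side.
fn render_score(home: u8, visitor: u8) -> [[u8; 5]; 5] {
    let mut grid = [[0u8; 5]; 5];
    for i in 0..home.min(10) {
        grid[(i % 5) as usize][(i / 5) as usize] = 1; // columns 0 and 1
    }
    for i in 0..visitor.min(10) {
        grid[(i % 5) as usize][4 - (i / 5) as usize] = 1; // columns 4 and 3
    }
    grid
}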

Forwarding live game data

I started writing a REST API that the controller talks to in order to store the game state. The API will eventually also be able to update player ratings depending on which players played against each other. Due to lack of time I decided to postpone this work, but the initial version is stored in the repository.

Hardware

The final build contains the following hardware:

Software

The controller software is written in Rust, with great help from the nRF HAL crates for working with peripherals. The controller maintains a game state that is initialized when a game is started, and it records all events throughout the game until one of the players reaches the “winning score”.
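
As a rough illustration of what that state can look like (the names and the winning score below are made up for this sketch and may differ from the code in the repository), the game can be modelled as a score per player plus an event log that the undo button pops from:

use heapless::Vec; // fixed-capacity vector, no allocator required

const WINNING_SCORE: u8 = 10; // assumed value for this sketch

#[derive(Clone, Copy)]
enum Event {
    GoalHome,
    GoalVisitor,
}

#[derive(Default)]
struct Game {
    home: u8,
    visitor: u8,
    events: Vec<Event, 64>,
}

impl Game {
    /// Register a detected goal and remember it so it can be undone.
    fn record(&mut self, event: Event) {
        let _ = self.events.push(event);
        match event {
            Event::GoalHome => self.home += 1,
            Event::GoalVisitor => self.visitor += 1,
        }
    }

    /// Right button: undo the last goal if it was registered by mistake.
    fn undo(&mut self) {
        match self.events.pop() {
            Some(Event::GoalHome) => self.home -= 1,
            Some(Event::GoalVisitor) => self.visitor -= 1,
            None => {}
        }
    }

    /// The game ends when either player reaches the winning score.
    fn finished(&self) -> bool {
        self.home >= WINNING_SCORE || self.visitor >= WINNING_SCORE
    }
}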

Future work

Finished!

Even though the reed switches detect most goals, testing suggests there are still some blind spots, so I will mount another couple of switches.

I’d also like to use the PWM peripheral to generate some sound effects.

Connecting the controller to the cloud is on the roadmap, and I have an early implementation of the API that integrates with PostgreSQL. The missing piece is to have the controller communicate with the internet, and I’m exploring multiple options, the most likely being to use an ESP8266 for Wi-Fi connectivity.

All in all, it’s been a fun project, and I look forward to improving it further.





Making an indoor self-watering greenhouse

16 Dec 2020

This post should’ve been written a year ago, when I actually did this project. With another holiday project coming up, it feels necessary to at least post a summary of last year’s project.

Indoor gardening seems to be getting popular, and rather than spending a lot of money on a nice-looking product like auk, I thought I would instead spend twice as much on my own beautiful rig:

concept

The idea is to monitor some herbs by measuring their soil moisture, and then water the plants with the goal of getting a good harvest. Of course, this could all be made much simpler and mechanical, but where is the fun in that?

The target architecture is as follows:

architecture

A microcontroller periodically measures soil moisture and reports a relative humidity to the cloud. In the cloud, there is a controller running that consumes these events and decides whether the plant is too dry or not. If the plant is too dry, the controller sends a message back to the microcontroller instructing it to water the plant. A console shows the current and historical state of the plants.
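
The decision the controller makes is essentially a threshold check. The real controller described later is written in Go; the Rust sketch below only illustrates the logic, and the threshold and watering duration are made-up values:

/// Command sent back to the microcontroller.
enum Command {
    Water { seconds: u32 },
    DoNothing,
}

/// Assumed threshold for "too dry", as relative soil humidity in percent.
const DRY_THRESHOLD: f32 = 30.0;

fn decide(relative_humidity: f32) -> Command {
    if relative_humidity < DRY_THRESHOLD {
        // The plant is too dry: ask the microcontroller to run the pump briefly.
        Command::Water { seconds: 5 }
    } else {
        Command::DoNothing
    }
}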

Hardware

  • ESP8266
  • Pump + Relay
  • Soil moisture sensor
  • Artificial light
  • Timer for artificial light

As the microcontroller I chose an ESP8266 on a NodeMCU-based board. It has a built-in WiFi chip, which makes it easy to connect it to my home network and talk to the cloud service.

For watering the plants, I used a submersible pump mounted at the bottom of a plastic box. To switch the pump on and off, I used a relay connected to the microcontroller.

To measure the soil, I initially started off with a sensor from SparkFun, but it was not able to tell the difference between soaking wet and slightly moist. I later replaced it with a capacitive sensor from DFRobot that gave more accurate readings.

Finally, to ensure stable light conditions for the plants, I purchased an artificial light and a timer that turns it on and off at specific hours during the day.

Software

Microcontroller

The microcontroller firmware is implemented using Arduino + PlatformIO, which works OK for hobby projects.

Messaging Layer

This is only needed to push data between the devices and the cloud. You can host it yourself using open source projects like Eclipse Hono and EnMasse, or use a commercial service such as Bosch IoT Hub.

Greenhouse Backend

The cloud side of the greenhouse contains several components for handling event data and displaying graphs:

  • Console API - A GraphQL API for the console to pull event data.
  • Console - A single page application written in Svelte that uses the GraphQL API to retrieve data.
  • Controller - A Go-based server that subscribes to event data and sends commands back to an Eclipse Hono API to water the plants.
  • Hono Sink - A Go-based server that pulls data from an AMQP messaging service and stores it in an event store.
  • AMQP Event Store - A Go-based event store where all event data is stored.

The entire backend is written to run in Kubernetes.

Result

All in all, I’m pretty happy with the result, even though I suck at carpentry:

Mount

The whole thing in action:

Complete

Summary

All in all, this was a really fun project, and the basil plants did enjoy the extra care (and I was able to make lots of pesto!). Unfortunately, during our move to a new home, the entire thing got dismantled and I haven’t brought it up again. I am thinking about making a larger v2 of this now that we have more space.





Gitops and EnMasse - Part 2 (Operations)

06 May 2019

With the EnMasse 0.28.0 release, using a Gitops workflow to manage your messaging application is even easier than before. Part 2 is a follow-up to Gitops and EnMasse, with a focus on the operations side of things. I recommend that you read that article first to get an overview of gitops and EnMasse in general.

As in the previous article, let’s assume that you have a team in your organization managing the messaging infrastructure using EnMasse on Kubernetes or OpenShift, and 2 independent developer teams that both want to use messaging in their applications. The following diagram describes the flow:

Gitops

The operations team installs EnMasse and commits the desired configuration templates they want to support to git. A CI process then applies the EnMasse configuration to the cluster.

In this article, we will start with an EnMasse release, remove the bits we don’t need, and apply configuration specific to the service we are going to offer. We want to provide the following:

  • Allow development teams to provision brokers of different t-shirt sizes on-demand
  • Allow development teams to manage authentication and authorization policies for their messaging applications
  • Allow operations to monitor the messaging infrastructure and receive alerts if development teams are having issues applying their configuration

Installation

Managing an EnMasse deployment in git can be as simple as unpacking the release bundle and committing the parts that are used for a particular installation. The examples used in this article were tested by applying the resources in the install/bundles/enmasse and install/components/example-roles folders.

The installation guide covers the process of installing EnMasse in detail.

Configuration

Once EnMasse is installed, it needs to be configured. EnMasse is configured by creating one or more instances of the following resources:

  • AuthenticationService - Describes an authentication service instance used to authenticate messaging clients.
  • AddressSpacePlan - Describes the messaging resources available for address spaces using this plan.
  • AddressPlan - Describes the messaging resources consumed by a particular address using this plan.
  • StandardInfraConfig - Describes the Qpid Dispatch Router and ActiveMQ Artemis configuration for the standard address space type.
  • BrokeredInfraConfig - Describes the ActiveMQ Artemis configuration for the brokered address space type.

When created, these resources define the configuration that is available to the messaging tenants. The relationship between all these entities is described in this figure:

EnMasse Entities

The green entities are those which are managed by the operations team, while the blue entities are created by the developer teams.

In this article, we will create a configuration to serve the needs of our developer teams. For evaluation purposes, applying the install/components/example-plans and install/components/example-authservices resources will give you a full EnMasse setup with various example configurations.

Authentication services

Authentication services are used to authenticate and authorize messaging clients using SASL. EnMasse supports 3 types of authentication services supporting different SASL mechanisms:

  • none - Supports any mechanism, but will grant all clients full access.
  • standard - Supports PLAIN, SCRAMSHA1, SCRAMSHA256 and SCRAMSHA512 mechanisms as well as using OpenShift service account tokens.
  • external - Implement your own authentication service bridge to your own identity management system.

A standard authentication service will allow developer teams to apply authentication and authorization policies for their address spaces:

apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: standard-authservice
spec:
  type: standard

Infrastructure configuration

Configuration such as memory, storage, access policies and other settings that relate to a broker can be specified in the infrastructure configuration.

The BrokeredInfraConfig resource type is used to define the configuration for the infrastructure serving the brokered address space types:

apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: small-broker
spec:
  broker:
    addressFullPolicy: FAIL
    resources:
      memory: 512Mi

We also want to provide a configuration for larger brokers:

apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: large-broker
spec:
  broker:
    addressFullPolicy: FAIL
    resources:
      memory: 2Gi
      storage: 10Gi

The above configuration will provide 2 different broker configurations that can be referenced by the address space plans.

Plans

Plans control how many resources the developer teams can consume. In the brokered address space type, each address space gets a single broker anyway, which makes the relationship between the AddressSpacePlan and the BrokeredInfraConfig seem a bit over-complicated. However, for standard address space types, different plans may apply different resource limits using the same underlying infrastructure config, in which case there would not necessarily be a 1:1 mapping between the two.

Address space plans

The address space plan configures the maximum amount of resources that may be in use by an address space. In our case, we define 2 plans, each referencing a BrokeredInfraConfig for the broker configuration:

---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: small
spec:
  addressSpaceType: brokered
  infraConfigRef: small-broker
  addressPlans:
  - broker-queue
  - broker-topic
  resourceLimits:
    broker: 1.0
---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: large
spec:
  addressSpaceType: brokered
  infraConfigRef: large-broker
  addressPlans:
  - broker-queue
  - broker-topic
  resourceLimits:
    broker: 1.0

Address plans

The address plan configures the amount of resources an address consumes on the broker instances:

---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: broker-queue
spec:
  addressType: queue
  resources:
    broker: 0.001
---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: broker-topic
spec:
  addressType: topic
  resources:
    broker: 0.001

With these plans, developers may create up to 1000 addresses per address space: each address consumes 0.001 of a broker, and the address space plans allow 1.0 broker in total.

Monitoring

EnMasse provides examples for monitoring using Prometheus, Alertmanager and Grafana. The examples assume that you have deployed the Prometheus Operator for Prometheus and Alertmanager, and the Grafana Operator for setting up Grafana dashboards. An easy way to get both is to install the Application Monitoring Operator, which is covered in the master branch documentation.

This section will focus on the resources operated by the above operators.

Service monitor and scraping

The ServiceMonitor resource allows us to define the endpoints that should be scraped by Prometheus. The EnMasse components expose Prometheus metrics on their health port:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: enmasse
  labels:
    monitoring-key: middleware
    app: enmasse
spec:
  selector:
    matchLabels:
      app: enmasse
  endpoints:
  - port: health
  namespaceSelector:
    matchNames:
    - enmasse-infra

Once the Prometheus Operator applies this configuration to Prometheus, all components in EnMasse will be scraped for metrics.

Health checks and alerts

The Prometheus Operator also allows you to define alerts on metrics by creating a PrometheusRule. In our case, we want alerts to trigger if:

  • An EnMasse component is down (api-server or address-space-controller)
  • An AddressSpace has been in the “not ready” state for more than 5 minutes.
  • An Address has been in the “not ready” state for more than 5 minutes.

The first alert relates to the EnMasse infrastructure itself, whereas the last 2 relate to the resources created by the developer teams. By alerting on their state, we can catch infrastructure failures before the development teams raise an issue.


apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    monitoring-key: middleware
    prometheus: k8s
    role: alert-rules
  name: enmasse
spec:
  groups:
  - name: ComponentHealth
    rules:
    - record: address_spaces_ready_total
      expr: sum(address_space_status_ready) by (service,namespace)
    - record: address_spaces_not_ready_total
      expr: sum(address_space_status_not_ready) by (service,namespace)
    - record: component_health
      expr: up{job="address-space-controller"} or on(namespace) (1- absent(up{job="address-space-controller"}) )
    - record: component_health
      expr: up{job="api-server"} or on(namespace) (1- absent(up{job="api-server"}) )

    - alert: ComponentHealth
      annotations: 
        description: "{{ $labels.job }} has been down for over 5 minutes"
        severity: critical
      expr: component_health == 0
      for: 300s
    - alert: AddressSpaceHealth
      annotations:
        description: Address Space(s) have been in a not ready state for over 5 minutes
        value: "{{ $value }}"
        severity: warning
      expr: address_spaces_not_ready_total > 0
      for: 300s
    - alert: AddressHealth
      annotations:
        description: Address(s) have been in a not ready state for over 5 minutes
        value: "{{ $value }}"
        severity: warning
      expr: addresses_not_ready_total > 0
      for: 300s

Pretty things - Grafana dashboards

As a respectable operations team, you must have graphs to look at while drinking coffee, or to point to when your manager asks if everything is running. EnMasse offers a selection of Grafana dashboards that allow you to inspect the health of the system, as well as some graphs from the Qpid Dispatch Router (if used) and ActiveMQ Artemis brokers.

These resources are mainly configuration of the Grafana UI and can be found here.

The system dashboard:

System

The broker dashboard:

Broker

Summary

We have seen how an operations team can manage EnMasse. The configuration is described as Kubernetes custom resources and is tailored to the needs of the developer teams. Finally, we have seen how the operations team can configure monitoring of the messaging infrastructure.

Star the project on github and follow on twitter!