On digitizing an ice hockey table game

27 Dec 2020

Stiga Table Hockey is a classic table game here in Norway.

Stiga

One of my side projects while studying was to implement automated goal detection and log the results online. I did manage to scrape together a PHP(!) application to keep track of scores, along with some buttons to register goals, but I never got the goal detection working. As much fun as it can be to get a 10-year-old web application running again, I figured it was time for a rewrite.

Due to a renewed interest in Rust and embedded development on my part, I decided to re-implement this project during the corona Christmas holidays of 2020. All of the software is available on GitHub.

Rethinking the design

The original plan was to support the following in the new design:

  • Detect when the puck is inside the goal area
  • Handle button events to start and stop games
  • Display game state for the players
  • Forward live game data to a web service

Goal detection

The most important piece of this work was to find a reliable way to detect goals. The initial approach I took was to drill a few holes in the goal and place a photoresistor and a laser emitter on either side of it.

Although this mechanism detected goals with high accuracy, mounting the sensors proved difficult, as it required very precise positioning of the laser and the photoresistor.

The second approach I tried was to use reed switches (which detect a magnetic field), drilling a hole in the puck and embedding a small magnet. After trying a few differently sized magnets, I found one that was detected reliably while not getting stuck on the players' hockey sticks (which are made of metal!).

Puck

Mounting the reed switch in the goal area was also quite simple:

Reed Switch
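
In firmware terms, detecting a goal comes down to sampling the reed-switch pin and ignoring the repeated readings that occur while the puck sits in the goal. The sketch below is a simplified, hardware-agnostic version of that idea: the actual pin sampling through the nRF HAL is left out, and the cooldown handling and names are my own assumptions rather than code from the project.

/// Debounced goal detection from a reed switch. `closed` is the raw pin level
/// (true while the magnet in the puck is near the switch) and `now_ms` is a
/// millisecond timestamp supplied by the caller, e.g. from a hardware timer.
pub struct GoalDetector {
    last_goal_ms: Option<u64>,
    cooldown_ms: u64, // ignore re-triggers while the puck rests in the goal
}

impl GoalDetector {
    pub fn new(cooldown_ms: u64) -> Self {
        Self { last_goal_ms: None, cooldown_ms }
    }

    /// Returns true exactly once per goal: when the switch closes and the
    /// cooldown since the previous detection has expired.
    pub fn on_sample(&mut self, closed: bool, now_ms: u64) -> bool {
        if !closed {
            return false;
        }
        match self.last_goal_ms {
            Some(t) if now_ms.saturating_sub(t) < self.cooldown_ms => false,
            _ => {
                self.last_goal_ms = Some(now_ms);
                true
            }
        }
    }
}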

Start and stop events

The micro:bit contains two buttons. The left button starts/stops the game, and the right button undoes the last operation, in the event of a wrongly detected goal.

Displaying the score

My initial plan was to install a couple of 7-segment displays to show the score. However, multiplexing these displays required more circuitry, so I postponed that to future work. Luckily, the micro:bit has a 5x5 LED grid, and I decided to use that to display the score instead.
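
However the LED matrix ends up being driven, the interesting part at this level is turning two scores into a 5x5 frame. The function below shows one possible encoding, purely as an illustration and not necessarily what the firmware does: each player gets a two-column bar that fills from the bottom, one LED per goal.

/// Render a 5x5 LED frame (1 = on) from the two scores. Illustrative encoding:
/// the two leftmost columns form a bar for the home player, the two rightmost
/// columns for the away player, capped at 10 goals per player.
pub fn render_score(home: u8, away: u8) -> [[u8; 5]; 5] {
    let mut frame = [[0u8; 5]; 5];
    for goal in 0..home.min(10) {
        let col = (goal / 5) as usize;     // column 0, then column 1
        let row = 4 - (goal % 5) as usize; // fill from the bottom row up
        frame[row][col] = 1;
    }
    for goal in 0..away.min(10) {
        let col = 4 - (goal / 5) as usize; // column 4, then column 3
        let row = 4 - (goal % 5) as usize;
        frame[row][col] = 1;
    }
    frame
}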

Forwarding live game data

I started writing a REST API that the controller would talk to in order to store the game state. The API will eventually also be able to update player ratings depending on which players played against each other. Due to a lack of time I decided to postpone this work, but the initial implementation is stored in the repository.

Hardware

The final build contains the following hardware:

Software

The controller software is written in Rust, with great help from the nRF HAL crates for working with peripherals. The controller maintains the game state, initialized when a game starts, along with all events registered throughout the game, until one of the players reaches the “winning score”.
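
As a rough illustration of that state handling, the sketch below models a game with goal and undo events. The names, the event log and the winning-score check are my own simplification rather than the actual types from the repository, and on the device a fixed-capacity buffer would replace the heap-allocated Vec.

/// Simplified game state: two scores plus an event log so the last
/// goal can be undone. Names and the winning score are illustrative only.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Player {
    Home,
    Away,
}

pub struct Game {
    home: u8,
    away: u8,
    events: Vec<Player>, // every registered goal, in order
    winning_score: u8,
}

impl Game {
    pub fn start(winning_score: u8) -> Self {
        Self { home: 0, away: 0, events: Vec::new(), winning_score }
    }

    /// Register a goal and report whether the game is over.
    pub fn goal(&mut self, player: Player) -> bool {
        match player {
            Player::Home => self.home += 1,
            Player::Away => self.away += 1,
        }
        self.events.push(player);
        self.home >= self.winning_score || self.away >= self.winning_score
    }

    /// Undo the last registered goal (right-button press), if any.
    pub fn undo(&mut self) {
        if let Some(player) = self.events.pop() {
            match player {
                Player::Home => self.home -= 1,
                Player::Away => self.away -= 1,
            }
        }
    }

    pub fn score(&self) -> (u8, u8) {
        (self.home, self.away)
    }
}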

Future work

Finished!

Even though the reed switches detect most goals, testing suggests there are still some blind spots, so I will mount another couple of switches.

I’d also like to use the PWM peripheral to generate some sound effects.

Connecting the controller to the cloud is on the road map, and I have an early implementation of the API that integrates with PostgreSQL. The missing piece is having the controller communicate with the internet. I’m exploring multiple options, the most likely approach being to use an ESP8266 for Wi-Fi connectivity.

All in all, it’s been a fun project, and I look forward to improving it further.





Making an indoor self-watering greenhouse

16 Dec 2020

This post should’ve been written one year ago, when I actually did this project. With another holiday project coming up, it feels necessary to at least post a summary of last year’s project.

Indoor gardening seems to be getting popular, and rather than spending a lot of money on a nice-looking product like auk, I thought I would instead spend twice as much on my own beautiful rig:

concept

The goal is to monitor some herbs by measuring their soil moisture, and then water the plants to get a good harvest. Of course, this could all be done in a much simpler, mechanical way, but where is the fun in that?

The target architecture is as follows:

architecture

A microcontroller periodically measures the soil moisture and reports a relative humidity value to the cloud. In the cloud, a controller consumes these events and decides whether the plant is too dry. If the plant is too dry, the controller sends a message back to the microcontroller instructing it to water the plant. A console shows the current and historical state of the plants.

Hardware

  • ESP8266
  • Pump + Relay
  • Soil moisture sensor
  • Artificial light
  • Timer for artificial light

As the microcontroller I chose an ESP8266 on a NodeMCU-based board. This microcontroller has built-in Wi-Fi, which makes it easy to connect it to my home network and talk to the cloud service.

For watering the plants, I used a submersible pump mounted at the bottom of a plastic box. To switch the pump on and off, I used a relay connected to the microcontroller.

To measure the soil moisture, I initially started off with a sensor from sparkfun, but it was not able to tell the difference between soaking wet and a little moist. I later replaced it with a capacitive sensor from dfrobot that gave more accurate readings.

Finally, to ensure stable light conditions for the plants, I purchased an artificial light and a timer that turns it on and off at specific hours during the day.

Software

Microcontroller

The microcontroller is implemented using Arduino + PlatformIO, which works OK for hobby projects.

Messaging Layer

This is only needed to push data between the cloud and the devices. You can host it yourself using open source projects like Eclipse Hono and EnMasse, or use a commercial service such as Bosch IoT Hub.

Greenhouse Backend

The cloud side of the greenhouse contains several components for handling event data and displaying graphs:

  • Console API - A GraphQL API for the console to pull event data.
  • Console - A single page application written in Svelte that uses the GraphQL API to retrieve data.
  • Controller - A Go-based server that subscribes to event data and sends commands back to an Eclipse Hono API to water the plants.
  • Hono Sink - A Go-based server that pulls data from an AMQP messaging service and stores it in an event store.
  • AMQP Event Store - A Go-based event store where all event data is stored.

The entire backend is written to run in Kubernetes.

Result

All in all, I’m pretty happy with the result, even though I suck at carpentry:

Mount

The whole thing in action:

Complete

Summary

All in all, this was a really fun project, and the basil plants did enjoy the extra care (and I was able to make lots of pesto!). Unfortunately, during our move to a new home, the entire thing got dismantled and I haven’t set it up again. I am thinking about making a larger v2 of this now that we have more space.





Gitops and EnMasse - Part 2 (Operations)

06 May 2019

With the EnMasse 0.28.0 release, using a Gitops workflow to manage your messaging application is even easier than before. Part 2 is a follow-up to Gitops and EnMasse, with a focus on the operations side of things. I recommend reading that article first to get an overview of gitops and EnMasse in general.

As in the previous article, let’s assume that you have a team in your organization managing the messaging infrastructure using EnMasse on Kubernetes or OpenShift, and that you have two independent developer teams that both want to use messaging in their applications. The following diagram describes the flow:

Gitops

The operations team installs EnMasse and commits the configuration templates that they want to support to git. A CI process then applies the EnMasse configuration to the cluster.

In this article, we will start with an EnMasse release, remove the bits we don’t need, and apply configuration specific to the service we are going to offer. We want to provide the following:

  • Allow development teams to provision brokers of different t-shirt sizes on-demand
  • Allow development teams to manage authentication and authorization policies for their messaging applications
  • Allow operations to monitor the messaging infrastructure and receive alerts if development teams are having issues applying their configuration

Installation

Managing an EnMasse deployment in git can be as simple as unpacking the release bundle and committing the parts that are used for a particular installation. The examples used in this article were tested by applying the resources in the install/bundles/enmasse and install/components/example-roles folders.

The installation guide covers the process of installing EnMasse in detail.

Configuration

Once EnMasse is installed, it needs to be configured. EnMasse is configured by creating one or more instances of the following resources:

  • AuthenticationService - Describes an authentication service instance used to authenticate messaging clients.
  • AddressSpacePlan - Describes the messaging resources available for address spaces using this plan.
  • AddressPlan - Describes the messaging resources consumed by a particular address using this plan.
  • StandardInfraConfig - Describes the Qpid Dispatch Router and ActiveMQ Artemis configuration for the standard address space type.
  • BrokeredInfraConfig - Describes the ActiveMQ Artemis configuration for the brokered address space type.

When created, these resources define the configuration that is available to the messaging tenants. The relationship between all these entities is described in this figure:

EnMasse Entities

The green entities are managed by the operations team, while the blue entities are created by the developer teams.

In this article, we will create a configuration to serve the needs of our developer teams. For evaluation purposes, applying the install/components/example-plans and install/components/example-authservices will give you a full EnMasse setup with various example configurations.

Authentication services

Authentication services are used to authenticate and authorize messaging clients using SASL. EnMasse supports three types of authentication services, supporting different SASL mechanisms:

  • none - Supports any mechanism, but will grant all clients full access.
  • standard - Supports the PLAIN, SCRAM-SHA-1, SCRAM-SHA-256 and SCRAM-SHA-512 mechanisms, as well as OpenShift service account tokens.
  • external - Implement your own authentication service bridge to your own identity management system.

A standard authentication service will allow developer teams to apply authentication and authorization policies for their address spaces:

apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: standard-authservice
spec:
  type: standard

Infrastructure configuration

Configuration such as memory, storage, access policies and other settings that relate to a broker can be specified in the infrastructure configuration.

The BrokeredInfraConfig resource type is used to define the configuration for the infrastructure serving the brokered address space types:

apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: small-broker
spec:
  broker:
    addressFullPolicy: FAIL
    resources:
      memory: 512Mi

We also want to provide a configuration for larger brokers:

apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: large-broker
spec:
  broker:
    addressFullPolicy: FAIL
    resources:
      memory: 2Gi
      storage: 10Gi

The above gives us two different broker configurations that can be referenced by the address space plans.

Plans

Plans control how many resources developer teams can consume. In the brokered address space type, each address space gets a single broker regardless, which makes the relationship between the AddressSpacePlan and the BrokeredInfraConfig seem a bit over-complicated. However, for standard address space types, different plans may apply different resource limits using the same underlying infrastructure config, in which case there is not necessarily a 1:1 mapping between the two.

Address space plans

The address space plan configures the maximum amount of resources that may be in use by an address space. In our case, we define two plans, each referencing a BrokeredInfraConfig for the broker configuration:

---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: small
spec:
  addressSpaceType: brokered
  infraConfigRef: small-broker
  addressPlans:
  - broker-queue
  - broker-topic
  resourceLimits:
    broker: 1.0
---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: large
spec:
  addressSpaceType: brokered
  infraConfigRef: large-broker
  addressPlans:
  - broker-queue
  - broker-topic
  resourceLimits:
    broker: 1.0

Address plans

The address plan configures the amount of resources an address consumes on the broker instances:

---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: broker-queue
spec:
  addressType: queue
  resources:
    broker: 0.001
---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: broker-topic
spec:
  addressType: topic
  resources:
    broker: 0.001

With these plans, each address consumes 0.001 of a broker and the address space is limited to 1.0 broker, so developers may create up to 1000 addresses per address space.

Monitoring

EnMasse provides examples for monitoring using Prometheus, Alertmanager and Grafana. The examples assume that you have deployed the Prometheus Operator for Prometheus and Alertmanager, and the Grafana Operator for setting up Grafana dashboards. An easy way to get both is to install the Application Monitoring Operator, which is covered in the master branch documentation.

This section will focus on the resources operated by the above operators.

Service monitor and scraping

The ServiceMonitor resource allows us to define the endpoints that should be scraped by Prometheus. EnMasse components expose Prometheus metrics on their health port:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: enmasse
  labels:
    monitoring-key: middleware
    app: enmasse
spec:
  selector:
    matchLabels:
      app: enmasse
  endpoints:
  - port: health
  namespaceSelector:
    matchNames:
    - enmasse-infra

Once the Prometheus Operator applies this configuration to Prometheus, all EnMasse components will be scraped for metrics.

Health checks and alerts

The Prometheus Operator allows you to define alerts on metrics by creating a PrometheusRule. In our case, we want alerts to trigger if:

  • An EnMasse component is down (api-server or address-space-controller)
  • An AddressSpace has been in the “not ready” state for more than 5 minutes.
  • An Address has been in the “not ready” state for more than 5 minutes.

The first alert relates to the EnMasse infrastructure itself, whereas the last two relate to the resources created by the developer teams. By alerting on their state, we can learn about infrastructure failures before the development teams raise an issue.


apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    monitoring-key: middleware
    prometheus: k8s
    role: alert-rules
  name: enmasse
spec:
  groups:
  - name: ComponentHealth
    rules:
    - record: address_spaces_ready_total
      expr: sum(address_space_status_ready) by (service,namespace)
    - record: address_spaces_not_ready_total
      expr: sum(address_space_status_not_ready) by (service,namespace)
    - record: component_health
      expr: up{job="address-space-controller"} or on(namespace) (1- absent(up{job="address-space-controller"}) )
    - record: component_health
      expr: up{job="api-server"} or on(namespace) (1- absent(up{job="api-server"}) )

    - alert: ComponentHealth
      annotations: 
        description: "{{ $labels.job }} has been down for over 5 minutes"
        severity: critical
      expr: component_health == 0
      for: 300s
    - alert: AddressSpaceHealth
      annotations:
        description: Address Space(s) have been in a not ready state for over 5 minutes
        value: "{{ $value }}"
        severity: warning
      expr: address_spaces_not_ready_total > 0
      for: 300s
    - alert: AddressHealth
      annotations:
        description: Address(s) have been in a not ready state for over 5 minutes
        value: "{{ $value }}"
        severity: warning
      expr: addresses_not_ready_total > 0
      for: 300s

Pretty things - Grafana dashboards

As a respectable operations team, you must have graphs to look at while drinking coffee, or to point to when your manager asks if everything is running. EnMasse offers a selection of Grafana dashboards that allow you to inspect the health of the system, as well as some graphs from the Qpid Dispatch Router (if used) and ActiveMQ Artemis brokers.

These resources mainly configure the Grafana UI and can be found here.

The system dashboard:

System

The broker dashboard:

Broker

Summary

We have seen how an operations team can manage EnMasse. The configuration is described as Kubernetes custom resources and is tailored specifically to the needs of the developer teams. Finally, we have seen how the operations team can configure monitoring of the messaging infrastructure.

Star the project on github and follow on twitter!





Gitops and EnMasse

08 Apr 2019

With the EnMasse 0.28.0 release, using a Gitops workflow to manage your messaging application is even easier than before. This article explores the service model of EnMasse and how it maps to a Gitops workflow.

Gitops is a way to do continuous delivery where not only the source code of an application but all of its configuration is stored in git. Changes to a production environment involve creating a pull/change request against a git repository. Once the PR has been tested and reviewed, it can be merged. When merged, a CD job is triggered that applies the current state of the git repository to the system. There are variants of this where you run A/B testing and so on; the sky is the limit!

The declarative nature of the “gitops model” fits well with the declarative nature of Kubernetes. You can store your Kubernetes configuration in git, and trigger some process to apply the configuration to a Kubernetes cluster. If you store your application code together with the Kubernetes configuration, you enable development teams to be in full control of their application deployment to any cluster environment.

Traditionally, Kubernetes has mainly been used for stateless services, while stateful services usually run outside the Kubernetes cluster. If a development team wants to use a stateful service, the team normally has to install and manage the service itself or use a cloud provider service. This is changing with the introduction of all sorts of operators for PostgreSQL, Kafka, Elasticsearch etc.

EnMasse is an operator of a stateful messaging service that runs on Kubernetes, with the explicit distinction that the responsibility for operating the messaging service is separate from the tenants consuming it. This makes it easy for an operations team to use the gitops model to manage EnMasse, and for the development teams to use the gitops model to manage their messaging configuration.

Let’s assume that you have a team in your organization managing the messaging infrastructure using EnMasse on Kubernetes or OpenShift, and that you have two independent developer teams that both want to use messaging in their applications. The following diagram describes the flow:

Gitops

The operations team deploys the messaging infrastructure (EnMasse) and commits the configuration templates that they want to support to git. A CI process then applies the EnMasse configuration to the cluster.

Independently of the operations team, the development teams commit their application code along with the messaging resource manifests, such as AddressSpace, Address and MessagingUser (we will get back to what these are), for their application. A CI process builds the applications and applies both the application and the messaging resource manifests.

Operations

For the operations team, managing EnMasse in git can be as simple as unpacking the release bundle and committing the parts that are used for a particular installation. In addition, the messaging configuration and available plans must be configured. A sample of the minimal amount of configuration needed can be found here.

Development teams

Application

First, let’s create a simple messaging application. Writing messaging clients can be a challenging task, as it is asynchronous by nature. The vertx-amqp-client allows you to write simple reactive AMQP 1.0 clients. The following example shows how the application can get all of its configuration from the environment:

    Vertx vertx = Vertx.vertx();
    AmqpClientOptions options = new AmqpClientOptions()
        .setSsl(true)
        .setPemKeyCertOptions(new PemKeyCertOptions()
            .addCertPath(""/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"))
        .setHost(System.getenv("MESSAGING_HOST"))
        .setPort(Integer.parseInt(System.getenv("MESSAGING_PORT")))
        .setUsername("@@serviceaccount@@")
        .setPassword(new String(Files.readAllBytes(Paths.get("/var/run/secrets/kubernetes.io/serviceaccount/token")), StandardCharsets.UTF_8));

    AmqpClient client = AmqpClient.create(vertx, options);
    client.connect(ar -> {
            if (ar.succeeded()) {
                AmqpConnection connection = ar.result();

                connection.createSender("confirmations", done -> {
                    if (done.succeeded()) {
                        AmqpSender sender = done.result();
                        connection.createReceiver("orders"), order -> {
                            // TODO: Process order
                            AmqpMessage confirmation = AmqpMessage.create().withBody("Confirmed!").build();
                            sender.send(confirmation);
                        }, rdone -> {
                            if (rdone.succeeded()) {
                                startPromise.complete();
                            } else {
                                startPromise.fail(rdone.cause());
                            }
                        });
                    } else {
                        startPromise.fail(done.cause());
                    }
                });
            } else {
                startPromise.fail(ar.cause());
            }
        });

For full example clients, see example clients.

Messaging resources

Once your application is written, some configuration is needed to use the messaging resources available on your cluster.

An EnMasse AddressSpace is a group of addresses that share connection endpoints as well as authentication and authorization policies. When creating an AddressSpace you can configure how your messaging endpoints are exposed:

apiVersion: enmasse.io/v1beta1
kind: AddressSpace
metadata:
  name: app
  namespace: team1
spec:
  type: standard
  plan: standard-small
  endpoints:
  - name: messaging
    service: messaging
    cert:
      provider: openshift
    exports:
    - name: messaging-config
      kind: ConfigMap

For more information about address spaces, see the address space documentation.

Messages are sent to and received from an address. An address has a type that determines its semantics, and a plan that determines how many resources are reserved for it. An address can be defined like this:

apiVersion: enmasse.io/v1beta1
kind: Address
metadata:
  name: app.orders
  namespace: team1
spec:
  address: orders
  type: queue
  plan: standard-small-queue

To ensure that only trusted applications are able to send and receive messages on your addresses, a messaging user must be created. For applications running on-cluster, you can authenticate clients using a Kubernetes service account. A serviceaccount user can be defined like this:

apiVersion: user.enmasse.io/v1beta1
kind: MessagingUser
metadata:
  name: myspace.app
  namespace: team1
spec:
  username: system:serviceaccount:team1:default
  authentication:
    type: serviceaccount
  authorization:
  - operations: ["send", "recv"]
    addresses: ["orders"]

With the above 3 resources, you have the basics needed for an application to use the messaging service.

But how does your application get to know the endpoints for its address space? You may have noticed the exports field in the address space definition. Exports are a way to instruct EnMasse that you want a configmap with the hostname, ports and CA certificate to be created in your namespace. To allow EnMasse to create this resource, we also need to define a Role and RoleBinding for it:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: messaging-config
  namespace: team1
rules:
  - apiGroups: [ "" ]
    resources: [ "configmaps" ]
    verbs: [ "create" ]
  - apiGroups: [ "" ]
    resources: [ "configmaps" ]
    resourceNames: [ "messaging-config" ]
    verbs: [ "get", "update", "patch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: messaging-config
  namespace: team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: messaging-config
subjects:
- kind: ServiceAccount
  name: address-space-controller
  namespace: enmasse-infra

Wiring configuration into application

With the messaging configuration in place, we can write the deployment manifest for our application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:latest
          env:
            - name: MESSAGING_HOST
              valueFrom:
                configMapKeyRef:
                  name: messaging-config
                  key: service.host
            - name: MESSAGING_PORT
              valueFrom:
                configMapKeyRef:
                  name: messaging-config
                  key: service.port.amqps

As you can see, the values from the configmap are mapped as environment variables into our application.

Summary

We have seen how an operations team and a set of development teams can manage messaging as Kubernetes manifests. This allows your whole organisation to follow the gitops model when deploying your applications using messaging on Kubernetes and OpenShift.

Star the project on github and follow on twitter!





Building containers on Travis CI using Podman

31 Jan 2019

Since Podman and Buildah appeared on my radar, I’ve been wanting to try replacing docker. Podman is a replacement for docker, whereas buildah is a replacement for docker build. Although docker works OK, I’ve seen various issues with different versions of docker not working with Kubernetes and OpenShift, and the local docker daemon sometimes becomes unresponsive and causes build failures in the EnMasse CI. Since podman and buildah do not use a local daemon for building images, they will work without root privileges.

The main difference between podman and buildah from a user perspective is that podman has a wider feature set than buildah, and the podman CLI is almost 1:1 with docker’s. Podman can also run containers and generate Kubernetes manifests, whereas buildah focuses solely on building container images.

Using podman on Travis CI is somewhat of a challenge, for a few reasons:

  • Podman is not available in the default Ubuntu repositories, and a newer version of Ubuntu than the default Travis one is needed
  • Podman assumes a Fedora/CentOS/RHEL container configuration (/etc/containers/registries.conf)

So to replace docker with podman, ensure you have the following set in your .travis.yml:

dist: xenial
before_install:
- sudo add-apt-repository -y ppa:projectatomic/ppa
- sudo apt-get update -qq
- sudo apt-get install -qq -y software-properties-common podman
- sudo mkdir -p /etc/containers
- sudo sh -c 'echo -e "[registries.search]\nregistries = [\"docker.io\"]" > /etc/containers/registries.conf'

The last two lines are necessary for podman to be able to fetch images from Docker Hub.

In your build scripts, you can replace docker with podman, and thats it!