Messaging as a Service

10 Aug 2016

Inspired by a great blog post by Jakub Scholz on “Scalable AMQP infrastructure using Kubernetes and Apache Qpid”, I wanted to write a post about the ongoing effort to build Messaging-as-a-Service at Red Hat. Messaging components such as the Apache Qpid Dispatch Router, ActiveMQ Artemis and Qpidd scale well individually, but a large deployment can become unwieldy to manage. As Scholz demonstrates, creating such a cluster using Kubernetes directly requires a lot of manual setup.

The EnMasse project was created to provide the tools and services required for deploying and running a messaging service on OpenShift. Running on OpenShift means you can run EnMasse either on your own instance or in the cloud. You can also run EnMasse on OpenShift Origin, the upstream community project. The long-term goal of this project is to build a messaging service with the following properties:

  • Different communication patterns like request-response, pub-sub and events
  • Store-and-forward semantics
  • Support for a variety of different protocols like AMQP, MQTT, HTTP(1.1 & 2), CoAP and STOMP
  • Multi-tenancy
  • Scalability
  • Elasticity without disruption

The rest of this post gives an initial overview of EnMasse. EnMasse is still under development, so many of the features mentioned are not yet implemented or are still work in progress.

EnMasse can be configured with a list of addresses. Each address can have one of four semantics in EnMasse:

  • Anycast: Messages go from a client, through the router network, to another client connected to the router network on the same address.
  • Broadcast: Messages go from a client, through the router network, to all clients connected to the router network on the same address.
  • Queue: Messages go from a client to a queue. Another client can read the message from the queue at a later point.
  • Topic: Also known as pub/sub. Messages go from a publisher client to multiple clients subscribed to the same address.

EnMasse is composed of the router network, the broker clusters, and the cluster admin components. The router network and the broker clusters handle messages, while the cluster admin components handle the router and broker configuration.

[Figure: EnMasse overview]

EnMasse has two configuration files, address and flavor, stored as JSON in OpenShift config maps. The flavor configuration contains the supported variants of broker and router configurations. The address configuration contains the addresses the cluster should be able to handle and their desired semantics, such as store-and-forward, multicast and the flavor type. The intention is that the cluster administrator is responsible for the available flavors, while the developer only has to care about which addresses to configure.
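
As an illustration only, an address configuration might look something like the following. The schema is still evolving, so the field values map to the semantics described above, but the exact names and flavors are made up here:

{
  "mytopic": { "store_and_forward": true, "multicast": true, "flavor": "vanilla-topic" },
  "myqueue": { "store_and_forward": true, "multicast": false, "flavor": "vanilla-queue" },
  "myanycast": { "store_and_forward": false, "multicast": false }
}

In this sketch, "mytopic" and "myqueue" would be backed by broker clusters of the given flavors, while "myanycast" would be routed directly between connected clients without store-and-forward.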

The main components of EnMasse are the router, the broker and the cluster administration components.

Router

EnMasse uses the Apache Qpid Dispatch Router to scale the service in terms of the number of connections it can handle as well as the throughput. The router also hides the brokers from the client so that the brokers themselves may be scaled, moved, upgraded and changed without the client noticing.

Broker

EnMasse creates broker clusters for queue and topic addresses. At present, EnMasse only supports ActiveMQ Artemis (or Red Hat JBoss A-MQ) as the message broker, though other brokers might be supported in the future. The brokers can also be scaled in the same way as routers, and EnMasse will ensure the cluster is configured correctly.

Cluster administration

EnMasse contains several cluster administration components that manage the router and broker configuration:

  • The configuration service provides a way for the router agent and the subscription service to subscribe to the list of all addresses configured in EnMasse.
  • The router agent is responsible for configuring the network of routers based on the address configuration.
  • The subscription service is responsible for managing durable subscriptions so that a reconnecting client will transparently connect to the same broker.
  • The storage controller is responsible for creating, reconfiguring and deleting broker clusters based on the address and flavor configuration stored as OpenShift config maps. This component can be omitted, but it eases maintenance when you want to configure multiple addresses.

I will try to write more articles about EnMasse as we progress with new features and improvements. For now, you can easily get started by following the GitHub example. Do not hesitate to report bugs; contributions are always welcome.





Performance mindset

25 Apr 2014

You’ve all heard it, and you all know it: premature optimization is the root of all evil. But what does that mean? When is an optimization premature? I’ve thought about this dilemma many times at work, where I see both myself and coworkers judging it by different standards.

Some programmers really don’t care about performance at all. They write code that looks perfect and fix performance issues as they come. Others think about performance all the time, happily trading good software engineering principles for improved performance. I think both of these attitudes are wrong. Performance is about simple and ‘natural’ design.

Let me give one recent example from work. A few months back, we rewrote parts of our software to add new features. Most importantly, this involved a new protocol between a client and a server. After a while, users of our software complained that a server process was consuming an unreasonable amount of memory. After profiling the server in question, we found that they were right: there was no need for that process to use that much memory. While profiling, we found a lot of things consuming an unnecessary amount of memory, but the most important was that a large piece of data was copied many times before being sent over the wire.

Why did we write software like that? Well, while writing the new protocol we thought it didn’t matter. The data was typically not that big, and the server could handle the load fairly well according to benchmarks. It turned out that the user was pushing much more data through that server than we had anticipated. Luckily, increasing the JVM heap size kept things running, so it was no catastrophe.

We set off to fix it, and afterwards the amount of ‘garbage’ created was much lower and memory usage improved significantly. But to get there, we had to introduce a new version of our protocol and refactor code paths to be able to pass data through the stack without copying. It was no more than five days of work, but those days could probably have been spent doing more useful things.

But surely the code must have become horrible, full of dirty hacks and as unmaintainable as OpenSSL? Actually, it didn’t. The refactorings and protocol changes did not worsen the state of our software. In fact, the design became better and more understandable, because we had to think about how to represent the data in a way that reduced the amount of copying. This gave us an abstraction that was easier to work with.
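
Our code was on the JVM, but the idea translates to any language. Here is a minimal Haskell sketch of it (illustrative only, not our actual code): represent a message as its constituent buffers until the moment it is written, so each layer passes references along instead of reassembling the bytes.

-- Illustrative sketch: framing a message with and without copying.
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as C

-- Copying approach: every layer that adds a header reassembles the
-- whole message, copying the payload again each time.
addHeaderCopy :: BS.ByteString -> BS.ByteString -> BS.ByteString
addHeaderCopy header payload = BS.concat [header, payload]

-- Zero-copy approach: a message is a list of buffers; adding a header
-- just prepends a reference, and a vectored write (or a lazy
-- ByteString) can send the chunks without ever concatenating them.
type Message = [BS.ByteString]

addHeader :: BS.ByteString -> Message -> Message
addHeader header chunks = header : chunks

main :: IO ()
main = print (addHeader (C.pack "header") [C.pack "payload"])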

Premature optimizations are really evil if they make your software harder to read and debug. Optimizations at the design level, on the other hand, can save you a lot of trouble later on. I am under the impression that a lot of software is written without a ‘performance mindset’, and that a lot of manpower is wasted because of it. If we had spent one day properly thinking through our protocol and software design in terms of data flow, we could have spent four days at the beach drinking beer.

I like beer.





Haskell and continuous delivery on Debian

15 Sep 2013

Having run my toy web service wishsys for a few months, I thought it would be nice to set up a continuous delivery/deployment pipeline for it, so that I could develop a new feature and push it into production as quickly as possible (if it passes all tests). I thought writing a small article about this would be nice as well, as I found few resources on doing CD with Haskell.

The wishsys code is written in Haskell and uses the Yesod web framework. Update: I noticed a comment on reddit mentioning keter, which is essentially what I should have used instead of Debian packages since I’m using Yesod. This guide should still be relevant for the CI part, though.

Choosing CI system

After doing a little research, I ended up with two alternatives for CI: Jenkins and Travis.

I have some experience with Jenkins from work, and since it’s very generic, I thought it would be easy to build a Haskell project with it. Since the wishsys code is hosted at GitHub, I was tempted to try Travis, which integrates well with GitHub. I decided to go with Jenkins instead, mainly because I had a VPS to run it on, and because I didn’t need the great Heroku support, as I would be deploying to the same VPS. There is a good introduction to CI for Haskell and Yesod at yesodweb.

Git and branches

Having chosen a CI system, I needed to structure the git repository in a way that fits my workflow. Since I wanted some control over which changes are pushed out to production, I created a stable branch in the repository, from which the production binaries are compiled. All development goes into other branches. For CI to have any meaning, all of these branches should be built continuously so I can be sure that my tests are stable and that they pass. The plan was to create a build job for every branch that builds and tests that particular branch.

Setting up Jenkins

Setting up Jenkins is very easy on Debian: you just add the Jenkins Debian repository and install the packages. Having done that before, I went straight to creating a new project for wishsys. Jenkins has a lot of plugins, and since this is a GitHub project, I installed the GitHub plugin for Jenkins as well.

Having only the stable branch, I created one job called wishsys-build-stable. To build the Haskell code, I added an entry under “Execute shell”, which runs the following commands when building:

$CABAL sandbox init
$CABAL install --enable-tests

$CABAL is parameterized to /var/lib/jenkins/.cabal/bin/cabal, since I needed a newer cabal version to get the sandbox functionality. I also configured the job to trigger on every commit, though I will probably change it to build as often as possible to ensure there are no unstable tests.

Creating a Debian package

The next step was to create Debian packages of the software, so I could easily install it on my VPS (manually or automatically). Since this was unfamiliar territory, it took some time and reading to grasp the package layout, but I found an intro guide for creating Debian packages that helped me. In addition, I added a Makefile that invokes all of the cabal commands to build my project and do what the Debian packaging tools expect.
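
The Makefile boils down to a few targets for the packaging tools to call. This is a simplified sketch rather than the exact file, so treat the targets and flags as illustrative:

# Sketch: targets invoked during the Debian package build.
CABAL ?= cabal

build:
	$(CABAL) configure --prefix=/usr
	$(CABAL) build

install: build
	$(CABAL) copy --destdir=$(DESTDIR)

clean:
	$(CABAL) clean

Here, cabal copy places the built files under the staging directory that the packaging tools pass in via DESTDIR, which is what they expect from an install target.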

Creating Debian packages in Jenkins

I then started looking for a Jenkins plugin to help me build the Debian package, and found the following plugins:

  • debian-package-builder
  • jenkins-debian-glue

debian-package-builder seemed to support automatic version manipulation only for subversion repositories. Since wishsys uses git, I went with jenkins-debian-glue. Creating a Debian package is complicated in itself, and I initially spent some time doing by hand a lot of what jenkins-debian-glue does automatically (creating a tarball of the repo and running git-import-orig and git-buildpackage).

I used this guide to set up the Jenkins jobs. I ended up with a wishsys-debian-source job for building the source package, and a wishsys-debian-binaries job for creating the binary packages.

The jobs are run in the following order: wishsys-build-stable -> wishsys-debian-source -> wishsys-debian-binaries.

The wishsys-build-stable job is run for each commit, and reuses the git checkout between builds to reduce build times. The wishsys-debian-source job simply creates a tarball of the git repo, and forwards the resulting tarball to the wishsys-debian-binaries job, which does a full clean build of the software before creating the binary package itself.

Summary

Setting up a CD pipeline is a great way to get your features tested and into production in minimal time. Though this setup is somewhat Debian-specific, the general pattern should be reusable. In the future, I would like to avoid building the project twice (once in wishsys-build-stable, and once in wishsys-debian-binaries), but the build times are currently not an issue. Another improvement would be to get HUnit and QuickCheck test reports displayed in Jenkins.

The set of files necessary for the Debian build is available on GitHub.





Wishing with Yesod

17 Jun 2013

Today, I launched my wishsys service, which is just a simple service for creating wish lists with separate access for owners and guests. The original use case was my own wedding, for which I created an even simpler version using Snap. Snap worked great, but I had some hassle getting the authentication mechanisms right.

After the wedding, one of the guests wanted to use the same system for their wedding, so I thought I might as well create a more generic wish list service. This time, I went with Yesod, as it seemed to provide more of a platform than a framework.

At Yahoo!, I work on a search platform, and there are a few things I expect from a platform. It should provide

  • Storage
  • Access control
  • Higher level APIs for request handling
  • Internationalization
  • Good APIs
  • Good documentation
  • Ease of deployment
  • Test framework

Yesod did not let me down. It provides a real book, and not just a bunch of outdated wiki pages. Its solution for storage is excellent: Persistent allows me to write the definition of my data structures in a single place, and it automatically generates a database schema and Haskell types from it. I chose PostgreSQL as my Persistent backend, and by using the scaffold code, getting it working was trivial. Creating request handlers was so easy, I won’t even tell you how I did it.
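
As a sketch of what that looks like (a hypothetical entity, not the actual wishsys schema), a Persistent model definition is a single quasi-quoted block:

{-# LANGUAGE EmptyDataDecls, FlexibleContexts, GADTs,
             GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             OverloadedStrings, QuasiQuotes, TemplateHaskell,
             TypeFamilies #-}
import Data.Text (Text)
import Database.Persist.TH

-- From this one block, Persistent generates the SQL table, a Wish
-- record type with typed accessors, and a migration. The fields are
-- illustrative only.
share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Wish
    name Text
    quantity Int
    claimed Bool
    deriving Show
|]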

My biggest issue with Yesod was authentication, since I had somewhat special requirements: I wanted two kinds of users with different access levels (admin and guest). I also missed a method in the authentication system for requiring that a user be logged in, regardless of which authentication backend is used. I ended up looking at what HashDB did internally and just copying that (if there is a better way, please let me know).
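
To illustrate the kind of thing I was after, access levels can be wired into Yesod’s isAuthorized hook. This is a hypothetical sketch (the routes, the Role type and the requireRole helper are all made up for illustration), not the HashDB-based code I ended up with:

data Role = Guest | Admin deriving (Eq, Ord, Show)

-- Yesod consults isAuthorized before running a handler; the routes
-- and the userRole field are hypothetical.
instance Yesod App where
    isAuthorized (AdminR _)    _ = requireRole Admin
    isAuthorized (WishListR _) _ = requireRole Guest
    isAuthorized _             _ = return Authorized

-- Hypothetical helper: require a logged-in user whose role is at
-- least the given one, independent of the authentication backend.
requireRole :: Role -> Handler AuthResult
requireRole role = do
    user <- maybeAuth
    return $ case user of
        Nothing -> AuthenticationRequired
        Just (Entity _ u)
            | userRole u >= role -> Authorized
            | otherwise          -> Unauthorized "Insufficient access"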

I used the Hamlet template system to write HTML with minimal Haskell clutter. Forms are a pleasure to work with, because I don’t have to repeat myself: I create the form in one place, and can then use it both for generating correct HTML and for parsing the POST request.
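
A sketch of how that single definition looks with yesod-form, reusing the hypothetical Wish entity from above (Form is the alias from the Yesod scaffolding):

-- One applicative definition drives both rendering and parsing;
-- the field labels need OverloadedStrings and are illustrative.
wishForm :: Form Wish
wishForm = renderDivs $ Wish
    <$> areq textField "Name" Nothing
    <*> areq intField "Quantity" Nothing
    <*> pure False

In a handler, runFormPost wishForm renders the widget on GET and parses and validates the submission on POST, so the field list lives in exactly one place.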

I just followed the deployment chapter when deploying, and then the service was suddenly live. Even more noteworthy is the development server, which automatically recompiles the app whenever something changes. Great for local testing!

My other main issue with Yesod was understanding its compilation error messages. But once I got things working, Yesod was a great experience. It is one of the few open source projects I’ve seen that understands what it means to be a platform, and it thinks of your needs before you realize them. Kudos!

By the way, the wishsys source code can be found on GitHub.





Learning Haskell through Project Euler

09 May 2013

I have tried using Haskell for various smaller projects, such as wishsys and a game that I never got very far into making. But learning a new programming language through hobby projects only works as long as the project is small and contained. For my part, most hobby projects start out with great ideas and grand designs, but end up as a mess because I am unfamiliar with the programming language.

When using a new programming language, time is spent learning the language rather than developing the project. This in turn means that I end up learning the bare minimum to get the job done, which defeats the purpose of using a project to learn a new language. If the goal is to finish the project, you should use something you already know well and feel productive with. If the goal is to learn a programming language, you should start out with a small project instead.

For me, Project Euler is a great way to learn Haskell, because it contains a lot of problems that Haskell (and functional languages in general) is the perfect tool for solving. The projects I mentioned above involve databases, multiple threads and other scary real-world stuff, but I just wanted to learn Haskell. Better yet, once you have solved a problem, chances are you can find someone with an even more elegant solution written in the same programming language you are using. A great way to learn!
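
To give a taste of the kind of problem that fits Haskell so well, here is the first Project Euler problem, which asks for the sum of all multiples of 3 or 5 below 1000:

-- Project Euler problem 1.
main :: IO ()
main = print (sum [x | x <- [1..999], x `mod` 3 == 0 || x `mod` 5 == 0])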