
Archive for June, 2020

This Month in RabbitMQ, May 2020 Recap

Tuesday, June 30th, 2020

This month, Jack Vanlightly continues his blog series on Quorum Queues in RabbitMQ. Also, be sure to watch the replay of his related webinar.

Finally, Episode 5 of TGI RabbitMQ is out -- Gerhard Lazu walks us through how to run RabbitMQ on Kubernetes. Don’t miss!

Project Updates

  • RabbitMQ 3.8.4 was released in late May, the first release to feature Erlang 23 compatibility. Three weeks later 3.8.5 followed with complete Erlang 23 support.
  • The community-maintained RabbitMQ Docker image adopted Erlang 23 less than two weeks after its release
  • rabbit-hole, the most popular Go RabbitMQ HTTP API client, has reached version 2.2.0
  • Merged an impressive pull request from GitHub user @joseliber that fixed the generation of password-encrypted certificates in the tls-gen project. This project is used by RabbitMQ, its client libraries, and other projects to easily generate self-signed certificates.

(more…)

How quorum queues deliver locally while still offering ordering guarantees

Tuesday, June 23rd, 2020

The team was recently asked whether and how quorum queues can offer the same message ordering guarantees as classic queues, given that they deliver messages from a local queue replica (leader or follower) when possible. Mirrored queues always deliver from the master (the leader), so delivering from any queue replica sounds like it could impact those guarantees.

That is the subject of this post. Be warned, this post is a technical deep dive for the curious and for distributed systems enthusiasts. We’ll take a look at how quorum queues can deliver messages from any queue replica, leader or follower, without additional coordination beyond Raft, while still maintaining message ordering guarantees.

(more…)

Cluster Sizing Case Study – Quorum Queues Part 2

Thursday, June 18th, 2020

In the last post we started a sizing analysis of our workload using quorum queues. We focused on the happy scenario where consumers keep up, meaning there are no queue backlogs and all brokers in the cluster are operating normally. By running a series of benchmarks modelling our workload at different intensities, we identified the top 5 cluster size and storage volume combinations in terms of cost per 1000 msg/s per month.

  1. Cluster: 7 nodes, 8 vCPUs (c5.2xlarge), gp2 SSD. Cost: $54
  2. Cluster: 9 nodes, 8 vCPUs (c5.2xlarge), gp2 SSD. Cost: $69
  3. Cluster: 5 nodes, 8 vCPUs (c5.2xlarge), st1 HDD. Cost: $93
  4. Cluster: 5 nodes, 16 vCPUs (c5.4xlarge), gp2 SSD. Cost: $98
  5. Cluster: 7 nodes, 16 vCPUs (c5.4xlarge), gp2 SSD. Cost: $107
 

There are more tests to run to ensure these clusters can handle broker failures and the large backlogs that can accumulate during outages or system slowdowns.
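To make the metric concrete, here is a minimal sketch of how a cost per 1000 msg/s per month figure can be computed; the instance price, storage price and sustained throughput in the example are placeholder values, not the figures behind the ranking above.

    # Hypothetical illustration of the "cost per 1000 msg/s per month" metric.
    # All prices and the throughput figure are placeholders, not the case study's data.

    HOURS_PER_MONTH = 730  # approximate hours in a month

    def cost_per_1000_msg_s(node_count: int,
                            instance_cost_per_hour: float,
                            storage_cost_per_node_per_month: float,
                            sustained_msg_per_s: float) -> float:
        """Monthly cluster cost divided by sustained throughput in units of 1000 msg/s."""
        monthly_cost = node_count * (instance_cost_per_hour * HOURS_PER_MONTH
                                     + storage_cost_per_node_per_month)
        return monthly_cost / (sustained_msg_per_s / 1000.0)

    # Example with made-up numbers: a 7-node cluster sustaining 30,000 msg/s.
    print(f"${cost_per_1000_msg_s(7, 0.34, 50.0, 30000):.2f} per 1000 msg/s per month")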

All quorum queues are declared with the following properties:

  • x-quorum-initial-group-size=3
  • x-max-in-memory-length=0
 

The x-max-in-memory-length property forces the quorum queue to remove message bodies from memory as soon as it is safe to do so. You can set it to a higher limit; zero is the most aggressive setting, designed to avoid large memory growth at the cost of more disk reads when consumers do not keep up. Without this property, message bodies are kept in memory at all times, which can push memory usage to the point where memory alarms fire and severely throttle the publish rate - something we want to avoid in this workload case study.
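For reference, here is a minimal sketch of declaring a queue with these properties using the Python client (pika); the host, queue name and connection details are placeholders.

    # Minimal sketch using the pika Python client; host and queue name are placeholders.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Quorum queues must be durable; the x-* arguments mirror the properties above.
    channel.queue_declare(
        queue="quorum-queue-example",
        durable=True,
        arguments={
            "x-queue-type": "quorum",
            "x-quorum-initial-group-size": 3,
            "x-max-in-memory-length": 0,
        },
    )

    connection.close()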

(more…)

Cluster Sizing Case Study – Quorum Queues Part 1

Thursday, June 18th, 2020

In the first post in this sizing series we covered the workload, the tests, and the cluster and storage volume configurations on AWS EC2. In this post we’ll run a sizing analysis with quorum queues. We also ran a sizing analysis on mirrored queues.

In this post we'll run the increasing-intensity tests, which measure how our candidate cluster sizes perform at varying publish rates under ideal conditions. In the next post we'll run resiliency tests that measure whether our clusters can handle our target peak load under adverse conditions.

All quorum queues are declared with the following properties:

  • x-quorum-initial-group-size=3 (replication factor)
  • x-max-in-memory-length=0
 

The x-max-in-memory-length property forces the quorum queue to remove message bodies from memory as soon as it is safe to do so. You can set it to a higher limit; zero is the most aggressive setting, designed to avoid large memory growth at the cost of more disk reads when consumers do not keep up. Without this property, message bodies are kept in memory at all times, which can push memory usage to the point where memory alarms fire and severely throttle the publish rate - something we want to avoid in this workload case study.
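One way to sanity-check the effect of x-max-in-memory-length during these tests is to watch the queue's in-memory counters via the management HTTP API. Here is a minimal sketch using Python's requests library; the host, credentials and queue name are placeholders, and the exact counters reported can vary by RabbitMQ version.

    # Minimal sketch: inspect a queue's in-memory message counters via the
    # management HTTP API. Host, credentials and queue name are placeholders.
    import requests

    MGMT = "http://localhost:15672/api"
    AUTH = ("guest", "guest")

    resp = requests.get(f"{MGMT}/queues/%2F/quorum-queue-example", auth=AUTH)
    resp.raise_for_status()
    queue = resp.json()

    print("total messages:   ", queue.get("messages"))
    print("messages in RAM:  ", queue.get("messages_ram"))
    print("msg bytes in RAM: ", queue.get("message_bytes_ram"))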

(more…)

Cluster Sizing Case Study – Mirrored Queues Part 2

Thursday, June 18th, 2020

In the last post we started a sizing analysis of our workload using mirrored queues. We focused on the happy scenario where consumers keep up, meaning there are no queue backlogs and all brokers in the cluster are operating normally. By running a series of benchmarks modelling our workload at different intensities, we identified the top 5 cluster size and storage volume combinations in terms of cost per 1000 msg/s per month.

  1. Cluster: 5 nodes, 8 vCPUs, gp2 SSD. Cost: $58
  2. Cluster: 7 nodes, 8 vCPUs, gp2 SSD. Cost: $81
  3. Cluster: 5 nodes, 8 vCPUs, st1 HDD. Cost: $93
  4. Cluster: 5 nodes, 16 vCPUs, gp2 SSD. Cost: $98
  5. Cluster: 9 nodes, 8 vCPUs, gp2 SSD. Cost: $104
 

There are more tests to run to ensure these clusters can handle broker failures and the large backlogs that can accumulate during outages or system slowdowns.

(more…)

Cluster Sizing Case Study – Mirrored Queues Part 1

Thursday, June 18th, 2020

In the first post in this sizing series we covered the workload and the cluster and storage volume configurations on AWS EC2. In this post we’ll run a sizing analysis with mirrored queues.

The first phase of our sizing analysis will be assessing which workload intensities each of our cluster and storage volume combinations can handle comfortably and which are too much.

All tests use the following policy:

  • ha-mode: exactly
  • ha-params: 2
  • ha-sync-mode: manual
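For reference, here is a minimal sketch of applying such a policy via the management HTTP API using Python's requests library; the policy name, queue name pattern, vhost and credentials are illustrative placeholders.

    # Minimal sketch: apply a classic queue mirroring policy via the management HTTP API.
    # Policy name, queue name pattern, vhost and credentials are placeholders.
    import requests

    MGMT = "http://localhost:15672/api"
    AUTH = ("guest", "guest")

    policy = {
        "pattern": "^cs\\.",      # hypothetical: apply to queues named cs.*
        "apply-to": "queues",
        "definition": {
            "ha-mode": "exactly",
            "ha-params": 2,
            "ha-sync-mode": "manual",
        },
    }

    resp = requests.put(f"{MGMT}/policies/%2F/cs-mirroring", json=policy, auth=AUTH)
    resp.raise_for_status()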
(more…)

Cluster Sizing and Other Considerations

Thursday, June 18th, 2020

This is the start of a short series where we look at sizing your RabbitMQ clusters. The actual sizing wholly depends on your hardware and workload, so rather than tell you how many CPUs and how much RAM you should provision, we’ll create some general guidelines and use a case study to show what things you should consider.

Common Questions

What is the best combination of VM size and VM count for your RabbitMQ cluster? Should you scale up and go for three nodes with 32 CPU threads? Or should you scale out and go for nine nodes with 8 CPU threads each? What type of disk offers the best value for money? How much memory do you need? Which hardware configuration is best for throughput, latency, and cost of ownership?

First of all, there is no single answer. If you run in the cloud there are fewer options, but if you run on-premises, the sheer number of virtualisation, storage and networking products and configurations out there makes this an impossible question to answer definitively.

While there is no single sizing guide with hard numbers, we can go through a sizing analysis and hopefully that will help you with your own sizing.

(more…)

How to run benchmarks

Thursday, June 4th, 2020

There can be many reasons to do benchmarking:

  • Sizing and capacity planning
  • Product assessment (can RabbitMQ handle my load?)
  • Discovering the best configuration for your workload

In this post we’ll take a look at the various options for running RabbitMQ benchmarks. But before we do, you’ll need a way to see the results and to look at system metrics.
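As a concrete starting point, many of those options revolve around the RabbitMQ PerfTest tool. A minimal invocation might look like the sketch below; the queue name, message size and duration are arbitrary placeholders to adapt to your own workload.

    # Minimal PerfTest sketch: 1 publisher, 2 consumers, 1000-byte messages,
    # auto-ack, 60-second run. Queue name and values are placeholders.
    bin/runjava com.rabbitmq.perf.PerfTest \
      -x 1 -y 2 \
      -u "benchmark-test-1" \
      -s 1000 \
      -z 60 \
      -a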

(more…)

This Month in RabbitMQ, April 2020 Recap

Monday, June 1st, 2020

A Webinar on Quorum Queues

Before we start with RabbitMQ project and community updates from April, we have a webinar to announce! Jack Vanlightly, a RabbitMQ core team member, will present on High Availability and Data Safety in Messaging on June 11th, 2020.

In this webinar, Jack Vanlightly will explain quorum queues, a new replicated queue type in RabbitMQ. Quorum queues were introduced in RabbitMQ 3.8 with a focus on data safety and efficient, predictable recovery from node failures. Jack will cover and contrast the design of quorum and classic mirrored queues.

After this webinar, you'll understand:

  • Why quorum queues offer better data safety than mirrored queues
  • How and why server resource usage changes when switching to quorum queues from mirrored queues
  • Some best practices when using quorum queues

Project Updates

Community Writings

Learn More

Ready to learn more? Check out these upcoming opportunities to learn more about RabbitMQ.