
rabbitmq + node.js = rabbit.js

November 12th, 2010 by Michael

For those who have been away from the internets, node.js is an evented JavaScript engine based on Google's V8. Because it is essentially one big, efficient event loop, it's a natural fit for programs that shuffle data backwards and forwards with little state in-between. And it's fun to program, an opinion lots of people apparently share, because loads of libraries have cropped up around it.

Among the more impressive of these libraries is Socket.IO. One can combine Socket.IO with node.js's built-in web server to make a WebSocket server, with a socket abstraction for browsers that degrades to XHR tricks when WebSockets aren't available. (I would be happy to believe that node.js and Socket.IO were made for us by a benevolent and foresightful precursor race; but of course, they were made by hard-working clever people. Thank you, people!)

Once one has a socket abstraction in the browser, a whole world opens up. Specifically, for our purposes, a whole world of messaging. Since node.js has an AMQP client, we can easily hook it up with RabbitMQ; not only to bridge to other protocols and back-end systems, but also to provide messaging between browsers, and between application servers, and so on.

Following on from the work we've been doing with Martin Sustrik of ZeroMQ, I decided to make a very simple protocol for use over the browser sockets, reflecting the messaging patterns used in ZeroMQ (and thereby in RMQ-0MQ): pub/sub, request/reply, and push/pull (or pipeline). I wrote a node.js library that implements these patterns using RabbitMQ's routing and buffering; the bridging then comes for free, since RabbitMQ has a bunch of protocol adapters and clients for various languages.

A brief explanation of the messaging patterns:

Publish/Subscribe is for the situation in which a published message should be delivered to multiple subscribers. In the general case, various kinds of routing can be used to filter the messages for each subscriber. This might be used to broadcast notifications from a backend system to users' browsers, for example.

Request/Reply is for RPC over messaging; requests are distributed round-robin among worker processes, and replies are routed back to the requesting socket. This might be used by browsers to query back-end services; or even for browsers to query each other.

Pipeline is for chaining together processes. Messages are pushed to worker processes in a round-robin, which themselves may push to another stage of processing. This might be used to co-ordinate a workflow among sets of users (or indeed individuals).
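
To make the first of these concrete, here is a minimal sketch of publish/subscribe over RabbitMQ using a recent version of the Python pika client rather than rabbit.js itself; the exchange name "updates" and the use of a fanout exchange are illustrative assumptions, not part of rabbit.js.

import pika

# Connect to a local RabbitMQ broker with default credentials (assumed).
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# A fanout exchange copies every published message to all bound queues,
# which is the essence of the pub/sub pattern.
channel.exchange_declare(exchange='updates', exchange_type='fanout')

# Each subscriber gets its own exclusive, server-named queue bound to the exchange;
# a consumer would then basic_consume from that queue.
queue = channel.queue_declare(queue='', exclusive=True).method.queue
channel.queue_bind(queue=queue, exchange='updates')

# The publisher just sends to the exchange; it neither knows nor cares
# how many subscribers are currently bound.
channel.basic_publish(exchange='updates', routing_key='', body=b'user signed up')

connection.close()

Request/reply and pipeline map similarly onto a shared queue: several workers consuming from one queue receive messages round-robin, and replies travel back over a per-requester queue.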

Having duly dispensed with ado, here is rabbit.js.

All it needs is a bare-bones RabbitMQ and node.js installation, plus the node-amqp and Socket.IO libraries. Instructions and the locations of these things are in the README. (Do note that you need my fork of node-amqp.)

It also includes a tiny message socket server: a node.js server that accepts socket connections and speaks length-prefixed messages. Since it all goes through RabbitMQ, you can talk over such a socket to the browsers hooked up with Socket.IO. You can also use the in-process pipe server from code running in node.js itself.
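
To illustrate what length-prefixed framing means here, the following is a rough Python sketch; the 4-byte big-endian prefix is an assumption for illustration only, so check the rabbit.js README for the actual wire format.

import struct

def send_message(sock, payload):
    # Prefix the payload with its length as a 4-byte big-endian integer.
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exactly(sock, n):
    # TCP gives us a byte stream, so keep reading until we have n bytes.
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError('socket closed mid-message')
        data += chunk
    return data

def recv_message(sock):
    (length,) = struct.unpack('>I', recv_exactly(sock, 4))
    return recv_exactly(sock, length)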

All in all, I am surprised how much I could get done with only a handful of lines of code and some technologies that each hit a sweet spot -- node.js for fun network server programming, Socket.IO for magical browser sockets, and RabbitMQ for the no-tears messaging.

Exchange to Exchange bindings

October 19th, 2010 by Matthew Sackman

Arriving in RabbitMQ 2.1.1 is support for bindings between exchanges. This is an extension of the AMQP specification, and making use of this feature will (currently) result in your application only functioning with RabbitMQ, and not the myriad other AMQP 0-9-1 broker implementations out there. However, this extension brings a massive increase in the expressivity and flexibility of routing topologies, and solves some scalability issues at the same time.

Normal bindings allow exchanges to be bound to queues: messages published to an exchange will, provided the various criteria of the exchange and its bindings are met, pass through the various bindings and be appended to the queue at the end of each binding. That's fine for a lot of use cases, but there's very little flexibility there: it's always just one hop, with the message being published to one exchange, with one set of bindings, and consequently one possible set of destinations. If you need something more flexible then you'd have to resort to publishing the same message multiple times. With exchange-to-exchange bindings, a message published once can flow through any number of exchanges, with different types, and vastly more sophisticated routing topologies than previously possible.
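
As a quick hedged sketch of the idea, here is how it looks from the Python pika client, which exposes this RabbitMQ extension as exchange_bind; the exchange and key names below are made up for illustration.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='upstream', exchange_type='topic')
channel.exchange_declare(exchange='downstream', exchange_type='fanout')

# RabbitMQ extension: bind an exchange to an exchange. Messages published to
# 'upstream' whose routing key matches 'orders.#' flow on into 'downstream'.
channel.exchange_bind(destination='downstream', source='upstream', routing_key='orders.#')

queue = channel.queue_declare(queue='', exclusive=True).method.queue
channel.queue_bind(queue=queue, exchange='downstream')

# Published once, routed through two exchanges, delivered to the queue.
channel.basic_publish(exchange='upstream', routing_key='orders.created', body=b'hello')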


RabbitMQ/0MQ bridge

October 18th, 2010 by Martin Sústrik

Recently, Michael Bridgen and I implemented a bridge to connect the RabbitMQ broker with applications using 0MQ for messaging.

Here it is:

http://github.com/rabbitmq/rmq-0mq

So: what kind of benefit can users get by using both products in parallel?

In short, there are two different points of view involved here. RabbitMQ users will experience different benefits than users of 0MQ. Those already using both will simply get an easy way to interconnect the two technologies.

This is the RabbitMQ blog, so let's start with the Rabbit-focused point of view.

First and most obviously, using 0MQ as a client for the RabbitMQ broker makes sense on platforms where there's no AMQP client available. Ever fancied speaking to the RabbitMQ broker from Lisp? Or, say, from Cobol? The bridge may come in handy here.

The same applies to OS/HW platforms. Do you need to speak to the broker from a device heavily constrained in memory, with a very slow CPU or limited battery life? The 0MQ client is pretty small and very fast, which means that you'll get reasonable performance even on slow chips. Efficiency also translates to lower power consumption, which in turn makes the battery last longer. The r0mq bridge itself is co-located with the RabbitMQ broker, requiring no additional resources on the client.
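
As a sketch of what that client side can look like, here is a tiny 0MQ subscriber written with the Python pyzmq binding; the endpoint address is entirely hypothetical and depends on how the r0mq bridge is configured on the broker side.

import zmq

context = zmq.Context()

# SUB socket: receives whatever the bridge publishes on this endpoint.
sub = context.socket(zmq.SUB)
sub.connect('tcp://broker-host:5555')      # hypothetical bridge endpoint
sub.setsockopt_string(zmq.SUBSCRIBE, '')   # subscribe to all messages

while True:
    message = sub.recv()
    print('received:', message)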

Another usage of the bridge is much more sophisticated. If you are using RabbitMQ in very simple scenarios you may not even appreciate it; if you are managing a big, geographically distributed system, it may become a lifesaver. The bridge can be used to interconnect RabbitMQ brokers into a loose federation. Just edit the configuration files for the two brokers and everything will just work. You won't have to care about the order in which the brokers start up, managing reconnections after network disruptions, and so on. A nice feature is that these federations are truly distributed: there's no need for central governance of the federation. Simply connect your broker to a broker in a different company; that one in turn connects to brokers in yet other companies, and so on. Ultimately you end up with a loose, world-wide federation maintained in collaborative fashion by all the participants.

One more advantage is efficient use of network resources. Unlike AMQP, 0MQ allows you to split your messaging traffic into logically separate flows: you pass your entertainment video in a separate flow from the commands used to keep the aircraft flying. This not only means that you'll never experience head-of-line blocking problems, but also that network-level QoS can be set separately for the entertainment channel and for the steering system. Also, network engineers will appreciate being able to monitor the traffic flow by flow rather than seeing one big opaque chunk of bandwidth used by "messaging".

Finally, 0MQ is bundled with the OpenPGM library, which implements a reliable multicast protocol called PGM. The r0mq bridge thus allows messages to be multicast from the RabbitMQ broker to the clients (0MQ clients, to be precise; AMQP has no multicast support). This kind of functionality is extremely useful in scenarios where a lot of identical data is passed to many boxes on the LAN. If a separate copy of each datum is sent to each subscriber, you can easily exceed the capacity of your network. With multicast, the data is sent only once to all the subscribers, keeping bandwidth usage constant even as the number of subscribers grows.

Looking at the r0mq bridge from the other side: you are probably developing at a low level, using 0MQ as your network transport.

The RabbitMQ broker has some obvious uses here. The most trivial is to use it as a bridge to connect 0MQ applications to AMQP, STOMP or XMPP applications.

However, the real use case is to use RabbitMQ as a "device" in a 0MQ network. 0MQ comes with a few simple precompiled devices, and some hardware can act as a 0MQ device (say, an IP switch in the case of the multicast transport). There have been some attempts to create more sophisticated devices, but these are in very early stages of development. So, what 0MQ developers are missing is a full-blown, sophisticated and production-ready device.

The RabbitMQ broker can serve as such a device. First of all, it has been deployed widely and is therefore stable enough to be used safely in a production environment. As for the feature set, it offers much more than anything found in the 0MQ world. The two most useful features are persistence and monitoring.

Persistence means that messages passing through the broker are saved to disk. If you shut down the broker, or the box crashes because of a power outage or some other technical problem, the messages are still available on disk. When the broker is restarted they will be sent onwards as if the crash hadn't happened. It is similar to how email works.

Monitoring is asked for maybe even more often than persistence. Once you have a non-trivial system you want to know what each node is doing: how many messages are stored for a particular feed? What's the current throughput? And so on. RabbitMQ can tell you these things through its command-line tools or through the management plugin.

As a conclusion, I would like to stress that bridging 0MQ and RabbitMQ is not just the dumb kind of bridge you get when you connect two incompatible but more or less equivalent products. RabbitMQ and 0MQ focus on different aspects of messaging: 0MQ puts much more focus on how messages are transferred over the wire, while RabbitMQ focuses on how messages are stored, filtered and monitored. By combining the two technologies you can get the best of both worlds.

Enjoy!

Prompt-a-licious

October 2nd, 2010 by Michael

I am setting up my old MacBook, reclaimed from my housemate, to be usable for the programmings.

The first step was to install homebrew. I'm finding it a bit friendlier than macports, which seems to be irretrievably broken on the other MacBook.

After a few more steps (git, mercurial, node, rabbitmq of course), I found myself missing my pretty hg-prompt bash prompt. But I'm working with git much more these days, so I wondered if there was something that could do both.

There is: vcprompt, and what do you know it's in homebrew.

$ brew install vcprompt

To get the pretty prompt, I more or less transcribed what I had from hg-prompt. In .bashrc:

D=$'\e[37;40m'
PINK=$'\e[35;40m'
GREEN=$'\e[32;40m'
ORANGE=$'\e[33;40m'

vc_ps1() {
    vcprompt -f "(%n:${PINK}%b${D}${GREEN}%u%m${D})" 2>/dev/null
}

export PS1='${GREEN}\u@\h${D} in ${ORANGE}\w${D}$(vc_ps1)\n$ '

By the way, if like me you forget which of .bashrc and .bash_profile is for what, this post explains it.

If you want to get fancy, there's a guide to customising the bash prompt on the Arch Linux wiki.

Broker vs Brokerless

September 22nd, 2010 by alexis

The RabbitMQ team has been working with Martin Sustrik to provide code and documentation for using RabbitMQ and ZeroMQ together.  Why is this a good idea?  Because the broker and brokerless approaches are complementary.  We'll be posting more about this as the codebase evolves.  This post is introductory and can be seen as commentary on Ilya Grigorik's excellent introduction to ZeroMQ and the InfoQ summary of Ilya's article.

I like ZeroMQ and think it is useful - of which more below.  But I have seen some brash claims made on its behalf.  This can lead to confusion.

So what is the 'brokerless' model?  In the comments to Ilya's and the InfoQ post, ZeroMQ is compared to SCTP and to JGroups.  These are important technologies and form a helpful starting point for thinking about brokerless messaging patterns.  Let's look at what you might need if you combine messaging (like SCTP) with pubsub groups (like JGroups) to make arbitrary networks using 'brokerless' peers.

Some things you might need in a brokerless network

If you set up a brokerless messaging network, three things that you might need are: discovery, availability and management.

Discovery is the problem of maintaining a roster of peers that a system can send messages to, and who can join this roster.

Availability is the problem of dealing with peers disappearing from time to time.  For example if you have 50 subscribers to a feed, and only 40 of them are available to receive updates, should you keep a copy of their messages until they reappear?  That could mean "for a very long time".   And if you do keep messages and lists of "who has seen what", then where is it best to do this?

This is also a problem when message receivers do not respond quickly.  To quote Martin Sustrik of ZeroMQ: "You can never differentiate between 'network error' and 'no response received'. TCP is no better. You'll have to accept that or keep to a single box."

Management is an interesting area for analysis too.  ZeroMQ's model aligns messaging closely with sockets.  This means that, as with TCP, 'any' communication network can be implemented in such a way that it provides some messaging capability.  But networks can be arbitrarily complex.  For example, unless you don't care about it (and you may not), management of "who is connected to whom, and who can be connected to whom" can get complicated.   This kind of management problem gets more difficult the more you scale.  Models like JGroups usually make this problem go away by making a simplifying assumption, i.e.: everyone in the group talks to everyone else in the group.  Easy :-)

I am not suggesting that you always need these things.  The ZeroMQ philosophy is to home right in on networking, and this creates focus.  But if you do need them then you might end up implementing them yourself.  Enter the broker...

How can a broker help to solve these problems?

Brokers can provide solutions for discovery, availability and management.  They can also form reliable networks, e.g. for email delivery and instant messaging services.

First: what is a 'broker'?  It is both a leader and an intermediary.

A broker is a leader.  In distributed computing, the problems of management, discovery and availability are typically solved by electing a leader among the set of distributed components.  In the world of "messaging", such a leader is usually known as a "broker".  Stating that in order to be a leader you need to be a broker makes it much easier to work out who the leader is than in a completely brokerless system in which "anyone can lead, but nobody knows how".

A broker is also an intermediary.  For example, instead of having to connect everyone in the group directly, communicators simply connect to the broker (or brokers).  A broker may also be used to solve availability problems such as "offline consumer", by providing persistence and managing recovery on behalf of systems that cannot do it themselves.

Thus, brokers simplify network design by making reasonable assumptions.  Of course, when those assumptions don't hold, you may not want a broker.

Brokers are not 'centralized'

A commonly held misconception about brokers is that they are 'centralized'.  Brokers are NOT necessarily a 'centralized' solution.  Intermediaries can be decentralized.  You can have multiple brokers in a single network in order to increase throughput and availability.  Sometimes these networks of servers are called federations.  Note that individual brokers do not need to be 'highly available' in order to have a redundant network of servers.

This is, for example, how email (SMTP) and XMPP networks work.  Both email and instant messaging are brokered models, and both use multiple brokers in a simple and redundant way.  For example, mail transfer agents provide a delivery and routing network for email.  It would be difficult to come up with a design for this that was completely peer to peer, without reinventing 'special peers' - also known as brokers.

So what model is simplest?

Peer to peer models are not inherently more or less simple than brokered models.  If you do not need discovery, availability, management, or intermediation then it may be simpler to not use them.  But if you need them, it may be simpler to not implement them yourself.

Networks of servers (brokers) are not more or less redundant or decentralized than networks of clients (peers).  Both the broker and brokerless models have their pros and cons in terms of reliability and other considerations, e.g. latency.

The two models solve different problems.

For example, RabbitMQ and ZeroMQ are complementary.  From a RabbitMQ point of view ZeroMQ is a 'smart client' that can use its buffers like a queue.  That's useful in some cases.  From a ZeroMQ point of view, RabbitMQ is a network device that provides services that you would not necessarily want to have to implement yourself.

We want our customers and users to always have the best toolset available which is why we have provided the Github repo for you to play with.  Thanks again to Martin Sustrik for his work on this.

Watch this space for more on this interesting area of work and discussion.

RabbitMQ on github

September 20th, 2010 by David

We've received quite a few requests recently for us to put the RabbitMQ code on github.

RabbitMQ is open source, and the Mercurial repositories where we work on the code are publicly accessible. But github is rapidly establishing itself as the Facebook of open-source development: It makes it easy to follow projects and participate in their development, all within a slick web-based UI.

So from today, we are mirroring our repositories to github. You can find them at http://github.com/rabbitmq. The repositories on github track our Mercurial repositories with a delay of a few minutes.

The main development of RabbitMQ will continue to take place on Mercurial. Converting our development workflow and infrastructure to git would take a lot of effort that we'd prefer to spend improving RabbitMQ. And besides, members of the team differ in their opinions about the relative merits of hg and git.

If you wish to contribute to RabbitMQ, we are happy to receive changes via github, or Mercurial hosting sites such as bitbucket, or even as old-fashioned patches!

Very fast and scalable topic routing – part 1

September 14th, 2010 by Vlad Alexandru Ionescu

Among other things, lately we have been preoccupied with improving RabbitMQ's routing performance. In particular we have looked into speeding up topic exchanges by using a few well-known algorithms as well as some other tricks. We were able to reach solutions many times faster than our current implementation.

First, a little about the problem we are trying to solve. Here is a quote from the AMQP 0-9-1 spec:

The topic exchange type works as follows: The routing key used for a topic exchange MUST consist of zero or more words delimited by dots. Each word may contain the letters A-Z and a-z and digits 0-9. The routing pattern follows the same rules as the routing key, with the addition that * matches a single word, and # matches zero or more words. Thus the routing pattern *.stock.# matches the routing keys usd.stock and eur.stock.db but not stock.nasdaq.

Our goal is to match messages (routing keys) against bindings (patterns) in a fast and scalable manner.

Here is a list of approaches that we tried out:

  1. Caching messages' topics on a per-word basis. This is what the AMQP spec suggests, and there are some studies on this already.
  2. Indexing patterns on a per-word basis. This is similar to 1, except that we prepare the patterns beforehand, rather than preparing for topic keys that have previously been sent.
  3. A trie implementation. Arrange the words of the patterns in a trie structure and follow a route down the trie to see whether a particular topic matches.
  4. A deterministic finite automaton (DFA) implementation. This is a well-known approach for string matching in general.

Each of these approaches has pros and cons. We generally aimed for:

  • good complexity in both space and time, to make it scalable
  • ease of implementation
  • good performance for the commonly used situations
  • good worst-case performance
  • making it quick in the simple cases (where scalability in number of bindings is not a concern)

From the start, we were able to beat the current implementation by a factor of three (in all cases) just by being more careful when splitting the keys into words (not re-splitting both the pattern and the topic for every pattern, every time).

We found approaches 1 and 2 to be particularly unfit for our needs. They were the slowest, they do not have good complexity (because they involve intersecting sets for each level), and they cannot be adapted to include functionality for "#". Thus, we concentrated our attention on approaches 3 and 4.

The trie

Here is an example of a trie structure, if we were to add patterns "a.b.c", "a.*.b.c", "a.#.c", "b.b.c":

In order to match a topic (say, for example, "a.d.d.d.c") against the patterns, we start at the root and follow the topic string down the trie word by word. We can go deeper either through an exact match, a "*" or a "#". In the case of "#" we can go deeper with every version of the tail of the topic. For our example, we would go through "#" with "d.d.d.c", "d.d.c", "d.c", "c" and "".

The trie implementation has a number of advantages: good size complexity, cheap addition of new bindings, and it is the easiest to implement. It has the disadvantage that it must backtrack for "*" and "#" in order to find all possible matches.
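
For illustration, here is a rough Python sketch of such a trie and the backtracking match. This is not the broker's Erlang implementation; the node layout (a plain dict with a reserved key for bindings) is an assumption made purely for brevity.

class TopicTrie:
    BINDINGS = '__bindings__'  # reserved key holding the bindings stored at a node

    def __init__(self):
        self.root = {}  # each node maps a word, '*' or '#' to a child node

    def add(self, pattern, binding):
        node = self.root
        for word in pattern.split('.'):
            node = node.setdefault(word, {})
        node.setdefault(self.BINDINGS, []).append(binding)

    def match(self, topic):
        results = []
        self._walk(self.root, topic.split('.'), results)
        return results

    def _walk(self, node, words, results):
        if not words:
            results.extend(node.get(self.BINDINGS, []))
            if '#' in node:                      # a trailing '#' matches zero words
                self._walk(node['#'], [], results)
            return
        if words[0] in node:                     # exact word match
            self._walk(node[words[0]], words[1:], results)
        if '*' in node:                          # '*' matches exactly one word
            self._walk(node['*'], words[1:], results)
        if '#' in node:                          # '#' matches zero or more words,
            for i in range(len(words) + 1):      # so try every possible split point
                self._walk(node['#'], words[i:], results)

trie = TopicTrie()
for pattern in ['a.b.c', 'a.*.b.c', 'a.#.c', 'b.b.c']:
    trie.add(pattern, pattern)
print(trie.match('a.d.d.d.c'))  # ['a.#.c']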

The DFA

This approach is based on constructing an NFA that accepts the patterns of the bindings, then constructing the equivalent DFA from it and using that instead. Since we are also interested in which pattern matches, and not only whether there is a match, we cannot merge the tails of the patterns in the NFA.

To construct the DFA, we modeled the behaviour of "#" like this:

For example, the patterns "a.b.c", "a.*.b.c", "a.#.c", "b.b.c" would be represented in an NFA like this:

The nodes 11, 4, 6 and 8 would have information attached to them which would point to the respective bindings.

In order to convert the NFA to a DFA, we tried various approaches and went as far as generating source code for the structures behind the graphs, to make it as fast as possible. The best solution we ended up with was building the DFA on the fly, the same way it is built in good regular expressions compilers (see for example this article).

The advantage of the DFA approach is that there is no need to backtrack, once the DFA has been built. On the other hand, there are quite a number of disadvantages: it occupies significantly more memory than the trie; there is a significant cost for adding new bindings, since the entire DFA has to be dropped and rebuilt; and it is more complex and therefore harder to implement and maintain.
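
As a hedged illustration of the pattern semantics only (this is not RabbitMQ's implementation), one can lean on Python's re module, which itself compiles each expression into an automaton: every binding pattern translates into an anchored regular expression over the routing key.

import re

def pattern_to_regex(pattern):
    # Translate an AMQP topic pattern into an anchored regular expression.
    parts = []
    for word in pattern.split('.'):
        if word == '*':
            parts.append('[^.]+')            # '*' matches exactly one word
        elif word == '#':
            parts.append('#')                # placeholder, expanded below
        else:
            parts.append(re.escape(word))
    regex = r'\.'.join(parts)
    # '#' matches zero or more words, so it swallows an adjacent dot as well.
    regex = regex.replace(r'#\.', r'(?:[^.]+\.)*')
    regex = regex.replace(r'\.#', r'(?:\.[^.]+)*')
    regex = regex.replace('#', r'(?:[^.]+(?:\.[^.]+)*)?')
    return re.compile('^' + regex + '$')

# The example from the spec quote above:
matcher = pattern_to_regex('*.stock.#')
assert matcher.match('usd.stock')
assert matcher.match('eur.stock.db')
assert not matcher.match('stock.nasdaq')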

In the following articles we will present more details about the two structures, how they performed in benchmarks, their space and time complexities and the details behind the DFA optimizations that we have tried.

To be continued.

Management plugin – preview release

September 7th, 2010 by Simon MacMullen

The previously mentioned management plugin is now in a state where it's worth looking at and testing. In order to make this easy, I've made a special once-only binary release just for the management plugin (in future we'll make binary releases of it just like the other plugins). Download all the .ez files from here and install them as described here, then let us know what you think. (Update 2010-09-22: Note that the plugins referenced in this blog post are for version 2.0.0 of RabbitMQ. We've now released 2.1.0 - for this and subsequent versions you can get the management plugin from here).

After installation, point your browser at http://server-name:55672/mgmt/. You will need to authenticate as a RabbitMQ user (on a fresh installation the user "guest" is created with password "guest"). From here you can manage exchanges, queues, bindings, virtual hosts, users and permissions. Hopefully the UI is fairly self-explanatory.

The management UI is implemented as a single static HTML page which makes background queries to the HTTP API. As such it makes heavy use of JavaScript. It has been tested with recent versions of Firefox, Chromium and Safari, and with versions of Microsoft Internet Explorer back to 6.0. Lynx users should use the HTTP API directly :)

The management plugin will create an HTTP-based API at http://server-name:55672/api/. Browse to that location for more information on the API. For convenience the documentation can also be obtained from our Mercurial server.
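
To give a feel for the HTTP API, here is a small hedged sketch that lists queues using the Python standard library; the /api/queues path and the field names are assumptions based on the API documentation mentioned above, and may differ in this preview release.

import json
import urllib.request

def mgmt_get(path, host='server-name', port=55672, user='guest', password='guest'):
    # Issue an HTTP GET against the management API with basic auth.
    url = 'http://%s:%d%s' % (host, port, path)
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))
    with opener.open(url) as response:
        return json.loads(response.read())

# Print each queue's name and how many messages it currently holds.
for queue in mgmt_get('/api/queues'):
    print(queue['name'], queue.get('messages'))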

WARNING: The management plugin is still at an early stage of development. You should be aware of the following limitations:

  • Permissions are only enforced sporadically. If a user can authenticate with the HTTP API, they can do anything.
  • Installing the management plugin will turn on fine-grained statistics in the server. This can slow a CPU-bound server by 5-10%.
  • All sorts of other features may be missing or buggy. See the TODO file for more information.

Note: if you want to build the plugin yourself, you should be aware that right now the Erlang client does not work in the default branch, so you need a mix of versions. The following commands should work:

hg clone http://hg.rabbitmq.com/rabbitmq-public-umbrella
cd rabbitmq-public-umbrella
make checkout
hg update -r rabbitmq_v2_0_0 -R rabbitmq-server
hg update -r rabbitmq_v2_0_0 -R rabbitmq-codegen
hg update -r rabbitmq_v2_0_0 -R rabbitmq-erlang-client
hg clone http://hg.rabbitmq.com/rabbitmq-management
make
cd rabbitmq-management
make

Of course this will be fixed soon. (Update: ignore the above; this is now fixed.)

Finally, this post would not be complete without some screenshots...

Growing Up

August 27th, 2010 by alexis

Some three and a half years after we launched RabbitMQ, we have this week released RabbitMQ 2.0.

This means some big changes.  The most important of these is our new Scalable Storage Engine.  RabbitMQ has always provided persistence for failure recovery.  But now, you can happily push data into Rabbit regardless of how much data is already stored, and you don’t need to worry about slow consumers disrupting processing.  As the demands on your application grow, Rabbit can scale with you, in a stable, reliable way.

Before introducing RabbitMQ 2.0, let me reiterate that as Rabbit evolves you can count on the same high level of commitment to you as a customer or end user, regardless of whether you are a large enterprise, or a next-gen start-up, or an open source community.  As always, get in touch if you need help or commercial support.

New capabilities

So what can you expect? In short, a more capable bunny. In particular:

1. RabbitMQ 2.0 has an all-new Scalable Storage Engine.  There is also a persistence API.

2. Native support for multi-protocol messaging, delivering better interoperability and more choice.

3. The release coincides with first-class Spring integration for both Java and .NET from SpringSource, with more to come, e.g. a Grails plugin.

4. Foundations for future management and monitoring features, as part of rabbit-management and other tools.

5. Plugins are now distributed as drop-in binary packages, making them much easier to use.

As always, read the full release notes before upgrading.

Scalable Storage means greater stability

The vision of Rabbit is that messaging should “just work”. Part of this is the need for “stable behaviour at scale”. You ideally need a broker that you can just start up and forget about. Our new Scalable Storage Engine takes us closer to this.

RabbitMQ has always had its own storage engine whose job is to persist messages. We use this to provide guaranteed eventual delivery, support for transactions, and high availability.

But in RabbitMQ 1.x, the broker used a naïve albeit effective storage model: every message stored on disk would also be cached in RAM. While this helps performance, it makes it much harder to manage scale unless you are able to predict growth and overprovision memory accordingly.

With RabbitMQ 2.0, the broker is capable of completely flushing messages to disk. It has a much smarter caching and paging model. This improves memory usage by paging messages out to disk, and in from disk, as needed. In other words, you don’t have to worry about one slow consumer disrupting your entire set up, and can happily push data into Rabbit regardless of how much data is already stored. This improves stability at scale.

We'll blog about this soon.  In the meantime, please try it out.  Storage enthusiasts might like to check out the notes in the codebase.

Storage API

RabbitMQ now provides a storage API. This allows “pluggable” persistence.

RabbitMQ includes its own persistent message database, which is optimised for messaging use cases. But what if you just have to use Oracle or MySQL? Or, what if you want to experiment with combinations of RabbitMQ with the latest and greatest NoSQL key-value store? Now you can do that.  Please contact us if you want to write a driver for your favourite store.

Natively Multiprotocol

Our aim is to lower the cost of integration by making messaging simpler, providing a stable, manageable server, and supporting the main technologies that you need.

Rabbit has supported AMQP since its very inception. RabbitMQ began with support for AMQP 0-8, and as AMQP evolved, added support for 0-9-1 on a branch.  AMQP 0-9-1 is fewer than 50 pages long, making it highly readable, as well as tree-friendly. A shorter, simpler protocol is easier to implement and validate which is important for customers who want AMQP from more than one vendor.

With the 2.0 release, RabbitMQ can now directly support multiple protocols at its core - with AMQP 0-8 and 0-9-1 pre-integrated.

In addition, RabbitMQ extensions provide support for a wide range of protocols including XMPP, STOMP, JSON-RPC over HTTP, pubsub for HTTP (e.g. Google's PubSubHubbub), and SMTP.

Contact us if you want support for more protocols, or have questions about our future plans; e.g.: providing a safe evolutionary path to future versions of AMQP.

Better Plugin Support

We aim to provide support for a wide range of messaging applications without making the broker bloated and complex.  Plugins are key to this.  They let us and you extend and customise the capability of RabbitMQ.  Previously you could only load our plugins by building from source.  With the 2.0 release we are distributing pre-compiled plugins.  Drop them into a directory from which RabbitMQ can load them.

To get a feel for what you can do, I recommend taking a look at Tony Garnock-Jones's excellent introduction to plugins from the last Erlang Factory.  Congratulations are due to Jon Brisbin for being the first person to create a plugin for RabbitMQ 2.0, within a day or two, adding webhooks for RabbitMQ.  Matthew Sackman and Tony Garnock-Jones have also created some custom exchanges.  Please note that many of these are demos and examples, so the usual caveats apply.

First class Spring support in Java and .NET from SpringSource

We think that messaging should be intuitive regardless of the application platform you develop for. In Java, the clear leader is Spring, and we are part of the SpringSource division of VMware, so adding fully fledged Spring support was a must. We are extremely grateful to Mark Fisher and Mark Pollack from the SpringSource team for bringing this to fruition. With the release of RabbitMQ 2.0 we are highlighting to the whole community that Spring support is officially available.

If you are a .NET user, you have been able to run RabbitMQ as a Windows service, and use it from .NET languages and WCF for some years now. This is great if you want to do messaging from Windows based applications, such as Excel, to back end services written in Java or any other of the hundreds of AMQP integration points. Now, with Spring.NET support we offer you a common application development model as well, that works equally well in both Java and .NET.

Putting it all together: more freedom to choose

We believe that protocol interoperability and easier integration give you choice. What if you are deploying for the enterprise, and need to connect RabbitMQ to legacy enterprise messaging systems that do not support AMQP? Spring gives you a way forward. With Spring Integration, you have first class access to most messaging systems. Our commitment is to support customers' choice of technology.  Freedom from lock-in means that you can expect to evolve your systems in line with business needs, instead of being constrained by vendors’ product plans and pricing schemes.

Management and monitoring

We have made important improvements to Rabbit’s management, and overall serviceability.

Rock solid management, across ‘any’ use case, is the heart of what makes messaging non-trivial. It’s easy to find messaging tools that focus on just one or two use cases, and do a reasonable job. But providing that all important “it just works” experience in the majority of cases at once, and then running stably over time, and not crashing once a month at 2am, well that’s hard. It takes care and it really needs good management tooling.

Rabbit has always provided some tools for management and the community has done an outstanding job in extending them and making them more useful and relevant. But we want more features, and this means making changes to the broker, and especially at the API level.  These APIs are critical for showing end users what we think is important about messaging, and what is not. How people interact with a tool, and how they use it, defines their relationship with that tool.

In 2.0 you will find support for instrumentation for gathering metrics and statistics about the health of your Rabbit, without significantly impacting performance. Typically people expect to see: current message rates, emerging bottlenecks, support for remedial action such as telling specific clients to throttle back when the broker is busy. Features like these will become visible as part of the rabbit-management plugin and other tools. The foundations are now in place for their addition incrementally.

Tell us what else you want from management

Please contact us with requests and ideas for more management and monitoring features. We’ll be adding more to RabbitMQ’s core, making it even easier to operate a large Rabbit installation 24/7, indefinitely. We’ll be looking for feedback from the community to make sure it’s really easy for people to integrate management and monitoring feeds into their favourite tools.

Plus, for SpringSource customers: expect each new feature to also show up in the Hyperic plugin, enabling you to monitor and manage RabbitMQ brokers, queues and exchanges inside your complete Spring stack and thus derive immediate “in context” intelligence about the overall behaviour of your application.

What else is next and how can I help?

As much as possible, future features will appear in stages. Our release philosophy will be evolutionary, while keeping the same focus on quality as with the previous releases. We’ll be working very hard on all of the following items, and more besides, so please get in touch if you want to use them or help:

1. Making Rabbit even easier to use.

2. Even better support for community plugins!

3. “Elastic” messaging for cloud services on EC2 and elsewhere

4. New styles of federation, complementing our existing rabbitmq-shovel relay

5. Even better HA, to cover a wider class of failover cases

We want your feedback! The best way you can help is to talk about what you want to do on the rabbitmq-discuss mailing list. That’s also the best place for you to get help if you need it. Or, if you want to go public to the max: talk to us on Twitter where we are @rabbitmq

Thank-you

We want to thank two groups of people for getting us to here.

First – the community. We especially wish to thank those people who tested the new persistence technology throughout this year.

Second – we want to thank our customers and everyone else who has commercially sponsored RabbitMQ over the years. Whether you “bought a new feature”, or took a real risk with us, your contribution is just as important to us.

Management, monitoring and statistics

August 6th, 2010 by Simon MacMullen

For a long time the management and monitoring capabilities built into RabbitMQ have consisted of rabbitmqctl. While it's a reasonable tool for management (assuming you like the command line), rabbitmqctl has never been very powerful as a monitoring tool. So we're going to build something better.

Of course, plenty of people don't like command line tools and so several people have built alternate means of managing RabbitMQ. Alice / Wonderland and Spring AMQP are two that leap to mind. However there isn't much standardisation, and people don't always find it very convenient to talk to Rabbit via epmd. So we're going to build something easier.

With that in mind, I'd like to announce the existence of RabbitMQ Management. This is a plugin to provide management and monitoring via a RESTful interface, with a web GUI. Think of it as being like a super-Alice, and also an integration point for any other tools that people want to build.

So RabbitMQ Management will allow easier management and better monitoring. How better? Well, as part of the upcoming rabbit release we've added a statistics gathering feature to the broker. This can count messages as they're published, routed, delivered and acked, on a per channel / queue / exchange basis, so we can determine things like which channels are publishing fast or consuming slowly, which queues are being published to from which exchanges, which connections and hosts are busiest, and so on.

Of course, doing all this extra bookkeeping comes at a cost; when statistics gathering is turned on the server can run around 10% slower (assuming it's CPU-bound, which usually means handling transient messages; if it's I/O-bound the performance impact will likely be less). At the moment statistics gathering is turned on automatically when the management plugin is installed, but we'll make it configurable.

With that, I'd like to add a warning: it's currently at a very early stage of development, and really should not be trusted for anything other than experimentation and playing around. The REST API will change, the UI will change, the plugin might crash your server, and the TODO list is currently almost hilariously long. But hopefully this gives you an idea of what we're going to do.