This tutorial assumes RabbitMQ is installed and running on localhost on the standard port (5672). If you use a different host, port or credentials, the connection settings will require adjusting.
If you're having trouble going through this tutorial you can contact us through the mailing list or RabbitMQ community Slack.
In the second tutorial we learned how to use Work Queues to distribute time-consuming tasks among multiple workers.
But what if we need to run a function on a remote computer and wait for the result? Well, that's a different story. This pattern is commonly known as Remote Procedure Call or RPC.
In this tutorial we're going to use RabbitMQ to build an RPC system: a client and a scalable RPC server. As we don't have any time-consuming tasks that are worth distributing, we're going to create a dummy RPC service that returns Fibonacci numbers.
To illustrate how an RPC service could be used we're going to create a simple client class. It's going to expose a method named call which sends an RPC request and blocks until the answer is received:
```ruby
client = FibonacciClient.new('rpc_queue')
puts ' [x] Requesting fib(30)'
response = client.call(30)
puts " [.] Got #{response}"
```
A note on RPC
Although RPC is a pretty common pattern in computing, it's often criticised. The problems arise when a programmer is not aware whether a function call is local or whether it's a slow RPC. Confusion like that results in an unpredictable system and adds unnecessary complexity to debugging. Instead of simplifying software, misused RPC can result in unmaintainable spaghetti code.
Bearing that in mind, consider the following advice:
- Make sure it's obvious which function call is local and which is remote.
- Document your system. Make the dependencies between components clear.
- Handle error cases. How should the client react when the RPC server is down for a long time?
When in doubt, avoid RPC. If you can, use an asynchronous pipeline: instead of RPC-like blocking, results are asynchronously pushed to the next computation stage.
In general, doing RPC over RabbitMQ is easy. A client sends a request message and a server replies with a response message. In order to receive a response we need to send a 'callback' queue address with the request. We can use an exclusive, server-named queue for that. Let's try it:
```ruby
queue = channel.queue('', exclusive: true)
exchange = channel.default_exchange

exchange.publish(message,
                 routing_key: 'rpc_queue',
                 reply_to: queue.name)

# ... then code to read a response message from the callback queue ...
```
Message properties
The AMQP 0-9-1 protocol predefines a set of 14 properties that go with a message. Most of the properties are rarely used, with the exception of the following:
- :persistent: Marks a message as persistent (with a value of true) or transient (false). You may remember this property from the second tutorial.
- :content_type: Used to describe the MIME type of the encoding. For example, for the often used JSON encoding it is a good practice to set this property to application/json.
- :reply_to: Commonly used to name a callback queue.
- :correlation_id: Useful to correlate RPC responses with requests.
In the method presented above we suggest creating a callback queue for every RPC request. That's pretty inefficient, but fortunately there is a better way - let's create a single callback queue per client.
That raises a new issue, having received a response in that queue it's not clear to which request the response belongs. That's when the :correlation_id property is used. We're going to set it to a unique value for every request. Later, when we receive a message in the callback queue we'll look at this property, and based on that we'll be able to match a response with a request. If we see an unknown :correlation_id value, we may safely discard the message - it doesn't belong to our requests.
You may ask, why should we ignore unknown messages in the callback queue, rather than failing with an error? It's due to a possibility of a race condition on the server side. Although unlikely, it is possible that the RPC server will die just after sending us the answer, but before sending an acknowledgment message for the request. If that happens, the restarted RPC server will process the request again. That's why on the client we must handle the duplicate responses gracefully, and the RPC should ideally be idempotent.
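The matching and discarding logic described above can be sketched in plain Ruby, independently of any broker. `PendingCalls` below is a hypothetical helper, not part of the tutorial code: it records each outstanding request under its correlation id, and quietly drops responses whose id it has never seen (or has already answered).

```ruby
# A minimal sketch of correlation-id matching, assuming responses
# may arrive out of order, duplicated, or for unknown requests.
class PendingCalls
  def initialize
    @pending = {}
  end

  # Register a request under its correlation id.
  def register(correlation_id)
    @pending[correlation_id] = nil
  end

  # Deliver a response; unknown or duplicate correlation ids
  # are discarded, mirroring the advice above.
  def deliver(correlation_id, payload)
    return :discarded unless @pending.key?(correlation_id)

    @pending.delete(correlation_id)
    payload
  end
end

calls = PendingCalls.new
calls.register('abc-123')

calls.deliver('abc-123', 832_040) # matched: returns the payload
calls.deliver('abc-123', 832_040) # duplicate: discarded
calls.deliver('zzz-999', 42)      # unknown: discarded
```

The real client below uses a simpler scheme (a single in-flight request per client), but the discard-unknown-ids rule is the same.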
Our RPC will work like this:

- When the client starts up, it creates an exclusive callback queue.
- For an RPC request, the client sends a message with two properties: :reply_to, set to the callback queue, and :correlation_id, set to a unique value for every request.
- The request is sent to the rpc_queue queue.
- The RPC server waits for requests on that queue. When a request appears, it does the job and sends a message with the result back to the client, using the queue named in the :reply_to property.
- The client waits for data on the callback queue. When a message appears, it checks the :correlation_id property. If it matches the value from the request, it returns the response to the application.
The Fibonacci task:
```ruby
def fibonacci(value)
  return value if value.zero? || value == 1

  fibonacci(value - 1) + fibonacci(value - 2)
end
```
We declare our fibonacci function. It assumes only valid positive integer input. (Don't expect this one to work for big numbers, and it's probably the slowest recursive implementation possible).
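If you would rather not have the server spend exponential time per request, an iterative version (a sketch, not part of the tutorial code) computes the same values in linear time:

```ruby
# Iterative Fibonacci: returns the same results as the recursive
# version above, but runs in time linear in `value`.
def fibonacci_iterative(value)
  a, b = 0, 1
  value.times { a, b = b, a + b }
  a
end

fibonacci_iterative(10) # => 55
fibonacci_iterative(30) # => 832040
```

The recursive version is kept in the tutorial precisely because it is slow: it gives the workers something time-consuming to do.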
The code for our RPC server rpc_server.rb looks like this:
```ruby
#!/usr/bin/env ruby
require 'bunny'

class FibonacciServer
  def initialize
    @connection = Bunny.new
    @connection.start
    @channel = @connection.create_channel
  end

  def start(queue_name)
    @queue = channel.queue(queue_name)
    @exchange = channel.default_exchange
    subscribe_to_queue
  end

  def stop
    channel.close
    connection.close
  end

  def loop_forever
    # This loop only exists to keep the main thread
    # alive. Many real world apps won't need this.
    loop { sleep 5 }
  end

  private

  attr_reader :channel, :exchange, :queue, :connection

  def subscribe_to_queue
    queue.subscribe do |_delivery_info, properties, payload|
      result = fibonacci(payload.to_i)

      exchange.publish(
        result.to_s,
        routing_key: properties.reply_to,
        correlation_id: properties.correlation_id
      )
    end
  end

  def fibonacci(value)
    return value if value.zero? || value == 1

    fibonacci(value - 1) + fibonacci(value - 2)
  end
end

begin
  server = FibonacciServer.new
  puts ' [x] Awaiting RPC requests'
  server.start('rpc_queue')
  server.loop_forever
rescue Interrupt => _
  server.stop
end
```
The server code is rather straightforward: it establishes a connection and a channel, declares the rpc_queue queue and subscribes to it; for each request it computes the Fibonacci number and publishes the result to the queue named in the request's :reply_to property, copying over its :correlation_id.
The code for our RPC client rpc_client.rb:
```ruby
#!/usr/bin/env ruby
require 'bunny'
require 'thread'

class FibonacciClient
  attr_accessor :call_id, :response, :lock, :condition, :connection,
                :channel, :server_queue_name, :reply_queue, :exchange

  def initialize(server_queue_name)
    @connection = Bunny.new(automatically_recover: false)
    @connection.start

    @channel = connection.create_channel
    @exchange = channel.default_exchange
    @server_queue_name = server_queue_name

    setup_reply_queue
  end

  def call(n)
    @call_id = generate_uuid

    exchange.publish(n.to_s,
                     routing_key: server_queue_name,
                     correlation_id: call_id,
                     reply_to: reply_queue.name)

    # wait for the signal to continue the execution
    lock.synchronize { condition.wait(lock) }

    response
  end

  def stop
    channel.close
    connection.close
  end

  private

  def setup_reply_queue
    @lock = Mutex.new
    @condition = ConditionVariable.new
    that = self
    @reply_queue = channel.queue('', exclusive: true)

    reply_queue.subscribe do |_delivery_info, properties, payload|
      if properties[:correlation_id] == that.call_id
        that.response = payload.to_i

        # sends the signal to continue the execution of #call
        that.lock.synchronize { that.condition.signal }
      end
    end
  end

  def generate_uuid
    # very naive but good enough for code examples
    "#{rand}#{rand}#{rand}"
  end
end

client = FibonacciClient.new('rpc_queue')

puts ' [x] Requesting fib(30)'
response = client.call(30)
puts " [.] Got #{response}"

client.stop
```
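The generate_uuid method above is deliberately naive. If you want proper unique correlation ids, Ruby's standard library provides SecureRandom.uuid, which returns a random (version 4) UUID; a minimal drop-in sketch:

```ruby
require 'securerandom'

# A sturdier correlation id than "#{rand}#{rand}#{rand}":
# SecureRandom.uuid returns a random (version 4) UUID string,
# e.g. "2d931510-d99f-494a-8c67-87feb05e1594".
def generate_uuid
  SecureRandom.uuid
end
```

Collisions with random doubles are already unlikely, but UUIDs are the conventional choice and cost nothing extra.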
Now is a good time to take a look at our full example source code (which includes basic exception handling) for rpc_client.rb and rpc_server.rb.
Our RPC service is now ready. We can start the server:
```shell
ruby rpc_server.rb
# => [x] Awaiting RPC requests
```
To request a fibonacci number run the client:
```shell
ruby rpc_client.rb
# => [x] Requesting fib(30)
```
The design presented here is not the only possible implementation of an RPC service, but it has some important advantages:

- If the RPC server is too slow, you can scale up by just running another one. Try running a second rpc_server.rb in a new console.
- On the client side, the RPC requires sending and receiving only one message, so a single RPC request needs only one network round trip.
Our code is still pretty simplistic and doesn't try to solve more complex (but important) problems, like:

- How should the client react if there are no servers running?
- Should a client have some kind of timeout for the RPC?
- If the server malfunctions and raises an exception, should it be forwarded to the client?
- Protecting against invalid incoming messages (for example, checking bounds or types) before processing them.
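One of those missing pieces, a client-side timeout, can be sketched with the same Mutex/ConditionVariable pair the client uses: ConditionVariable#wait accepts an optional timeout in seconds, after which the client can give up instead of blocking forever. This is only a sketch of the idea, not part of the tutorial code:

```ruby
# A sketch of a bounded wait for an RPC response. Nothing ever
# signals the condition variable here, so the wait times out.
lock = Mutex.new
condition = ConditionVariable.new
response = nil

result = lock.synchronize do
  condition.wait(lock, 0.1) # give up after 100 ms
  response || :timed_out
end

result # => :timed_out
```

A real client would raise an error or retry on :timed_out, and would also need to discard the late response if it eventually arrives, using the :correlation_id check described earlier.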
If you want to experiment, you may find the management UI useful for viewing the queues.
Please keep in mind that this and other tutorials are, well, tutorials. They demonstrate one new concept at a time and may intentionally oversimplify some things and leave out others. For example topics such as connection management, error handling, connection recovery, concurrency and metric collection are largely omitted for the sake of brevity. Such simplified code should not be considered production ready.
Please take a look at the rest of the documentation before going live with your app. We particularly recommend the following guides: Publisher Confirms and Consumer Acknowledgements, Production Checklist and Monitoring.
If you have questions about the contents of this tutorial or any other topic related to RabbitMQ, don't hesitate to ask them on the RabbitMQ mailing list.
If you'd like to contribute an improvement to the site, its source is available on GitHub. Simply fork the repository and submit a pull request. Thank you!