Logging
Overview
Log files are a very important aspect of system observability, much like monitoring.
RabbitMQ starts logging early on node start. Many important pieces of information about the node's state and configuration will be logged during or after node boot.
Developers and operators should inspect logs when troubleshooting an issue or assessing the state of the system.
RabbitMQ supports a number of features when it comes to logging.
This guide covers topics such as:
- Supported log outputs: file and standard streams (console)
- Log file location
- Supported log levels
- How to activate debug logging
- How to tail logs of a running node without having access to the log file
- Watching internal events
- Connection lifecycle events logged
- Logging in JSON
- Log categories
- Advanced log formatting
- How to inspect service logs on systemd-based Linux systems
- Log rotation
- Logging to Syslog
- Logging to a system topic exchange, amq.rabbitmq.log
and more.
Log Outputs
RabbitMQ nodes can log to multiple outputs. Logging to a file is one of the most common options for RabbitMQ installations.
Logging to standard output and error streams is another popular option. Syslog is yet another option supported out of the box.
Different outputs can have different log levels. For example, the console output can log all messages including debug information while the file output can only log error and higher severity messages.
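For example, here is a sketch (using configuration keys covered later in this guide) in which the console output logs everything, including debug messages, while the file output only records errors and higher severity messages:
# console output: most verbose
log.console = true
log.console.level = debug
# file output: errors and above only
log.file.level = error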
Default Log Output and Behavior
Nodes log to a file by default, if no outputs are explicitly configured. If some are configured, they will be used.
To log to a file in addition to another output, the file output must be explicitly listed alongside the other desired log outputs, for example, the standard stream one (see the sketch below).
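A minimal sketch that keeps the file output while also logging to standard streams (the file path below is only an illustration):
# keep logging to a file...
log.file = /var/log/rabbitmq/rabbit.log
# ...while also logging to standard output
log.console = true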
Log File Location
There are two ways to configure log file location. One is the configuration file. This option is recommended.
The other is the RABBITMQ_LOGS
environment variable. It can be useful in development environments.
RABBITMQ_LOGS
cannot be combined with the configuration file settings. When RABBITMQ_LOGS
is set, the logging-related settings from rabbitmq.conf
will be effectively ignored.
See the File and Directory Location guide to find default log file location for various platforms.
Log file location can be found in the RabbitMQ management UI on the node page
as well as using rabbitmq-diagnostics
:
# bash
rabbitmq-diagnostics -q log_location

# PowerShell
rabbitmq-diagnostics.bat -q log_location

# cmd
rabbitmq-diagnostics.bat -q log_location
The RABBITMQ_LOGS
variable value can be either a file path or a hyphen (-
).
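For example, a hypothetical value that points the node at a custom log file (the path is only an illustration):
# use a custom log file path
RABBITMQ_LOGS=/var/log/rabbitmq/custom.log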
When the value is set to a hyphen, as in the following example,
# Instructs the node to log to standard streams.
# IMPORTANT: the environment variable takes precedence over the configuration file.
# When it is set, all logging-related rabbitmq.conf settings will be
# effectively ignored.
RABBITMQ_LOGS=-
the node will send all log messages to standard I/O streams, namely to standard output.
The environment variable takes precedence over the configuration file. When it is set,
all logging-related rabbitmq.conf
settings will be effectively ignored.
The recommended way of overriding log file location is via rabbitmq.conf
.
How Logging is Configured
Several sections below cover various configuration settings related to logging.
They all use rabbitmq.conf
, the modern configuration format.
See the Configuration guide for a general overview of how to configure RabbitMQ.
Logging to a File
Logging to a file is one of the most common options for RabbitMQ installations. In modern releases, RabbitMQ nodes only log to a file if explicitly configured to do so using the configuration keys listed below:
- log.file: log file path or false to deactivate the file output. Default value is taken from the RABBITMQ_LOGS environment variable or configuration file
- log.file.level: log level for the file output. Default level is info
- log.file.formatter: controls log entry format, text lines or JSON
- log.file.rotation.date, log.file.rotation.size, log.file.rotation.count: log file rotation settings
- log.file.formatter.time_format: controls timestamp formatting
The following example overrides log file name:
log.file = rabbit.log
The following example overrides log file location:
log.file = /opt/custom/var/log/rabbit.log
The following example instructs RabbitMQ to log to a file at the debug
level:
log.file.level = debug
For a list of supported log levels, see Log Levels.
Logging to a file can be deactivated with
log.file = false
Logging in JSON format to a file:
log.file.formatter = json
By default, RabbitMQ will use RFC 3339 timestamp format. It is possible to switch to a UNIX epoch-based format:
log.file = true
log.file.level = info
# use microseconds since UNIX epoch for timestamp format
log.file.formatter.time_format = epoch_usecs
The rest of this guide describes more options, including more advanced ones.
Log Rotation
When logging to a file, the recommended rotation option is logrotate.
RabbitMQ nodes always append to the log files, so a complete log history is preserved.
Log file rotation is not performed by default. Debian and RPM packages will set up
log rotation via logrotate
after package installation.
The log.file.rotation.date, log.file.rotation.size, and log.file.rotation.count settings control log file rotation for the file output.
Rotation Using Logrotate
On Linux, BSD and other UNIX-like systems, logrotate is a widely used log file rotation tool. It is very mature and supports a lot of options.
RabbitMQ Debian and RPM packages will set up logrotate
to run weekly on files
located in the default /var/log/rabbitmq
directory. Rotation configuration can be found in /etc/logrotate.d/rabbitmq-server
.
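For illustration only, a hand-written logrotate stanza for RabbitMQ log files might look like the following. This is a sketch, not the exact configuration shipped by the packages, so consult /etc/logrotate.d/rabbitmq-server on the target system for the authoritative version:
/var/log/rabbitmq/*.log {
    weekly
    missingok
    notifempty
    compress
    rotate 20
    # let the node keep appending to the same file descriptor
    copytruncate
}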
Built-in Periodic Rotation
log.file.rotation.date cannot be combined with log.file.rotation.size: the two options are mutually exclusive.
Use log.file.rotation.date to set up minimalistic periodic rotation:
# rotate every night at midnight
log.file.rotation.date = $D0
# keep up to 5 archived log files in addition to the current one
log.file.rotation.count = 5
# archived log files will be compressed
log.file.rotation.compress = true
# rotate every day at 23:00 (11:00 p.m.)
log.file.rotation.date = $D23
Built-in File Size-based Rotation
log.file.rotation.size cannot be combined with log.file.rotation.date: the two options are mutually exclusive.
log.file.rotation.size controls rotation based on the current log file size:
# rotate when the file reaches 10 MiB
log.file.rotation.size = 10485760
# keep up to 5 archived log files in addition to the current one
log.file.rotation.count = 5
# archived log files will be compressed
log.file.rotation.compress = true
Logging to Console (Standard Output)
Logging to standard streams (console) is another popular option for RabbitMQ installations, in particular when RabbitMQ nodes are deployed in containers. RabbitMQ nodes only log to standard streams if explicitly configured to do so.
Here are the main settings that control console (standard output) logging:
- log.console (boolean): set to true to activate console output. Default is false
- log.console.level: log level for the console output. Default level is info
- log.console.formatter: controls log entry format, text lines or JSON
- log.console.formatter.time_format: controls timestamp formatting
To activate console logging, use the following config snippet:
log.console = true
The following example deactivates console logging:
log.console = false
The following example instructs RabbitMQ to use the debug
logging level when logging to console:
log.console.level = debug
For a list of supported log levels, see Log Levels.
Logging to console in JSON format:
log.console.formatter = json
When console output is activated, the file output will also be activated by default.
To deactivate the file output, set log.file
to false
:
log.console = true
log.console.level = info
log.file = false
By default, RabbitMQ will use RFC 3339 timestamp format. It is possible to switch to a UNIX epoch-based format:
log.console = true
log.console.level = info
log.file = false
# use microseconds since UNIX epoch for timestamp format
log.console.formatter.time_format = epoch_usecs
Please note that RABBITMQ_LOGS=-
will deactivate the file output
even if log.file
is configured.
Logging to Syslog
RabbitMQ logs can be forwarded to a Syslog server via TCP or UDP. UDP is used by default and requires Syslog service configuration. TLS is also supported.
Syslog output has to be explicitly configured:
log.syslog = true
Syslog Endpoint Configuration
By default, the Syslog logger will send log messages to UDP port 514 using the RFC 3164 protocol. The RFC 5424 protocol can also be used.
In order to use UDP the Syslog service must have UDP input configured.
UDP and TCP transports can be used with both RFC 3164 and RFC 5424 protocols. TLS support requires the RFC 5424 protocol.
The following example uses TCP and the RFC 5424 protocol:
log.syslog = true
log.syslog.transport = tcp
log.syslog.protocol = rfc5424
To use TLS, a standard set of TLS options must be provided:
log.syslog = true
log.syslog.transport = tls
log.syslog.protocol = rfc5424
log.syslog.ssl_options.cacertfile = /path/to/ca_certificate.pem
log.syslog.ssl_options.certfile = /path/to/client_certificate.pem
log.syslog.ssl_options.keyfile = /path/to/client_key.pem
Syslog service IP address and port can be customised:
log.syslog = true
log.syslog.ip = 10.10.10.10
log.syslog.port = 1514
If a hostname is to be used rather than an IP address:
log.syslog = true
log.syslog.host = my.syslog-server.local
log.syslog.port = 1514
Syslog metadata identity and facility values can also be configured.
By default identity will be set to the name part of the node name (for example, rabbitmq
in rabbitmq@hostname
)
and facility will be set to daemon
.
To set identity and facility of log messages:
log.syslog = true
log.syslog.identity = my_rabbitmq
log.syslog.facility = user
Logging to Syslog in JSON format:
log.syslog = true
log.syslog.formatter = json
Less commonly used Syslog client options can be configured using the advanced config file.
JSON Logging
RabbitMQ nodes can format log messages as JSON, which can be convenient for parsing by other pieces of software.
Logging to a file in JSON format:
log.file.level = info
log.file.formatter = json
Logging to the console in JSON format:
log.console = true
log.console.level = info
log.console.formatter = json
log.file = false
Logging to Syslog in JSON format:
log.syslog = true
log.syslog.formatter = json
Note that JSON object field mapping can be customized to match a specific JSON-based logging format expected by the log collection tooling.
Log Message Categories
RabbitMQ has several categories of messages, which can be logged with different levels or to different files. The categories are:
- connection: connection lifecycle events for AMQP 0-9-1, AMQP 1.0, MQTT and STOMP
- channel: channel logs. Mostly errors and warnings on AMQP 0-9-1 channels
- queue: queue logs. Mostly debug messages
- federation: federation plugin logs
- upgrade: verbose upgrade logs. These can be excessive
- default: all other log entries. You cannot override the file location for this category
It is possible to configure a different log level or file location for each message category
using log.<category>.level
and log.<category>.file
configuration variables.
By default, a category does not filter by level: if an output is configured to log debug
messages, debug messages will be printed for all categories. Configure a log level for a
category to override this.
For example, given debug level in the file output, the following will deactivate debug logging for connection events:
log.file.level = debug
log.connection.level = info
To redirect all federation logs to the rabbit_federation.log
file, use:
log.federation.file = rabbit_federation.log
To deactivate a log type, you can use the none
log level. For example, to deactivate
upgrade logs:
log.upgrade.level = none
Log Levels
Log levels are another way to filter and tune logging. Log levels have
a strict ordering. Each log message has a severity, from debug being
the lowest severity to critical being the highest.
Logging verbosity can be controlled on multiple layers by setting log
levels for categories and outputs. More verbose log levels will
include more log messages, from debug being the most verbose to
none being the least.
The following log levels are used by RabbitMQ:
| Log level | Verbosity | Severity |
|-----------|-----------|----------|
| debug | most verbose | lowest severity |
| info | | |
| warning | | |
| error | | |
| critical | | highest severity |
| none | least verbose | not applicable |
The default log level is info
.
If a log message has lower severity than the category level, the message will be dropped and not sent to any output.
If a category level is not configured, its messages will always be sent to all outputs.
To make the default
category log only errors or higher severity messages, use
log.default.level = error
The none
level means no logging.
Each output can use its own log level. If a message has lower severity than the output level, the message will not be logged.
For example, if no outputs are configured to log
debug
messages, even if the category level is set to debug
, the
debug messages will not be logged.
On the other hand, if an output is configured to log debug
messages,
it will get them from all categories, unless a category is configured
with a less verbose level.
Changing Log Level
There are two ways of changing effective log levels:
- Via configuration file(s): this is more flexible but requires a node restart between changes
- Using CLI tools, rabbitmqctl set_log_level <level>: the changes are transient (will not survive node restart) but can be used, for example, to activate and deactivate debug logging at runtime for a period of time.
To set log level to debug
on a running node:
rabbitmqctl -n rabbit@target-host set_log_level debug
To set the level to info
:
rabbitmqctl -n rabbit@target-host set_log_level info
Tailing Logs Using CLI Tools
Modern releases support tailing logs of a node using CLI tools. This is convenient when log file location is not known or is not easily accessible but CLI tool connectivity is allowed.
To tail the last three hundred lines of the log on a node rabbit@target-host, use rabbitmq-diagnostics log_tail:
# This is semantically equivalent to using `tail -n 300 /path/to/rabbit@hostname.log`.
# Use -n to specify target node, -N is to specify the number of lines.
rabbitmq-diagnostics -n rabbit@target-host log_tail -N 300
This will load and print last lines from the log file.
If only console logging is activated, this command will fail with a "file not found" (enoent
) error.
To continuously inspect the stream of log messages as they are appended to the file,
similarly to tail -f or console logging, use rabbitmq-diagnostics log_tail_stream:
# This is semantically equivalent to using `tail -f /path/to/rabbit@hostname.log`.
# Use Control-C to stop the stream.
rabbitmq-diagnostics -n rabbit@target-host log_tail_stream
This will continuously tail and stream lines added to the log file.
If only console logging is activated, this command will fail with a "file not found" (enoent
) error.
The rabbitmq-diagnostics log_tail_stream
command can only be used against a running RabbitMQ node
and will fail if the node is not running or the RabbitMQ application on it
was stopped using rabbitmqctl stop_app
.
Activating Debug Logging
When debug logging is enabled, the node will log a lot of information that can be useful for troubleshooting. This log severity is meant to be used when troubleshooting, say, the peer discovery activity.
For example to log debug messages to a file:
log.file.level = debug
To print log messages to standard I/O streams:
log.console = true
log.console.level = debug
To switch to debug logging at runtime:
rabbitmqctl -n rabbit@target-host set_log_level debug
To set the level back to info
:
rabbitmqctl -n rabbit@target-host set_log_level info
It is possible to deactivate debug logging for some categories:
log.file.level = debug
log.connection.level = info
log.channel.level = info
Advanced Log Format
This section covers features related to advanced log formatting. These settings are not necessary in most environments but can be used to adapt RabbitMQ logging to a specific format.
Most examples in this section use the following format:
log.file.formatter.level_format = lc4
However, the key can be any one of:
- log.file.formatter.level_format
- log.console.formatter.level_format
- log.exchange.formatter.level_format
In other words, most settings documented in this section are not specific to a particular
log output, be it file
, console
or exchange
.
Time Format
The timestamp format can be set to one of the following:
- rfc3339_space: the RFC 3339 format with spaces, this is the default format
- rfc3339_T: same as above but with tabs
- epoch_usecs: timestamp (time since UNIX epoch) in microseconds
- epoch_secs: timestamp (time since UNIX epoch) in seconds
# this is the default format
log.file.formatter.time_format = rfc3339_space
For example, the following format
log.file.formatter.time_format = epoch_usecs
will result in log messages that look like this:
1728025620684139 [info] <0.872.0> started TCP listener on [::]:5672
1728025620687050 [info] <0.892.0> started TLS (SSL) listener on [::]:5671
Log Level Format
Log level can be formatted differently:
# full value, lower case is the default format
log.file.formatter.level_format = lc
# use the four character, upper case format
log.file.formatter.level_format = uc4
The following values are valid:
- lc: full value, lower case (the default), e.g. warning or info
- uc: full value, upper case, e.g. WARNING or INFO
- lc3: three characters, lower case, e.g. inf or dbg
- uc3: three characters, upper case, e.g. INF or WRN
- lc4: four characters, lower case, e.g. dbug or warn
- uc4: four characters, upper case, e.g. DBUG or WARN
Log Message Format
This setting should only be used as a last resort measure when overriding the log format is a hard requirement of log collection tooling.
Besides the formatting of individual log message components (event time, log level, message, and so on), the entire log line format can be changed using the log.file.formatter.plaintext.format (or log.console.formatter.plaintext.format) configuration setting.
The setting must be set to a message pattern that uses the following $-prefixed variables:
- $time
- $level
- $pid (Erlang process)
- $msg (log message)
This is what the default format looks like:
# '$time [$level] $pid $msg' is the default format
log.console.formatter.plaintext.format = $time [$level] $pid $msg
The following customized format
# '$time [$level] $pid $msg' is the default format
log.console.formatter.plaintext.format = $level $time $msg
will produce log messages that look like this:
info 2024-10-04 03:23:52.968389-04:00 connection 127.0.0.1:57181 -> 127.0.0.1:5672: user 'guest' authenticated and granted access to vhost '/'
debug 2024-10-04 03:24:03.338466-04:00 Will reconcile virtual host processes on all cluster members...
debug 2024-10-04 03:24:03.338587-04:00 Will make sure that processes of 9 virtual hosts are running on all reachable cluster nodes
Notice how the Erlang process pid is excluded. This information can be essential for root cause analysis (RCA) and therefore the default format is highly recommended.
JSON Field Mapping
JSON logging can be customized in the following ways:
- Individual keys can be renamed by using a {standard key}:{renamed key} expression
- Individual keys can be dropped using a {standard key}:- expression
- All keys except for the explicitly listed ones can be dropped using a *:- expression
The log.file.formatter.json.field_map
key then must be set
to a string value that contains a number of the above expressions.
Before demonstrating an example, here is a message with the default mapping:
{
"time":"2024-10-04 03:38:29.709578-04:00",
"level":"info",
"msg":"Time to start RabbitMQ: 2294 ms",
"line":427,
"pid":"<0.9.0>",
"file":"rabbit.erl",
"mfa":["rabbit","start_it",1]
}
{
"time":"2024-10-04 03:38:35.600956-04:00",
"level":"info",
"msg":"accepting AMQP connection 127.0.0.1:57604 -> 127.0.0.1:5672",
"pid":"<0.899.0>",
"domain":"rabbitmq.connection"
}
Now, an example that uses JSON logging with a custom field mapping:
# log as JSON
log.file.formatter = json
# Rename the 'time' field to 'ts', 'level' to 'lvl' and 'msg' to 'message',
# drop all other fields.
# Use an 'escaped string' just to make the value stand out
log.file.formatter.json.field_map = 'time:ts level:lvl msg:message *:-'
The example above will produce the following messages. Notice how some information is omitted compared to the default example above:
{
"ts":"2024-10-04 03:34:43.600462-04:00",
"lvl":"info",
"message":"Time to start RabbitMQ: 2577 ms"
}
{
"ts":"2024-10-04 03:34:49.142396-04:00",
"lvl":"info",
"message":"accepting AMQP connection 127.0.0.1:57507 -> 127.0.0.1:5672"
}
Forced Single Line Logging
This setting can lead to incomplete log messages and should only be used as a last resort measure when overriding the log format is a hard requirement of log collection tooling.
Multi-line messages can be truncated to a single line:
# Accepted values are 'on' and 'off'.
# The default is 'off'.
log.console.formatter.single_line = on
This setting can lead to incomplete log messages and should be used only as a last resort measure.
Service Logs
On systemd-based Linux distributions, system service logs can be inspected using
journalctl --system
which requires superuser privileges. Its output can be filtered to narrow it down to RabbitMQ-specific entries:
sudo journalctl --system | grep rabbitmq
Service logs will include standard output and standard error streams of the node.
The output of journalctl --system
will look similar to this:
Aug 26 11:03:04 localhost rabbitmq-server[968]: ## ##
Aug 26 11:03:04 localhost rabbitmq-server[968]: ## ## RabbitMQ 4.0.3. Copyright (c) 2005-2024 Broadcom. All Rights Reserved. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
Aug 26 11:03:04 localhost rabbitmq-server[968]: ########## Licensed under the MPL. See https://www.rabbitmq.com/
Aug 26 11:03:04 localhost rabbitmq-server[968]: ###### ##
Aug 26 11:03:04 localhost rabbitmq-server[968]: ########## Logs: /var/log/rabbitmq/rabbit@localhost.log
Aug 26 11:03:04 localhost rabbitmq-server[968]: /var/log/rabbitmq/rabbit@localhost_upgrade.log
Aug 26 11:03:04 localhost rabbitmq-server[968]: Starting broker...
Aug 26 11:03:05 localhost rabbitmq-server[968]: systemd unit for activation check: "rabbitmq-server.service"
Aug 26 11:03:06 localhost rabbitmq-server[968]: completed with 6 plugins.
Logged Events
Connection Lifecycle Events
Successful TCP connections that send at least 1 byte of data will be logged. Connections that do not send any data, such as health checks of certain load balancer products, will not be logged.
Here's an example:
2018-11-22 10:44:33.654 [info] <0.620.0> accepting AMQP connection <0.620.0> (127.0.0.1:52771 -> 127.0.0.1:5672)
The entry includes client IP address and port (127.0.0.1:52771
) as well as the target
IP address and port of the server (127.0.0.1:5672
). This information can be useful
when troubleshooting client connections.
Once a connection successfully authenticates and is granted access to a virtual host, that is also logged:
2018-11-22 10:44:33.663 [info] <0.620.0> connection <0.620.0> (127.0.0.1:52771 -> 127.0.0.1:5672): user 'guest' authenticated and granted access to vhost '/'
The examples above include two values that can be used as connection identifiers
in various scenarios: the connection name (127.0.0.1:52771 -> 127.0.0.1:5672) and the Erlang process ID of the connection (<0.620.0>).
The latter is used by rabbitmqctl and the former is used by the HTTP API.
A client connection can be closed cleanly or abnormally. In the former case the client closes AMQP 0-9-1 (or 1.0, or STOMP, or MQTT) connection gracefully using a dedicated library function (method). In the latter case the client closes TCP connection or TCP connection fails. RabbitMQ will log both cases.
Below is an example entry for a successfully closed connection:
2018-06-17 06:23:29.855 [info] <0.634.0> closing AMQP connection <0.634.0> (127.0.0.1:58588 -> 127.0.0.1:5672, vhost: '/', user: 'guest')
Abruptly closed connections will be logged as warnings:
2018-06-17 06:28:40.868 [warning] <0.646.0> closing AMQP connection <0.646.0> (127.0.0.1:58667 -> 127.0.0.1:5672, vhost: '/', user: 'guest'):
client unexpectedly closed TCP connection
Abruptly closed connections can be harmless. For example, a short-lived program can naturally stop without having a chance to close its connection. They can also hint at a genuine issue such as a failed application process or a proxy that closes TCP connections it considers to be idle.
Watching Internal Events
RabbitMQ nodes have an internal event mechanism. Some of its events can be of interest for monitoring,
audit and troubleshooting purposes. They can be consumed as JSON objects using a rabbitmq-diagnostics
command:
# will emit JSON objects
rabbitmq-diagnostics consume_event_stream
When used interactively, results can be piped to a command line JSON processor such as jq:
rabbitmq-diagnostics consume_event_stream | jq
The events can also be exposed to applications for consumption with a plugin.
Events are published as messages with blank bodies. All event metadata is stored in message metadata (properties, headers).
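The plugin that provides this is rabbitmq_event_exchange, which publishes internal events to a topic exchange named amq.rabbitmq.event in the default virtual host. It can be enabled like any other plugin:
# enable the event exchange plugin on the target node
rabbitmq-plugins enable rabbitmq_event_exchange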
Below is a list of published events.
Core Broker
Queue, Exchange and Binding events:
queue.deleted
queue.created
exchange.created
exchange.deleted
binding.created
binding.deleted
Connection and Channel events:
connection.created
connection.closed
channel.created
channel.closed
Consumer events:
consumer.created
consumer.deleted
Policy and Parameter events:
policy.set
policy.cleared
parameter.set
parameter.cleared
Virtual host events:
vhost.created
vhost.deleted
vhost.limits.set
vhost.limits.cleared
User management events:
user.authentication.success
user.authentication.failure
user.created
user.deleted
user.password.changed
user.password.cleared
user.tags.set
Permission events:
permission.created
permission.deleted
topic.permission.created
topic.permission.deleted
Alarm events:
alarm.set
alarm.cleared
Shovel Plugin
Worker events:
shovel.worker.status
shovel.worker.removed
Federation Plugin
Link events:
federation.link.status
federation.link.removed
Consuming Log Entries Using a System Log Exchange
RabbitMQ can forward log entries to a system exchange, amq.rabbitmq.log
, which
will be declared in the default virtual host.
This feature is deactivated by default.
To activate this logging, set the log.exchange
configuration key to true
:
# activate log forwarding to amq.rabbitmq.log, a topic exchange
log.exchange = true
log.exchange.level
can be used to control the log level that
will be used by this logging target:
log.exchange = true
log.exchange.level = warning
amq.rabbitmq.log
is a regular topic exchange and can be used as such.
Log entries are published as messages. Message body contains the logged message
and routing key is set to the log level.
Applications that would like to consume log entries need to declare a queue
and bind it to the exchange, using a routing key to filter a specific log level,
or #
to consume all log entries allowed by the configured log level.
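For illustration, here is a minimal Python (Pika) consumer sketch that binds a temporary queue to amq.rabbitmq.log and prints every received entry. The connection parameters and credentials are placeholders, and the node is assumed to have log.exchange = true set:
# a minimal sketch using the Pika client; assumes a local node with
# default credentials and log.exchange = true in rabbitmq.conf
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# declare a server-named, exclusive queue for this consumer
queue_name = channel.queue_declare(queue="", exclusive=True).method.queue

# bind with "#" to receive all levels; use a level such as "error" to filter
channel.queue_bind(exchange="amq.rabbitmq.log", queue=queue_name, routing_key="#")

def handle_log_entry(ch, method, properties, body):
    # the routing key carries the log level, the body carries the message
    print(method.routing_key, body.decode())

channel.basic_consume(queue=queue_name, on_message_callback=handle_log_entry, auto_ack=True)
channel.start_consuming()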