
Installing RabbitMQ Cluster Operator in a Kubernetes cluster

Overview

This guide covers the installation of the RabbitMQ Cluster Kubernetes Operator in a Kubernetes cluster. If you are installing in OpenShift, follow the instructions in the Installation on OpenShift section.

Compatibility

The Operator requires a supported version of Kubernetes. Consult the release notes of the Operator release you plan to install for the exact supported Kubernetes versions.

Installation

To install the Operator, run the following command:

kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
# namespace/rabbitmq-system created
# customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
# serviceaccount/rabbitmq-cluster-operator created
# role.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-role created
# clusterrole.rbac.authorization.k8s.io/rabbitmq-cluster-operator-role created
# rolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-rolebinding created
# clusterrolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-operator-rolebinding created
# deployment.apps/rabbitmq-cluster-operator created

At this point, the RabbitMQ Cluster Kubernetes Operator is successfully installed. Once the Operator pod is running, head over to Using RabbitMQ Cluster Kubernetes Operator for instructions on how to deploy RabbitMQ using a Kubernetes Custom Resource.
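To confirm that the Operator is ready before moving on, you can wait for its Deployment to finish rolling out (the namespace and Deployment name come from the manifest applied above):

```shell
# Wait until the Operator Deployment reports all replicas ready
kubectl -n rabbitmq-system rollout status deployment/rabbitmq-cluster-operator

# Or simply list the pods in the Operator's namespace
kubectl -n rabbitmq-system get pods
```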

If you want to install a specific version of the Operator, obtain the manifest link from the Operator Releases page. Please note that releases prior to 0.46.0 do not include this manifest. We strongly recommend installing version 0.46.0 or later.
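For example, pinning to a specific release might look like the following. The tag below is purely illustrative; copy the actual manifest URL from the releases page:

```shell
# Illustrative only: substitute a real tag from the Operator releases page
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/download/v2.6.0/cluster-operator.yml"
```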

Installation using kubectl-rabbitmq plugin

The kubectl rabbitmq plugin provides commands for managing RabbitMQ clusters. The plugin can be installed using krew:

kubectl krew install rabbitmq

To get the list of available commands, use:

kubectl rabbitmq help
# USAGE:
#   Install RabbitMQ Cluster Operator (optionally provide image to use a relocated image or a specific version)
#     kubectl rabbitmq install-cluster-operator [IMAGE]
# [...]
kubectl rabbitmq install-cluster-operator
# namespace/rabbitmq-system created
# customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
# serviceaccount/rabbitmq-cluster-operator created
# role.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-role created
# clusterrole.rbac.authorization.k8s.io/rabbitmq-cluster-operator-role created
# rolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-rolebinding created
# clusterrolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-operator-rolebinding created
# deployment.apps/rabbitmq-cluster-operator created

(Optional) Relocate the Image

If you can't pull images from Docker Hub directly to your Kubernetes cluster, you need to relocate the images to your private registry first. The exact steps depend on your environment but will likely look like this:

docker pull rabbitmqoperator/cluster-operator:{some-version}
docker tag rabbitmqoperator/cluster-operator:{some-version} {someregistry}/cluster-operator:{some-version}
docker push {someregistry}/cluster-operator:{some-version}

The value of {someregistry} should be the address of an OCI-compatible registry. The value of {some-version} is a version number of the Cluster Operator.

You also need to update the Deployment to use your private registry. Download the manifest from the release you are relocating and edit the image in the Deployment object. You can locate this section by grepping for the string image:

grep -C3 image: releases/rabbitmq-cluster-operator.yaml
# [...]
# --
#           valueFrom:
#             fieldRef:
#               fieldPath: metadata.namespace
#         image: rabbitmqoperator/cluster-operator:0.49.0
#         name: operator
#         resources:
#           limits:

If you require authentication to pull images from your private image registry, you must Configure Kubernetes Cluster Access to Private Images.

(Optional) Configure Kubernetes Cluster Access to Private Images

If you relocated the image to a private registry and your registry requires authentication, you need to follow these steps to allow Kubernetes to pull the image.

kubectl -n rabbitmq-system create secret \
docker-registry rabbitmq-cluster-registry-access \
--docker-server=DOCKER-SERVER \
--docker-username=DOCKER-USERNAME \
--docker-password=DOCKER-PASSWORD

Where:

  • DOCKER-SERVER is the server URL for your private image registry.
  • DOCKER-USERNAME is your username for your private image registry authentication.
  • DOCKER-PASSWORD is your password for your private image registry authentication.

For example:

kubectl -n rabbitmq-system create secret \
docker-registry rabbitmq-cluster-registry-access \
--docker-server=docker.io/my-registry \
--docker-username=my-username \
--docker-password=example-password1

Now update your service account by running:

kubectl -n rabbitmq-system patch serviceaccount \
rabbitmq-cluster-operator -p '{"imagePullSecrets": [{"name": "rabbitmq-cluster-registry-access"}]}'
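To verify that the patch took effect, you can read the image pull secret name back from the service account:

```shell
# Should print the secret name: rabbitmq-cluster-registry-access
kubectl -n rabbitmq-system get serviceaccount rabbitmq-cluster-operator \
  -o jsonpath='{.imagePullSecrets[0].name}'
```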

Installation on OpenShift

The RabbitMQ cluster operator runs as user ID 1000. The RabbitMQ pod runs the RabbitMQ container as user ID 999 and an init container as user ID 0. By default, OpenShift has security context constraints that disallow creating pods running with these user IDs. To install the RabbitMQ cluster operator on OpenShift, you need to perform the following steps:

  1. Download the installation manifest from the release page in GitHub.

    Edit the Namespace object named rabbitmq-system to include the following annotations:

    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        ...
        openshift.io/sa.scc.supplemental-groups: 1000/1
        openshift.io/sa.scc.uid-range: 1000/1
    

  2. Run the installation command.

      kubectl create -f cluster-operator.yml
      # namespace/rabbitmq-system created
      # customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
      # serviceaccount/rabbitmq-cluster-operator created
      # role.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-role created
      # clusterrole.rbac.authorization.k8s.io/rabbitmq-cluster-operator-role created
      # rolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-rolebinding created
      # clusterrolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-operator-rolebinding created
      # deployment.apps/rabbitmq-cluster-operator created

  3. Create a Security Context Constraint that allows the RabbitMQ pod to have the capabilities FOWNER and CHOWN:

    oc apply -f rabbitmq-scc.yml

    where rabbitmq-scc.yml contains:

    kind: SecurityContextConstraints
    apiVersion: security.openshift.io/v1
    metadata:
      name: rabbitmq-cluster
    allowPrivilegedContainer: false
    runAsUser:
      type: MustRunAsRange
    seLinuxContext:
      type: MustRunAs
    fsGroup:
      type: MustRunAs
    supplementalGroups:
      type: RunAsAny
    requiredDropCapabilities:
      - "ALL"
    allowedCapabilities:
      - "FOWNER"
      - "CHOWN"
    volumes:
      - "configMap"
      - "secret"
      - "persistentVolumeClaim"
      - "downwardAPI"
      - "emptyDir"
      - "projected"
    

  4. For every namespace where RabbitMQ cluster custom resources will be created (here we assume default namespace), change the following fields:

    oc edit namespace default
    

    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        ...
        openshift.io/sa.scc.supplemental-groups: 999/1
        openshift.io/sa.scc.uid-range: 0-999
    

  5. For every RabbitMQ cluster (here we assume the name my-rabbitmq), assign the previously created security context constraint to the cluster's service account. The Operator names this service account after the cluster with a -server suffix:

    oc adm policy add-scc-to-user rabbitmq-cluster -z my-rabbitmq-server
    

Getting Help and Providing Feedback

If you have questions about the contents of this guide or any other topic related to RabbitMQ, don't hesitate to ask them on the RabbitMQ mailing list.

Help Us Improve the Docs <3

If you'd like to contribute an improvement to the site, its source is available on GitHub. Simply fork the repository and submit a pull request. Thank you!