This guide covers the installation of the RabbitMQ Cluster Kubernetes Operator in a Kubernetes cluster. If you are installing in OpenShift, follow the instructions in Installation on OpenShift section.
The Operator requires a supported version of Kubernetes; check the compatibility notes for the Operator release you plan to install.
To install the Operator, run the following command:
```bash
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
# namespace/rabbitmq-system created
# customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
# serviceaccount/rabbitmq-cluster-operator created
# role.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-role created
# clusterrole.rbac.authorization.k8s.io/rabbitmq-cluster-operator-role created
# rolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-rolebinding created
# clusterrolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-operator-rolebinding created
# deployment.apps/rabbitmq-cluster-operator created
```
At this point, the RabbitMQ Cluster Kubernetes Operator is successfully installed. Once the Operator pod is running, head over to Using RabbitMQ Cluster Kubernetes Operator for instructions on how to deploy RabbitMQ using a Kubernetes Custom Resource.
If you want to install a specific version of the Operator, obtain the manifest link from the Operator Releases page. Please note that releases prior to 0.46.0 do not have this manifest. We strongly recommend installing version 0.46.0 or later.
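For a pinned version, the manifest URL can be built from the release tag. A minimal sketch, assuming the `v`-prefixed tag pattern and using `2.6.0` as an example version (verify the exact link on the Releases page before using it):

```shell
# Example version only; substitute the release you actually need.
VERSION="2.6.0"
# Assumed URL pattern for versioned release assets; confirm on the Releases page.
URL="https://github.com/rabbitmq/cluster-operator/releases/download/v${VERSION}/cluster-operator.yml"
echo "${URL}"
```

The resulting URL can then be passed to `kubectl apply -f` in place of the `latest/download` link shown above.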
If you need to host the Operator image in a custom location, the Relocate the Image section has instructions for moving it to a private registry.
The kubectl rabbitmq plugin provides commands for managing RabbitMQ clusters. The plugin can be installed using krew:
```bash
kubectl krew install rabbitmq
```
To get the list of available commands, use:
```bash
kubectl rabbitmq help
# USAGE:
#   Install RabbitMQ Cluster Operator (optionally provide image to use a relocated image or a specific version)
#   kubectl rabbitmq install-cluster-operator [IMAGE]
# [...]

kubectl rabbitmq install-cluster-operator
# namespace/rabbitmq-system created
# customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
# serviceaccount/rabbitmq-cluster-operator created
# role.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-role created
# clusterrole.rbac.authorization.k8s.io/rabbitmq-cluster-operator-role created
# rolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-rolebinding created
# clusterrolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-operator-rolebinding created
# deployment.apps/rabbitmq-cluster-operator created
```
If you can't pull images from Docker Hub directly to your Kubernetes cluster, you need to relocate the images to your private registry first. The exact steps depend on your environment but will likely look like this:
```bash
docker pull rabbitmqoperator/cluster-operator:{some-version}
docker tag rabbitmqoperator/cluster-operator:{some-version} {someregistry}/cluster-operator:{some-version}
docker push {someregistry}/cluster-operator:{some-version}
```
The value of {someregistry} should be the address of an OCI-compatible registry. The value of {some-version} is the version number of the Cluster Operator being relocated.
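The three commands can be wrapped in a small script so the version and registry are typed only once. A sketch, using an example registry and version; the script prints the commands it would run (drop the `echo`s to actually execute them):

```shell
# Example values only; substitute your registry and the Operator version.
VERSION="2.6.0"
REGISTRY="my-registry.example.com"
SRC="rabbitmqoperator/cluster-operator:${VERSION}"
DST="${REGISTRY}/cluster-operator:${VERSION}"
# Print the relocation commands; remove "echo" to run them for real.
echo docker pull "${SRC}"
echo docker tag "${SRC}" "${DST}"
echo docker push "${DST}"
```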
If you require authentication to pull images from your private image registry, you must Configure Kubernetes Cluster Access to Private Images.
Download the manifest from the release you are relocating and edit the image reference in the Deployment section. You can locate this section by grepping for the string `image:`:
```bash
grep -C3 image: releases/cluster-operator.yml
# [...]
# --
#         valueFrom:
#           fieldRef:
#             fieldPath: metadata.namespace
#         image: rabbitmqoperator/cluster-operator:0.49.0
#         name: operator
#         resources:
#           limits:
```
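The edit itself can also be scripted with `sed`. A minimal sketch, operating on a stand-in file (in practice you would run the `sed` line against `releases/cluster-operator.yml`; the registry name is an example):

```shell
# Stand-in for the downloaded manifest, so the sketch is self-contained.
cat > cluster-operator-snippet.yml <<'EOF'
        image: rabbitmqoperator/cluster-operator:0.49.0
EOF
# Rewrite the image to point at the private registry; keeps a .bak backup.
sed -i.bak 's|rabbitmqoperator/cluster-operator|my-registry.example.com/cluster-operator|' cluster-operator-snippet.yml
grep 'image:' cluster-operator-snippet.yml
```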
If you relocated the image to a private registry and your registry requires authentication, you need to follow these steps to allow Kubernetes to pull the image.
First, create the Service Account that the Operator will use to run and to pull images:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq-cluster-operator
  namespace: rabbitmq-system
```
Second, create a Secret with the credentials to pull from the private registry:
```bash
kubectl -n rabbitmq-system create secret \
  docker-registry rabbitmq-cluster-registry-access \
  --docker-server=DOCKER-SERVER \
  --docker-username=DOCKER-USERNAME \
  --docker-password=DOCKER-PASSWORD
```
Where:

- `DOCKER-SERVER` is the address of the private registry.
- `DOCKER-USERNAME` is the username used to authenticate to the registry.
- `DOCKER-PASSWORD` is the password for that username.
For example:
```bash
kubectl -n rabbitmq-system create secret \
  docker-registry rabbitmq-cluster-registry-access \
  --docker-server=docker.io/my-registry \
  --docker-username=my-username \
  --docker-password=example-password1
```
Now update the Operator Service Account by running:
```bash
kubectl -n rabbitmq-system patch serviceaccount \
  rabbitmq-cluster-operator -p '{"imagePullSecrets": [{"name": "rabbitmq-cluster-registry-access"}]}'
```
Please note that the name of the Operator Service Account is not configurable: it must be `rabbitmq-cluster-operator`.
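Alternatively, instead of patching after the fact, the pull secret can be declared directly on the Service Account manifest. A sketch, assuming the secret name created above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq-cluster-operator
  namespace: rabbitmq-system
# References the pull secret created in the previous step.
imagePullSecrets:
- name: rabbitmq-cluster-registry-access
```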
The RabbitMQ Cluster Operator runs as user ID 1000. The RabbitMQ pod runs the RabbitMQ container as user ID 999 and an init container as user ID 0. By default, OpenShift has security context constraints that disallow creating pods running with these user IDs. To install the RabbitMQ Cluster Operator on OpenShift, you need to perform the following steps:
Download the installation manifest from the release page in GitHub.
Edit the Namespace object named rabbitmq-system to include the following annotations:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    ...
    openshift.io/sa.scc.supplemental-groups: 1000/1
    openshift.io/sa.scc.uid-range: 1000/1
```
Run the installation command.
```bash
kubectl create -f cluster-operator.yml
# namespace/rabbitmq-system created
# customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
# serviceaccount/rabbitmq-cluster-operator created
# role.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-role created
# clusterrole.rbac.authorization.k8s.io/rabbitmq-cluster-operator-role created
# rolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-rolebinding created
# clusterrolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-operator-rolebinding created
# deployment.apps/rabbitmq-cluster-operator created
```
Create a Security Context Constraint that allows the RabbitMQ pod the capabilities FOWNER, CHOWN, and DAC_OVERRIDE:
```bash
oc apply -f rabbitmq-scc.yml
```
where rabbitmq-scc.yml contains:
```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: rabbitmq-cluster
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
requiredDropCapabilities:
- "ALL"
allowedCapabilities:
- "FOWNER"
- "CHOWN"
- "DAC_OVERRIDE"
volumes:
- "configMap"
- "secret"
- "persistentVolumeClaim"
- "downwardAPI"
- "emptyDir"
- "projected"
```
For every namespace where RabbitMQ cluster custom resources will be created (here we assume the default namespace), change the following fields:
```bash
oc edit namespace default
```
```yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    ...
    openshift.io/sa.scc.supplemental-groups: 999/1
    openshift.io/sa.scc.uid-range: 0-999
```
For every RabbitMQ cluster (here we assume the name my-rabbitmq), assign the previously created security context constraint to the cluster's service account:
```bash
oc adm policy add-scc-to-user rabbitmq-cluster -z my-rabbitmq-server
```
(Optional) If the cluster operator pods fail to be created with the error below:
```
Events:
  Type     Reason        Age                 From                   Message
  ----     ------        ----                ----                   -------
  Warning  FailedCreate  74s (x107 over 9h)  replicaset-controller  Error creating: pods "rabbitmq-cluster-operator-79888fd8c8-" is forbidden: unable to validate against any security context constraint: []
```
This can happen when the default SELinux context of the OpenShift project is not compatible with the cluster operator. To fix this issue, add an additional annotation to the rabbitmq-system namespace:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    ...
    openshift.io/sa.scc.mcs: 's0:c26,c5'
```
If you have questions about the contents of this guide or any other topic related to RabbitMQ, don't hesitate to ask them on the RabbitMQ mailing list.
If you'd like to contribute an improvement to the site, its source is available on GitHub. Simply fork the repository and submit a pull request. Thank you!